Message boards : Current tests : Ralph & Rosetta optimized for GPU CUDA ?
Author | Message |
---|---|
Zarck Send message Joined: 17 Mar 06 Posts: 6 Credit: 5,188 RAC: 0 |
http://setiweb.ssl.berkeley.edu/beta/cuda.php It's possible for SETI; is it also possible for Rosetta/Ralph? @+ *_* |
robertmiles Send message Joined: 13 Jan 09 Posts: 103 Credit: 331,865 RAC: 0 |
What I've seen indicates that it's possible but rather costly in programmer time, due to the large number of changes needed to make the programs fit on a GPU. It's not likely to happen on BOINC projects that update their software as often as Ralph/Rosetta do; it suits projects that have a large amount of data to crunch but little need to keep updating the software used to crunch it. |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
I hope Ralph produces an OpenCL fork of the code. http://www.opencldev.com/ |
robertmiles Send message Joined: 13 Jan 09 Posts: 103 Credit: 331,865 RAC: 0 |
I've seen a thread saying that their current algorithm uses so much memory per processor core that they're unlikely to get a significant speedup from a GPU version, and they have therefore decided to stop even trying for now. It would need an entirely new version of their application, based on a new algorithm, and likely another programming language to write it in. |
Jim Send message Joined: 13 Feb 08 Posts: 4 Credit: 339,036 RAC: 0 |
Since Ralph and Rosetta are research-type programs, the software needs to be changed and tweaked all the time. As I understand it, they just don't have the time and staff to be able to code the needed GPU software on any kind of timely basis. I don't work here or represent the project; that's just my understanding of the problem. |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
Since Ralph and Rosetta are research-type programs, the software needs to be changed and tweaked all the time. As I understand it, they just don't have the time and staff to be able to code the needed GPU software on any kind of timely basis. I agree. But Rosetta/Ralph researchers also have to consider:
1. The enormous power of GPU cards (a "simple" 6950 has 2.2 TFLOPS in single precision!!), which increases every 6-8 months.
2. The great improvements on the software side (CUDA and OpenCL).
The problem, I think, is that not every kind of project can be converted for GPU (Rosetta's memory problem is well known). For example, the POEM@home project has had an OpenCL CPU client since March, but they have problems converting it for GPU.... |
uBronan Send message Joined: 26 May 07 Posts: 2 Credit: 9,863 RAC: 0 |
Well, just my two cents on this matter: they should try one of the GPU gurus out there who made GPU versions possible for a few projects that are now famous (MilkyWay/Collatz). Simply put, you have to trust the knowledge available from, for instance, crunch3r, who as far as I know helps all projects that are struggling with conversions to GPU processing. Or maybe Raistmer, another guru, who is already busy transforming SETI into a multi-GPU environment, first on NVIDIA and for a while now also on ATI. If these guys can't see a fresh approach to the actual math and can't find a performance gain from GPU processing, then it's simply not possible.

More and more projects are finally starting to learn from the knowledge available in the user base. I know it's sometimes hard to think differently, but it's a learning process, and it sometimes needs fresh ideas on how to approach a certain problem. Of course I don't expect a project to have a working solution in a short time, but as we can see at Einstein, they are experimenting with it as well. That project also has very complex calculations, but slowly we see some minor progress in the development of their beta software too. There, some parts of the jobs can be assisted by the GPU as an old-school co-processor, which in time will speed up some parts of the work.

OpenCL itself is not yet mature enough. The reason is simple: NVIDIA, who is also part of this open standard, should help work on it but doesn't really want it to succeed; they would rather see their CUDA adopted, which is why CUDA is more mature. To be honest, I believe they actually do more harm than good in OpenCL. ATI Stream processing is not a good solution if you need complex work done (though if it can do it, it will do it lightning fast); the commands available to do work are very limited. |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
Another way may be: don't convert all the Rosetta CPU code to GPU, but create a "sub-project" with specific functions (and a small part of the code) and test it widely... Ralph is here for that. Afterwards they could add, little by little, more Rosetta functions and code to this GPU project. But we don't know the code, so these are only ideas.... |
uBronan Send message Joined: 26 May 07 Posts: 2 Credit: 9,863 RAC: 0 |
That is exactly how Einstein got a working GPU-processing portion of the work. In the first betas it seemed as if the GPU did not enhance any of the work, and it sometimes even did worse than the normal CPU version. But later they took another approach and, with some help, arrived at a now well-working system whose work units use the GPU for much of the computation. Some work still gets done by the CPU, simply because of the complexity of this work. That approach is not uncommon, and in time the GPU languages will evolve as well, which will give a better way of handling the instructions. I myself had hoped that OpenCL would make more progress, so that all could benefit from an open standard and many scientists would get access to this huge power more easily. |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
Another way may be: don't convert all the Rosetta CPU code to GPU, but create a "sub-project" with specific functions (and a small part of the code) and test it widely... Ralph is here for that. Afterwards they could add, little by little, more Rosetta functions and code to this GPU project. Despite the RAM problem on GPUs, it may be interesting to know what the Ralph team thinks about the ideas in this forum..... |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
Anybody out there? |
Rocco Moretti Volunteer moderator Project developer Project scientist Send message Joined: 18 May 10 Posts: 11 Credit: 30,188 RAC: 0 |
I don't think anyone would argue that speeding up Ralph/Rosetta@home would be anything but a positive. That said, from what I understand, GPU programming is *hard*, especially if you don't have a naturally parallel application. SETI@Home has it a bit easy, as Fourier transforms/signal processing match GPU processors very well. In contrast, because of the way Rosetta works internally, it's non-trivial to convert to a GPU programming style. From what I understand, while bits and pieces might be easily moved to a GPU, you'd be killed by the data transfer to and from the GPU when switching between GPU-using sections and CPU-using sections.

My sketchy understanding is that on several occasions groups with experience doing GPU programming have taken a look at the Rosetta code, with little to nothing ultimately coming of it. That said, one of the researchers in the lab was/is working on a "skunkworks"-type project on incorporating the use of GPUs into Rosetta - though I hesitate to mention this because it's very uncertain whether anything will actually come of it, or if it does, when. Even if there are parts of Rosetta that are GPU-ized, there's no guarantee that those parts would make a significant difference in the typical runs performed on Rosetta@Home.

Short answer: no one would refuse a GPU version of Rosetta, but the considered opinion is that making it would likely be a lot of effort for not all that much actual speedup. |
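To make the transfer-cost point concrete, here is a minimal PyOpenCL sketch (a hypothetical toy kernel and array size, not Rosetta code) that times the host-to-device upload and device-to-host download against the kernel itself. When CPU logic sits between GPU sections, those two copies bracket every GPU call:

```python
# Sketch: time CPU->GPU upload, kernel, and GPU->CPU download separately.
import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n = 10_000_000
host = np.random.rand(n).astype(np.float32)

# A trivially parallel kernel: the kind of work a GPU is fast at.
prg = cl.Program(ctx, """
__kernel void scale(__global float *a) {
    int i = get_global_id(0);
    a[i] = a[i] * 2.0f;
}
""").build()

mf = cl.mem_flags
t0 = time.perf_counter()
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)  # upload
queue.finish()
t1 = time.perf_counter()
prg.scale(queue, (n,), None, buf)                                     # compute
queue.finish()
t2 = time.perf_counter()
cl.enqueue_copy(queue, host, buf)                                     # download
queue.finish()
t3 = time.perf_counter()

print(f"upload {t1 - t0:.4f}s  kernel {t2 - t1:.4f}s  download {t3 - t2:.4f}s")
```

On a typical discrete GPU the two copies are comparable to, or larger than, the kernel time for short kernels, which is exactly the pattern frequent CPU/GPU switching would produce.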
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
Thanks for the answer!! |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
That said, one of the researchers in the lab was/is working on a "skunkworks"-type project on incorporating the use of GPUs into Rosetta - though I hesitate to mention this because it's very uncertain whether anything will actually come of it, or if it does, when. That answer was 18 months ago. Any news?? |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
From what I understand, while bits and pieces might be easily moved to a GPU, you'd be killed by the data transfer to and from the GPU when switching between GPU-using sections and CPU-using sections. As I wrote on the Rosetta forum, hUMA may be the answer to this. |
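For context on what hUMA would change: with a truly unified memory space the explicit copies disappear. A rough approximation in OpenCL 1.x is USE_HOST_PTR plus buffer mapping, sketched below with PyOpenCL (hypothetical toy kernel, not Rosetta code). Whether it is actually zero-copy depends on the device; on discrete GPUs the driver may still copy behind the scenes:

```python
# Sketch: ask the device to work on the host allocation directly,
# then map (not copy) the buffer to read the result from the CPU side.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

host = np.zeros(1024, dtype=np.float32)

# No COPY_HOST_PTR: the buffer aliases the host array where supported.
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.USE_HOST_PTR, hostbuf=host)

prg = cl.Program(ctx, """
__kernel void fill(__global float *a) { a[get_global_id(0)] = 1.0f; }
""").build()
prg.fill(queue, host.shape, None, buf)

mapped, _ = cl.enqueue_map_buffer(queue, buf, cl.map_flags.READ,
                                  0, host.shape, host.dtype)
print(mapped[:4])  # [1. 1. 1. 1.]
```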
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
I saw a post on the Rosetta forum about GPUs: an admin is testing part of the Rosetta code with Python and OpenCL... |
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
From the same Rosetta forum thread about GPUs: documentation about PyOpenCL. |
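For readers curious what that entails, the PyOpenCL documentation centres on a pattern like the following vector-add sketch (a toy example, unrelated to the actual Rosetta experiments):

```python
# Sketch: build an OpenCL kernel from Python, run it, and copy back the result.
import numpy as np
import pyopencl as cl

a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()
prg.vadd(queue, a.shape, None, a_g, b_g, out_g)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
assert np.allclose(out, a + b)
```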
[VENETO] boboviz Send message Joined: 9 Apr 08 Posts: 913 Credit: 1,892,541 RAC: 294 |
I agree. But Rosetta/Ralph researchers also have to consider: That was 5 years ago. Now the AMD Radeon Pro Duo has 16 TFLOPS in SP and 1 TFLOPS in DP!! :-O |