No work from project.

Message boards : Number crunching : No work from project.


Previous · 1 · 2

AuthorMessage
Astro
Joined: 16 Feb 06
Posts: 141
Credit: 32,977
RAC: 0
Message 389 - Posted: 20 Feb 2006, 20:01:49 UTC - in response to Message 386.  



Thanks in advance.

Jim, I don't know how long your machine takes to do 41 Rosetta WUs, but I'd say it's likely the scheduler is already satisfied with the quantity of work on hand (from any project) and has entered a "no work fetch" mode. You might see that in the message log somewhere; it might be far back. At some point you should see a "resuming work fetch" message and then a request to whatever project is next up.

tony
ID: 389
JimB
Joined: 17 Feb 06
Posts: 6
Credit: 19,638
RAC: 0
Message 391 - Posted: 20 Feb 2006, 20:15:48 UTC - in response to Message 389.  



Thanks in advance.

Jim, I don't know how long your machine takes to do 41 Rosetta WUs, but I'd say it's likely the scheduler is already satisfied with the quantity of work on hand (from any project) and has entered a "no work fetch" mode. You might see that in the message log somewhere; it might be far back. At some point you should see a "resuming work fetch" message and then a request to whatever project is next up.

tony


I had a sneaking suspicion it might be something like that. My goal is to keep about 3 days of work cached, but I think I'm slightly beyond that now. I reviewed stdoutdae.txt and see a number of "Suspending work fetch because computer is overcommitted" and "Allowing work fetch again" messages in several variations, so I guess I'll wait a few days to see if clearing my stock has any effect.
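Those state-change messages can be pulled out of the client log with a quick grep. A minimal sketch (the sample file and its timestamps are made up here to make the snippet self-contained; in practice you would point grep at the stdoutdae.txt in your BOINC data directory):

```shell
# Build a tiny sample log whose lines mirror the messages quoted above,
# then filter it the way you'd filter the real stdoutdae.txt.
cat > sample_log.txt <<'EOF'
2006-02-20 12:00:01 [---] Suspending work fetch because computer is overcommitted
2006-02-20 15:30:12 [---] Allowing work fetch again
2006-02-20 15:30:13 [---] Requesting 3600 seconds of work
EOF

# -E enables extended regexps so the alternation matches either message
grep -E "Suspending work fetch|Allowing work fetch" sample_log.txt
```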

Thanks!




"Be all that you can be...considering." Harold Green
ID: 391
JimB
Joined: 17 Feb 06
Posts: 6
Credit: 19,638
RAC: 0
Message 401 - Posted: 20 Feb 2006, 22:28:25 UTC - in response to Message 397.  



Thanks in advance.

Jim, I don't know how long your machine takes to do 41 Rosetta WUs, but I'd say it's likely the scheduler is already satisfied with the quantity of work on hand (from any project) and has entered a "no work fetch" mode. You might see that in the message log somewhere; it might be far back. At some point you should see a "resuming work fetch" message and then a request to whatever project is next up.

tony


I had a sneaking suspicion it might be something like that. My goal is to keep about 3 days of work cached, but I think I'm slightly beyond that now. I reviewed stdoutdae.txt and see a number of "Suspending work fetch because computer is overcommitted" and "Allowing work fetch again" messages in several variations, so I guess I'll wait a few days to see if clearing my stock has any effect.

Thanks!



Jim,

If you are running the new Rosetta application, you can speed up draining the work cache by adjusting the time parameter in your prefs. If you are still running the old version, you will just have to wait, because BOINC has already tagged the application version to the WUs, and you cannot change that easily (there is a workaround if you want to change it early, but PM me at the moderator contact e-mail if you want to do that).

As for the RALPH WUs, you are not crazy: there are none to be had at the moment. This may change soon. Be mindful that some of the settings for RALPH may cause you trouble with Rosetta WUs, so feel free to ask questions and a lot of folks will jump in to help you out.




My current app is 4.81. No need to get frisky with a workaround; I'll just wait a day or two until my stash is lower. I'm keeping RALPH on, so if WUs come out, hopefully I'll get them in good time. Thanks for the help, and keep up the good work.


"Be all that you can be...considering." Harold Green
ID: 401
NJMHoffmann
Joined: 17 Feb 06
Posts: 8
Credit: 1,270
RAC: 0
Message 418 - Posted: 21 Feb 2006, 9:11:36 UTC - in response to Message 384.  

There is, IMHO, a bug in the BOINC client that leads to a request for far too much work for low-resource projects (especially after a project has run "dry" of WUs). Is it possible to change the RALPH server so that only one WU is distributed, ignoring the seconds of work the client asked for? This would prevent those "large cache by accident" situations and would give you shorter turnaround times.

Norbert


Norbert:
Actually it is not a bug; it is just a part of the production environment that is not compatible with a testing environment. While your point is well taken and the suggestion is a good one, it might not be so simple to implement. I will send the idea along to the project.


If you have a host that crunches, say, 90% for Rosetta and 10% for RALPH, then when there is new work for RALPH it will ask for 86,400 seconds of new work (at a setting of 1 day connect time). The client or the server would have to scale that down to 8,640 seconds (= 10% of a day). Each side relies on the other to do this calculation, but it is done nowhere. I'd call it a bug.
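The arithmetic Norbert describes can be sketched in a few lines. This is an illustration of his point, not actual BOINC client code; the function name and the 90/10 shares are assumptions taken from his example:

```python
def share_scaled_request(connect_interval_s: int, project_share: float,
                         total_share: float) -> float:
    """Seconds of work a share-aware scheduler would request for one
    project: the connect interval scaled by that project's resource share."""
    return connect_interval_s * project_share / total_share

DAY = 86400  # 1-day connect time, as in Norbert's example

# What the client actually asks for: the full interval...
naive_request = DAY
# ...versus what a 10%-share project should get:
scaled_request = share_scaled_request(DAY, 10, 100)

print(naive_request, scaled_request)  # 86400 8640.0
```

Because neither the client nor the server applies the scaling step, a low-share test project like RALPH ends up with roughly ten times the intended cache whenever work becomes available.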

Norbert
ID: 418
Nuadormrac
Joined: 22 Feb 06
Posts: 68
Credit: 11,362
RAC: 0
Message 617 - Posted: 25 Feb 2006, 9:32:57 UTC - in response to Message 91.  

Dumdidum......
Still waiting for the first Unit to crunch.....


These do take a while to crunch. My first WU took about 8.5 hours or so on an Athlon 64 here... Not sure what others might see, but on an older computer I imagine it could be much longer. It did go through, though...

As to the unavailable WUs some have mentioned, it's just something to wait for; check the server status periodically (for a time to manually request work). As I've been running LHC for a while, I'm not entirely unused to it. Though LHC is a production project, delays between releases of new batches of work aren't uncommon. Last Tuesday or so they had no work, then they had a blip of 200,000+ WUs, and they're out of work again. Quicker this time than in the past, but it happens.

Does give one time to crunch other WUs also, which is how many at LHC think of it...
ID: 617



©2024 University of Washington
http://www.bakerlab.org