Message boards : Number crunching : No work from project.
Astro · Joined: 16 Feb 06 · Posts: 141 · Credit: 32,977 · RAC: 0
Jim, I don't know how long your machine takes to do 41 Rosetta WUs, but I'd say it's likely the scheduler is already satisfied with the quantity of work on hand (from any project) and has entered a "no work fetch" mode. You might see that in the message log somewhere; it might be far back. At some point you should see a message "resuming work fetch" and then a request to whatever project is next up. tony
JimB · Joined: 17 Feb 06 · Posts: 6 · Credit: 19,638 · RAC: 0
I had a sneaking suspicion it might be something like that. My goal is to have about 3 days of work cached, but I think I'm slightly beyond that now. I reviewed stdoutdae.txt and see a number of "Suspending work fetch because computer is overcommitted" and "Allowing work fetch again" messages in several variations, so I guess I'll wait a few days and see whether working through my stock has any effect. Thanks! "Be all that you can be...considering." Harold Green
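For anyone else checking their client log for the same thing, here is a minimal sketch in Python. The message strings are copied from the post above; the exact wording of the client's output and the log's location vary by installation and version, so treat the path and patterns as assumptions to adjust.

```python
# Minimal sketch: scan the BOINC client log (stdoutdae.txt) for the
# work-fetch messages quoted above. The patterns are assumptions taken
# from the post; adjust them to match your client's actual output.

PATTERNS = (
    "Suspending work fetch because computer is overcommitted",
    "Allowing work fetch again",
)

def work_fetch_events(path="stdoutdae.txt"):
    """Yield (line_number, line) for lines mentioning work-fetch changes."""
    with open(path, encoding="utf-8", errors="replace") as log:
        for n, line in enumerate(log, start=1):
            if any(p in line for p in PATTERNS):
                yield n, line.rstrip()

if __name__ == "__main__":
    for n, line in work_fetch_events():
        print(f"{n:6}: {line}")
```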
JimB · Joined: 17 Feb 06 · Posts: 6 · Credit: 19,638 · RAC: 0
My current app is 4.81. No need to get frisky and work around it; I'll just wait a day or two until my stash is lower. I'm keeping Ralph on, so if WUs come out, hopefully I'll get them in good time. Thanks for the help, and keep up the good work. "Be all that you can be...considering." Harold Green
NJMHoffmann · Joined: 17 Feb 06 · Posts: 8 · Credit: 1,270 · RAC: 0
There is, IMHO, a bug in the BOINC client that leads to a request for far too much work for low-resource-share projects (especially after a project has run "dry" of WUs). Is it possible to change the RALPH server so that only one WU is distributed, ignoring the number of seconds the client asked for? This would prevent those accidental large caches and would give you shorter turnaround times. If you have a host that crunches, e.g., 90% for Rosetta and 10% for Ralph, then when there is new work for Ralph it will ask for 86,400 seconds of new work (at a connect-time setting of 1 day). The client or the server would have to scale that down to 8,640 seconds (= 10% of a day). Each side relies on the other to do this calculation, so it is done nowhere. I'd call it a bug. Norbert
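The scaling Norbert has in mind would look something like the sketch below. It is a hypothetical illustration of the arithmetic, not actual BOINC client or server code; by his account, neither side performs this step.

```python
# Hypothetical illustration of resource-share-scaled work fetch, per the
# post above. Not BOINC code: the point of the post is that neither the
# client nor the server actually applies this scaling.

def scaled_work_request(connect_interval_s: float,
                        resource_share: float,
                        total_share: float) -> float:
    """Seconds of work to request, scaled by the project's resource share."""
    return connect_interval_s * (resource_share / total_share)

# Example from the post: a 1-day connect interval on a host that gives
# Ralph 10% of its total resource share.
print(scaled_work_request(86400, 10, 100))  # 8640.0 seconds, not 86400
```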
Nuadormrac · Joined: 22 Feb 06 · Posts: 68 · Credit: 11,362 · RAC: 0
Dumdidum...... These do take a while to crunch. My first WU took about 8.5 hours or so on an Athlon 64 here... Not sure what others might see, but on an older computer I imagine it could be much longer. It did go through, though... As to the unavailable WUs some have mentioned, it's just something to wait for; check the server status periodically (for a time to manually request work). As I've been running LHC for a while, I'm not entirely unused to it. Though LHC is a production project, delays between releases of new batches of work aren't uncommon. Last Tuesday or so they had no work, then they had a blip of 200,000+ WUs, and now they're out of work again. Quicker this time than in the past, but it happens. It does give one time to crunch other WUs, which is how many at LHC think of it...