Posts by Honza

1) Message boards : RALPH@home bug list : Bug Report for Ralph 5.26 (Message 2283)
Posted 26 Sep 2006 by Honza
Post:
Same error here; suspending.
2) Message boards : Current tests : New crediting system (Message 1944)
Posted 9 Aug 2006 by Honza
Post:
Ethan
Your idea of measuring the average time per model and comparing it with a golden computer assumes that the computer pools on Rosetta and Ralph are similar, which is probably not true. I assume that on Ralph there are, on average, faster computers than on Rosetta. This makes the idea, imho, impractical.
Not being Ethan but anyway...
No, it doesn't. The credit award can simply be postponed until each model gets calibrated (in terms of credit) on Ralph.

Or, taking up your idea, the credit award can be estimated once 100 hosts (or so) return each result type (no need for the Ralph estimation).

Each model, as I understand it, is not constant in terms of computing demands.
If it were so - or better, if we made it so - we could use calibration units.

The downside of this is that credit can't be granted immediately (not a bad thing) and there will be more results pending.
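
A minimal sketch of that per-model calibration idea - purely illustrative, with hypothetical names, and the ~100-host threshold taken from the post above, not RALPH's actual code:

    # Purely illustrative sketch of the per-model calibration idea above; all
    # names and the ~100-host threshold are assumptions, not RALPH's actual code.
    from statistics import median

    CALIBRATION_SIZE = 100      # "once 100 hosts (or so) return each result type"

    claims = {}                 # result type -> claimed credits seen so far
    fixed_credit = {}           # result type -> calibrated, frozen award

    def record_result(result_type, claimed_credit):
        """Return the credit to grant, or None while the award is still pending."""
        if result_type in fixed_credit:
            return fixed_credit[result_type]   # already calibrated
        claims.setdefault(result_type, []).append(claimed_credit)
        if len(claims[result_type]) >= CALIBRATION_SIZE:
            # Freeze the award at the median claim; results held as "pending"
            # up to this point would now be granted this same value.
            fixed_credit[result_type] = median(claims[result_type])
            return fixed_credit[result_type]
        return None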
3) Message boards : Current tests : New crediting system (Message 1923)
Posted 8 Aug 2006 by Honza
Post:
tralala - the aim is to avoid and abandon, once and for all, the ever ill-numbered benchmarks.
(or at least I hope and pray).

You can simply take the CPU type and divide by CPU frequency - or use a golden machine, as tony suggested. I know it's not perfect; RAM speed plays a role, etc.
But you should fit within an acceptable ±10%, not something like the 500% you get with benchmarks.

Just try to maintain inter-project parity, that will do...
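
A minimal sketch of that golden-machine normalization - the reference frequency and credit rate here are invented illustration values, not anything RALPH actually uses:

    # Illustrative sketch of the golden-machine idea; the reference frequency and
    # credit rate are assumed values, not anything RALPH actually uses.
    GOLDEN_FREQ_MHZ = 3000.0          # assumed reference ("golden") CPU
    CREDIT_PER_GOLDEN_HOUR = 10.0     # assumed credit rate on that machine

    def estimate_credit(cpu_hours, host_freq_mhz):
        """Convert a host's run time to golden-machine hours, then to credit."""
        golden_hours = cpu_hours * (host_freq_mhz / GOLDEN_FREQ_MHZ)
        return golden_hours * CREDIT_PER_GOLDEN_HOUR

This ignores IPC and RAM-speed differences, which is exactly the ±10% imprecision conceded above.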
4) Message boards : RALPH@home bug list : Bug reports for Ralph 5.20 (Message 1762)
Posted 4 Jun 2006 by Honza
Post:
(too late to edit). Another one sitting idle at 100% - http://ralph.bakerlab.org/result.php?resultid=150039 - so 2 of 6 got stuck at the finish in my case.
5) Message boards : RALPH@home bug list : Bug reports for Ralph 5.20 (Message 1761)
Posted 4 Jun 2006 by Honza
Post:
3 WUs went fine, the 4th got stuck at 100% for hours - http://ralph.bakerlab.org/result.php?resultid=150036.
3 more to go...
6) Message boards : RALPH@home bug list : Bug reports for Ralph 5.17-5.19 (Message 1716)
Posted 30 May 2006 by Honza
Post:
Not really a bug, Rhiju, but we are empty, dry - no more WUs to test 5.19 with...
7) Message boards : RALPH@home bug list : Bug reports for Ralph 5.16 (Message 1638)
Posted 15 May 2006 by Honza
Post:
Thanks for the answer, Moderator9.
I've done 5 WUs with no issues on a D 820 with 1GB RAM.
...still a bit memory-demanding for an average machine.
8) Message boards : RALPH@home bug list : Bug reports for Ralph 5.16 (Message 1629)
Posted 15 May 2006 by Honza
Post:
Got one finished in ~45 min, memory usage unknown (it finished before I was able to check)
http://ralph.bakerlab.org/result.php?resultid=125948

Another one finished after 55 min, 230MB usage
http://ralph.bakerlab.org/result.php?resultid=125947

3 more to go...

Dumb question: apart from the lower traffic of a higher "Target CPU run time" - does setting it to, let's say, 12 hours bring "better" (i.e. more precise) results?
9) Message boards : RALPH@home bug list : Bug reports for Ralph 5.15 (Message 1620)
Posted 14 May 2006 by Honza
Post:
@ sTrey - just wanted to address the same issue: quite high memory usage.
I believe it is not easy to *know*, prior to computing a particular WU, what the peak memory usage will be, but some users may experience problems with it.
(it is the reason why some users still stick with the *outdated* Predictor, even with very little user support).

Does it make sense to introduce a WU memory limit, like "Target CPU run time", in the user's profile?
Or to send selected WUs according to host specification? I know this was addressed before (and I was critical of how the scheduler works), and I'm used to high memory demands from CPDN/SAP, but this is something many users will not expect...
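
A minimal sketch of what such memory-aware dispatch could look like - hypothetical, not the actual BOINC scheduler logic:

    # Hypothetical sketch of memory-aware dispatch; this is not the actual BOINC
    # scheduler, just the shape of the check being suggested above.
    def can_dispatch(wu_peak_mem_mb, host_ram_mb, user_limit_mb=None):
        """Send a WU only if its estimated peak memory fits the host's RAM and
        any user-set cap (analogous to the "Target CPU run time" preference)."""
        cap = host_ram_mb if user_limit_mb is None else min(host_ram_mb, user_limit_mb)
        return wu_peak_mem_mb <= cap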