Message boards : Current tests : New crediting system
Author | Message |
---|---|
feet1st Send message Joined: 7 Mar 06 Posts: 313 Credit: 116,623 RAC: 0 |
The 2 credits per model was just a stab in the dark. First trial. But the idea is that Ralph credits are disposable. So, run the WUs on Ralph, establish an appropriate credit per model, then roll out to Rosetta using that established credit value per model. The other idea is to use Ralph as a testing ground. They have to test all of the server code they've changed that keeps track of credits claimed (on the old system) alongside the new credits. And so, to run Ralph in the manner most similar to how Rosetta will be run, SOME arbitrary value had to be established before sending out WUs, because this is how Rosetta will run. Perhaps, with the experience on Ralph, it is determined that a more reasonable value is 5.53 credits per model, or whatever. But they need to practice running under an approach where the number of credits per model is established up front, in order to help assure there are no glitches when rolling out to Rosetta. |
dekim Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 20 Jan 06 Posts: 250 Credit: 543,579 RAC: 0 |
does anyone object to rolling this out to Rosetta@home? |
[B^S] thierry@home Send message Joined: 15 Feb 06 Posts: 20 Credit: 17,624 RAC: 0 |
That seems OK to me. And with this you know exactly what you have done (as Seti Classic did). The principle seems OK but... I have not followed all the discussions and there is something I don't understand very well. I'll take an example: - I have one PC which runs Rosetta with a 'Target CPU run time' of 10 hours. The current WU has run for 9 hours and has 143 models, so it will be something like 318 credits for 10 hours. - I have another PC with a 'Target CPU run time' of one day. The current WU has run for 2.5 hours and has 5 models so far. That's something like 96 credits for 24 hours. Isn't that strange? |
[B^S] sTrey Send message Joined: 15 Feb 06 Posts: 58 Credit: 15,430 RAC: 0 |
Not "objecting", but I'd be prepared for more uproar given what I've seen, especially since there's already been some heat on the boards there. If this credit-per-model scheme can't end up yielding some consistent amount of credit per CPU hour on a given machine, and preferably something close to that machine's results on other projects, there will be confusion at best and likely more conflict around credit, rough project parity, etc. From my results so far it's nowhere near consistent; as mmciastro said, they're all over the map. Though if you're rolling this out with the figure-out-the-right-credit-award-first mechanism already in place, that will be quite interesting to see. Best of luck with it (& I'm not going anywhere, regardless) |
UBT - Halifax--lad Send message Joined: 15 Feb 06 Posts: 29 Credit: 2,723 RAC: 0 |
What's wrong with FLOPS counting?? Sorry if it's been answered before; I haven't read through all the forums trying to find an answer. Join us in Chat (see the forum) Click the Sig Join UBT |
feet1st Send message Joined: 7 Mar 06 Posts: 313 Credit: 116,623 RAC: 0 |
does anyone object to rolling this out to Rosetta@home? Before you do that, you need to document what you've got. Hopefully I've made that process easier below. Did I get it pretty much right? Once you document it, then folks can comment from an informed perspective with feedback. From there, Rosetta's next. Document it there, invite comment, address concerns, and then roll it out. Right now, even the Ralph participants don't seem to have a grasp of where you're headed with the credit system, and they've seen it at work. So, more description is needed. ...or just explain, "we're going to keep credits the same for now... but look at these new numbers we're working on". If that's the case. |
feet1st Send message Joined: 7 Mar 06 Posts: 313 Credit: 116,623 RAC: 0 |
thierry - I have one PC which runs Rosetta with a 'Target CPU run time' of 10 hours. The current running WU runs for 9 hours and has 143 models. So it will be something like 318 credits for 10 hours. Exactly, this is what I mean about the 2 credits per model being just an example. The WU you are running on the first PC is going to yield fewer credits per model than the WU you are running on the second (assuming your two PCs are of comparable speed). So, the idea would be to prototype these two WUs on Ralph first... get a grasp of how trivial models are for the first, and how difficult models are for the second... establish an appropriate credit value for each, one which makes them equally attractive and does not afford any cherry picking. In the end, you should yield the same credits per hour with different WUs on different machines (that have the same relative speed). |
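The calibration feet1st describes can be sketched in a few lines of Python. This is only my reading of the idea, not anything the project has published: the target rate, function name, and numbers are all hypothetical. Measure on Ralph how many models per hour a WU type produces, then set its credit per model so that every WU type pays the same per hour.

```python
# Hypothetical calibration sketch -- the target rate and all numbers
# below are made up for illustration, not project values.
TARGET_CREDITS_PER_HOUR = 12.0

def calibrate(models_per_hour):
    """Credit per model that yields the target hourly rate for a WU type."""
    return TARGET_CREDITS_PER_HOUR / models_per_hour

# WU type from thierry's first PC: many quick, "trivial" models.
easy_rate = calibrate(143 / 9)      # roughly 15.9 models/hour
# WU type from the second PC: few slow, "difficult" models.
hard_rate = calibrate(5 / 2.5)      # 2 models/hour

# Both WU types then pay the same per hour, so neither is worth
# cherry-picking over the other.
assert abs(easy_rate * (143 / 9) - hard_rate * (5 / 2.5)) < 1e-9
```

The difficult WU simply gets a higher per-model value, which is what makes the two equally attractive.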
[B^S] thierry@home Send message Joined: 15 Feb 06 Posts: 20 Credit: 17,624 RAC: 0 |
Thanks Feet1st. Got it. In fact the first is slower compared to the second (P4 3.0 HT vs Pentium D 2.8). Just a question: what are you in real life? A teacher in some subjects? Because your explanations are always so clear and understandable, it's a pleasure to read you. |
Astro Send message Joined: 16 Feb 06 Posts: 141 Credit: 32,977 RAC: 0 |
It's my opinion that I don't have enough data to say one way or the other. We don't know what you know about what happened behind the scenes to determine the number. I'm apologizing upfront for the following pic, but I had no other way of linking the two together into one pic, as both are needed to complete the picture and to be able to refer between them. What I'm seeing is models/time all over the map. Perhaps if you upped the number of Ralph WUs distributed per day to more than the two/three I'm getting now, I could form an opinion. Perhaps with enough data the low numbers and high numbers will offset each other, so that the AVG granted credit over a large time period will be closer to cross-project equality. At one point all I had were 18 cr/hour WUs, then I got one that only came in at 2/hr, and together they averaged out to nearly what I would have gotten with just a standard BOINC client. Below is how it shapes up vs. other projects. It also shows every new Ralph WU I have on record. Remember Einstein and Seti are still working on theirs. Which way that goes is unknown, other than Seti plans to up the "load store adjustment" (fpops multiplier) from 3.35 to 3.51 in the Seti App 5.17 (now in beta). If you look at the few results/computer you'll see what I mean by "all over the place". I just think I need more time. tony |
dekim Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 20 Jan 06 Posts: 250 Credit: 543,579 RAC: 0 |
The new crediting system is pretty simple. First, we determine how much credit to grant per model for each work unit by running tests on Ralph. We will use the average credit per model from the Ralph tests for production runs on Rosetta@home. These values will be work unit specific, so work units that take longer to generate structures will have higher values. I have explained this below in a previous post. The only difference is that I decided to keep the old crediting system so users can compare their work with either system. If you take a look at the top participants page you will see two new columns, "Recent average work credit" and "Total work credit". These are the new credit per model based values. I've been using 2 credits per model on Ralph JUST FOR TESTING. Rosetta@home will use work unit specific values determined from the Ralph test runs. |
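The averaging step dekim describes might look something like the following sketch. To be clear, this is one reader's interpretation, not the actual Ralph server code, and the function name and test data are made up: each Ralph result reports the credit the host claimed under the old system and the number of models it produced, and the production value for that WU type is the average claimed credit per model.

```python
# Sketch of per-WU-type calibration from Ralph test results.
# Each tuple is (claimed_credit_under_old_system, models_produced).
# All names and numbers here are hypothetical.

def credit_per_model(results):
    """Average claimed credit per model across Ralph test results."""
    rates = [claimed / models for claimed, models in results if models > 0]
    if not rates:
        return 0.0
    return sum(rates) / len(rates)

# Made-up Ralph results for one work unit type:
ralph_results = [(31.2, 14), (18.5, 9), (44.0, 20), (25.1, 12)]
rate = credit_per_model(ralph_results)

# On Rosetta@home, a host returning 11 models of this WU type
# would then be granted rate * 11 credits.
granted = rate * 11
```

A WU type whose models take longer to compute produces higher claimed credit per model on Ralph, so it automatically gets a higher production value, which matches dekim's description.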
[B^S] thierry@home Send message Joined: 15 Feb 06 Posts: 20 Credit: 17,624 RAC: 0 |
Sounds good :-) |
doc :) Send message Joined: 16 Feb 06 Posts: 46 Credit: 4,437 RAC: 0 |
sounds good, as long as the influence from over- and under-claiming hosts isn't big enough to have a real impact on the credits per model :) how will errored WUs be dealt with? do they report how many models they completed before they crashed? |
Hoelder1in Send message Joined: 17 Feb 06 Posts: 11 Credit: 46,359 RAC: 0 |
The new crediting system is pretty simple. It finally dawned upon me that people didn't understand David Kim's original explanation of the new credit system that I reposted over at Rosetta (perhaps that was a mistake). Thanks to Feet1st and David Kim for the 'extended' explanations! :-) does anyone object to rolling this out to Rosetta@home? So what's the idea: which of the two credit systems, the old or the new one, should be exported to the stats sites and be listed on the account pages (e.g. this one)? |
feet1st Send message Joined: 7 Mar 06 Posts: 313 Credit: 116,623 RAC: 0 |
thierry ...what are you in the real life? Teacher in some subjects? Because your explanations are always so clear and understandable, it's a pleasure to read you. Well thank you! You just made my week! I'm a "computer programmer". But I'm in a job where what I'm really doing is teaching people how to write code and perform diagnostics on their code, and optimize for performance, select the best option for their situation, etc. ...so ya, I'm a "teacher" too. |
dekim Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 20 Jan 06 Posts: 250 Credit: 543,579 RAC: 0 |
sounds good if the influence from over and underclaiming hosts is not too big to have a real impact on the credits per model :) we can apply a correction factor to account for the over/under claiming hosts or as someone on the boards suggested, remove the top and bottom X percent. Errored work units will not be granted credit with the new crediting method. They will continue to be with the existing method though. If it becomes a serious issue, we can try to come up with a reasonable solution. |
Hoelder1in Send message Joined: 17 Feb 06 Posts: 11 Credit: 46,359 RAC: 0 |
we can apply a correction factor to account for the over/under claiming hosts or as someone on the boards suggested, remove the top and bottom X percent. Assuming that you use the median instead of the mean, you are already removing the top and bottom 50%, so removing any additional percentages from the distribution will not have any effect on the median. ;-) |
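Hoelder1in's point is easy to check numerically. Here is a throwaway Python sketch with made-up claim values: trimming the top and bottom of the distribution changes the mean but leaves the median alone.

```python
# Illustration: trimming outliers affects the mean, not the median.
# The claim values below are invented for the example.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def trimmed(xs, frac):
    """Drop the lowest and highest frac of values."""
    s = sorted(xs)
    k = int(len(s) * frac)
    return s[k:len(s) - k] if k else s

# Per-model credit claims, with one under-claimer and one over-claimer:
claims = [0.5, 1.8, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 9.0]

# Median is unchanged by trimming...
assert median(claims) == median(trimmed(claims, 0.2))
# ...but the mean moves noticeably once outliers are dropped.
assert abs(sum(claims) / len(claims) - 2.2) > 0.1
assert abs(sum(trimmed(claims, 0.2)) / 7 - 2.2) < 1e-9
```

So trimming only matters if the project averages claims with a mean; with a median it is redundant, which is exactly the ;-) in the post above.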
dekim Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 20 Jan 06 Posts: 250 Credit: 543,579 RAC: 0 |
So what's the idea which of the two credit systems, the old or the new one, should be exported to the stats sites and be listed on the account pages (e.g. this one) ?? That's a tough one. I'm inclined to keep using the current credit stats to remain consistent. But the leader lists/stats on our web site will be ordered by the new credit values by default as in https://ralph.bakerlab.org/top_users.php. These are all great questions/comments, by the way. Thanks! keep up the feedback. it really helps. feet1st, your explanations are great and right on. I'll work on documentation when we come up with a final plan. |
tralala Send message Joined: 12 Apr 06 Posts: 52 Credit: 15,257 RAC: 0 |
does anyone object to rolling this out to Rosetta@home? Oh yes, I strongly object. Please don't hasten! First, let's discuss the details of how to determine the credit/model. How many WUs/results do you plan to receive before determining the credit/model? Then let's determine some real credit/model values here on RALPH, reissue the WUs here with the new credit factor, and check whether everyone is pleased. Then we should discuss questions like removing the top and bottom X percent, or "adjusting" the credit in order to match other projects. Then make an announcement on the HOMEPAGE of Rosetta that you plan to switch, and link to RALPH if anybody wants to see what is coming. Then, after a week, make the switch. That's a slow approach, but I think it does much less harm to Rosetta and the atmosphere in the message boards to quarrel over the new credit system BEFORE it gets deployed rather than after. |
kevint Send message Joined: 24 Feb 06 Posts: 8 Credit: 1,568,696 RAC: 0 |
does anyone object to rolling this out to Rosetta@home? I just crunched a few WUs for beta and I do object. I have several WUs that did 5 decoys, a couple that did 6, and most with 3 or fewer, all crunched in about the same time frame. If we were to look at cross-project equalization of credits, Rosetta would be sitting at or very near the bottom, granting the fewest credits per CPU hour out there. This could in fact hurt the project, as many crunchers will seek out the project that grants higher credits per hour, or is at least more consistent in granting credits per CPU hour. I have some old PIIIs that sometimes crunch for 2-3 hours while returning only a single decoy. Given this credit system, those machines would earn about 1 credit per hour; even SETI Enhanced is not that low on these guys, as I am seeing about 15 an hour from them with SETI, and 18 an hour with them at E@H. This credit system would make these machines nearly worthless IMO for returning any benefit in running them. |
dekim Volunteer moderator Project administrator Project developer Project scientist Send message Joined: 20 Jan 06 Posts: 250 Credit: 543,579 RAC: 0 |
Everyone keep in mind that the current standard BOINC crediting system will still be used. Also, minor modifications to the credit/model values will not make that much of a difference in the long run. The important thing to know is that given any credit/model value, users will be on a level playing field. I think we can all agree that this is the major drive/motivation for coming up with a new method. Making sure it closely matches the BOINC credit values is not as important since we will still use the old system along with the new. |
©2024 University of Washington
http://www.bakerlab.org