Credits

Message boards : Number crunching : Credits


Profile ashriel

Joined: 3 Mar 07
Posts: 11
Credit: 648
RAC: 0
Message 2937 - Posted: 29 Mar 2007, 13:12:51 UTC
Last modified: 29 Mar 2007, 13:13:04 UTC

Hello

I wonder why the granted credits are always lower, sometimes much lower, than the claimed credits.

Feel free to take a look

Regards,
Maion
ID: 2937
Slywy

Joined: 29 Mar 07
Posts: 1
Credit: 10,844
RAC: 3
Message 2981 - Posted: 2 Apr 2007, 10:39:54 UTC - in response to Message 2937.  
Last modified: 2 Apr 2007, 10:41:04 UTC

Hello

I wonder why the granted credits are always lower, sometimes much lower, than the claimed credits.

Feel free to take a look

Regards,
Maion


Overnight my computer claimed 24.85 credits and was granted 80.00! See here. Previously, the most it had earned was 3.73 . . .
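(For context on where the "claimed" number comes from: under the classic BOINC scheme it is derived from the host's Whetstone/Dhrystone benchmarks and CPU time. A rough sketch, assuming the commonly cited definition of one cobblestone as 1/200 of a day on a 1 GFLOPS reference host; all numbers below are hypothetical:)

```python
# Rough sketch of classic benchmark-based claimed credit (pre-CreditNew).
# Assumes 1 cobblestone = 1/200 day on a 1 GFLOPS / 1 GIPS reference
# host, i.e. 200 credits per reference-host-day. Hypothetical numbers.

COBBLESTONES_PER_DAY = 200.0

def claimed_credit(cpu_seconds, fpops_per_sec, iops_per_sec):
    """Claim = average of the two benchmarks, in units of the 1e9-ops
    reference host, scaled by CPU days."""
    benchmark_factor = (fpops_per_sec / 1e9 + iops_per_sec / 1e9) / 2.0
    return cpu_seconds / 86400.0 * benchmark_factor * COBBLESTONES_PER_DAY

# e.g. a host benchmarking 2 GFLOPS / 3 GIPS crunching for 4 hours:
print(round(claimed_credit(4 * 3600, 2e9, 3e9), 2))  # about 83.33
```

Since the claim depends only on benchmarks and CPU time, not on how much useful work was done, the grant can land well above or below it.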
ID: 2981
Odysseus

Joined: 4 May 07
Posts: 23
Credit: 16,331
RAC: 0
Message 3082 - Posted: 6 May 2007, 22:23:33 UTC

My first two tasks each took about four hours (on a Mac G4/733) and claimed between 12 and 13 credits. The first one, which was terminated by “Watchdog”, was granted 80 CS, but the second, which completed normally AFAICT, was granted less than 5. It certainly seems inconsistent—especially with no ‘quorum partners’ involved.
ID: 3082
Profile anders n

Joined: 16 Feb 06
Posts: 166
Credit: 131,419
RAC: 0
Message 3238 - Posted: 27 Jun 2007, 9:31:55 UTC

I wonder what makes the credit difference on this host here and on Rosetta.

Rosetta
Ralph

Anders n


ID: 3238
Profile Conan

Joined: 16 Feb 06
Posts: 359
Credit: 1,360,907
RAC: 39
Message 3241 - Posted: 28 Jun 2007, 8:15:03 UTC

Credits on Ralph and on Rosetta are much lower than on most other projects.
On my 4 computers the amount granted varies from less than 12 an hour to a very occasional 20 per hour (this is rare).
Running with a 6-hour runtime preference this gives an average of 13 credits an hour on my AMD 4800+ (2.4GHz) and Opteron 275 (2.2GHz), with 15 to 16 credits an hour on my Opteron 285 (2.6GHz).

I am led to believe that the amount of credit depends on the number of decoys produced? But I have not seen this happen in practice, with 12 decoys often giving more credit than, say, 26.

Going on previous discussions in the forums about this same issue, I don't see it being increased but rather staying as is.

As I have switched most of my output to Rosetta for the moment, my RAC average (for Rosetta) sits around 2,100 credits a day.
When using the same load balance across my computers for QMC I was getting an RAC average of over 5,500 credits a day.
Big difference.


ID: 3241
Profile anders n

Joined: 16 Feb 06
Posts: 166
Credit: 131,419
RAC: 0
Message 3242 - Posted: 28 Jun 2007, 10:01:01 UTC
Last modified: 28 Jun 2007, 10:01:25 UTC

Yes, I know that there are projects that give more credit/h (I even run some of them).

But this is Rosetta / Ralph, and credit should be about the same, or??

Anders n


ID: 3242
Profile Conan

Joined: 16 Feb 06
Posts: 359
Credit: 1,360,907
RAC: 39
Message 3252 - Posted: 29 Jun 2007, 16:17:17 UTC - in response to Message 3242.  

Yes, I know that there are projects that give more credit/h (I even run some of them).

But this is Rosetta / Ralph, and credit should be about the same, or??

Anders n



To break it down a bit more, I went back over my recorded results for both projects, sampled over this year:

AMD 4800+ (2.4GHz) Rosetta average of 43 WUs = 13.70 cr/h
AMD 4800+ (2.4GHz) Ralph average of 42 WUs = 14.56 cr/h

AMD Opteron 275 (2.2GHz) Rosetta Ave of 50 WUs = 14.38 cr/h
AMD Opteron 275 (2.2GHz) Ralph Ave of 44 WUs = 15.29 cr/h

No1 AMD Opteron 285 (2.6GHz) Rosetta Ave of 52 WUs = 16.85 cr/h
No1 AMD Opteron 285 (2.6GHz) Ralph Ave of 31 WUs = 15.09 cr/h

No2 AMD Opteron 285 (2.6GHz) Rosetta Ave of 23 WUs = 17.26 cr/h
No2 AMD Opteron 285 (2.6GHz) Ralph Ave of 27 WUs = 16.72 cr/h

As you can see, there is a fair difference between the two projects on the same computers.
It may have something to do with the test work units. I did ask once whether these Ralph work units are actual work units or not, but I don't recall getting an answer. So their actual value, and therefore any comparison to Rosetta, may not mean much.
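(For what it's worth, weighting each host's rate by its WU count, the overall averages from the table come out within a fraction of a credit/hour of each other; the gap shows up per host rather than in the totals. A quick arithmetic check, using the numbers from the table above:)

```python
# Quick check on the figures above: overall credit/hour per project,
# weighting each host's rate by its WU count. Numbers are copied from
# the table in this post.

rosetta = [(43, 13.70), (50, 14.38), (52, 16.85), (23, 17.26)]  # (WUs, cr/h)
ralph = [(42, 14.56), (44, 15.29), (31, 15.09), (27, 16.72)]

def weighted_cr_per_hour(samples):
    """WU-count-weighted average of the per-host credit/hour rates."""
    total_wus = sum(n for n, _ in samples)
    return sum(n * rate for n, rate in samples) / total_wus

print(round(weighted_cr_per_hour(rosetta), 2))  # about 15.36
print(round(weighted_cr_per_hour(ralph), 2))    # about 15.30
```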
ID: 3252
Profile feet1st

Joined: 7 Mar 06
Posts: 313
Credit: 113,747
RAC: 0
Message 3253 - Posted: 29 Jun 2007, 20:50:21 UTC - in response to Message 3241.  

I am led to believe that the amount of credit depends on the number of decoys produced? But I have not seen this happen in practice, with 12 decoys often giving more credit than, say, 26.


When we talk about credit per model, it is specific to a given batch of work. As you have seen, some tasks will do 100 models a day, and others will only do 30, or even 15. So credit for the tasks that do 100 models per day will be less per model than for those that only produce 15.

So, picture a batch of work that typically produces 50 models per day on a given machine. If everyone had that machine type, they would all do the same 50 models per day. And the credit issued would be exactly the same as the benchmarks, or credit claimed. But, it is never that simple. There are other machines, some are faster. These will do 65 models per day, and their benchmarks will differ as well. So now the average credit per model will depend upon how their benchmarks rated their machine.

Let's say their benchmarks rate the faster machine as being 20% faster. But, as you see, it is actually producing 30% more models per unit time. Thus there is a difference between the predicted ability to do work (as measured by the BOINC benchmarks) and the actual ability to do work (as measured by the number of models produced per day). This faster machine will have its benchmark figures (i.e. credit claim) averaged into all the others reported, and it will be granted credit based on the running average at the time it reports its results back. So it will be granted the same credit per model as your machine, and it will get that number of credits times the 65 models produced.

After that faster machine reports, since its ability to produce models exceeded the BOINC benchmarks' prediction (as compared with your machine's first report), the rolling average credit per model will drop slightly. This is basically because, with this report from the faster machine, we're learning that the tasks aren't as hard to crunch as the BOINC benchmarks would predict.

A slow machine then reports in, but its ability to crunch models, relative to its benchmarks, exactly matches your machine's. So its BOINC benchmarks are exactly half of your machine's, and it produces exactly 25 models per day. When this machine reports, it gets the rolling average credit per model (which is now somewhat lower than where we started) times its 25 models. And the net result is a slight increase in the rolling average credit per model, because this machine found these models harder to produce than the faster machine did.

So credit varies slightly with each reported result. And the actual crunch time varies slightly from model to model within a batch of tasks. But overall, the credit per day issued by the project should be fairly steady.
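(The rolling average described above can be sketched roughly like this. This is NOT the project's actual code; the class name and the numbers are hypothetical, a simplified reading of the mechanism in this post:)

```python
# Illustrative sketch of a per-batch rolling average of claimed credit
# per model, with credit granted at the averaged rate. Hypothetical
# mechanism and numbers, not the project's actual implementation.

class BatchCredit:
    """Tracks a running average of claimed-credit-per-model for one
    batch of work, and grants credit at that averaged rate."""

    def __init__(self):
        self.avg_credit_per_model = 0.0
        self.reports = 0

    def report(self, claimed_credit, models):
        # Fold this host's claim into the rolling average...
        claim_per_model = claimed_credit / models
        self.reports += 1
        self.avg_credit_per_model += (
            claim_per_model - self.avg_credit_per_model
        ) / self.reports
        # ...then grant at the averaged rate times models produced.
        return self.avg_credit_per_model * models

batch = BatchCredit()
granted_first = batch.report(claimed_credit=50.0, models=50)  # sets the rate
granted_fast = batch.report(claimed_credit=60.0, models=65)   # faster host pulls the rate down
```

The faster host claims less per model (60/65), so after it reports, the batch's average rate, and everyone's subsequent grants, drop slightly.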

That's why I've been trying to explain to people that the runtime preference does not impact credit.

I think what "Anders n" is questioning is when they have the same host attached to both Rosetta and Ralph, with the same benchmarks, and yet the claimed-to-granted credit ratio seems noticeably higher on Ralph (i.e. Ralph seems to grant less credit per credit claimed than Rosetta).

I'll offer two theories on what might cause this.

1) Tasks on Ralph are tests. Perhaps they found a high variability in runtime from one model to the next in the Ralph tests, or otherwise decided not to release the task to Rosetta.

2) The mix of machines between the two projects is different. Rosetta has many more results reported and going into the average, and there it appears your credit granted is roughly in line with your claims. Whereas on Ralph the number of results is much smaller, and perhaps the mix of machines is such that most produce more work than their benchmarks would predict (Core 2s seem to do this, so perhaps there are more Core 2s on Ralph).

...some combination of the two is certainly the case. The question is just to what degree each factors into what you are seeing.
ID: 3253
Profile Conan

Joined: 16 Feb 06
Posts: 359
Credit: 1,360,907
RAC: 39
Message 3254 - Posted: 30 Jun 2007, 1:47:10 UTC - in response to Message 3253.  

I am led to believe that the amount of credit depends on the number of decoys produced? But I have not seen this happen in practice, with 12 decoys often giving more credit than, say, 26.


When we talk about credit per model, it is specific to a given batch of work. As you have seen, some tasks will do 100 models a day, and others will only do 30, or even 15. So credit for the tasks that do 100 models per day will be less per model than for those that only produce 15.

So, picture a batch of work that typically produces 50 models per day on a given machine. If everyone had that machine type, they would all do the same 50 models per day. And the credit issued would be exactly the same as the benchmarks, or credit claimed. But, it is never that simple. There are other machines, some are faster. These will do 65 models per day, and their benchmarks will differ as well. So now the average credit per model will depend upon how their benchmarks rated their machine.

Let's say their benchmarks rate the faster machine as being 20% faster. But, as you see, it is actually producing 30% more models per unit time. Thus there is a difference between the predicted ability to do work (as measured by the BOINC benchmarks) and the actual ability to do work (as measured by the number of models produced per day). This faster machine will have its benchmark figures (i.e. credit claim) averaged into all the others reported, and it will be granted credit based on the running average at the time it reports its results back. So it will be granted the same credit per model as your machine, and it will get that number of credits times the 65 models produced.

After that faster machine reports, since its ability to produce models exceeded the BOINC benchmarks' prediction (as compared with your machine's first report), the rolling average credit per model will drop slightly. This is basically because, with this report from the faster machine, we're learning that the tasks aren't as hard to crunch as the BOINC benchmarks would predict.

A slow machine then reports in, but its ability to crunch models, relative to its benchmarks, exactly matches your machine's. So its BOINC benchmarks are exactly half of your machine's, and it produces exactly 25 models per day. When this machine reports, it gets the rolling average credit per model (which is now somewhat lower than where we started) times its 25 models. And the net result is a slight increase in the rolling average credit per model, because this machine found these models harder to produce than the faster machine did.

So credit varies slightly with each reported result. And the actual crunch time varies slightly from model to model within a batch of tasks. But overall, the credit per day issued by the project should be fairly steady.

That's why I've been trying to explain to people that the runtime preference does not impact credit.

I think what "Anders n" is questioning is when they have the same host attached to both Rosetta and Ralph, with the same benchmarks, and yet the claimed-to-granted credit ratio seems noticeably higher on Ralph (i.e. Ralph seems to grant less credit per credit claimed than Rosetta).

I'll offer two theories on what might cause this.

1) Tasks on Ralph are tests. Perhaps they found a high variability in runtime from one model to the next in the Ralph tests, or otherwise decided not to release the task to Rosetta.

2) The mix of machines between the two projects is different. Rosetta has many more results reported and going into the average, and there it appears your credit granted is roughly in line with your claims. Whereas on Ralph the number of results is much smaller, and perhaps the mix of machines is such that most produce more work than their benchmarks would predict (Core 2s seem to do this, so perhaps there are more Core 2s on Ralph).

...some combination of the two is certainly the case. The question is just to what degree each factors into what you are seeing.


Thanks 'feet1st' for the reply.
The gist of what you say is borne out by my 4 computers. The 2 slower ones get a higher amount per hour on Ralph and a lower one on Rosetta, whilst my 2 faster machines get a lower amount per hour on Ralph and a higher amount on Rosetta.
ID: 3254
Profile anders n

Joined: 16 Feb 06
Posts: 166
Credit: 131,419
RAC: 0
Message 3255 - Posted: 30 Jun 2007, 7:24:01 UTC - in response to Message 3252.  

It may have something to do with the test work units. I did ask once whether these Ralph work units are actual work units or not, but I don't recall getting an answer. So their actual value, and therefore any comparison to Rosetta, may not mean much.


If I understand what we are doing here correctly, we are crunching the same type of work units as will come to Rosetta later, but not all of them pass the test here.

What I don't know is whether they use the WUs as a science basis or if the results are "just" test runs for correct settings on Rosetta.

@Feet1st

I suspected that it could be the different computer mix that made the difference.

Anders n

ID: 3255
Profile anders n

Joined: 16 Feb 06
Posts: 166
Credit: 131,419
RAC: 0
Message 3256 - Posted: 30 Jun 2007, 9:18:32 UTC

Is there a site where you can see which BOINC clients we are using here on Ralph and Rosetta?



ID: 3256
Profile anders n

Joined: 16 Feb 06
Posts: 166
Credit: 131,419
RAC: 0
Message 3257 - Posted: 30 Jun 2007, 13:01:52 UTC - in response to Message 3256.  

Is there a site where you can see which BOINC clients we are using here on Ralph and Rosetta?

Maybe I have to write some more so that more people than me understand the question :)

What I mean is: how many WUs are crunched by 5.8.15 or 5.10.7 or.....

Anders n
ID: 3257
Profile feet1st

Joined: 7 Mar 06
Posts: 313
Credit: 113,747
RAC: 0
Message 3278 - Posted: 9 Jul 2007, 14:51:46 UTC

I don't know of a way for us to determine that. Certainly the project team could run queries and figure it out.

Are you wondering how the BOINC release relates to credit? It wouldn't affect credit granted. Only, perhaps, credit claimed. ...But then the claims are what get averaged together, so how the benchmarks are taken indirectly affects credits.
ID: 3278




©2018 University of Washington
http://www.bakerlab.org