Posts by Aaron Finney

21) Message boards : Current tests : New crediting system (Message 2150)
Posted 16 Aug 2006 by Aaron Finney
Post:

We astound each other :) So if a PGA golfer finished a round and reported his score as 17 (you keep your own score in golf), the rest of the players would have to accept it even though it is obviously false? Everyone who uses optimized clients is raising their credit claims to levels not based on the properties of their hardware. It's not a fair system when you have to use modified software (or subtract 60 from your golf score) to stay competitive.


Just because some people have found an open door allowing them to sidestep the system -DOES NOT MEAN- that the system in its design is not fair.

You do not reinvent the system simply because somebody found a back door. You close the door and put a lock on it.
22) Message boards : Current tests : New crediting system (Message 2148)
Posted 16 Aug 2006 by Aaron Finney
Post:
First, a question - I must say, DeKim (David?), I really, honestly think that you are going in the wrong direction here.


I am just adding more info for users for now. If you want to know how much actual work you've done compared to others, look at the new info once it's up. If not, just ignore it. My goal, in response to users and just plain old logic, is to offer a fairer credit system. -- David K


'Fair' is the wrong word, but I can see why you use it. It's much more political than saying 'The credit system will now be harder for malicious users to manipulate, at the expense of accuracy.'

I understand the problem, but I think the solution is improper.
23) Message boards : Current tests : New crediting system (Message 2146)
Posted 16 Aug 2006 by Aaron Finney
Post:
...I understand that you want to change the existing credit system, and because of that it is safe to infer that you felt the existing system wasn't working. Why?

The reasons why are numerous and sprawled throughout the Rosetta boards, including the infamous (and deleted) cheating thread.


I asked for an answer, not a generalization. Even if they are numerous, there is no better place to index them than here.

If you were not aware of it... the current BOINC implementation allows a user to basically modify a simple file with Notepad and claim their machine is 10x (or 1000x) faster than it really is. That's the basic premise of the need for change. And that's why many of the BOINC projects are changing in ways appropriate for each project's work.


Then -THAT- would be the problem to fix. As I said, encrypt these values and do not allow the GP access to the tools needed to change them.

In the end, the new system will still equate back to the FLOPS that BOINC purports to measure. But it will be much more difficult to modify your results and try to claim more credits.


The new system will be a system of averages. Nothing more.

Averages are hardly accurate. Where are my Swiss and German friends of yesteryear? Where is Jens? Where is Riedel? The founders of BOINC would be in an uproar about this change.
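
To make the contrast concrete, here is a minimal sketch of the two approaches being argued about: a per-host claim driven by the host's own (editable) benchmark versus a granted score taken from the average claim across hosts. The names and numbers are illustrative assumptions, not the actual BOINC or Rosetta code.

def claimed_credit(benchmark_gflops, cpu_hours, credit_per_gflop_hour=1.0):
    # What a host claims for itself; driven by its own, user-editable benchmark.
    return benchmark_gflops * cpu_hours * credit_per_gflop_hour

def averaged_grant(claims):
    # "System of averages": every host is granted the mean claim,
    # regardless of what it claimed for itself.
    return sum(claims) / len(claims)

honest = claimed_credit(2.0, 10)     # honest 2 GFLOPS host claims 20
cheater = claimed_credit(20.0, 10)   # host with an edited benchmark claims 200
print(averaged_grant([honest, cheater]))  # both are granted 110: the cheat is
                                          # diluted, but per-host accuracy is lost
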
24) Message boards : Current tests : New crediting system (Message 2145)
Posted 16 Aug 2006 by Aaron Finney
Post:


The current system places credits in the hands of individual participants... you essentially keep your own score. Want more credits? Just claim more credits by making your benchmarks higher than possible (some computers claim 15+ GFLOPS per CPU... even at 4 GHz and running two floating-point calculations a cycle, that's 8 GFLOPS... and CPUs don't get near theoretical). Don't get too greedy or you'll get zeroed out. It's like speeding: if you go the same speed as everyone else in the left lane, even if 10 over, you're not likely to get in trouble... if you're going 40 over in the median, you'll get busted. Not a very good system IMO.

The new system is using the law of averages to make the playing field fairer. It's taking the average credit claim from hundreds of machines and applying the same score to each.



No offense, but the logic for this change astounds me. The current system itself is not flawed in its fairness to all - on the contrary, it is the most fair. The problem is that the current system places too much responsibility for one's credit in the hands of the unknown. The only fair and appropriate way to fix it is to remove the power to control one's own credit from the CURRENT system.

What has been proposed here is to add back in the inaccuracies of the past at the cost of losing fairness to all.

Credit manipulation should never have been possible for the public. These values should be encrypted, and no access to them should be provided to the GP.
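
As a rough illustration of "close the door and put a lock on it" rather than averaging, the benchmark values could be signed so that a hand-edited client file is rejected. This is only a sketch under the assumption of a project-issued secret; BOINC does not work this way today, and a secret stored on the client could still be extracted by a determined user.

import hmac, hashlib

SECRET = b"per-host key issued by the project"   # assumption for this sketch

def sign_benchmarks(whetstone_mips, dhrystone_mips):
    # Sign the measured benchmark values so later tampering is detectable.
    msg = f"{whetstone_mips:.1f}|{dhrystone_mips:.1f}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_benchmarks(whetstone_mips, dhrystone_mips, signature):
    return hmac.compare_digest(sign_benchmarks(whetstone_mips, dhrystone_mips), signature)

sig = sign_benchmarks(1320.0, 1249.0)           # values actually measured by the client
print(verify_benchmarks(1320.0, 1249.0, sig))   # True  -> claim accepted
print(verify_benchmarks(13200.0, 1249.0, sig))  # False -> the Notepad edit is caught
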
25) Message boards : Current tests : New crediting system (Message 2141)
Posted 16 Aug 2006 by Aaron Finney
Post:
First, a question - I must say, DeKim (David?), I really, honestly think that you are going in the wrong direction here. I understand that you want to change the existing credit system, and because of that it is safe to infer that you felt the existing system wasn't working. Why?

Back in the beginning (pre-BOINC), SETI@Home used a system that was not unlike your proposed method. There also was no quorum for results, and the credit system was fairly basic - 1 credit per workunit.

Everyone thought this was fine and fair, but in reality this system was more bugged and UNFAIR than you could imagine. Some workunits would take much, much longer than others, and there really wasn't any way that you could determine an average run time for every workunit without completely processing all of them (which defeats the purpose of having the DC project), ALTHOUGH, with enough care and thought, you could predict runtimes for 95%+ of the work.

When BOINC was created, it was a harsh change to move to a crediting system based on the actual work done, measured closer to the process-thread level, rather than on arbitrary values assigned to each workunit. If you want to change the existing system, work with David Anderson and Rom Walton and see if you can iron out the wrinkles. The -ONLY- fair way of granting credit is to calculate the actual work done using total FLOPS or some other completely scientific method, NOT the only seemingly accurate (yet still arbitrary) averaging system you have implemented here.

Please understand, I have been with BOINC since version 0.07, and SETI for years before that. The problems that exist now with the current credit system FAR OUTWEIGH the problems we had beforehand. I hope that you take this message to heart, and understand that what I feel you are doing is taking a step backwards in your attempt to be revolutionary. I also support you and this project in whatever changes you make; however, this doesn't mean that I am not more than mildly upset at the change.

I'm sorry that I haven't spoken on the subject earlier, but work has had a hold of me of late. As we speak, I'm at the world AIDS conference in Toronto, and could not wait until I returned home to comment here, as I feel that strongly that you could be making a big mistake. CERTAINLY (and at the very least), I would convey that MUCH more testing is needed.

Credit is something that lives at the core of many of your constituents. Monopoly just isn't Monopoly without Boardwalk and Free Parking. Tread extremely carefully here if you do nothing else!

We can limit the number of WUs you can abort. We can change the workunit distribution to be more homogeneous. We can update the credit/model values. We can penalize people trying to cherry-pick. It seems easy enough to me if it becomes a problem. Just some ideas off the top of my head.
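
For what "calculate actual work done using total FLOPS" could look like, here is a minimal sketch in which the science application counts its own floating-point operations and credit is derived from that count instead of from an editable benchmark. The scale constant and names are assumptions for illustration, not BOINC's actual implementation.

SECONDS_PER_DAY = 86400.0
CREDITS_PER_GFLOP_DAY = 200.0   # roughly the BOINC "cobblestone" scale (assumed here)

def credit_from_counted_flops(flops_counted):
    # Convert an application-counted FLOP total into credit.
    gflop_days = flops_counted / 1e9 / SECONDS_PER_DAY
    return gflop_days * CREDITS_PER_GFLOP_DAY

# A workunit whose science code tallied 3.456e13 floating-point operations:
print(round(credit_from_counted_flops(3.456e13), 2))   # identical on every host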

26) Message boards : RALPH@home bug list : RALPH Version News! - Version 4.91 released! (Message 898)
Posted 17 Mar 2006 by Aaron Finney
Post:
Might want to Un-sticky this thread, and sticky the 4.93 thread. :-D
27) Message boards : Current tests : Switching between projects with applications removed from memory (Message 875)
Posted 14 Mar 2006 by Aaron Finney
Post:
Had a problem with this on a workunit that had run for 60 hours, application version 4.92:

3/13/2006 7:40:03 PM||Suspending computation and network activity - user request
3/13/2006 7:40:03 PM|climateprediction.net|Pausing result sulphur_id14_000856696_0 (removed from memory)
3/13/2006 7:40:03 PM|ralph@home|Pausing result TEST_HOMOLOG_ABINITIO_hom008_1fna__220_3_2 (removed from memory)
3/13/2006 7:40:04 PM|ralph@home|Unrecoverable error for result TEST_HOMOLOG_ABINITIO_hom008_1fna__220_3_2 ( - exit code -1073741819 (0xc0000005))
3/13/2006 7:40:04 PM||request_reschedule_cpus: process exited
3/13/2006 7:40:04 PM|ralph@home|Computation for result TEST_HOMOLOG_ABINITIO_hom008_1fna__220_3_2 finished
3/13/2006 7:40:05 PM||request_reschedule_cpus: process exited
3/13/2006 7:40:07 PM||Resuming computation and network activity
3/13/2006 7:40:07 PM||request_reschedule_cpus: Resuming activities
3/13/2006 7:40:07 PM||Allowing work fetch again.
3/13/2006 7:40:07 PM||Resuming round-robin CPU scheduling.
28) Message boards : RALPH@home bug list : It looks like a crash, but it's not... (Message 671)
Posted 26 Feb 2006 by Aaron Finney
Post:
My CPDN work does this also. I haven't paid it any attention.
29) Message boards : Feedback : Difference F@H and R@H??? (Message 670)
Posted 26 Feb 2006 by Aaron Finney
Post:
Hi,

I have been looking on their pages but I wasn't able to figure out what the differences are.

Seems to me that both have the same aims...
Isn't it a waste to do this in 2 separate projects?


They are similar in nature in that they both have to do with protein structure, but attack different areas of the problem.

Also - this is the ALPHA test project for Rosetta@Home. These types of questions are better asked on the Rosetta@Home message boards, found at that project's site: http://boinc.bakerlab.org/rosetta/
30) Message boards : Feedback : Frontpage wording.... (Message 667)
Posted 26 Feb 2006 by Aaron Finney
Post:
Yes, I updated the news. Thanks for catching this.


See, even frontpage news needs alpha testing sometimes ;-D


31) Message boards : RALPH@home bug list : Rosetta does not give up CPU time to cleanmgr.exe (Message 569)
Posted 24 Feb 2006 by Aaron Finney
Post:
It kind of defeats the purpose to disable the process if I WANT to run it, yeah?

32) Message boards : Feedback : Frontpage wording.... (Message 568)
Posted 24 Feb 2006 by Aaron Finney
Post:
"Please abort work if you are running LATER versions"

Don't you mean earlier versions? I've got some application version 4.86 work... that's what you mean, right?
33) Message boards : RALPH@home bug list : Rosetta does not give up CPU time to cleanmgr.exe (Message 548)
Posted 24 Feb 2006 by Aaron Finney
Post:
No, this is not the idle cleanmgr that checks for compressed files. I'm talking about the process that is started when you open 'My Computer', right-click on the C: drive, select Properties, and then click the button labeled 'Disk Cleanup'. Although this is the same process, it is loaded differently, and for some reason BOINC apps (SETI, LHC, Rosetta, Ralph, etc.) are not giving up CPU time to it.

When Rosetta is running, the cleanmgr process crawls along and takes over 22 minutes to complete.

When Rosetta is not running, the cleanmgr process only takes 1 minute and 20 seconds.

This is not due to some sort of swap space or available memory issue either; it is due to Rosetta not giving up the CPU time.
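
One way to narrow this down would be to compare the scheduling priority and CPU share of cleanmgr.exe against the BOINC science apps while Disk Cleanup is running; the BOINC apps normally run at idle priority and should be the ones starved. A rough diagnostic sketch, assuming the third-party psutil package is installed:

import time
import psutil

WATCH = ("cleanmgr", "rosetta", "boinc")   # process-name prefixes of interest

procs = [p for p in psutil.process_iter(["name"])
         if p.info["name"] and p.info["name"].lower().startswith(WATCH)]

for p in procs:
    p.cpu_percent(None)        # prime the per-process CPU counters
time.sleep(5)                  # sample over a 5-second window

for p in procs:
    try:
        print(f"{p.info['name']:<20} cpu={p.cpu_percent(None):5.1f}%  priority={p.nice()}")
    except psutil.NoSuchProcess:
        pass
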
34) Message boards : Feedback : Volunteer Tester (Message 546)
Posted 24 Feb 2006 by Aaron Finney
Post:
Hey David, I know it's a long shot, but any chance we could get a "Volunteer Tester" or "Alpha Tester" etc. title on our message board IDs?

Although... if it's a lot of work, of course disregard :)
35) Message boards : RALPH@home bug list : Rosetta does not give up CPU time to cleanmgr.exe (Message 512)
Posted 23 Feb 2006 by Aaron Finney
Post:
Rosetta is not giving up CPU % to the Disk Cleanup wizard inside Windows XP Home AND PRO.

cleanmgr.exe uses 97%+ CPU when BOINC activities are suspended (i.e. when Rosetta/RALPH is removed from memory and suspended); when BOINC is resumed, the process appears to "hang" and cleanmgr.exe's CPU % drops below 3%.
This MAY be a problem with MS, as the same problem appears with LHC@home also.

This appears to be a problem on at least three of my connected PCs; I have only tested on the following three so far:

1x Pentium 3.06 GHz/533 MHz HT Northwood, 1024 MB ECC 1200 MHz RDRAM, running XP Pro
1x Pentium 3.06 GHz/533 MHz HT Northwood, 512 MB ECC 1200 MHz RDRAM, running XP Pro
1x Pentium III 800 MHz w/384 MB DDR PC100 RAM, running XP Home
36) Message boards : Cafe RALPH : (DO NOT POST HERE) This is the Moderators Archive thread (Message 447)
Posted 22 Feb 2006 by Aaron Finney
Post:
Hey Mod9, thanks for the typo correction in that other post. :)
37) Message boards : Current tests : CPU Run Time preference (Message 398)
Posted 20 Feb 2006 by Aaron Finney
Post:
I've tried all the preferences: the default (1 hour), 2 hours, 4 hours, 8 hours, 16 hours, 1 day, 2 days, and 4 days.

The only workunit that has failed was due to the "leave application in memory=NO" bug; it crashed after 25 hours on the 4.84 application version while it was preempted due to benchmarks running (see the bug report section).

Otherwise, this setting appears to be working flawlessly, and IMHO the appmem=no bug is not dependent on the CPU run time setting.

Notes :

1. Changing the preference to a longer time "mid-work" will affect running workunits if BOINC Manager updates the project at that time.

2. Similarly, if the preference is changed to a shorter time, this will also affect running workunits, even if the run time is currently WELL over the new preference. I.e., if you have 2 workunits that have been running for 24 hours and switch the preference to "run work for 2 hours", each will finish its current model and then upload the work to the server without complaint.

3. Users COULD theoretically push work WAY WAYYYYY PAST its deadline by downloading a large batch of work at the 2-hour preference and then changing to the 4-day preference after downloading 20-30 workunits. Some sort of safety measure should be put in place to keep this from happening (a rough sketch of such a check follows below). Perhaps changing the preference to a longer time should extend the deadline for work the user currently has downloaded accordingly, or it should flash some sort of warning message about how updating the client with a large cache could make the workunits useless. {{{Please note: while it is easy and quick to say on this issue "Well, then don't keep large caches" or "People need to use their heads" or "Don't change and update mid-work", those statements always look good on paper but never quite work out in reality. You have to plan for the lowest common denominator, and you should assume that that person is a complete fool (and knows nothing of cache management or EDF).}}}
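
A hypothetical version of the safety check suggested in note 3: after a run-time preference change, walk the cached queue and warn if the new per-workunit run time pushes projected completion past any deadline. Names, structure, and numbers are illustrative assumptions only, not client code.

from datetime import datetime, timedelta

def warn_if_cache_will_miss_deadlines(queued_workunits, new_runtime_hours, cpus=1, now=None):
    # queued_workunits: list of (name, deadline) tuples for cached, unstarted work.
    now = now or datetime.utcnow()
    finish = now
    per_wu = timedelta(hours=new_runtime_hours)
    for name, deadline in sorted(queued_workunits, key=lambda wu: wu[1]):
        finish += per_wu / cpus            # rough serial estimate per CPU
        if finish > deadline:
            print(f"WARNING: {name} is projected to miss its deadline "
                  f"({deadline:%Y-%m-%d %H:%M}) at the new {new_runtime_hours}h preference.")

# 25 cached WUs fetched under a 2-hour preference, deadlines ~10 days out,
# then the preference is switched to 4 days (96 hours):
queue = [(f"wu_{i:02d}", datetime.utcnow() + timedelta(days=10)) for i in range(25)]
warn_if_cache_will_miss_deadlines(queue, new_runtime_hours=96)
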
38) Message boards : RALPH@home bug list : Report "failure when switching projects without keeping applications in memory" bugs here (Message 355)
Posted 20 Feb 2006 by Aaron Finney
Post:
Got a bug here..

2/19/2006 6:24:26 PM||Suspending computation and network activity - running CPU benchmarks
2/19/2006 6:24:26 PM|ralph@home|Pausing result BARCODE_30_1fna__209_15_0 (removed from memory)
2/19/2006 6:24:26 PM|ralph@home|Pausing result BARCODE_30_1cc8A_209_16_0 (removed from memory)
2/19/2006 6:24:27 PM|ralph@home|Unrecoverable error for result BARCODE_30_1fna__209_15_0 ( - exit code -1073741819 (0xc0000005))
2/19/2006 6:24:27 PM||request_reschedule_cpus: process exited
2/19/2006 6:24:27 PM|ralph@home|Computation for result BARCODE_30_1fna__209_15_0 finished
2/19/2006 6:24:28 PM||Running CPU benchmarks
2/19/2006 6:25:27 PM||Benchmark results:
2/19/2006 6:25:27 PM|| Number of CPUs: 2
2/19/2006 6:25:27 PM|| 1320 double precision MIPS (Whetstone) per CPU
2/19/2006 6:25:27 PM|| 1249 integer MIPS (Dhrystone) per CPU
2/19/2006 6:25:27 PM||Finished CPU benchmarks
2/19/2006 6:25:28 PM||Resuming computation and network activity
2/19/2006 6:25:28 PM||request_reschedule_cpus: Resuming activities
2/19/2006 6:25:28 PM|ralph@home|Restarting result BARCODE_30_1cc8A_209_16_0 using rosetta_beta version 484
2/19/2006 6:25:28 PM|ralph@home|Starting result BARCODE_30_1a19A_209_16_0 using rosetta_beta version 484


Seems that the problem happened when it was running benchmarks. :( That was a workunit that had been crunching for 25 hours. Now, granted, it was with the 4.84 application version, but I can't seem to get any more work here.
39) Message boards : Current tests : Model # 313??? (Message 351)
Posted 20 Feb 2006 by Aaron Finney
Post:
Depends on what you have set in your RALPH@home preferences under target CPU time; unless something is wrong with that WU, it should run close to whatever you set there. And you should not be repeating anything - each model is a new try as far as I know :)


Well, DKim said that there was a 99-model limit... I'm kind of asking whether or not that limit has been removed :)
40) Message boards : Current tests : Model # 313??? (Message 338)
Posted 19 Feb 2006 by Aaron Finney
Post:
How many models will my computer actually run on a single workunit?

I've been crunching one workunit for almost 24 hours and it's up to Model: 313.

How many are in a single WU, or am I processing some over and over?
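
As a back-of-the-envelope check (illustrative only, not project code): the number of models a workunit produces is roughly the target CPU run time divided by how long one model takes on the host, so long run-time preferences yield large model counts.

def expected_models(target_runtime_hours, minutes_per_model):
    # Rough estimate: models completed ~ run time / time per model.
    return int(target_runtime_hours * 60 / minutes_per_model)

# ~24 hours of crunching at ~4.5 minutes per model lands near the Model 313 above:
print(expected_models(24, 4.5))   # -> 320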

