Message boards : Number crunching : 261 TFLOPS round the clock - running out of tasks soon, it seems
sgaboinc (Joined: 2 Apr 14; Posts: 282; Credit: 208,966; RAC: 0)
Rosetta@home must be one of the most powerful volunteer distributed computing setups on Earth: 261.965 TFLOPS, concurrently running 902,484 jobs and completing 210,754 successes in the last 24 hours. It seems we'd run out of tasks soon :p lol. Thanks also to the researchers/scientists for creating all those work units - that's truly cutting-edge research :D
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=6753
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=6726
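For scale: at that sustained rate the network performs about 2.3 × 10^19 floating-point operations per day, or roughly 10^14 per completed task. A back-of-envelope sketch in Python, using the figures quoted above:

```python
# Back-of-envelope using the figures quoted above.
sustained_flops = 261.965e12          # 261.965 TFLOPS, sustained
tasks_per_day = 210_754               # successes in the last 24 h

flop_per_day = sustained_flops * 86_400        # seconds per day
flop_per_task = flop_per_day / tasks_per_day

print(f"{flop_per_day:.3e} FLOPs per day")     # ~2.263e+19
print(f"{flop_per_task:.3e} FLOPs per task")   # ~1.074e+14, i.e. ~107 TFLOP each
```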
sgaboinc (Joined: 2 Apr 14; Posts: 282; Credit: 208,966; RAC: 0)
The new benchmark for measuring computing power is probably how many of the most complex R@h models/decoys you can complete in 6 hours :p ;D lol
Betting Slip (Joined: 26 Sep 05; Posts: 71; Credit: 5,702,246; RAC: 0)
> Rosetta@home must be one of the most powerful volunteer distributed computing setups on Earth: 261.965 TFLOPS

Not even close. I believe Folding@home has 40 PFLOPS, and GPUGrid has 1.7 PFLOPS but has been over 2 PFLOPS.
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> Not even close. I believe Folding@home has 40 PFLOPS, and GPUGrid has 1.7 PFLOPS but has been over 2 PFLOPS.

Folding@home: 16 PFLOPS
Einstein@Home: 2.2 PFLOPS
Poem@Home: 600 TFLOPS
Rosetta@home: 278 TFLOPS
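These per-project figures are generally derived from BOINC credit statistics rather than measured directly. Assuming the standard Cobblestone definition (a host sustaining 1 GFLOPS earns 200 credits per day), a project's recent average credit converts to an estimated throughput like this:

```python
# Estimate a BOINC project's throughput from its total recent average
# credit (RAC), assuming the standard Cobblestone definition:
# a host sustaining 1 GFLOPS earns 200 credits per day.
CREDITS_PER_GFLOPS_DAY = 200

def rac_to_teraflops(total_rac: float) -> float:
    gigaflops = total_rac / CREDITS_PER_GFLOPS_DAY
    return gigaflops / 1_000

# Hypothetical input: a project earning ~55.6 million credits/day
# works out to roughly the 278 TFLOPS quoted for R@h above.
print(f"{rac_to_teraflops(55.6e6):.0f} TFLOPS")   # -> 278
```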
David E K - Volunteer moderator, Project administrator, Project developer, Project scientist (Joined: 1 Jul 05; Posts: 1018; Credit: 4,334,829; RAC: 0)
We may not use as much computing for our research (and perhaps one should consider how much computing is necessary), but 2015 still had exciting research developments which relied on R@h volunteers. Congrats and Happy New Year! Just to highlight a few:

- Successful design of repeat proteins. https://www.bakerlab.org/index.php/2015/12/31/exploring-the-repeat-protein-universe-through-computational-protein-design/
- Large scale structure determination using co-evolution information. https://www.bakerlab.org/index.php/2015/12/01/large-scale-determination-of-previously-unsolved-protein-structures-using-evolutionary-information/
- CASP11 success: 3 invited papers, 4 invited talks, and a successful blind prediction of a large, topologically complex protein with unprecedented accuracy of less than 3 angstroms RMSD over 223 residues.
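For context, that RMSD figure is the root-mean-square deviation between predicted and experimentally determined Cα coordinates after optimal rigid-body superposition. A minimal NumPy sketch of the metric using the Kabsch algorithm; the coordinates below are random stand-ins, not real structures:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """C-alpha RMSD between two (N, 3) coordinate arrays after
    optimal rigid-body superposition (Kabsch algorithm)."""
    # Center both structures on their centroids.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix.
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))   # guard against improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Stand-in example: a 223-residue model vs. a perturbed "native".
model = np.random.rand(223, 3) * 50     # C-alpha coordinates, angstroms
native = model + np.random.normal(scale=1.5, size=model.shape)
print(f"RMSD: {kabsch_rmsd(model, native):.2f} A")
```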
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary)

?? I don't understand. Wouldn't you like extra computational power from, for example, GPU or CPU optimizations (SSE/AVX) or, simply, additional CPUs?
Jim1348 (Joined: 19 Jan 06; Posts: 881; Credit: 52,257,545; RAC: 0)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary)

I think I would phrase that differently. What would additional computing power be used for? Different studies, more accuracy for the present studies, faster return of results...? I don't see anything that implies they don't want extra computing. But how much marginal value does it add?
David E K - Volunteer moderator, Project administrator, Project developer, Project scientist (Joined: 1 Jul 05; Posts: 1018; Credit: 4,334,829; RAC: 0)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary)

Would we like more computing power? Yes, but our computing demand fluctuates based on specific research projects and the protocols being tested/developed. For example, there might be a spike in demand next week or a month from now, hypothetically speaking.
David E K - Volunteer moderator, Project administrator, Project developer, Project scientist (Joined: 1 Jul 05; Posts: 1018; Credit: 4,334,829; RAC: 0)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary)

Yes, these are the questions that should be addressed.
Timo (Joined: 9 Jan 12; Posts: 185; Credit: 45,649,459; RAC: 0)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary)

I think David was comparing R@h to the projects [VENETO] boboviz mentioned above. I don't think David meant to suggest that R@h wouldn't benefit from more power; he simply pointed out that great science is being achieved thanks to the resources currently available, and while more would be nice, there is plenty that can be done with what is at hand.

**38 cores crunching for R@H on behalf of cancercomputer.org - a non-profit supporting High Performance Computing in Cancer Research
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> What would additional computing power be used for? Different studies, more accuracy for the present studies, faster return of results...?

A lot of things, I suppose... :-)

> Would we like more computing power? Yes, but our computing demand fluctuates based on specific research projects and the protocols being tested/developed. For example, there might be a spike in demand next week or a month from now, hypothetically speaking.

OK, now I understand. Rosetta@home doesn't seem to be an "empty-queue" BOINC project, so I had thought that FLOPS were very important for you.
David E K - Volunteer moderator, Project administrator, Project developer, Project scientist (Joined: 1 Jul 05; Posts: 1018; Credit: 4,334,829; RAC: 0)
> What would additional computing power be used for? Different studies, more accuracy for the present studies, faster return of results...?

Faster return of results - there are other ways to get at this, like methods/protocol development, in addition to aiming for improved results. For example, we are currently working on the score function, improved sampling methods (particularly given co-evolution contacts), and model selection. For large and topologically complex proteins, we need to think of and develop new approaches to the sampling problem.
sgaboinc (Joined: 2 Apr 14; Posts: 282; Credit: 208,966; RAC: 0)
> We may not use as much computing for our research (and perhaps one should consider how much computing is necessary), but 2015 still had exciting research developments which relied on R@h volunteers. Congrats and Happy New Year! Just to highlight a few:

Thanks David, it's heart-warming results like those that bring me back from time to time to crunch jobs :)
https://www.ipd.uw.edu/big-moves-in-protein-structure-prediction-and-design/
I'd also like to thank the scientists and researchers working with/on Rosetta / Rosetta@home. After all, while there are petaflops of computing power from kind volunteers, there are only so many scientists and researchers pursuing so many elaborate bleeding-edge protein designs and simulations, and that is a very human effort :) Cheers and happy new year to all :D
sgaboinc (Joined: 2 Apr 14; Posts: 282; Credit: 208,966; RAC: 0)
Often I look at the size of the queue before deciding to join the crunch/fold; a big queue seems like a bigger challenge out there to join :D
sgaboinc (Joined: 2 Apr 14; Posts: 282; Credit: 208,966; RAC: 0)
Off-topic: apart from the scientific optimizations, I'd guess it's rather important to recognise that this is a 'social media/community' project, i.e. it helps to keep the project 'fun' and 'interesting' (in part that comes from the results announced).

I'd think that suggestions like badges would help. Not many people may be academically inclined enough to attempt to understand the challenges of protein prediction, but some may find 'collecting badges' - say, for credit milestones of 10,000, 100,000, 500,000, 1 million credits, etc. - a motivating pursuit (a sketch of such tiers follows below). The badges could perhaps depict the large and complex proteins or complicated docking predictions made thus far on R@h.
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=6171#74892
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=6457#76814

Also, the 'top predictions hall-of-fame' looks somewhat old:
https://boinc.bakerlab.org/rosetta/forum_thread.php?id=6759
https://boinc.bakerlab.org/rosetta/rah_top_predictions.php
It'd be good if researchers/scientists could post some of the discovered/large/interesting predictions on the 'top predictions' page :D
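A minimal sketch of how such credit-milestone badges might be assigned; the tier names and thresholds here are purely hypothetical, echoing the milestones suggested in the post above:

```python
# Hypothetical badge tiers keyed by total credit. The thresholds echo
# the milestones suggested above; the names are made up.
BADGE_TIERS = [
    (1_000_000, "Complex Docking"),
    (500_000, "Large Protein"),
    (100_000, "Beta Barrel"),
    (10_000, "Alpha Helix"),
]

def badge_for(total_credit: float):
    """Return the highest badge tier earned, or None below the first milestone."""
    for threshold, name in BADGE_TIERS:
        if total_credit >= threshold:
            return name
    return None

print(badge_for(208_966))   # -> "Beta Barrel"
```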
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> Off-topic: apart from the scientific optimizations, I'd guess it's rather important to recognise that this is a 'social media/community' project, i.e. it helps to keep the project 'fun' and 'interesting' (in part that comes from the results announced).

For example, more use of Twitter would be welcome!
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> Faster return of results - there are other ways to get at this, like methods/protocol development, in addition to aiming for improved results. For example, we are currently working on the score function, improved sampling methods (particularly given co-evolution contacts), and model selection. For large and topologically complex proteins, we need to think of and develop new approaches to the sampling problem.

That's very interesting. When I think of "faster return", I don't think only of shorter WUs, but also of more complex simulations. A theoretical example: I download a WU that makes 3 models/decoys during the 2h runtime I have set in my profile. If you introduce optimizations (maybe AVX or others), during the same time my CPU makes 4 models/decoys. Or, if you introduce bigger and more complex simulations, these still complete in a reasonable time by using all the computational options of my CPU. Is this correct?
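That is roughly how the target-runtime preference is usually described: a task keeps generating decoys until the user's preferred CPU time is used up, so a faster or better-optimized client simply completes more models per work unit, and a bigger model yields fewer of them in the same window. A simplified sketch of that loop (the timings are illustrative, not actual Rosetta behavior):

```python
# Simplified model of the R@h target-runtime preference: a task keeps
# producing decoys while the next one still fits in the user's window.
# The per-decoy cost figures are illustrative, not measured.
def run_work_unit(target_hours: float, hours_per_decoy: float) -> int:
    elapsed, decoys = 0.0, 0
    while elapsed + hours_per_decoy <= target_hours:
        elapsed += hours_per_decoy   # stand-in for running one simulation
        decoys += 1
    return decoys

print(run_work_unit(2.0, 0.60))   # baseline client:  3 decoys in 2 h
print(run_work_unit(2.0, 0.45))   # optimized client: 4 decoys in 2 h
```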
David E K - Volunteer moderator, Project administrator, Project developer, Project scientist (Joined: 1 Jul 05; Posts: 1018; Credit: 4,334,829; RAC: 0)
Had we just increased our BOINC sampling for our successful T0806 CASP11 target 10x, 100x, or even 1000x, we probably still would not have succeeded. It required new approaches to improve ab initio modeling - using co-evolution data along with an iterative protocol that was actually run locally on just a handful of nodes. I'm not disagreeing that more computing would be helpful and desired, but there are also interesting scientific problems that we'd like to work on - score function, sampling, selection, informatics, etc.
[VENETO] boboviz (Joined: 1 Dec 05; Posts: 1994; Credit: 9,524,889; RAC: 7,500)
> It required new approaches to improve ab initio modeling - using co-evolution data along with an iterative protocol that was actually run locally on just a handful of nodes. I'm not disagreeing that more computing would be helpful and desired, but there are also interesting scientific problems that we'd like to work on - score function, sampling, selection, informatics, etc.

2016 will be an interesting year!!! :-)

P.S. This is also a CASP year!