Message boards : Number crunching : Large proteins
Author | Message |
---|---|
Admin Project administrator Send message Joined: 1 Jul 05 Posts: 4805 Credit: 0 RAC: 0 |
After talking to the researcher whose jobs make up the 12v1n_ batch, we decided to cancel the remaining jobs. The majority of these jobs have already completed successfully. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"We may consider boosting credits for these jobs." I'm not so interested in credits. I'm interested in scientific results. And I'm interested in R@H making the best possible use of my PC. |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1681 Credit: 17,854,150 RAC: 20,118 |
"We may consider boosting credits for these jobs." "I'm not so interested in credits." But many people are. And these Tasks are important, so they need to be run and not aborted just because they may stop other Tasks from running at the time. The possible extra Credit will simply offset any loss in Credit from the other Tasks that can't run at that time. So overall it will still be much the same - mostly the same Credit awarded, with the odd peak and dip here and there. Grant Darwin NT |
Feet1st Send message Joined: 30 Dec 05 Posts: 1755 Credit: 4,690,520 RAC: 0 |
If you are also crunching small memory tasks, such as WCG, what loss of credit exists? Add this signature to your EMail: Running Microsoft's "System Idle Process" will never help cure cancer, AIDS nor Alzheimer's. But running Rosetta@home just might! https://boinc.bakerlab.org/rosetta/ |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1681 Credit: 17,854,150 RAC: 20,118 |
"If you are also crunching small memory tasks, such as WCG, what loss of credit exists?" Overall, other projects won't be affected - Resource share settings take care of that. The only impact will be on Rosetta Credit, as other Rosetta work won't be able to run at the same time as these Tasks are being processed. Some systems won't be affected (huge amounts of RAM, giving a good RAM-to-thread ratio), many will be affected (a much lower RAM-to-thread ratio), others will be hardly affected (few cores/threads and a reasonable amount of RAM, so a good RAM-to-thread ratio), and others won't be affected at all (not enough RAM to be allocated the Tasks). Grant Darwin NT |
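As a rough illustration of that RAM-to-thread trade-off, the sketch below estimates how many Tasks could run side by side on a few example machines. The 4 GB figure for a large Task and 1.5 GB for a normal one are assumptions taken from this discussion, not project settings, and BOINC's own scheduling is more involved than this.

```python
# Back-of-the-envelope estimate of how the RAM-to-thread ratio limits the
# number of Tasks that can run at once. Per-task sizes are assumptions from
# this thread (up to ~4 GB for a large Task, well under 2 GB for a normal
# one), not official project figures.

def concurrent_tasks(total_ram_gb, threads, boinc_mem_pct=90,
                     large_gb=4.0, normal_gb=1.5):
    """Estimate how many large and normal Rosetta Tasks fit side by side."""
    budget = total_ram_gb * boinc_mem_pct / 100.0       # RAM BOINC may use
    large = min(threads, int(budget // large_gb))
    leftover = budget - large * large_gb
    normal = min(threads - large, int(leftover // normal_gb))
    return large, normal

for ram_gb, threads in [(16, 8), (32, 16), (64, 16)]:
    large, normal = concurrent_tasks(ram_gb, threads)
    idle = threads - large - normal
    print(f"{ram_gb} GB / {threads} threads: {large} large + {normal} normal, {idle} threads idle")
```

On the 16 GB / 8-thread example, the memory budget covers only three large Tasks plus one normal one, leaving half the threads without Rosetta work - the Credit dip described above. The 64 GB machine, with a much better RAM-to-thread ratio, is barely affected.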
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1681 Credit: 17,854,150 RAC: 20,118 |
One more thought - a 32bit programme can address 4GB of RAM, so those programmes will still be able to process the new Work Units. However a 32bit OS such as Windows can only access up to around 3.6GB (and often less with PCIe cards in the system) due to reserved addresses - so these Tasks can really only go to 64bit OS systems. Yes, they often won't need the full 4GB, but if they get over 3GB there won't be enough left for the OS and much else, and a lot of drive thrashing will follow. Grant Darwin NT |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"I'm not so interested in credits." "But many people are." I know, I know. "And these Tasks are important, so they need to be run and not aborted just because they may stop other Tasks from running at the time." If one of these "big WUs" goes into "waiting for memory", a volunteer must be present to resolve the situation (or, maybe, the WU will just run up against its deadline). I think this new wave should be treated carefully and gently. A lot of people have had hesitations about introducing a 64bit native Windows client or SSEx/AVX extensions, but I think this change has more impact on volunteers. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"Yes, they often won't need the full 4GB, but if they get over 3GB there won't be enough left for the OS and much else, and a lot of drive thrashing will follow." If I'm not wrong, you are skeptical about a 64bit native Windows client... Now the situation is turning it into a mandatory condition. |
Grant (SSSF) Send message Joined: 28 Mar 20 Posts: 1681 Credit: 17,854,150 RAC: 20,118 |
"Yes, they often won't need the full 4GB, but if they get over 3GB there won't be enough left for the OS and much else, and a lot of drive thrashing will follow." No, the 32bit client will have no problem with the new 4GB Tasks, so there is still no need for a 64bit client. However, 32bit operating systems will have problems. Grant Darwin NT |
Bryn Mawr Send message Joined: 26 Dec 18 Posts: 393 Credit: 12,110,248 RAC: 4,952 |
"If one of these 'big WUs' goes into 'waiting for memory', a volunteer must be present to resolve the situation (or, maybe, the WU will just run up against its deadline)." Must is a bit strong. Yes, a "waiting for memory" could last all day, but more likely an hour, and if the deadline does loom the Task will, presumably, be given priority. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"However, 32bit operating systems will have problems." Drop support for 32bit OSes. It's time. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"Must is a bit strong. Yes, a 'waiting for memory' could last all day, but more likely an hour, and if the deadline does loom the Task will, presumably, be given priority." I see a lot of "probably", "presumably", etc. about this new app. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"Some systems won't be affected (huge amounts of RAM, giving a good RAM-to-thread ratio), many will be affected (a much lower RAM-to-thread ratio), others will be hardly affected (few cores/threads and a reasonable amount of RAM, so a good RAM-to-thread ratio), and others won't be affected at all (not enough RAM to be allocated the Tasks)." Will all these situations be managed automatically by the R@H servers? P.S. It seems like a bit of a "nightmare". And a nightmare puts off new volunteers. |
Bryn Mawr Send message Joined: 26 Dec 18 Posts: 393 Credit: 12,110,248 RAC: 4,952 |
"I see a lot of 'probably', 'presumably', etc. about this new app." Because until it is here and running there is a lot of uncertainty as to how it will run in practice. |
Admin Project administrator Send message Joined: 1 Jul 05 Posts: 4805 Credit: 0 RAC: 0 |
I want to make this clear since there is some misinformation going around. These long sequences (up to 2000 residues) that are sometimes, but not often, submitted to Robetta, our protein structure prediction server, have been around for a while now. They are nothing new, may or may not be related to COVID-19, and are rare. We have just adjusted the logic to make sure these jobs are assigned enough memory and time to complete. The vast majority of jobs have a smaller memory footprint of less than 2 GB and produce models on timescales of minutes, not hours. Run times and memory usage may vary, but a typical 2000-residue Rosetta comparative modeling job from Robetta (again, these jobs are rare) took a little over an hour to produce one model and used 1.8 GB of RAM on our local cluster.

These jobs should not be confused with the problematic cyclic peptide jobs (which have been canceled) that users reported sometimes taking longer than the CPU run time preference to complete. That also was a rare event and was likely due to random trajectories that were not passing the model quality filtering criteria. The cyclic peptide jobs have a small memory footprint and can produce models at a faster pace.

These issues highlight the fact that Rosetta@home runs a variety of protocols for modeling and design, the results of which can be seen in the variety of research publications and projects related to diseases such as COVID-19 and cancer, vaccine development, nano-materials, cellular biology, structural biology, environmental sciences, and so on. These are within the Baker lab and the IPD, but there are also researchers around the world using the Robetta structure prediction server for a vast variety of research. |
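To put those figures into perspective, here is a small back-of-the-envelope sketch of what one of these rare large jobs might look like on a volunteer machine. The ~1 hour per model and 1.8 GB numbers come from the post above; the 8-hour run time preference is just the common default and will differ between hosts.

```python
# Rough feel for one of the rare large Robetta jobs on a volunteer host.
# The ~1 hour per model and 1.8 GB figures are from the post above; the
# 8-hour run time preference is only the common default, not a guarantee.

HOURS_PER_MODEL = 1.1        # "a little over an hour" per model (assumption)
RAM_PER_TASK_GB = 1.8        # observed on the lab's local cluster
RUNTIME_PREF_HOURS = 8       # typical CPU run time preference

models = int(RUNTIME_PREF_HOURS // HOURS_PER_MODEL)
print(f"~{models} models per Task, using about {RAM_PER_TASK_GB} GB of RAM")
# A slower host, or a trajectory that takes much longer than average, could
# still return only a single decoy within the same run time preference.
```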
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"I want to make this clear since there is some misinformation going around. These long sequences (up to 2000 residues) that are sometimes, but not often, submitted to Robetta, our protein structure prediction server, have been around for a while now. They are nothing new, may or may not be related to COVID-19, and are rare. We have just adjusted the logic to make sure these jobs are assigned enough memory and time to complete. The vast majority of jobs have a smaller memory footprint of less than 2 GB and produce models on timescales of minutes, not hours." I've been here since 2005 and, even with these big WUs (I don't think I have capable hardware), I will still be here. You're doing GREAT work. And I will continue to ask for the SSEx/AVX extensions :-P |
Jim1348 Send message Joined: 19 Jan 06 Posts: 881 Credit: 52,257,545 RAC: 0 |
"And I will continue to ask for the SSEx/AVX extensions :-P" You are a good man. But I don't believe for a minute that you have given up on GPUs. |
[VENETO] boboviz Send message Joined: 1 Dec 05 Posts: 1994 Credit: 9,623,704 RAC: 8,387 |
"You are a good man. But I don't believe for a minute that you have given up on GPUs." Although I continue to write in the Ralph@home OpenCL thread, over the years I have become skeptical. But, maybe trRosetta... :-O |
Michael H.W. Weber Send message Joined: 18 Sep 05 Posts: 13 Credit: 6,672,462 RAC: 0 |
Just a short note on tasks requiring large amounts of RAM: I consider it mandatory that, in order to distribute such tasks, there be a new, separate RAM threshold setting in the project's preferences section that requires manual activation before such tasks are sent to the corresponding machines. While it appears to work that systems not meeting the PHYSICAL RAM requirements of a project do indeed not get such tasks, it is similarly known that the determination of REMAINING RAM (while other tasks are already in progress) lacks reliability in the BOINC ecosystem (to say the least). A lot of people run BOINC on unattended machines and do not check them for days or even weeks, and I am not sure whether, to date, a majority of these machines even have the (small) amount of 16 GB of RAM. You may say "there will be only a few of these". I am sure I will get them all at once on the same machine. ;-) Michael.

P.S.: Rosetta already has an EXTREMELY annoying issue with RAM management due to the sending of new WUs with deadlines dating BEFORE those of R@H tasks already running on the same system: the tasks in progress go on hold and the newly loaded ones bloat the RAM.

President of Rechenkraft.net e.V. http://www.rechenkraft.net - The world's first and largest distributed computing association. We make those things possible that supercomputers don't. |
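Until such a preference exists, one possible volunteer-side safeguard for an unattended machine is an external watchdog that suspends Rosetta work when available RAM gets tight. This is purely a hypothetical sketch, not a project or BOINC feature: it assumes boinccmd is on the PATH, that the local client accepts RPC commands, and that the Python psutil package is installed; the thresholds are arbitrary examples.

```python
# Hypothetical volunteer-side watchdog for an unattended machine: suspend
# Rosetta when available RAM gets tight, resume when it recovers.
# This is NOT a project or BOINC feature, just a sketch. It assumes
# `boinccmd` is on the PATH, the local client accepts RPC commands, and
# psutil is installed; the thresholds are arbitrary examples.
import subprocess
import time

import psutil

PROJECT_URL = "https://boinc.bakerlab.org/rosetta/"
SUSPEND_BELOW_GB = 5.0   # roughly one large Task's worth of headroom (assumption)
RESUME_ABOVE_GB = 7.0    # hysteresis so the script doesn't flap
CHECK_INTERVAL_S = 300

def boinccmd(operation):
    """Send a project-level command (suspend/resume) to the local client."""
    subprocess.run(["boinccmd", "--project", PROJECT_URL, operation], check=False)

suspended = False
while True:
    free_gb = psutil.virtual_memory().available / 2**30
    if not suspended and free_gb < SUSPEND_BELOW_GB:
        boinccmd("suspend")
        suspended = True
    elif suspended and free_gb > RESUME_ABOVE_GB:
        boinccmd("resume")
        suspended = False
    time.sleep(CHECK_INTERVAL_S)
```

A real script would also need to allow for the "leave non-GPU tasks in memory while suspended" preference, since suspending tasks that are evicted from RAM will immediately free memory and make the check oscillate.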
CIA Send message Joined: 3 May 07 Posts: 100 Credit: 21,059,812 RAC: 0 |
Have the larger ones started coming out? This beast ran for 18 hours and only had one decoy: https://boinc.bakerlab.org/rosetta/result.php?resultid=1156094562 |