Boost the project!

Message boards : Rosetta@home Science : Boost the project!



Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102190 - Posted: 7 Jul 2021, 14:54:31 UTC - in response to Message 102187.  
Last modified: 7 Jul 2021, 14:56:51 UTC

They have been doing protein design for a long time. In fact, that is what the "Institute for Protein Design" was intended for.

This is on the IPD website (linked from the Rosetta home page), so you should be able to see it:
https://www.ipd.uw.edu/research/practical-applications/

Or you can search around and find a lot of hits for news articles.
Jim1348

Joined: 19 Jan 06
Posts: 881
Credit: 52,257,545
RAC: 0
Message 102191 - Posted: 7 Jul 2021, 22:31:30 UTC - in response to Message 102190.  

This version is on YouTube, and might play in all countries.
https://www.youtube.com/watch?v=PSHIJhZLr00

But I think there are a lot of comparable reports around.
Sid Celery

Joined: 11 Feb 08
Posts: 2125
Credit: 41,228,659
RAC: 9,701
Message 102197 - Posted: 9 Jul 2021, 12:41:47 UTC - in response to Message 102170.  

I'll continue to run Open Pandemics on WCG, as I do all their sub-projects, but I'm not sure if I'm wasting my time for the sake of having smoke blown up my arse a lot more often.


Approximately 300 million small molecules run for OpenPandemics - COVID-19 as part of system test

<blink>Wut</blink>
mikey

Joined: 5 Jan 06
Posts: 1895
Credit: 9,169,305
RAC: 3,400
Message 102198 - Posted: 9 Jul 2021, 21:34:01 UTC - in response to Message 102197.  

I'll continue to run Open Pandemics on WCG, as I do all their sub-projects, but I'm not sure if I'm wasting my time for the sake of having smoke blown up my arse a lot more often.


Approximately 300 million small molecules run for OpenPandemics - COVID-19 as part of system test

<blink>Wut</blink>


Hopefully that means a whole bunch more can now be run through, even those on the edges of the maybe range, and find a way to stop this stuff in its tracks!! Now that the process is at least 10 times faster, drop 100 million tasks onto WCG and let people have fun.

On a positive note, with some of the higher-end GPUs beginning to hit the market again, maybe people can get through those tasks even faster.
Falconet

Joined: 9 Mar 09
Posts: 353
Credit: 1,227,479
RAC: 1,013
Message 102200 - Posted: 9 Jul 2021, 23:12:56 UTC - in response to Message 102198.  
Last modified: 9 Jul 2021, 23:20:54 UTC

They need to stop the CPU app (except for molecules that *need* to run on the CPU) and increase the GPU work unit generation. Currently, they are releasing only 1700 GPU WUs every 30 minutes; the rest is CPU. Even so, GPUs are doing around 700 batches per day vs 150 CPU batches, according to the June update.

But in the June update they said they'd like to do just 500 batches a day, since that's about as much as the researchers can process without spending most of their time on data processing instead of on science. The July update does mention that the researchers managed to improve the data processing 10-fold, which resulted in a 5-fold improvement of the entire workflow. So maybe they can process more batches now.

Hopefully the CPU app can *die* now; it's just a waste of resources unless, again, there are some molecules better suited to that app.
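The rates quoted in this post work out as follows. A quick arithmetic sketch, using only the numbers given here (1700 WUs per 30-minute release, 700 GPU and 150 CPU batches per day, a 500-batch/day target); nothing else is assumed about the project:

```python
# Throughput figures quoted in the post (June/July 2021 updates).
wus_per_release = 1700        # GPU work units per release
releases_per_day = 48         # one release every 30 minutes
gpu_batches_per_day = 700     # observed GPU throughput
cpu_batches_per_day = 150     # observed CPU throughput
target_batches_per_day = 500  # what the researchers said they can absorb

gpu_wus_per_day = wus_per_release * releases_per_day
total_batches = gpu_batches_per_day + cpu_batches_per_day

print(gpu_wus_per_day)   # 81600 GPU WUs/day
print(total_batches)     # 850 batches/day, already past the 500 target
```

So even before the 10-fold data-processing speedup, combined throughput was running well above the stated 500-batch ceiling, which is what makes the CPU app's 150 batches look dispensable.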
mikey

Joined: 5 Jan 06
Posts: 1895
Credit: 9,169,305
RAC: 3,400
Message 102201 - Posted: 10 Jul 2021, 2:26:07 UTC - in response to Message 102200.  

They need to stop the CPU app (except for molecules that *need* to run on the CPU) and increase the GPU work unit generation. Currently, they are releasing only 1700 GPU WUs every 30 minutes; the rest is CPU. Even so, GPUs are doing around 700 batches per day vs 150 CPU batches, according to the June update.

But in the June update they said they'd like to do just 500 batches a day, since that's about as much as the researchers can process without spending most of their time on data processing instead of on science. The July update does mention that the researchers managed to improve the data processing 10-fold, which resulted in a 5-fold improvement of the entire workflow. So maybe they can process more batches now.

Hopefully the CPU app can *die* now; it's just a waste of resources unless, again, there are some molecules better suited to that app.


I agree! They need to keep developing new CPU apps, though, to find new ways of doing things, since CPU apps apparently are easier to make. I say take advantage of the willingness to help while you can, even if the likelihood of success is lower. Now I do NOT think that we should do junk science, as that benefits no one, but maybe pick something slightly outside the current way of thinking and see what happens. How many scientific discoveries have come from 'well, who woulda thunk that would happen' moments that ended up being very good in the end?
Jo

Joined: 16 May 20
Posts: 10
Credit: 3,813,274
RAC: 0
Message 102202 - Posted: 10 Jul 2021, 10:11:05 UTC - in response to Message 102085.  

Good morning everyone... 😆 It is with great pleasure that I have been helping this project, but lately I see that it is a little abandoned. Many people stopped participating, and the work has grown a lot. Please help, because from what I have read, our help has been very important for the discovery of new proteins that help a lot in the development of therapies to cure diseases. I appeal to users to try a little harder. Thank you all.



I have been told there is no community and that I should be thankful if Rosetta even bothered using my computer for any work. So I fucked off and dropped my initial plan to build a dedicated rig for this project. I was planning on a Ryzen 9 3900X or better, added once a year, but as I said, I have been told I should be grateful if I get any work for my computer at all. So I guess we are not needed.
mikey

Joined: 5 Jan 06
Posts: 1895
Credit: 9,169,305
RAC: 3,400
Message 102203 - Posted: 10 Jul 2021, 11:44:35 UTC - in response to Message 102202.  

Good morning everyone... 😆 It is with great pleasure that I have been helping this project, but lately I see that it is a little abandoned. Many people stopped participating, and the work has grown a lot. Please help, because from what I have read, our help has been very important for the discovery of new proteins that help a lot in the development of therapies to cure diseases. I appeal to users to try a little harder. Thank you all.



I have been told there is no community and that I should be thankful if Rosetta even bothered using my computer for any work. So I fucked off and dropped my initial plan to build a dedicated rig for this project. I was planning on a Ryzen 9 3900X or better, added once a year, but as I said, I have been told I should be grateful if I get any work for my computer at all. So I guess we are not needed.


Right now this very minute, well within the last hour as the page is cached, there are:
Tasks ready to send 26775
Tasks in progress 394228

I think the project can use your resources, but not necessarily at the rate of a brand new 3900X or better every year, unless you plan on getting rid of the old computer and replacing it with a brand new one.

The problem with most science projects is that it's boom or bust with tasks, unless they are processing data that is never-ending, like Collatz or PrimeGrid. Scientists tend to like to analyze the results, and with most places having limited budgets, that means we users have to be attached to several different projects to keep our PCs as busy as we would like.

Yes, absolutely, you can set the resource share to a very high level so your PC will always ask Rosetta for work first and then go on to your other projects as you see fit, but in some cases (e.g. Einstein, Milky Way and others) there are limits to how much work you can get per day, to allow other users to crunch for the project as well. With today's very fast CPUs and GPUs, that can mean running out of work even though the project may have lots of work to give out, unless you also have your 2nd- and 3rd-level projects set up as backups.

If you then add a simple app_config.xml file for each project to limit the number of tasks running at the same time, you should be good to go for a long time with little effort on your part, unless several of your projects run out of work at the same time.
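For anyone who hasn't used one, a minimal sketch of the app_config.xml approach described here might look like the following. The app name "rosetta" is an assumption for illustration; check the names your BOINC client reports in its event log, and drop the file into the project's directory (e.g. projects/boinc.bakerlab.org_rosetta/):

```xml
<!-- app_config.xml: limit how many tasks of one app run at once.
     <name> must match the project's actual app name; "rosetta"
     is assumed here - verify it in the client's event log. -->
<app_config>
    <app>
        <name>rosetta</name>
        <!-- run at most 4 Rosetta tasks simultaneously -->
        <max_concurrent>4</max_concurrent>
    </app>
</app_config>
```

After saving the file, choose "Read config files" in the BOINC Manager (or restart the client) for the limit to take effect.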
Sid Celery

Joined: 11 Feb 08
Posts: 2125
Credit: 41,228,659
RAC: 9,701
Message 102206 - Posted: 11 Jul 2021, 4:32:17 UTC - in response to Message 102198.  

I'll continue to run Open Pandemics on WCG, as I do all their sub-projects, but I'm not sure if I'm wasting my time for the sake of having smoke blown up my arse a lot more often.


Approximately 300 million small molecules run for OpenPandemics - COVID-19 as part of system test

<blink>Wut</blink>

Hopefully that means a whole bunch more can now be run through, even those on the edges of the maybe range, and find a way to stop this stuff in its tracks!! Now that the process is at least 10 times faster, drop 100 million tasks onto WCG and let people have fun.

On a positive note, with some of the higher-end GPUs beginning to hit the market again, maybe people can get through those tasks even faster.

While it's highly likely I'm misunderstanding what's being done, it seems to me they're now working much harder and faster at answering a question that's got nothing to do with anything useful.
It sounds to me like they haven't reached the starting gate yet, pretty much a year after the sub-project began.

At the same time, when my low-capability graphics cards are used, they run for something like 40 minutes and are awarded 10x the credits I get for running their CPU tasks, both of which seem to achieve the square root of nothing.
It's embarrassing.
mikey

Joined: 5 Jan 06
Posts: 1895
Credit: 9,169,305
RAC: 3,400
Message 102209 - Posted: 11 Jul 2021, 22:53:22 UTC - in response to Message 102206.  

I'll continue to run Open Pandemics on WCG, as I do all their sub-projects, but I'm not sure if I'm wasting my time for the sake of having smoke blown up my arse a lot more often.


Approximately 300 million small molecules run for OpenPandemics - COVID-19 as part of system test

<blink>Wut</blink>

Hopefully that means a whole bunch more can now be run through, even those on the edges of the maybe range, and find a way to stop this stuff in its tracks!! Now that the process is at least 10 times faster, drop 100 million tasks onto WCG and let people have fun.

On a positive note, with some of the higher-end GPUs beginning to hit the market again, maybe people can get through those tasks even faster.


While it's highly likely I'm misunderstanding what's being done, it seems to me they're now working much harder and faster at answering a question that's got nothing to do with anything useful.
It sounds to me like they haven't reached the starting gate yet, pretty much a year after the sub-project began.

At the same time, when my low-capability graphics cards are used, they run for something like 40 minutes and are awarded 10x the credits I get for running their CPU tasks, both of which seem to achieve the square root of nothing.
It's embarrassing.


The key is how much work is being done on each type of cruncher, i.e. CPU and GPU. Some tasks work very well when ported over to a GPU, while some do not. The reason a lot of projects have both is that some people like to split their crunching between projects, or are chasing badges that can require crunching on a certain app, e.g. WUProp. Splitting crunching between projects means you can better manage the cache size for both devices. Then there are people like me who buy older PCs with a lot of CPU cores in them and want to contribute too.
[VENETO] boboviz

Joined: 1 Dec 05
Posts: 1994
Credit: 9,623,704
RAC: 8,387
Message 102210 - Posted: 12 Jul 2021, 10:34:17 UTC - in response to Message 102206.  

At the same time, when my low-capability graphics cards are used, they run for something like 40 minutes and are awarded 10x the credits I get for running their CPU tasks, both of which seem to achieve the square root of nothing.
It's embarrassing.


I think that the use of GPUs, where possible, is an incredible boost for SCIENTIFIC research.
I know that some crunchers run only for credits, but researchers are interested in results (and so are a lot of volunteers).
mikey

Joined: 5 Jan 06
Posts: 1895
Credit: 9,169,305
RAC: 3,400
Message 102211 - Posted: 12 Jul 2021, 11:46:53 UTC - in response to Message 102210.  

At the same time, when my low-capability graphics cards are used, they run for something like 40 minutes and are awarded 10x the credits I get for running their CPU tasks, both of which seem to achieve the square root of nothing.
It's embarrassing.


I think that the use of GPUs, where possible, is an incredible boost for SCIENTIFIC research.
I know that some crunchers run only for credits, but researchers are interested in results (and so are a lot of volunteers).


I agree!! I also wish Rosetta could figure out a way, with libraries or something, to use GPUs here!! Even if the minimum memory requirement was 4 GB or even 6 GB of onboard memory, it would help. BUT with the limited number of tasks they sometimes have now, it could make the problem worse.
dcdc

Joined: 3 Nov 05
Posts: 1831
Credit: 119,627,225
RAC: 10,243
Message 102214 - Posted: 13 Jul 2021, 8:35:49 UTC

If the tasks can't be multi-threaded, and given that each task can require 1 GB of RAM, then a GPU with ~2000 cores would need ~2 TB of RAM to run Rosetta. Or, looking at what is actually available, a 2000-core GPU with 12 GB of RAM has only ~6 MB of RAM per core.

I think it's safe to assume that there could be some savings by sharing some RAM, but still, the amount of RAM required for single-threaded Rosetta is huge compared to what is available.
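The back-of-the-envelope argument above can be checked directly. This sketch uses only the post's illustrative figures (~1 GiB per task, a ~2000-core GPU with 12 GiB of memory); it is plain arithmetic, with no Rosetta specifics assumed:

```python
# RAM arithmetic for running one single-threaded task per GPU core.
cores = 2000              # illustrative core count from the post
ram_per_task_gib = 1.0    # ~1 GiB per Rosetta task
gpu_ram_gib = 12          # illustrative card memory from the post

# Memory needed if every core ran its own task:
needed_tib = cores * ram_per_task_gib / 1024
print(needed_tib)         # ~1.95 TiB - the "~2 TB" above

# Memory actually available per core on such a card:
per_core_mib = gpu_ram_gib * 1024 / cores
print(per_core_mib)       # ~6.1 MiB - the "~6 MB" above
```

So the shortfall is roughly three orders of magnitude, which is why sharing read-only data between tasks would have to be extremely aggressive for a GPU port to fit.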

Of course, if it were available then I'd run it on GPUs too!

I think the most likely short-term thing is that training might be done on GPUs, but whether they'll farm it out to us or do it in house I have no idea.
Sid Celery

Joined: 11 Feb 08
Posts: 2125
Credit: 41,228,659
RAC: 9,701
Message 102228 - Posted: 17 Jul 2021, 11:42:42 UTC - in response to Message 102209.  

I'll continue to run Open Pandemics on WCG, as I do all their sub-projects, but I'm not sure if I'm wasting my time for the sake of having smoke blown up my arse a lot more often.


Approximately 300 million small molecules run for OpenPandemics - COVID-19 as part of system test

<blink>Wut</blink>

Hopefully that means a whole bunch more can now be run through, even those on the edges of the maybe range, and find a way to stop this stuff in its tracks!! Now that the process is at least 10 times faster, drop 100 million tasks onto WCG and let people have fun.

On a positive note, with some of the higher-end GPUs beginning to hit the market again, maybe people can get through those tasks even faster.


While it's highly likely I'm misunderstanding what's being done, it seems to me they're now working much harder and faster at answering a question that's got nothing to do with anything useful.
It sounds to me like they haven't reached the starting gate yet, pretty much a year after the sub-project began.

At the same time, when my low-capability graphics cards are used, they run for something like 40 minutes and are awarded 10x the credits I get for running their CPU tasks, both of which seem to achieve the square root of nothing.
It's embarrassing.


The key is how much work is being done on each type of cruncher, i.e. CPU and GPU. Some tasks work very well when ported over to a GPU, while some do not. The reason a lot of projects have both is that some people like to split their crunching between projects, or are chasing badges that can require crunching on a certain app, e.g. WUProp. Splitting crunching between projects means you can better manage the cache size for both devices. Then there are people like me who buy older PCs with a lot of CPU cores in them and want to contribute too.

I think the key isn't so much the amount of work, but its quality.
It seems to me the speed of the tail wagging the dog has been considerably enhanced, while I can't recognise any benefit from it, CPU or GPU, and it delivers credits that ought to be about 100x less going by the state of that article about what's being "achieved".
I'm pretty sure my GTX 750 doesn't deserve double the credit, for either work or benefit, compared to each of the hyperthreaded cores on my Ryzen 7 5800X.
That's why I find it embarrassing.




©2024 University of Washington
https://www.bakerlab.org