To promote his preference, as of 2010, Tesler equipped his Subaru automobile with a personalized California license plate reading “NO MODES”. Along with others, he has been using the phrase “Don’t Mode Me In” for years as a rallying cry to eliminate or reduce modes.

Hot take: it is a disgrace that the field of scientific computing still generally ignores basic good practice for writing computer code. For example (this is based on my experience doing this stuff as a grad student, and brushing shoulders with others who did it):

  • Few people in the field use version control
  • Code is sometimes shared publicly, either because a journal requires it or because someone likes doing it, but it isn’t standard practice, and generally no one expects to ever reproduce the numbers and figures in other people’s papers
  • No modularity — people pass around little blobs of procedural code, but they don’t, say, use OOP to separate numerical methods from the equations they’re being applied to, which means the same standard methods get re-implemented thousands of times, with some chance of human error each time
  • No reusable tests — it would be immensely useful to have one standard way of verifying that your implementation is Nth-order accurate, for example
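To make the last two bullets concrete, here is a minimal sketch (names and test problem are my own, not from the post) of what that separation looks like: a generic RK4 integrator that knows nothing about the equation it solves, plus a reusable check that empirically measures the method’s order of accuracy by halving the step size.

```python
import math

# A generic explicit RK4 stepper: the integrator knows nothing about the
# equation being solved, only that f(t, y) returns dy/dt. Any equation can
# be plugged in without re-implementing the method.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n_steps):
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# A reusable order-of-accuracy check: run at step size h and h/2 and
# measure how the error against an exact solution shrinks. For an
# Nth-order method, log2(err(h) / err(h/2)) should approach N.
def observed_order(f, t0, y0, t1, exact, n_steps):
    e1 = abs(integrate(f, t0, y0, t1, n_steps) - exact)
    e2 = abs(integrate(f, t0, y0, t1, 2 * n_steps) - exact)
    return math.log2(e1 / e2)

# Test problem: y' = y, y(0) = 1, so the exact value at t = 1 is e.
rate = observed_order(lambda t, y: y, 0.0, 1.0, 1.0, math.e, 32)
```

Nothing here is specific to the test equation: the same `observed_order` check could be pointed at anyone’s stepper, which is exactly the kind of shared infrastructure the bullet points say the field lacks.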

Some of this is a consequence of things like “lots of this work is still done in FORTRAN,” but then the question is why that is the case, and the answers one tends to hear are not good. (Consider this post, which includes the arguments “if you keep using code you wrote decades ago, you never have to debug new code!” and “OOP is too abstract for physicists to learn.”)

forevernoon:

Folk of the Woods by Lucias Crocker Pardee, 1913, Illustrated by Charles Livingston Bull (1874-1932)

via Anneliese Hess

(via guavaorb)

silver-and-ivory:

it’s been a wonderful time. thank you all, no matter what tumblr decides to fuck up tomorrow.

AlphaFold @ CASP13: “What just happened?” →

tsutsifrutsi:

wirehead-wannabe:

evolution-is-just-a-theorem:

therainstheyaredropping:

Protein folding researcher comments about the way that DeepMind’s AlphaFold system beat all the established teams in the latest assessment of folding methods:

> I don’t think we would do ourselves a service by not recognizing that what just happened presents a serious indictment of academic science. There are dozens of academic groups, with researchers likely numbering in the (low) hundreds, working on protein structure prediction. We have been working on this problem for decades, with vast expertise built up on both sides of the Atlantic and Pacific, and not insignificant computational resources when measured collectively. For DeepMind’s group of ~10 researchers, with primarily (but certainly not exclusively) ML expertise, to so thoroughly rout everyone surely demonstrates the structural inefficiency of academic science. This is not Go, which had a handful of researchers working on the problem, and which had no direct applications beyond the core problem itself. Protein folding is a central problem of biochemistry, with profound implications for the biological and chemical sciences. […]

> What is worse than academic groups getting scooped by DeepMind? The fact that the collective powers of Novartis, Merck, Pfizer, etc, with their hundreds of thousands (~million?) of employees, let an industrial lab that is a complete outsider to the field, with virtually no prior molecular sciences experience, come in and thoroughly beat them on a problem that is, quite frankly, of far greater importance to pharmaceuticals than it is to Alphabet. It is an indictment of the laughable “basic research” groups of these companies, which pay lip service to fundamental science but focus myopically on target-driven research that they managed to so badly embarrass themselves in this episode.

> If you think I’m being overly dramatic, consider this counterfactual scenario. Take a problem proximal to tech companies’ bottom line, e.g. image recognition or speech, and imagine that no tech company was investing research money into the problem. (IBM alone has been working on speech for decades.) Then imagine that a pharmaceutical company suddenly enters ImageNet and blows the competition out of the water, leaving the academics scratching their heads at what just happened and the tech companies almost unaware it even happened. Does this seem like a realistic scenario? Of course not. It would be absurd. That’s because tech companies have broad research agendas spanning the basic to the applied, while pharmas maintain anemic research groups on their seemingly ever continuing mission to downsize internal research labs while building up sales armies numbering in the tens of thousands of employees.

Oh wow… A new prediction technique is discovered and that technique’s foremost research group is able to perform about one year ahead of people who have had to pick it up on the fly. What an indictment. Amazing.

Yeah like, why is it surprising that machine learning specialists are able to apply machine learning to solve complex problems better than even the best trained humans? Isn’t that the whole point of ML?

I believe the OP’s argument here is that pharma companies should have research labs that have picked up on the fact that ML has promising implications for pharma research, and thus have dedicated large parts of their budgets to ML research and hiring top-grade ML researchers.

Bell Labs was a telecommunications company that realized that materials science and other applied-physics domains had promising implications for telecom infrastructure, and so hired relevant top-grade researchers and so ended up inventing (or co-inventing) the transistor, the laser, and the CCD.

The OP is essentially asking why there isn’t a “Pfizer Labs” or “Merck Labs” pounding out basic research around the topics relevant to pharmaceuticals, like ML, where they then would have been the ones to apply (work with Google to co-apply) DeepMind’s architecture to their research problems, before DeepMind themselves just decided to do it on a whim as a proof-of-concept.

I don’t have strong inclinations toward either side of this argument, but I’m reblogging to say that the post linked in the OP contains a very detailed and interesting technical and sociological discussion, most of it more moderate in tone than the excerpt, and I recommend reading it if you’re at all interested. (Although admittedly I didn’t understand all of it.)

(via tsutsifrutsi)

journalgen:

Proceedings of the 8th Intergalactic Summit on Exohate and Neoastronautics

geritsel:

László Mednyánszky