Last year brought new hope, and even more hype, around the idea of applying artificial intelligence (AI) to "revolutionize" drug discovery research: machines that can "learn" chemistry and biology from vast amounts of experimental data could propose potent drug candidates and accurately predict their properties and potential toxicity risks. The promise is to dramatically reduce failures in clinical trials, saving R&D budgets, time, and, most importantly, patients' lives.
On the one hand, pharmaceutical companies are clearly demonstrating a vivid interest in this technology, catalyzed by AI's illustrative practical achievements in more "traditional" tasks: beating humans at chess and Go, recognizing speech and text, identifying faces in Facebook photos, driving cars, and so on. On the other hand, the history of AI has been filled with ups and downs: waves of hype were followed by deep frustration and loss of public interest in the 1970s and 1980s. While it now seems that AI's technological capacity and big data infrastructure are set to usher in the fourth industrial revolution, scepticism is still in the air. Take, for example, recent discussions here and here, and scroll down through the comments to see how the scientific world has seemingly partitioned into "AI-believers", "AI-agnostics", and "AI-atheists". For many drug discovery professionals, even those with PhD-level expertise in the life sciences, the workings of specific AI algorithms remain "black box magic", and this lack of expertise has been found to be among the top barriers to AI adoption among pharma and biotech professionals.
Let's try to see why the pharmaceutical industry needs innovation now more than ever, how it can approach AI, and how long the industry's transformation will take.