The ability to effectively integrate data and knowledge from many disparate sources will be crucial to future drug discovery. Data integration is a key touchstone for conducting scientific investigations with modern platform technologies, managing increasingly complex discovery portfolios and processes, and fully realising economies of scale in large enterprises.
However, viewing data integration simply as an ‘IT problem’ underestimates both the scientific and management challenges it poses and the potential it offers. Meeting these challenges could require significant methodological and even cultural changes in our approach to data.
New drug discovery is a complicated, expensive and time-consuming process. Traditional drug development pipelines take 12–14 years and more than $2.7 billion on average to yield a single successful outcome. This effectively shortens the ‘productive life’ of a patent and prompts further investment later in evergreening and other ways to exploit patents. The smaller window in which to recover these investments makes drugs expensive, and thus also limits the viable market size.
Reducing research costs and speeding up the development of new drugs are challenging issues for the pharma industry; some answers have emerged and found validation during the recent pandemic.
The rapid growth of computational tools in recent times, such as Computer-Aided Drug Discovery (CADD), has made a significant impact on drug design. CADD enables faster, cheaper and more effective drug design, and provides fruitful insights for therapy.
Every stage of the drug discovery process produces vast amounts of disparate data. With the arrival of Artificial Intelligence (AI), ‘in-silico’ drug design has brought about unprecedented changes. State-of-the-art deep learning approaches, such as retrosynthetic route planning, drug scaffold generation and drug binding affinity prediction, have the potential to identify new molecules with the excellent chemical properties required.
The application of bioinformatics across the various stages of drug discovery thereby reduces the risk of failure, makes the process cheaper, shortens turnaround times, and reduces human intervention and error by automating processes.
The target identification process is greatly optimised by combining knowledge of the molecular basis of the disease with virtual screening of targets for compounds that bind to and inhibit the protein.
High-throughput screening is the traditional method of lead identification. Bioinformatics helps by screening target proteins against a database of molecules to see which compounds bind strongly to the targets.
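The screening step above boils down to ranking a compound library by predicted binding strength against the target. A toy sketch (the compound names and scores below are entirely hypothetical; real screens use docking engines over millions of structures):

```python
# Toy virtual screen: rank a compound library by a hypothetical
# predicted binding score against a target protein.
# Convention: more negative score = stronger predicted binding.
library = {
    "CHEM-001": -9.2,
    "CHEM-002": -5.1,
    "CHEM-003": -11.4,
    "CHEM-004": -7.8,
}

def top_hits(scores, n=2):
    """Return the n compounds with the strongest predicted binding."""
    return sorted(scores, key=scores.get)[:n]

print(top_hits(library))  # strongest predicted binders first
```

The ranked hits would then move forward as candidate leads for experimental confirmation.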
Quantitative Structure–Activity Relationship (QSAR) modelling is the computational technique employed to refine the structure of the lead compound. The information from a QSAR model can be used to suggest new chemical modifications for testing.
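At its core, a QSAR model is a regression from molecular descriptors to measured activity; the fitted relationship then suggests which modifications should raise activity. A minimal univariate sketch, with made-up descriptor and activity values (real QSAR models use many descriptors and more robust statistics):

```python
# Minimal QSAR sketch: least-squares fit of activity (pIC50) against a
# single descriptor (logP). All values below are illustrative only.
logp  = [1.2, 2.0, 2.9, 3.5, 4.1]   # hydrophobicity descriptor
pic50 = [5.0, 5.6, 6.3, 6.7, 7.2]   # measured activity

n = len(logp)
mx = sum(logp) / n
my = sum(pic50) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(logp, pic50))
         / sum((x - mx) ** 2 for x in logp))
intercept = my - slope * mx

# Predict the activity of a proposed analogue with logP = 3.0
pred = intercept + slope * 3.0
print(f"pIC50 = {intercept:.2f} + {slope:.2f} * logP; predicted {pred:.2f}")
```

A positive slope here would suggest that, within the sampled range, more hydrophobic analogues tend to be more active.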
The pre-clinical testing phase, which involves pharmacology, toxicology and pharmacokinetics, benefits from bioinformatics by making it possible to do some of this testing without the use of animals.
Clinical trials are then conducted to establish Absorption, Distribution, Metabolism, Excretion and Toxicity (ADMET) as well as efficacy. The ability to predict these parameters in advance with bioinformatic tools such as C2-ADME, TOPKAT, CLOGP, DrugMatrix, AbSolv, BioPrint and GastroPlus is a significant enabler, making decision-making and lab processes more efficient.
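The commercial tools named above are proprietary, but the simplest published example of in-advance developability prediction is a rule-based filter such as Lipinski's rule of five (the threshold values below are the standard Lipinski criteria; the candidate compounds are hypothetical):

```python
# Illustrative ADMET-style pre-filter using Lipinski's rule of five.
# The named commercial tools use far richer models; this only shows the
# idea of predicting developability before the lab.
def passes_rule_of_five(mw, logp, h_donors, h_acceptors):
    """True if the compound violates at most one of Lipinski's criteria."""
    violations = sum([
        mw > 500,         # molecular weight over 500 Da
        logp > 5,         # too hydrophobic
        h_donors > 5,     # too many hydrogen-bond donors
        h_acceptors > 10, # too many hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical candidates: (MW, logP, H-bond donors, H-bond acceptors)
print(passes_rule_of_five(350, 2.1, 2, 5))   # drug-like
print(passes_rule_of_five(720, 6.3, 4, 12))  # likely poor oral absorption
```

Filtering out likely failures this early is exactly where the cost and turnaround savings described above come from.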
Genomics, proteomics and biopharma research have the potential to yield many more targets, and targets with greater specificity, leading to the personalisation of medicine, while virtual screening brings predictive power to drug development. Combinatorial chemistry with molecular modelling allows a vast number of compounds and models to be produced, and their activity improved, using computer graphics and other bioinformatics methods.
The power that in-silico, tissue-based and computer-based models bring to pre-clinical testing, together with the application of Artificial Intelligence (AI), will change clinical trials just as AI is already impacting electronic medical records (EMRs).
The application of AI to EMRs closes the loop on the healthcare value chain and brings value to the drug development cycle, benefitting mankind in terms of both health and economics.
DeepMind's protein-folding tool AlphaFold, open-sourced by its Google/Alphabet parent, is beginning to revolutionise the drug target and lead identification process. It conclusively validates the power of in-silico tools in drug development.
Managing in-vitro and in-vivo data digitally will help costs to be managed better and will reduce legal challenges to the veracity of processes and data by offering water-tight traceability. These are major and avoidable costs in the drug development process.
The rise of precision medicine and pharmacogenomics – following the money
The lines are beginning to blur between biotech, devices, Genetics, Genomics and Bioinformatics (GGB) on one side and healthtech, EHR, LIMS and RIS on the other, coupled with digital, SMAC, IoT, AI, VR, CRM, precision medicine, evidence-based medicine and population health.
Becton Dickinson (BD), the injectables major, has acquired 21 companies across devices, supply chain, logistics, EMR, IoT, AI, genomics and more: CareFusion (EHR) for $12B; GenCell for genomics; Bard (devices) for $24B. What is an injectables company doing in EHR and genomics?
Verily, an Alphabet/Google company, has deals to apply novel technology with Medicxi's $300M genomics fund, GlaxoSmithKline (GSK), Sanofi, Novartis and J&J. Verily is investing in areas ranging from genomics, bioinformatics, EMR, IoT, AI and disease management to robotic surgery. Calico, another Alphabet biotech company, wants to beat ageing and apoptosis. Google has made its AI tool for precision medicine open source. What is going on? An Internet company in EHR, drug R&D, medical devices and genomics?
The most active pharma investors in genomics, biotech and digital health since 2014 are Novartis, Johnson & Johnson, GlaxoSmithKline, Pfizer and Celgene. J&J is investing heavily in genomics R&D through its PRD, JLABS and Janssen vehicles, and participated alongside Merck, Bristol-Myers Squibb and Celgene in the $914M Series B of GRAIL. In a distant second place, Verily and Sanofi took a $500 million minority stake in Onduo in September 2016. Oncology investments include the $474 million Series F of Moderna Therapeutics and the $320 million Series A of Immunocore. Pharma majors extending into genomics is logical for drug R&D, but this is getting into clinical decision-making and heading towards precision medicine. The same was validated again by the pharma and genetics/genomics company announcements in H1 of 2018: Roche acquired the Flatiron EHR and the MySugr app for complete genomics-based diabetes disease management, while Novartis, Sanofi and BMS are also shifting to precision medicine, pharmacogenomics, data and digital.
Takeda is a 230-year-old pharma giant from Japan whose focus is oncology, gastroenterology and the central nervous system, as well as vaccines. That focus is now shifting to genomics, biotech and healthtech devices.
Color Genomics has raised $80 million more, bringing its funding total to $150 million; the company plans to move beyond genetics into preventative health more broadly. 23andMe got a $300 million boost from GSK to develop new drugs for precision medicine.
Data management solutions
The data management problem in big pharma resembles the public health data diversity seen in many countries. FAIR data principles (Findable, Accessible, Interoperable, Reusable) must be applied to solve the data problem in pharma and healthcare.
For example, public health in India struggles with a multiplicity of information systems in use at the central as well as the state level, each unable to exchange data and information with the others. To overcome similar challenges across ministries, the Ministry of Communications and Information Technology initiated semantic standardisation across various domains under the Metadata and Data Standards (MDDS) project. The intent was to promote the growth of e-governance within the country by establishing interoperability across e-governance applications for seamless sharing of data and services. The MDDS for the health domain was created by adopting global standards in such a way that existing applications could be easily upgraded to the MDDS standards. The exercise yielded approximately 1,000 data elements, which were expected to serve as the common minimum data elements (CDEs) for the development of IT applications across the various sub-domains of healthcare. The need for the CDEs arose because most primary and public health IT applications in India are being developed without any standards, by different agencies and vendors in the public and private sectors. Each application is developed for stand-alone use without much attention to semantic interoperability.
Later, when the need for interoperability emerges, it becomes difficult to connect the primary and public health systems and make them talk to each other, because they were never designed for that purpose. Even if technical and organisational interoperability is achieved, semantic interoperability may remain a challenge. For example, all primary and public health applications must share the same facility master: when application A sends antenatal care (ANC) data for facility 123, the receiving application B should understand ANC and uniquely identify facility 123. Similarly, if a hospital application sends an insurance reimbursement bill to an insurance company or the government, the recipient application should be able to interpret the bill information with the same meaning.
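The facility-master requirement can be sketched as a translation step: each application keeps its local codes but rewrites them to a shared master ID before exchange. All IDs, names and message fields below are hypothetical, chosen only to mirror the ANC example:

```python
# Sketch of semantic interoperability via a shared facility master.
# Application A keeps its own local facility codes; outgoing messages
# are translated to the common master ID that application B also knows.
FACILITY_MASTER = {
    "MASTER-123": "District Hospital, Pune",
}
APP_A_TO_MASTER = {"A-77": "MASTER-123"}  # application A's local code map

def export_anc_record(local_code, anc_visits):
    """Application A rewrites its local facility code to the master ID."""
    return {
        "facility_id": APP_A_TO_MASTER[local_code],
        "indicator": "ANC_visits",
        "value": anc_visits,
    }

msg = export_anc_record("A-77", 42)
# Application B can now resolve the facility unambiguously:
print(FACILITY_MASTER[msg["facility_id"]])
```

Without the shared master, application B would receive the opaque local code "A-77" and could not tell which facility the ANC data describes.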
The Ministry of Health and Family Welfare has initiated development of the national health facility registry, intended to standardise the facility masters used across public health information systems. Standardisation of facility masters is required for two purposes. First, when exchanging data, the sending and receiving applications should identify a health facility in the same way: when application A sends maternal health data for facility 123, the receiving application B should understand the maternal health data and uniquely identify facility 123. Second, in public health, the performance of each facility is assessed using aggregate indicators, and the facility master serves as the secondary data source on which primary programme-specific data is aggregated. For example, the number of doctors from system A and the total outpatient attendance from system B can be combined to compute per-doctor patient load across health facilities only when both applications use common facility masters.
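The per-doctor patient-load indicator is only computable when both systems key their data on the same facility master IDs, as a small sketch shows (facility IDs and counts are hypothetical):

```python
# Sketch: computing an aggregate indicator across two systems that
# share a common facility master. Values are illustrative only.
doctors_by_facility = {"F-123": 4, "F-456": 2}          # from system A
outpatients_by_facility = {"F-123": 480, "F-456": 150}  # from system B

def patient_load_per_doctor(doctors, outpatients):
    """Join the two datasets on facility ID and divide."""
    return {
        fid: outpatients[fid] / doctors[fid]
        for fid in doctors.keys() & outpatients.keys()
    }

print(patient_load_per_doctor(doctors_by_facility, outpatients_by_facility))
```

If system B used its own private facility codes instead, the join on facility ID would be empty and the indicator could not be produced at all.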
Outcomes data – The compelling need
The wide adoption of universal healthcare across the world is making it imperative for pharma companies to pay greater attention to post-marketing and outcome data. Patients will not remain the primary payers for much longer, so payers such as insurers and healthcare providers will have a greater say in determining prescriptions. A precursor to this trend, local insurers such as AOK in Germany floating drug tenders, has been around for some years. Prescription preferences and behaviours are increasingly driven by outcome data, since healthcare payments themselves are becoming more and more outcome-driven and less and less driven by a feet-on-the-ground salesforce.
Given the realities discussed above, the speed and efficiency of drug discovery and development and the monitoring of outcomes are merging into a tightly integrated loop that pharmaceutical companies can no longer ignore.
The opportunities of the future in precision or individualised medicine, orphan drugs, newer lifestyle diseases, and epidemiological challenges as they emerge can only be seized by embracing informatics in the areas where it converges with life science and healthcare, not by viewing it merely as a supply chain management (SCM) tool.
Relevance to the pharmacy of the world
The potential of bioinformatics is particularly relevant for the Indian pharma industry, which rightly prides itself on being the pharmacy to the world, but which must also look for the next opportunities for value addition and competitiveness in this space to sustain its economic relevance. The ability to offshore or collaborate, and to explore newer vistas in the drug development domain, will largely depend on our ability to adopt bioinformatics in the industry and build capacity in the discipline judiciously.