The hunt for an AIDS vaccine has lasted 30 years, with many failures.1 Because human immunodeficiency virus 1 (HIV-1) continuously adapts and mutates, it presents broad genetic diversity and constantly changing antigen targets. Additionally, the viral envelope glycoprotein, which is primarily responsible for promoting viral entry into the host cell, has conformational flexibility along with numerous structural features that together help protect the virus from the humoral immune system.
Vaccine development for HIV-1 has focused on generating broadly neutralizing antibodies, guided by structural elucidation of the viral envelope through sequencing studies. Current strategies use bioinformatics approaches that involve isolating individual B cells from broadly reactive sera (25% of HIV-positive individuals make relatively broadly reactive neutralizing antibodies) and cloning potent and broadly neutralizing antibodies from the B cells.2
This work would be impossible without GenBank, an all-inclusive, open-access database maintained by the National Center for Biotechnology Information (NCBI). GenBank includes nucleotide sequences for more than 280,000 species and the supporting bibliographies, with submissions from individual laboratories as well as large-scale sequencing projects. Additionally, sequences from issued patents are submitted by the US Patent and Trademark Office.3 Recognizing the vast potential of this knowledge-sharing resource, researchers all over the world have actively contributed to building it up. The information goes to GenBank directly or is submitted through its European counterpart, the European Bioinformatics Institute (EBI), or its Japanese counterpart, the DNA Data Bank of Japan (DDBJ).4 All the leading journals require researchers to submit their sequences to GenBank and cite the corresponding accession number in the published article.
New sequences can be submitted directly to EBI, DDBJ, or GenBank, and the 3 databases are synchronized daily, so all the information is accessible from any of them. The data are available virtually in real time and free of cost. Other commonly used sequence databases include that of the European Molecular Biology Laboratory (EMBL; EBI is run by EMBL), SwissProt, PROSITE, and the Human Genome Database (GDB).5 Taken together, these databases are essentially bioinformatics tools that integrate biological information with computational software. The information gained can be applied to understand disease etiology (in terms of mutations in genes and proteins) and individual variables, and ultimately aid drug development.
According to the National Institutes of Health Biomedical Information Science and Technology Initiative, bioinformatics is defined as “research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral, or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.”6
Development of GenBank
Initially called the Los Alamos Sequence Database, this resource was conceptualized in 1979 by Walter Goad, a nuclear physicist and a pioneer in bioinformatics at Los Alamos National Laboratory (LANL).7 GenBank followed in 1982 with funding from the National Institutes of Health, the National Science Foundation, and the Departments of Energy and Defense. LANL collaborated with various bioinformatics and technology companies for sequence data management and to promote open-access communications. By 1992, management of GenBank had transitioned to the NCBI.8
Submissions to the database include original mRNA sequences, prokaryotic and eukaryotic genes, rRNA, viral sequences, transposons, microsatellite sequences, pseudogenes, cloning vectors, noncoding RNAs, and microbial genome sequences. Following a submission (via the Web-based BankIt or the Sequin program), the GenBank staff reviews the documents for originality, assigns an accession number to the sequence, performs quality assurance checks (vector contamination, adequate translation of coding regions, correct taxonomy, correct bibliographic citation), and releases the record to the public database.3,8
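Once released, a record can be retrieved by that accession number. Below is a minimal sketch using Biopython's Entrez utilities (an assumed tool choice; GenBank itself mandates no particular client, and the accession and e-mail address here are illustrative placeholders):

```python
# A minimal sketch of resolving an accession number to a full GenBank record.
# Assumes Biopython is installed and NCBI's E-utilities are reachable.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI asks submitters of queries for a contact address

# Fetch one record in GenBank flat-file format by its accession number
handle = Entrez.efetch(db="nucleotide", id="NM_000546",  # TP53 mRNA, as an example
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print(f"{len(record.seq)} bases; {len(record.features)} annotated features")
```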
How Are Researchers Utilizing This Database?
BLAST (Basic Local Alignment Search Tool), software developed at NCBI for searching GenBank, allows researchers to query sequence similarities by directly entering a sequence of interest, without needing the gene name or its synonyms.4 An orphan (unknown) or de novo nucleotide sequence, which may have been cloned in a laboratory, can gain perspective following a BLAST search and a match with another, better-characterized sequence in the database. Further, by adding restrictions to the BLAST search, only specific regions of the genome (such as gene-coding regions) can be examined instead of all 3 billion bases.4 BLAST can also translate a DNA sequence to a protein, which can then be used to search a protein database. BLAST works only with large stretches of nucleotide sequence, not with shorter reads, according to Santosh Mishra, PhD, director of Bioinformatics and codirector of the Collaborative Genomics Center at the Vaccine and Gene Therapy Institute (VGTI) of Florida. Mishra, who worked as a postdoctoral research associate with Goad at LANL, was actively involved in developing GenBank.
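A minimal sketch of such a query, assuming Biopython and network access to NCBI's BLAST service; the query sequence and the organism restriction are illustrative placeholders, not taken from the article:

```python
# A minimal sketch of a remote BLAST query through Biopython.
from Bio.Blast import NCBIWWW, NCBIXML

query = "AGCTTAGCTAGGCTTACGGATCCGATCGATCGTAGCTAGCTAGGCTAGCTA"  # arbitrary example

# blastn against the nucleotide collection; an entrez_query restriction can
# narrow the search (here, hypothetically, to human sequences only)
result_handle = NCBIWWW.qblast("blastn", "nt", query,
                               entrez_query="Homo sapiens[Organism]")
blast_record = NCBIXML.read(result_handle)

# Report the top hits and their E-values
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    print(alignment.title[:60], f"E-value: {hsp.expect:.2e}")
```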
His work contributed to the generation of the "flat file" format, and he also worked on improving the query-response time of the search engine. Additionally, he initiated the "feature table" in GenBank: the structured annotation that allows GenBank, EMBL, and DDBJ to exchange data on a daily basis. According to Mishra, the STAR aligner, developed at Cold Spring Harbor Laboratory, works better with reference sequences, while Trinity, developed at the Broad Institute in Cambridge, Massachusetts, is useful for de novo sequences. (The Broad Institute made news recently with its work on identifying gene mutations that prevent diabetes in adults who have known risk factors, such as obesity.)
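To make the feature table concrete, here is a minimal sketch of reading one with Biopython (an assumed tool choice; any GenBank flat-file parser would do), applied to a hypothetical local file:

```python
# A minimal sketch of walking the "feature table" of a GenBank flat file.
# Assumes Biopython is installed; example.gb is a hypothetical local record.
from Bio import SeqIO

record = SeqIO.read("example.gb", "genbank")
for feature in record.features:
    # Each feature carries a type (gene, CDS, rRNA, ...), coordinates, and
    # qualifiers such as /gene or /product
    name = feature.qualifiers.get("gene", feature.qualifiers.get("product", ["?"]))[0]
    print(feature.type, feature.location, name)
```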
Advantages and Disadvantages of the GenBank Platform
The single biggest advantage of GenBank is the open-access format, which allows for a centralized repository in a uniform format. The tremendous amount of data generated by laboratories (such as from microarrays and microRNA arrays) cannot all be published in a research article. However, the data, once tagged and uploaded to GenBank, can be linked from journals' websites, and the links can be provided in the print versions of the articles as well.4
On the flip side, the open-access format that is GenBank's biggest advantage is also its biggest disadvantage. There is always the possibility of scientists depositing faulty genetic sequences, which will not be caught unless they are peer reviewed. Despite the several quality control mechanisms incorporated into the system, often only the reuse of data by other scientists uncovers glitches in existing records. GenBank does encourage its users to submit feedback and update records, but unfortunately this is not a very proactive process.4
Bioinformatics and Pharmacogenomics in Drug Discovery/Development
Accelerating the drug development process saves costs for the pharmaceutical industry, especially given the way the industry functions today. The company that discovers or invents a new chemical entity, one that could mature into a new drug candidate, can extract the maximum profit from the drug before the patent expires and competitors catch on. Essentially, companies jump at every opportunity to accelerate any aspect of the discovery/development process. Resources like GenBank and EBI are data mines that can speed up the entire process in the following ways:
Target identification. Drug candidates can be identified (following a high-throughput screen of chemical libraries) and developed only after a "druggable target" is discovered for a disease condition. Typically, about 1 in 1000 synthesized compounds will progress to the clinic, and only 1 in 10 drugs undergoing clinical trials reaches the market.9 Optimizing and validating a target is essential given the prohibitively high cost of conducting trials, and the potential targets for drug discovery are increasing exponentially.10 Because huge data sets such as the human genome sequence have been mined and stored, the nucleotide sequences of target proteins are readily available, as is the potential to identify new targets. This can exponentially increase the content of the drug pipelines of pharmaceutical companies.10
Target validation. Establishing a robust association between a likely target and the disease, to confirm that target modulation translates into a beneficial therapeutic outcome, would not only validate the drug development process, but also help absorb the risks associated with clinical trial failure of the molecule being developed.10
Cost reduction. The drug development process is not just lengthy (product development can take 10 to 15 years9) but prohibitively expensive as well. Averaging $140 million in the 1970s, the cost of developing a drug was estimated at a whopping $1.2 billion in the early 2000s,11 and a recent Forbes analysis estimated the cost at $5 billion.12 Worth noting is that the final cost of any drug, encompassing everything from discovery to approval, also absorbs the cost of all the clinical trial failures along the way.10 Clearly, bioinformatics tools improve the efficiency of target discovery and validation processes, reduce the time spent on the discovery phase, and make the entire process more cost-effective.
Mishra believes GenBank is a good starting point in the drug discovery process. When a new sequence (of known or unknown function) is identified or isolated in the laboratory, a GenBank search will help identify homologues (in humans or other organisms) with a 70% to 80% match. Functional studies would then ensue, along with cell and tissue distribution studies.
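As a rough illustration of that starting point, the sketch below filters saved BLAST results for hits in the identity range Mishra describes; the file name is a placeholder, and Biopython is again an assumed tool choice:

```python
# A minimal sketch of screening BLAST hits for candidate homologues at or
# above roughly 70% identity. Assumes Biopython and a saved XML result file
# (blast_results.xml, hypothetical) from a prior qblast run.
from Bio.Blast import NCBIXML

with open("blast_results.xml") as f:
    blast_record = NCBIXML.read(f)

for alignment in blast_record.alignments:
    for hsp in alignment.hsps:
        identity = hsp.identities / hsp.align_length  # fraction of matching bases
        if identity >= 0.70:
            print(f"{alignment.title[:60]} {identity:.0%} identity")
```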
Industry Partnerships
With the value of personalized medication gaining acceptance, the study of pharmacogenomics (the genetic variants that determine a person's drug response; one size does not fit all) is extremely helpful for tailoring the optimal drug, dose, and treatment options to a patient, improving efficacy as well as avoiding adverse events (AEs).10 According to the Agency for Healthcare Research and Quality, within the US Department of Health and Human Services, AEs annually result in more than 770,000 injuries and deaths and may cost up to $5.6 million per hospital.13
To this end, EMBL-EBI has been actively involved in industry partnerships since 1996, including with Astellas, Merck Serono, AstraZeneca, Novartis, GlaxoSmithKline, Bristol-Myers Squibb, and several others.14 Given the high-throughput data that research and development (R&D) activities generate, open-source software and informatics resources such as GenBank and EBI could greatly improve efficiency and reduce the cost of drug discovery and development.
Translational Bioinformatics and Precision Medicine
Healthcare today is primarily symptom driven, and intervention usually occurs late in the pathological process, when treatment may not be as effective. Identifying predisease states that could provide a window into the forthcoming risk of developing a disease, identifying reliable markers, and developing useful therapies would be the key to managing disease treatment,15 not just to improve efficiency but also to reduce healthcare costs, which are estimated to rise steadily and account for 19.9% of the gross domestic product (GDP) by 2022.16
With precision medicine or personalized medicine, molecular profiles generated from a patient's genomic information (coupled with other "-omics" data such as epigenomics, proteomics, and metabolomics) could help accurately drive diagnostic, prognostic, and therapeutic plans tailored to the patient's physiological status. Predictive models can also be developed for different biological contexts, such as diseases, populations, and tissues.15 However, the deluge of data generated by bioinformatics tools needs a framework to regulate, compile, and interpret the information.
Most importantly, the key stakeholders (government, research industry, biological community, pharmaceutical industry, insurance companies, patient groups, and regulatory bodies17) that would drive the widespread acceptance and implementation of precision medicine need to be brought up to speed with the enormous progress made in the field and the promise it brings. There would also be a revolutionary change in the approach to conducting clinical trials: phase 3 studies conducted in the target population could focus on a more select patient group, which could improve both clinical and economic efficacy (Figure 1).17
The developing field of translational bioinformatics creates a platform to bring all the data together, which can then be used to generate a treatment plan personalized to a patient (Figure 2). It has been defined as “the development of storage, analytic, and interpretive methods to optimize the transformation of increasingly voluminous biomedical data into proactive, predictive, preventative, and participatory health.”15 The primary goal of translational bioinformatics is to connect the dots and develop disease networks that can be used as predictive models.
In other words, harmonization of the data from different sources (genome, proteome, transcriptome, metabolome, and patient’s pathological data) could help in making better-informed treatment decisions.
Within medical R&D, a commonly held belief is that cures for diseases could be found residing within existing data, if only the data could be made to give up their secrets.18 Experts across the scientific, medical, and healthcare fields have put their minds into developing the best technologies; unfortunately, these technologies are compartmentalized and work in parallel. The great need, which has been recognized and implemented in limited areas, is to create platforms where the data can be merged to produce meaningful outcomes.
Data Integration Platforms
Implementing these huge changes would require that physicians and providers become more adept at interpreting molecular data, which essentially entails improved education models that include relevant courses during graduate training. Also, development of software that can interpret the data would provide a tremendous advantage to researchers, clinicians, scientists, pathologists, and perhaps patients as well.
To this end, companies such as N-of-One are developing analyzers coupled with software that can provide molecular interpretation of next-generation sequencing data. The company recently announced the launch of Variant Interpreter™, a cloud-based application, on Illumina's BaseSpace Apps (an application store for genomic analysis).19 The app allows oncologists, pathologists, and researchers to access relevant biological and clinical information related to the tumor profile generated following sequencing. Additionally, the user can request a molecular interpretation of one or more variants in a tumor and receive a customized interpretive road map linking the variant data to the scientific knowledge on it.
With a plan for future expansion, the software currently covers 30 cancer-associated genes.19 An application developed by Remedy Informatics, TIMe, takes the process further. TIMe merges data, registries, applications, analyses, and any other relevant content. TIMe promises to enable faster, more informed decisions in clinical practice, research, and business operations. It is also expected to improve treatment effectiveness, quality of care, and patient outcomes.20
Applications of Translational Bioinformatics
Once the genomic and/or proteomic data have been generated, what next? How are providers employing these data to their advantage and to guide treatment?
Several clinical studies are being successfully conducted on the foundation of precision as well as evidence-based medicine. A study published in the New England Journal of Medicine highlighted the importance of using panitumumab (Vectibix; Amgen Inc) in combination with traditional chemotherapy only in those patients with metastatic colorectal cancer (mCRC) who do not have RAS mutations.
The study found that the subset of patients with mCRC who expressed wild-type RAS demonstrated improvements in progression-free as well as overall survival upon the inclusion of panitumumab in their treatment regimen.21 The protein KRAS functions downstream of the epidermal growth factor receptor (EGFR). Mutations in the KRAS gene result in receptor-independent functioning of the protein, so using panitumumab, an EGFR antagonist, would be completely fruitless in this context. Thus, prior knowledge of the patient's genomic status helped in selecting the right cohort for successfully using this drug.
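The underlying selection logic is simple enough to express as a toy rule; the helper below is a purely hypothetical sketch for illustration, not an actual clinical decision-support tool:

```python
# A minimal sketch of the treatment-selection logic the study implies:
# RAS mutation status gates the use of an EGFR antagonist.
def panitumumab_candidate(ras_status: str) -> bool:
    """Wild-type RAS tumors may benefit from EGFR blockade; RAS-mutant
    tumors signal downstream of EGFR, so blockade is expected to be futile."""
    return ras_status == "wild-type"

for status in ("wild-type", "mutant"):
    print(f"RAS {status}: consider panitumumab? {panitumumab_candidate(status)}")
```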
Genome-wide association studies (GWASs) also guide target identification and the drug discovery process. Although the CD40 locus on the human genome has been associated with an increased risk of rheumatoid arthritis,22 there are no approved inhibitors of CD40 signaling in use in the clinic. A recently concluded global collaborative project conducted a high-throughput drug screen to identify potential modulators of CD40 signaling based on genetic findings. Following deep sequencing, expression quantitative trait loci analysis, and in vitro retroviral studies, the authors developed a high-throughput reporter assay to screen 1982 compounds and US Food and Drug Administration-approved drugs. The result was the identification of 2 novel chemical inhibitors not previously implicated in inflammation or CD40 signaling.23 GWASs have also been responsible for identifying a number of gene loci associated with several autoimmune diseases (Table).
Bioinformatics studies have also yielded microRNAs (miRNAs), which are small (~22 nucleotides), noncoding RNA molecules that can repress the translation of messenger RNA (mRNA) or promote its degradation, thereby silencing gene expression.24 Initially dismissed as "junk" sequences in the DNA because they do not code for protein, miRNAs (24,521 of which are listed in miRBase, a database maintained by the University of Manchester25) have now found their place in clinical trials as biomarkers (cancer,26 multiple sclerosis,27 psoriasis28) and are also being developed as "drugs" by companies like Mirna Therapeutics Inc.29
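The canonical targeting mechanism, pairing of the miRNA "seed" (nucleotides 2-8) with complementary sites in a target mRNA's 3' untranslated region, can be sketched in a few lines; the sequences below are illustrative (a let-7 family miRNA against a made-up UTR fragment), not real clinical data:

```python
# A minimal sketch of miRNA seed matching: find positions in a 3' UTR that
# are reverse-complementary to the miRNA's seed region (positions 2-8).
COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based UTR positions matching the miRNA seed (nt 2-8)."""
    seed = mirna[1:8]                          # seed region, positions 2-8
    target = seed.translate(COMPLEMENT)[::-1]  # reverse complement (RNA)
    return [i for i in range(len(utr) - 6) if utr[i:i + 7] == target]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"               # a let-7 family sequence
utr = "CCAUACAACCUACUACCUCACCAUACAACC"         # made-up 3' UTR fragment
print(seed_match_sites(mirna, utr))            # -> [12]
```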
Genetic Testing to Determine Disease Susceptibility
Another aspect of bioinformatics is genetic testing, which, along with risk assessment, is rapidly being streamlined into mainstream oncology practice, especially with the recommendations provided by the US Preventive Services Task Force.30 Genetic counseling has become the standard of care for patients with a personal and family history of breast, ovarian, or colon cancer, while genetic testing is appropriate for some patients with pancreatic, renal, skin, or thyroid cancers, as well as for those with some rare cancer syndromes.31
Then there is J. Craig Venter, PhD, the biologist and entrepreneur who competed with the Human Genome Project to sequence the human genome and recently announced the launch of a new company, Human Longevity. The company plans to sequence 40,000 human genomes per year to gain insights into the molecular causes of aging and age-associated diseases such as cancer and heart disease.32
The Healthcare Equation
Insurance companies are rapidly adapting to this changing scene of “big data” in their own right. Back in 2011, Aetna announced a partnership with the Center for Biomedical Informatics at Harvard Medical School with the aim of improving the quality and affordability of healthcare (healthcare informatics).
The researchers at Harvard aimed to:
• Evaluate the outcomes of various treatments for specific conditions based on quality and cost
• Determine factors that predict adherence for chronic diseases
• Study how claims data and clinical data, available through electronic health records, can best be used to predict outcomes
• Improve the ability to predict adverse events through a proactive study of claims and clinical data.33
The possibilities are enormous, with applications in all disease fields. Translational bioinformatics integrates the various data sources and paves a path for precision medicine that would be immensely valuable to patients, pharmaceutical companies, scientists, and physicians alike.