MEDLINE queries made easy with the MEDOC tool


MEDOC (MEdline DOwnloading Contrivance) is a Python program designed to download MEDLINE data from the NIH FTP server and to load all the extracted information into a local MySQL database, making MEDLINE searches easy.

MEDLINE, the biomedical data keeper

Since MEDLINE was released almost 50 years ago, the number of indexed publications has risen from 1 million in 1970 to 27 million this year. The aim of this repository is to make the scientific literature accessible to everyone.

Evolution of the number of documents indexed on PubMed

The NIH (National Institutes of Health, USA) also provides a powerful search engine that lets users query this database through the well-known web interface PubMed. This search engine supports complex queries using logical operators (OR, AND) and indexes different text blocks (such as title and abstract) for refined searches. Moreover, several API services have been released to allow routine searches, programmatic parsing of the results, and data extraction.

However, to query these APIs (eUtilities), the user needs to write a different script for every search (which becomes time-consuming when many datasets requiring different parsing are needed) and to query the API many times to retrieve individual data from a single article.
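As an illustration, assembling a single eUtilities esearch request in Python might look like the sketch below. The endpoint and parameter names come from the NCBI eUtilities service; the search term is purely illustrative, and no request is actually sent here:

```python
from urllib.parse import urlencode

# NCBI eUtilities esearch endpoint for PubMed
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=10):
    """Build (but do not send) an esearch URL for a PubMed query."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return ESEARCH + "?" + urlencode(params)

url = build_esearch_url("antioxidants[Title/Abstract]")
print(url)
```

Every new search means editing the term, sending the request, and writing bespoke parsing for the returned JSON or XML, which is exactly the overhead described above.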

To make data mining easier, the NIH now allows MEDLINE data to be downloaded from an FTP server containing XML-tagged files.
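For a sense of what these files contain, here is a hedged sketch of parsing a hand-written fragment that mimics the MEDLINE XML structure. The element names (PubmedArticle, MedlineCitation, PMID, ArticleTitle) follow the MEDLINE DTD, but the record itself is invented:

```python
import xml.etree.ElementTree as ET

# A tiny, hand-written fragment mimicking the MEDLINE XML structure;
# real files from the FTP contain thousands of <PubmedArticle> records.
SAMPLE = """
<PubmedArticleSet>
  <PubmedArticle>
    <MedlineCitation>
      <PMID>12345</PMID>
      <Article>
        <ArticleTitle>Example title</ArticleTitle>
      </Article>
    </MedlineCitation>
  </PubmedArticle>
</PubmedArticleSet>
"""

root = ET.fromstring(SAMPLE)
records = [
    (a.findtext("MedlineCitation/PMID"),
     a.findtext("MedlineCitation/Article/ArticleTitle"))
    for a in root.iter("PubmedArticle")
]
print(records)  # [('12345', 'Example title')]
```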

Relational database to the rescue

Even though NoSQL databases have gained popularity in recent years, a local, relational version of the MEDLINE database remains useful for complex and frequent queries. The idea behind MEDOC was thus to build a relational schema and load the XML files into this MySQL version.

Overview of the steps executed by MEDOC to build the local database

The figure above presents every step executed by the Python 3 wrapper to construct this local database. Thirteen tables were created to store all the data contained in the XML files extracted from the NIH FTP (authors, chemical products, MeSH terms, corrections, citation subsets, publication types, languages, grants, data banks, personal name subjects, other IDs and investigators).
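As a minimal sketch of what such a relational layout enables, the snippet below creates a two-table subset in SQLite and joins them. The table and column names are illustrative assumptions, not MEDOC's actual MySQL schema:

```python
import sqlite3

# Illustrative two-table subset; MEDOC's actual MySQL schema has 13 tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE medline_citation (
    pmid INTEGER PRIMARY KEY,
    article_title TEXT,
    date_created TEXT
);
CREATE TABLE medline_author (
    pmid INTEGER REFERENCES medline_citation(pmid),
    last_name TEXT,
    fore_name TEXT
);
""")
con.execute("INSERT INTO medline_citation VALUES (12345, 'Example title', '2017-06-01')")
con.execute("INSERT INTO medline_author VALUES (12345, 'Doe', 'Jane')")

# A join reassembles information that the XML nests inside one record
row = con.execute("""
    SELECT c.article_title, a.last_name
    FROM medline_citation c JOIN medline_author a ON c.pmid = a.pmid
""").fetchone()
print(row)  # ('Example title', 'Doe')
```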

Example query

It took MEDOC 113 hours (4 days and 17 hours) to load the 1,174 files contained on the FTP server into the MySQL database (61.3 GB of disk space used).

Querying this version is almost instantaneous, even when joining several tables. In the example provided below, the 10 most recent publications about antioxidants indexed on PubMed were retrieved with a SQL query.

The SQL query used in this example
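The original query is shown only as an image; as a hedged reconstruction of the idea, a LIKE filter combined with ORDER BY and LIMIT against a simplified citation table could look like this (schema and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE medline_citation (pmid INTEGER, article_title TEXT, date_created TEXT)")
con.executemany("INSERT INTO medline_citation VALUES (?, ?, ?)", [
    (1, "Antioxidants in aging", "2017-05-01"),
    (2, "Unrelated study", "2017-05-02"),
    (3, "Dietary antioxidants and cancer", "2017-05-03"),
])

# Ten most recent citations mentioning antioxidants in the title
hits = con.execute("""
    SELECT pmid, article_title FROM medline_citation
    WHERE lower(article_title) LIKE '%antioxidant%'
    ORDER BY date_created DESC
    LIMIT 10
""").fetchall()
print(hits)  # [(3, 'Dietary antioxidants and cancer'), (1, 'Antioxidants in aging')]
```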

The following result was returned in 0.022 seconds:

Result obtained with the example query

In summary, this indexed relational database allows the user to build complex and rapid queries. All fields can thus be searched for the desired information, a task that is difficult to accomplish through the PubMed graphical interface. MEDOC is free and publicly available on GitHub.

Evaluating biomedical data production with text mining


Estimating biomedical data

Evaluating the impact of a scientific study is a difficult and controversial task. The value of a biomedical study is widely measured by traditional bibliographic metrics such as the number of citations of the paper and the impact factor of the journal.

However, a more relevant success criterion for a research study likely lies in the production of biological data itself, both in terms of quality and of how these datasets can be reused to validate (or reject!) hypotheses and support new research projects. Although biological data can be deposited in dedicated repositories such as the GEO database, ImmPort or ENA, most data are primarily disseminated in articles, within the text, figures and tables. This raises the question: how can we find and measure the production of biomedical data diffused in scientific publications?

To address this issue, Gabriel Rosenfeld and Dawei Lin developed a novel text-mining strategy that identifies articles producing biological data. They published their method, “Estimating the scale of biomedical data generation using text mining”, this month on bioRxiv.

Text mining analysis of biomedical research articles

Using the Global Vectors for Word Representation (GloVe) algorithm, the authors identified term-usage signatures for five types of biomedical data: flow cytometry, immunoassays, genomic microarrays, microscopy, and high-throughput sequencing.
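The paper's pipeline is far richer, but the core intuition, that terms characteristic of the same data type end up with similar vectors, can be sketched with cosine similarity over toy vectors. The three-dimensional numbers below are invented; real GloVe embeddings have tens to hundreds of dimensions and are learned from corpus co-occurrence statistics:

```python
import math

# Toy "word vectors"; these numbers are made up for illustration.
vectors = {
    "cytometry":  [0.9, 0.1, 0.2],
    "facs":       [0.8, 0.2, 0.1],
    "microscopy": [0.1, 0.9, 0.3],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Terms belonging to the same data-type signature score higher
print(cosine(vectors["cytometry"], vectors["facs"]) >
      cosine(vectors["cytometry"], vectors["microscopy"]))  # True
```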

They then analyzed the free text of 129,918 PLOS articles published between 2013 and 2016, and found that nearly half of them (59,543) generated one or more of the five data types tested, producing 81,407 datasets.

Estimation of PLOS articles generating each biomedical data type over time (from “Estimating the scale of biomedical data generation using text mining”, bioRxiv)

This text-mining method was tested on manually annotated articles and provided a valuable balance of precision and recall. The obvious next (and exciting) step is to apply this approach to evaluate the amount and types of data generated within the entire PubMed repository of articles.
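For readers unfamiliar with the two metrics, here is how precision and recall are computed against a manually annotated gold standard; the counts below are hypothetical and not taken from the paper:

```python
# Hypothetical evaluation counts, for illustration only.
true_positives = 90   # articles correctly flagged as data-producing
false_positives = 10  # flagged, but not actually data-producing
false_negatives = 15  # data-producing articles that were missed

# Precision: of the flagged articles, how many were right?
precision = true_positives / (true_positives + false_positives)
# Recall: of the truly data-producing articles, how many were found?
recall = true_positives / (true_positives + false_negatives)
print(precision, round(recall, 3))  # 0.9 0.857
```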


A step beyond data dissemination

Evaluating the exponentially growing amount and diversity of datasets is currently a key aspect of determining the quality of a biomedical study. However, in today’s era of bioinformatics, fully exploiting the data requires going a step beyond the publication and dissemination of datasets and tools, towards the critical goal of improving data reproducibility and transparency (data provenance, collection, transformation, computational analysis methods, etc.).

Open-access and community-driven projects such as the online bioinformatics tools platform OMICtools provide access not only to a large number of repositories for locating valuable datasets, but also to the best software tools for re-analyzing these datasets and exploiting their full potential.

In a virtuous circle of discovery, previously generated datasets could be repurposed for new data production, interactive visualization, machine learning and artificial intelligence enhancement, allowing us to answer new biomedical questions.