Evaluating biomedical data production with text mining


Estimating biomedical data generation

Evaluating the impact of a scientific study is a difficult and controversial task. The value of a biomedical study is widely measured by traditional bibliometric indicators such as the number of citations the paper receives and the impact factor of the journal that published it.

However, a more relevant criterion of success for a research study arguably lies in the production of biological data itself, both in terms of quality and in how the resulting datasets can be reused to validate (or reject!) hypotheses and support new research projects. Although biological data can be deposited in dedicated repositories such as the GEO database, ImmPort or ENA, most data are primarily disseminated in articles, within the text, figures and tables. This raises the question: how can we find and measure the production of biomedical data scattered across scientific publications?

To address this issue, Gabriel Rosenfeld and Dawei Lin developed a novel text-mining strategy that identifies articles producing biological data. They published their method, “Estimating the scale of biomedical data generation using text mining”, this month on bioRxiv.

Text mining analysis of biomedical research articles

Using the Global Vectors for Word Representation (GloVe) algorithm, the authors identified term-usage signatures for five types of biomedical data: flow cytometry, immunoassays, genomic microarrays, microscopy, and high-throughput sequencing.
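To give a feel for the general idea (this is a minimal sketch, not the authors' actual pipeline: it assumes a local copy of pre-trained GloVe vectors in the standard text format, and the seed terms and data types are illustrative), one could score an article's text against a term-usage signature like this:

    import numpy as np

    def load_glove(path):
        """Parse GloVe's plain-text format: one token per line followed by its vector."""
        vectors = {}
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.asarray(parts[1:], dtype=float)
        return vectors

    def signature(seed_terms, vectors):
        """Average the vectors of the seed terms describing one data type."""
        return np.mean([vectors[t] for t in seed_terms if t in vectors], axis=0)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    vectors = load_glove("glove.6B.300d.txt")  # pre-trained vectors, assumed to be present locally

    signatures = {
        "flow cytometry": signature(["cytometry", "facs", "gating"], vectors),
        "sequencing": signature(["sequencing", "reads", "libraries"], vectors),
    }

    def score_article(tokens):
        """Compare the article's average word vector with each data-type signature."""
        doc = np.mean([vectors[t] for t in tokens if t in vectors], axis=0)
        return {label: cosine(doc, sig) for label, sig in signatures.items()}

    print(score_article("cells were sorted by facs and libraries were sequenced".split()))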

They then analyzed the free text of 129,918 PLOS articles published between 2013 and 2016 and found that nearly half of them (59,543) generated one or more of the five data types tested, producing 81,407 datasets in total.


Estimated number of PLOS articles generating each biomedical data type over time (from “Estimating the scale of biomedical data generation using text mining”, bioRxiv).

This text-mining method was validated on manually annotated articles and achieved a good balance of precision and recall. The obvious, and exciting, next step is to apply the approach to estimate the amount and types of data generated across the entire PubMed repository of articles.


A step beyond data dissemination

Evaluating the exponentially growing amount and diversity of datasets is currently a key aspect of determining the quality of a biomedical study. However, in today's era of bioinformatics, fully exploiting the data requires going a step beyond the publication and dissemination of datasets and tools, towards the critical goal of improving data reproducibility and transparency (data provenance, collection, transformation, computational analysis methods, etc.).

Open-access, community-driven projects such as the online bioinformatics tools platform OMICtools provide access not only to a large number of repositories for locating valuable datasets, but also to the best software tools for re-analyzing these datasets and exploiting their full potential.

In a virtuous circle of discovery, previously generated datasets could then be repurposed for new data production, interactive visualization, and machine learning and artificial intelligence applications, allowing us to answer new biomedical questions.

Snakemake: taking parallelization a step further


Written by Raoul Raffel from Bioinfo-fr.net, translated by Sarah Mackenzie.

Hello and welcome back to a new episode in our series of Snakemake tutorials, this one dealing with parallelization. If you missed the first episode, Snakemake for Dummies, check out that article to catch up before you read on.

Here we are going to see how easy Snakemake makes it to parallelize data processing. The general idea is to split the raw files at the start of your pipeline and then merge the results back together after the computation-intensive steps. We are also going to see how to use a JSON configuration file. This file is the equivalent of a dictionary / hash table and can be used to store global variables or parameters used by the rules. It makes it easier to generalize your workflow and to modify the parameters of the rules without touching the Snakefile.

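As an example, a minimal configuration file along these lines could be used (the file name, keys and values below are purely illustrative, not the exact file from the original tutorial):

    {
        "samples": ["sample1", "sample2"],
        "exp": ["exp1"],
        "ref": "reference/genome.fa",
        "threads": 4
    }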

To use it in a Snakefile, you need to add the following keyword:

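Assuming the configuration file is called config.json (the name is up to you), the directive reads:

    # at the top of the Snakefile
    configfile: "config.json"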

You can then access its elements as if it were a simple dictionary:

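For example, with the illustrative config.json above, the entries are available through the global dictionary named config:

    # "config" behaves like an ordinary Python dictionary
    SAMPLES = config["samples"]   # e.g. ["sample1", "sample2"]
    REF = config["ref"]           # e.g. "reference/genome.fa"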

A single keyword for parallelizing

Only one new keyword (dynamic) and two rules (cut and merge) are needed to parallelize.

It's easiest to illustrate this using the workflow example from the previous tutorial. There, the limiting step was the Burrows-Wheeler Aligner (rule bwa_aln), as this command doesn't have a parallelization option. We can overcome this limitation with the following two rules.

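Here is a minimal sketch of those two rules in their simplest form (file names, the chunk size and the split command are illustrative; note also that recent Snakemake releases have replaced dynamic() with checkpoints):

    rule cut:
        input:
            "data/reads.fastq"
        output:
            # dynamic(): the number of chunks is only known after the rule has run
            dynamic("chunks/reads.{part}.fastq")
        shell:
            # GNU split: 4000 lines = 1000 reads per chunk
            "mkdir -p chunks && "
            "split -l 4000 -d --additional-suffix=.fastq {input} chunks/reads."

    # the computation-intensive rule (here the aligner) then runs once per chunk,
    # producing e.g. chunks/reads.{part}.sam

    rule merge:
        input:
            dynamic("chunks/reads.{part}.sam")
        output:
            "results/reads.sam"
        shell:
            # plain concatenation keeps the sketch simple; for real SAM/BAM output,
            # prefer samtools merge (or strip the duplicated headers)
            "cat {input} > {output}"

Because the outputs of rule cut are declared dynamic, Snakemake re-evaluates the DAG once the chunks exist and then schedules the intermediate rule once per chunk, which is what lets the chunks be processed in parallel.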

In this example I have simplified as much as possible to show the power of this functionality; if you want to use these rules in the workflow from the previous tutorial, you will have to adapt them.

Note: the option --cluster allows the use of a scheduler (e.g. --cluster 'qsub').
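For instance, the workflow could be launched like this (the job count and scheduler command are illustrative):

    # run up to 16 jobs at a time, each submitted to the scheduler through qsub
    snakemake --snakefile Snakefile --configfile config.json --jobs 16 --cluster 'qsub'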

Taking automation further

The configuration file configfile.json allows automatic generation of the target files (i.e., the files you want at the end of your workflow). The following example uses the configuration file presented earlier to generate the target files. Note that {exp} and {samples} come from key/value pairs in the configuration file.

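A sketch of such a target rule (the result-file pattern and the configuration keys are illustrative):

    rule all:
        input:
            # expand() builds every {exp}/{samples} combination from the config file
            expand("results/{exp}_{samples}.sam",
                   exp=config["exp"], samples=config["samples"])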

Here is what the parallelized workflow looks like:
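The complete Snakefile simply chains the pieces shown above: the configfile directive, rule all, rule cut, a per-chunk alignment rule and rule merge. As a sketch, the per-chunk alignment rule could look like this (the paths, reference genome and bwa invocation are illustrative, and {samples} and {part} are resolved for each chunk):

    rule bwa_aln_chunk:
        input:
            fastq = "chunks/{samples}.{part}.fastq"
        output:
            sam = "chunks/{samples}.{part}.sam"
        params:
            ref = config["ref"]  # reference genome path taken from config.json
        shell:
            # bwa aln writes a .sai file, which bwa samse converts into SAM
            "bwa aln {params.ref} {input.fastq} > {output.sam}.sai && "
            "bwa samse {params.ref} {output.sam}.sai {input.fastq} > {output.sam}"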


So to sum up, we've taken another step forward in learning the functionalities of Snakemake, using only a single keyword and two new rules. This is an easy way to improve the efficiency of your workflows by reducing the run time of computation-intensive steps.

However, it's important to keep in mind that excessive parallelization is not necessarily the optimal strategy. For example, if I decide to cut a file containing 1,000 lines into 1,000 files of a single line each and I only have two poor processors at my disposal, I'm likely looking at a loss of time rather than a gain. So it's up to you to make the most judicious choice of parallelization strategy, based on the machine(s) available, the size of the files to split, and the software/scripts and extra run time that your two new rules will add to the workflow.

But if you are facing particularly demanding tasks, and a computer cluster is available, you may well see an impressive gain in time.

Improving DNA amplification for single-cell genomics


The single-cell DNA sequencing challenge

Deep sequencing of genomes (whole-genome sequencing, WGS) is important not only for improving our knowledge of life sciences and evolutionary biology but also for making clinical progress. The analysis of the genome and its variations at the single-cell level has major applications: analysis of mutation rates in somatic cells, including copy-number variations (CNVs) and single-nucleotide variations (SNVs), evolution of cancer, recombination in germ cells, preimplantation genetic analysis of embryos, and analysis of microbial populations (mini-metagenomics).

Because of the low amount of DNA in a single cell, single-cell whole-genome sequencing requires whole-genome amplification. The three methods currently used are degenerate oligonucleotide-primed polymerase chain reaction (DOP-PCR), multiple displacement amplification (MDA), and multiple annealing and looping-based amplification cycles (MALBAC). However, these methods have a limited capability to detect genomic variants and introduce amplification bias, artefacts and errors (see the overview by Gawad C. et al.).

New methodology for single-cell whole genome amplification

To overcome the limitations of exponential amplification, the Xie group has recently developed the Linear Amplification via Transposon Insertion (LIANTI) method.

LIANTI takes advantage of Tn5 transposition and T7 in vitro transcription to linearly amplify genomic DNA fragments from a single human cell. Because each copy is transcribed directly from the original genomic template rather than from previous copies, amplification errors do not propagate exponentially as they do in PCR-based methods.



Fig 1. LIANTI scheme. Genomic DNA from a single cell is randomly fragmented and tagged by the LIANTI transposon, followed by DNA polymerase gap extension to convert single-stranded T7 promoter loops into double-stranded T7 promoters on both ends of each fragment. Overnight in vitro transcription is performed to linearly amplify the genomic DNA fragments into genomic RNAs, which are capable of self-priming on the 3′ end. After reverse transcription, RNase digestion, and second-strand synthesis, double-stranded LIANTI amplicons tagged with unique molecular barcodes are formed, representing the amplified product of the original genomic DNA from a single cell, ready for DNA library preparation and next-generation sequencing. From Chen C. et al., Science 356:189-194.

LIANTI exhibits the highest amplification uniformity compared to the other current WGA methods, allowing accurate detection of single-cell micro-CNVs with kilobase resolution. The LIANTI method also achieves the highest amplification fidelity, enabling accurate single-cell SNV detection.


Fig 2. LIANTI amplification uniformity and fidelity. (A) Coefficient of variation of read depth along the genome as a function of bin size, from 1 kb to 100 Mb, showing amplification noise on all scales for the single-cell WGA methods DOP-PCR, MALBAC, MDA, and LIANTI. The normalized MALBAC data (dashed line) are shown together with the unnormalized MALBAC data; only the unnormalized data of the other methods are shown, as no substantial improvement from normalization was observed. The Poisson curve is the expected coefficient of variation of read depth assuming only Poisson noise. LIANTI exhibits a much improved amplification uniformity over the previous methods on all scales. (B) False-positive rates of SNV detection in a single BJ cell. Error bars were calculated from three different BJ cells. From Chen C. et al., Science 356:189-194.

The high precision of genomic variant detection by the LIANTI method should enable improved analysis of single-cell DNA sequences, better diagnosis, and a deeper understanding of the evolution of cancer and other diseases.

Based on the recent papers:

Chen C. et al. (2017) Single-cell whole-genome analyses by Linear Amplification via Transposon Insertion (LIANTI). Science 356:189-194.

Gawad C. et al. (2016) Single-cell genome sequencing: current state of the science. Nat. Rev. Genet. 17(3):175-88.