
nVidia CUDA Bioinformatics: An Introduction

Introduction

As most readers of this blog will know, bioinformatics has always been a data-heavy field of research. Many computational problems within bioinformatics can scale almost without bound, from protein folding to sequence homology searches. But the real mother lode of repetitive data was unleashed on the bioinformatics community when Next Generation Sequencing (NGS) was introduced.

Illumina NextSeq 500 sequencer

First built by Solexa and later by Illumina, these sequencers generate datasets containing billions of scans (reads) of short DNA strands, generally 50-250 base pairs long. Sequencing was very expensive at first, but within ten years the cost of reading the same amount of DNA dropped by a factor of roughly 1,000. As the price fell, the range of feasible NGS applications grew: from screening human genomes for genetic defects to selecting the right cross-bred seeds for growing crops. Each subsequent year, bioinformaticians spend more and more time digesting these NGS datasets.

In the same timeframe, computers also got better at processing data. Processors (CPUs) have increased rapidly in core count and are still the main workhorse of most bioinformatics tools. But they are not keeping pace with the dropping price of NGS data, as shown in the famous figure below:

Price of sequencing a human genome, a general indicator of overall costs

Massive Parallel Computing and GPUs

Simply put, we need more than just higher CPU core counts and bigger servers as the NGS data load continues to grow. Better software and more efficient algorithms are one way forward. But there is another source of compute power: the Graphics Processing Unit (GPU).

Graphics cards were originally designed for video games. They are, in effect, massively parallel computational units; my nVidia GTX 1080 Ti has 3584 shader cores that can be leveraged. A better comparison to CPU cores would be counting the Streaming Multiprocessors (SMs), of which my graphics card has 28, compared to the 8 to 16 cores found in modern server CPUs. Another big difference with CPU cores is that each SM can only access a relatively tiny amount of memory (48 KB of L1 cache, 2.75 MB of shared L2 cache, and 11 GB of onboard GDDR5X RAM), though this memory is blindingly fast. Still, the raw performance of modern GPUs is orders of magnitude higher than that of CPUs:

Giga-floating-point operations per second, on a log10 scale. In green, CUDA-enabled GPUs; in blue, the fastest, highest-core-count CPUs
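If you want to verify numbers like these on your own card, the CUDA runtime API can report them directly. Here is a minimal sketch (assuming a single GPU at device index 0), compiled with nvcc:

```cuda
// query_gpu.cu -- print the compute resources of the first CUDA device.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }
    printf("Device:                    %s\n", prop.name);
    printf("Streaming Multiprocessors: %d\n", prop.multiProcessorCount);
    printf("L2 cache:                  %.2f MB\n", prop.l2CacheSize / (1024.0 * 1024.0));
    printf("Global memory:             %.1f GB\n", prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    printf("Shared memory per SM:      %zu KB\n", prop.sharedMemPerMultiprocessor / 1024);
    return 0;
}
```

On the GTX 1080 Ti described above, this reports 28 SMs, roughly 2.75 MB of L2 cache, and 11 GB of global memory.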

All these specifications make GPUs a good fit for NGS computations, since those often entail millions of small, independent tasks (like finding one specific DNA string within a reference genome) that can be processed in parallel. The higher performance should reduce data analysis times or make more comprehensive analyses possible. You can imagine that patients waiting for their genetic test results want them as fast as possible.
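To make that "millions of tiny tasks" pattern concrete, here is a toy sketch of my own (not any real aligner's method) in which every GPU thread tests one candidate position in the reference for an exact match against a short read:

```cuda
// match_read.cu -- toy illustration of the parallel-tasks pattern:
// one GPU thread per candidate position tests whether a short read
// matches the reference exactly at that offset.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__global__ void exact_match(const char *ref, int ref_len,
                            const char *read, int read_len,
                            int *hit_pos) {
    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    if (pos > ref_len - read_len) return;   // thread falls off the end
    for (int i = 0; i < read_len; ++i)
        if (ref[pos + i] != read[i]) return;
    *hit_pos = pos;  // report a match (only one exists in this toy input)
}

int main() {
    const char ref[]  = "ACGTACGTTTAGGCCA";  // stand-in for a genome
    const char read[] = "TTAGG";             // stand-in for an NGS read
    int ref_len = (int)strlen(ref), read_len = (int)strlen(read);

    char *d_ref, *d_read; int *d_hit, hit = -1;
    cudaMalloc(&d_ref, ref_len);
    cudaMalloc(&d_read, read_len);
    cudaMalloc(&d_hit, sizeof(int));
    cudaMemcpy(d_ref, ref, ref_len, cudaMemcpyHostToDevice);
    cudaMemcpy(d_read, read, read_len, cudaMemcpyHostToDevice);
    cudaMemcpy(d_hit, &hit, sizeof(int), cudaMemcpyHostToDevice);

    exact_match<<<(ref_len + 255) / 256, 256>>>(d_ref, ref_len,
                                                d_read, read_len, d_hit);
    cudaMemcpy(&hit, d_hit, sizeof(int), cudaMemcpyDeviceToHost);
    printf("read found at reference position %d\n", hit);  // prints 8

    cudaFree(d_ref); cudaFree(d_read); cudaFree(d_hit);
    return 0;
}
```

Real alignment tools are far more sophisticated (they tolerate mismatches, use compressed indexes, and stream millions of reads), but the division of labour is the same: a huge number of small, independent comparisons mapped onto thousands of threads.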

However, as I mentioned earlier, the CPU is still the go-to source for calculations within bioinformatics. There are several reasons for this. First off, CPUs are everywhere and inherent to any computer or server, which cannot be said of GPUs. Next, developing tools for GPUs is different from, and more complex than, developing for CPUs because of the memory limits. But the biggest hurdle is this: (x86) CPUs support basically every programming language and framework, while with GPUs you are dealing with a plug-in device that requires separate drivers and specially designed frameworks and languages. Since there are fewer hard-core programmers within bioinformatics than in general IT, this limits the number of developers who can write code for the GPU.

nVidia CUDA

Enter nVidia with their CUDA framework. nVidia is constantly looking to expand the usability of their graphics cards and has developed libraries for many compute purposes, like transcoding media files and deep learning. These libraries, programming language extensions, and other GPU-based tools are part of the Compute Unified Device Architecture (CUDA) API. This API is actively updated and supported, with nVidia trying to make the barrier to GPU computing as low as possible.
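To give a flavour of how low that barrier can be: with Thrust, a C++ template library that ships with the CUDA toolkit, you can run GPU computations without writing a single kernel. A minimal sketch (the quality scores are made-up example data):

```cuda
// thrust_sort.cu -- sorting data on the GPU without writing a kernel,
// using Thrust, a template library bundled with the CUDA toolkit.
#include <cstdio>
#include <thrust/device_vector.h>
#include <thrust/sort.h>

int main() {
    // hypothetical per-read quality scores (made-up example data)
    int scores[] = {37, 12, 29, 40, 3};
    thrust::device_vector<int> d_scores(scores, scores + 5);  // copy to GPU memory
    thrust::sort(d_scores.begin(), d_scores.end());           // sorts on the device
    printf("lowest quality score: %d\n", (int)d_scores[0]);   // prints 3
    return 0;
}
```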

BioCentric’s focus

Personally, I have always been fascinated with graphics cards, ever since I got my hands on a 3DFX VooDoo 2 extension card. I was excited to see nVidia make a push into bioinformatics (my other computational passion). To my surprise, the adoption rate of CUDA in bioinformatics has been lower than I expected. In the beginning, I thought this was due to the limited amount of (v)RAM available, as I mentioned before. Since that has been resolved for at least four years now, I do not see why there isn't more mainstream bioinformatics usage of nVidia's graphics cards, given their considerable computational performance.

The tools that do exist are often ignored, do not get a lot of citations, and are not part of actively maintained bioinformatics pipelines. I am interested to find out why, and to report to you how well these tools run and what benefits they provide. If these GPU-accelerated tools are any good, they will be included in our analysis suites.
For that reason, I will dedicate a series of blog posts to the subject of CUDA bioinformatics. I aim to discover the best bioinformatics tools out there for dealing with NGS computational challenges, and to compare them to the current state-of-the-art CPU-based options.

In the next article, I will discuss the short-read alignment tool BWA and its CUDA variant: BarraCUDA!