TimeLogic® Technology

Price/Performance

TimeLogic® Technology offers superior price/performance for large-scale sequence analysis

TimeLogic® offers the industry’s best price/performance ratio for high throughput sequence comparison with our scalable, accelerated FPGA technology. TimeLogic’s DeCypher® software and custom Similarity Search Engine accelerator cards provide a fully supported, out-of-the-box solution for large-scale bioinformatics analysis.

Next generation DNA sequencing systems now generate 600 GB of data per run, and sequencing throughput continues to increase faster than Moore’s Law. Relying on CPU performance to keep pace with falling sequencing costs is a losing battle, so researchers need localized, cost-effective high performance computing platforms. Compute clusters, GPUs, and cloud computing are all being explored to address this growing problem; of these competing technologies, TimeLogic’s FPGA-accelerated algorithms offer the best performance per dollar.

As the comparisons below show, TimeLogic's combination of software and hardware designed and optimized specifically for biocomputing offers the best price/performance ratio available on the market today. Application-specific support is included with all of our systems, which you won't find with our competitors' systems.

Contact TimeLogic for custom performance benchmarks and to learn how our DeCypher® solution can save both time and money for your high throughput computational research needs.

Read more about how TimeLogic® products outperform these alternative technologies:

Compute Clusters

A common approach to high throughput computation is to build a cluster of low-cost servers and implement a system for queuing and partitioning jobs to run in parallel across many processors, as the sketch below illustrates. The result is a flexible system that can run any required software with whatever processing power is desired or available. However, performance for any particular application may suffer, because this configuration favors flexibility over speed.
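For concreteness, here is a minimal Python sketch of that split-and-queue pattern. The chunk size, the Slurm sbatch submission, and the blastp invocation are illustrative assumptions, not a prescription for any particular cluster:

    # Minimal sketch of the split-and-queue pattern: partition a FASTA query
    # file into chunks and submit one scheduler job per chunk. The scheduler
    # command (sbatch) and the blastp arguments are illustrative assumptions.
    import subprocess

    def split_fasta(path, chunk_size):
        """Yield lists of FASTA records, chunk_size records at a time."""
        chunk, record = [], []
        with open(path) as handle:
            for line in handle:
                if line.startswith(">") and record:
                    chunk.append("".join(record))
                    record = []
                    if len(chunk) == chunk_size:
                        yield chunk
                        chunk = []
                record.append(line)
            if record:
                chunk.append("".join(record))
            if chunk:
                yield chunk

    for i, records in enumerate(split_fasta("queries.fasta", chunk_size=1000)):
        part = f"queries.part{i}.fasta"
        with open(part, "w") as out:
            out.writelines(records)
        # One job per chunk; results are merged after all jobs finish.
        subprocess.run(["sbatch", "--wrap",
                        f"blastp -query {part} -db nr -out {part}.blast"],
                       check=True)

Each chunk runs independently, which is exactly why the approach is flexible, and exactly why every node needs access to the full target database.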

Clusters can also be quite expensive to implement, operate, and maintain, requiring a high degree of technical expertise and support. Hardware depreciation tracks Moore’s Law, and as in any system with many limited-life parts, nodes ultimately fail over time. It is not unusual to leave failed nodes offline rather than invest in repairs that may exceed the value of the server, slowly degrading the throughput of the system.

Furthermore, many bioinformatics algorithms require large databases to be accessible to each node running the searches. Performance suffers if these databases are maintained and accessed remotely from the servers running the search algorithms, such as on a shared storage system, but keeping a local copy on every node dramatically increases storage costs, network traffic, and data curation effort.
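A quick back-of-the-envelope calculation shows how replication scales; the database size, node count, and disk price below are purely hypothetical:

    # Rough cost of replicating reference databases on every cluster node
    # versus serving them from shared storage. All figures are hypothetical.
    db_size_tb = 0.5      # one large reference database
    nodes = 50            # cluster nodes running searches
    usd_per_tb = 100.0    # assumed local-disk cost

    replicated_tb = db_size_tb * nodes
    print(f"Local copies: {replicated_tb:.0f} TB"
          f" (~${replicated_tb * usd_per_tb:,.0f}),"
          f" plus {nodes} copies to keep in sync")
    print(f"Shared copy:  {db_size_tb:.1f} TB,"
          f" but every search competes for network I/O")

Either way, the cluster pays: in storage and curation for local copies, or in network contention for a shared one.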

A more cost-effective solution is TimeLogic’s accelerated FPGA technology, which provides highly parallelized computational power with a significantly smaller hardware footprint. This reduces operation and support costs, either by eliminating the need for a cluster entirely, or by capturing more value from existing cluster resources: offloading large jobs to a TimeLogic® DeCypher® system frees cluster CPU cycles for other needs.

Contact TimeLogic® for performance benchmarks using your own data to see how our DeCypher® solutions offer better price/performance than a local cluster.

GPUs

Graphics processing units (GPUs) have gained attention as a general-purpose accelerated computing solution since the launch of Nvidia’s CUDA (Compute Unified Device Architecture). Originally developed for graphics calculations, GPUs are a good fit for other workloads with parallelized data flows and no data interdependencies, such as Gaussian elimination.[1] However, algorithms and heuristics that require serialized or branching data flows can run far more slowly due to the low bandwidth of direct memory access on GPUs. This limits the utility of GPUs as a generalized bioinformatics compute platform.
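To make the dependency problem concrete, consider the Smith-Waterman recurrence: each cell of the scoring matrix depends on its left, upper, and upper-left neighbors, so only cells along the same anti-diagonal can be computed in parallel. A minimal Python sketch (with an assumed simple scoring scheme) makes that serialization visible:

    # Minimal Smith-Waterman local alignment score. The data dependencies are
    # the point: H[i][j] needs H[i-1][j-1], H[i-1][j], and H[i][j-1], so only
    # cells on the same anti-diagonal are independent. The match/mismatch/gap
    # scores are a simple assumed scheme, not a standard substitution matrix.
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("GATTACA", "GCATGCA"))  # small demo comparison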

Open-source CUDA implementations of common bioinformatics algorithms such as Smith-Waterman, BLAST, and ClustalW are available to run on commodity graphics cards in desktop computers. This seems like a cost-effective approach to high performance computing,[2,3] but when CPU source code is multi-threaded and optimized to the same degree that GPU code requires, the advantage for generalized applications may shrink from the reported 10x-1000x range to an average of 2.5x.[4] Performance gains for specialized bioinformatics algorithms such as BLAST also fall to the low single digits in apples-to-apples tests against CPUs.[5] Even this small gain is negated once the larger percentage increases in hardware acquisition costs and power consumption are taken into account.
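The arithmetic behind that claim can be sketched directly; every number here is a hypothetical assumption for illustration, not a measurement:

    # Hypothetical price/performance check: a low-single-digit GPU speedup is
    # erased if acquisition cost and power draw grow even faster. All numbers
    # below are assumptions for illustration only.
    cpu_cost, gpu_cost = 5000.0, 10000.0   # server price, USD (assumed)
    cpu_watts, gpu_watts = 400.0, 900.0    # sustained power draw (assumed)
    speedup = 1.5                          # assumed GPU gain vs. equally
                                           # well-optimized CPU code

    throughput_per_dollar = speedup / (gpu_cost / cpu_cost)  # 0.75x: worse
    throughput_per_watt = speedup / (gpu_watts / cpu_watts)  # 0.67x: worse
    print(f"GPU throughput per dollar vs. CPU: {throughput_per_dollar:.2f}x")
    print(f"GPU throughput per watt vs. CPU:   {throughput_per_watt:.2f}x")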

Furthermore, like many open-source products, GPU-accelerated bioinformatics algorithms are not out-of-the-box solutions. Implementing a GPU system requires technical expertise, and the resulting software is often less full-featured than the original implementations and typically lacks technical support.

While there is a place for GPUs in accelerating computation beyond graphics and gaming, their price/performance on bioinformatics algorithms is far less compelling than the alternatives. TimeLogic’s DeCypher® technology has been uniquely optimized for both dynamic programming and heuristic bioinformatics algorithms, and offers significantly better price/performance than available GPU solutions. DeCypher® is a mature product with 10 years of development and industry deployment, backed by a full support staff to assist in optimizing and troubleshooting high throughput searches.

Contact TimeLogic® for performance benchmarks using your own data to see how our DeCypher® solutions offer better price/performance than GPUs.


1. Che, Shuai, et al. Accelerating Compute-Intensive Applications with GPUs and FPGAs. Symposium on Application Specific Processors, 8-9 June 2008, pp. 101-107.

2. Liu, Yongchao, et al. CUDASW++: Optimizing Smith-Waterman Sequence Database Searches for CUDA-enabled Graphics Processing Units. BMC Research Notes 2009, 2:73.

3. Manavski, Svetlin A., and Giorgio Valle. CUDA Compatible GPU Cards as Efficient Hardware Accelerators for Smith-Waterman Sequence Alignment. BMC Bioinformatics 2008, 9(Suppl 2):S10.

4. Lee, Victor W., et al. Debunking the 100X GPU vs. CPU Myth: An Evaluation of Throughput Computing on CPU and GPU. ACM SIGARCH Computer Architecture News (ISCA '10) 38:3, June 2010.

5. Jensen, Lars Juhl. Commentary: The GPU Computing Fallacy. Online blog, 28 Jan 2011. [Retrieved 29 June 2011]

Cloud Computing

On-demand computing, such as Amazon’s EC2 cloud, benefits infrequent users by letting them rent resources as needed, without the up-front commitment and expense of hardware acquisition, deployment, operation, and maintenance. Cloud computing also offers flexibility, scaling resources to meet periodic fluctuations in computational needs. Data security must also be considered, although encryption has addressed this to the satisfaction of some corporate users.

Nevertheless, studies find significant variability in application performance on EC2, as well as performance degradation during sustained usage, making cloud computing significantly slower than a typical midsize cluster.[6,7] For frequent users with sustained computational needs and large data sets, renting computing capacity can be more expensive than owning it when all operational costs are considered.[8]

In addition to processing time, the principal obstacles for bioinformatics applications with large data sets are network transfer bandwidth and data storage costs. Even on a nominal gigabit network, sustained wide-area throughput is often far lower, and transferring 100 GB of next generation sequence data can take a week.[9] A 30-40x coverage set of sequencing reads of the human genome can exceed 140 GB, and the Illumina HiSeq 2000 now produces ~600 GB per run. Faster network connections, or the physical shipment of storage media, can reduce the transfer time of these data sets to the cloud, but at additional operating cost. Once the data are transferred, storing them on the cloud and accessing common pre-loaded data sets such as GenBank also add to operational costs. Running next-gen mapping and variant-calling pipelines can generate nearly 2 TB of data per sequencing run (BCL, compressed FASTQ, and BAM files) and require an additional 5 TB of storage for temporary files. After analysis, the resulting data sets must be retrieved from the cloud, which takes time, or you must continue paying for remote storage and computing.
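A short calculation makes the transfer-time problem concrete; the sustained-bandwidth figures below are illustrative assumptions, since real-world throughput depends heavily on the network path:

    # Back-of-the-envelope transfer times for moving sequence data to a
    # remote data center. Bandwidths are assumed sustained rates, not the
    # nominal link speed.
    datasets_gb = {"genome reads": 140, "HiSeq 2000 run": 600,
                   "pipeline output": 2000}
    sustained_mbps = [1.5, 10, 100]

    for name, size_gb in datasets_gb.items():
        for mbps in sustained_mbps:
            days = size_gb * 8e9 / (mbps * 1e6) / 86400
            print(f"{name:>15} ({size_gb:>4} GB) at {mbps:>5} Mbps:"
                  f" {days:7.1f} days")

At a sustained 1.5 Mbps, even the 140 GB read set takes well over a week; at 100 Mbps it still takes hours, and the 2 TB pipeline output takes nearly two days.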

TimeLogic® DeCypher® systems offer the computational power of hundreds of CPUs in a single server, without the added costs and overhead passed on to cloud computing customers. Our local solutions also eliminate data-transfer bottlenecks and data-security worries. And perhaps most importantly, TimeLogic® solutions keep you in control of your own data at all times. Think about that the next time your cloud provider has an outage.[10]

Contact TimeLogic® for performance benchmarks using your own data to see how our DeCypher® solutions offer better price/performance than cloud computing.


6. Jackson, Keith R., et al. Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud. CloudCom 2010.

7. Walker, Edward. Benchmarking Amazon EC2 for High-Performance Scientific Computing. ;login: October 2008.

8. Armbrust, M., et al. Above the clouds: A Berkeley view of cloud computing. Tech. Rep. UCB/EECS-2009-28, EECS Department, U.C. Berkeley, Feb 2009.

9. Stein, Lincoln D. The Case for Cloud Computing in Genome Informatics. Genome Biology 2010, 11:207.

10. Raphael, J. R. The 10 worst cloud outages (and what we can learn from them). InfoWorld 27 June 2011 [Retrieved 30 June 2011]