As a scientist, I am always looking for new and better ways to solve problems and do things I’ve never done before. Supercomputing, in all of its incarnations, allows me to leverage larger datasets and integrate across a larger space to address questions that were previously considered intractable.
This notion of making the intractable tractable in biology and biomedical science was first demonstrated with the sequencing of the human genome in 2003. Before then, genes were sequenced and analyzed one at a time; we didn’t even know how many genes the human genome contained, with estimates ranging from over 100,000 to as few as 10,000. We can now look beyond the cataloging of individual genes and begin to examine how genes interact and regulate each other on a larger scale, and how that regulation confers phenotypic variation (blue eyes, the emergence of cancer, the onset of metabolic syndrome, etc.). These questions can only be answered by leveraging volumes of data that were previously impossible to process.
With the power of the Nimbix platform, we can now begin to move biology and biomedical science from a primarily descriptive exercise to one where simulation and accurate prediction are possible. These predictions are only possible when tremendous amounts of data and metadata are leveraged, and at that volume, hardware acceleration is the only viable option.
So, why join Nimbix?
Simple: the future of biomedical science lies in simulation and predictive modeling, and to do that you need supercomputers. As a scientist, it would be irresponsible of me not to be here.