Regenerative Biology

Stewart Computational Biology Group

Managing the biomedical big data produced by high-throughput, cutting-edge technologies. Analyzing these very large datasets is the essence of bioinformatics.


The Stewart Computational Biology group focuses on three main areas of research:

  1. Analysis of high-throughput next-generation sequencing data using computational techniques, including machine learning and deep learning. The sequencing data we analyze includes (but is not limited to) RNA sequencing (RNA-seq), DNA genome sequencing (DNA-seq), and Assay for Transposase-Accessible Chromatin using sequencing (ATAC-seq)
  2. Analysis of biomedical text using statistical analysis, natural language processing (NLP), and machine learning
  3. Analysis of electronic health records using machine learning

In all three cases, the goal is to help biologists understand their data and narrow down the next experiment to do. There are millions of experiments that biologists could try; the key is to figure out which ones are most relevant and most likely to lead to an important discovery. In the case of the groundbreaking research conducted by members of the Thomson Lab, computational biologists focus on identifying the critical bits of genetic material, from among tens of thousands of developmental instructions in the human genome, that direct induced pluripotent stem cells through the early stages of differentiation toward clinically important cell types such as the components of blood vessels.

An example of the type of information the Stewart Computational Biology Group focuses on is the next-generation DNA sequencing data produced by the lab’s Illumina HiSeq 3000 System. This sequencer can generate over 200 billion bases of sequence in one 48-hour run, representing more than 1 terabyte of undeciphered text and image information. Buried in this explosion of data may be information on important proteins that control which genes get turned on or off in embryonic stem cells or their derivatives. Using computational methods including machine learning, it is possible to produce a list of activated genes that may play a role in regeneration, development, or pluripotency, and pave the way to future discoveries in the field of regenerative biology.
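One simple way to produce such a list of candidate genes is to rank them by how strongly their expression changes between conditions. The sketch below is purely illustrative, not the group's actual pipeline; the gene names and read counts are invented, and real analyses would use dedicated tools and statistical testing rather than a raw fold-change ranking.

```python
# Illustrative sketch: rank genes by absolute log2 fold change between
# two conditions (e.g. stem cells vs. differentiated cells).
# Gene names and counts below are invented for the example.
import math

def rank_by_fold_change(stem_counts, diff_counts, pseudocount=1.0):
    """Return (gene, log2 fold change) pairs, largest |change| first.

    A pseudocount avoids division by zero for unexpressed genes.
    """
    ranked = []
    for gene in stem_counts:
        lfc = math.log2((stem_counts[gene] + pseudocount) /
                        (diff_counts[gene] + pseudocount))
        ranked.append((gene, lfc))
    ranked.sort(key=lambda pair: abs(pair[1]), reverse=True)
    return ranked

# Hypothetical expression counts for three genes in two conditions.
stem = {"POU5F1": 900, "NANOG": 450, "GAPDH": 1000}
diff = {"POU5F1": 30, "NANOG": 10, "GAPDH": 980}
for gene, lfc in rank_by_fold_change(stem, diff):
    print(f"{gene}\t{lfc:+.2f}")
```

Genes with large positive fold changes in the stem-cell condition (here, the pluripotency markers) rise to the top of the list, while housekeeping genes with stable expression sink to the bottom.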

Another example of the type of information the Stewart Computational Biology Group focuses on is the almost 30 million biomedical articles published and collected in PubMed. The amount of information is vast: stacked up, the articles in PubMed would form a pile of paper 23.5 miles high. No individual scientist can do more than scratch the surface of this vast source of biomedical knowledge. The Stewart Computational Biology Group has written a software suite called “KinderMiner” that finds patterns across all of PubMed and assists biologists in narrowing down the next experiment to do based on the existing literature. KinderMiner can ‘read’ all 30 million PubMed articles in minutes, summarizing the current knowledge about any biomedical topic and providing suggestions to researchers on which experiments to try next. The group is applying these text mining methods to assist collaborators at the Morgridge Institute for Research, the University of Wisconsin, and other universities in areas as diverse as regenerative biology, cancer, disease, drug discovery, and drug repurposing.
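At its core, this kind of literature mining asks how often a candidate term appears in the same articles as a key phrase of interest. The toy sketch below illustrates that idea only; the articles and terms are invented, and KinderMiner's actual scoring is statistically more sophisticated than this simple normalized co-occurrence count.

```python
# Toy sketch of literature co-occurrence ranking, in the spirit of
# mining PubMed. Articles and terms are invented; this is not the
# KinderMiner algorithm itself.
def cooccurrence_rank(articles, key_phrase, candidate_terms):
    """Rank candidate terms by the fraction of their mentions that
    occur in articles also mentioning the key phrase."""
    key_phrase = key_phrase.lower()
    scores = {}
    for term in candidate_terms:
        term_l = term.lower()
        with_key = sum(1 for a in articles
                       if key_phrase in a.lower() and term_l in a.lower())
        total = sum(1 for a in articles if term_l in a.lower())
        scores[term] = with_key / total if total else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented mini-corpus of article titles.
articles = [
    "NANOG maintains pluripotency in embryonic stem cells.",
    "OCT4 and NANOG co-regulate pluripotency networks.",
    "Insulin signaling in metabolic disease.",
]
print(cooccurrence_rank(articles, "pluripotency", ["NANOG", "insulin"]))
```

Terms that consistently co-occur with the key phrase score near 1.0 and would be surfaced to a biologist as candidates worth testing; at PubMed scale the same counting runs over tens of millions of abstracts.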

A third example of the type of information the Stewart Computational Biology Group focuses on is electronic health records. We are developing and applying machine learning algorithms (including deep learning) to EHR datasets to explore drug repurposing, adverse drug events, and other topics related to human health. Combining EHR analysis with text analysis of PubMed in an ensemble will provide more accurate predictions in these topic areas.
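A minimal form of such an ensemble is a weighted average of the scores from the two sources. The sketch below is an assumption about how this could look, not the group's method; the drug names, probabilities, and the equal weighting are all invented for illustration.

```python
# Hypothetical sketch: combine an EHR-derived score and a
# literature-derived score for each drug into one ranking.
# All names and numbers are invented for the example.
def ensemble_score(ehr_probs, text_probs, weight_ehr=0.5):
    """Weighted average of EHR-based and literature-based scores,
    returned as (drug, score) pairs sorted highest first."""
    combined = {}
    for drug in ehr_probs:
        combined[drug] = (weight_ehr * ehr_probs[drug]
                          + (1 - weight_ehr) * text_probs.get(drug, 0.0))
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Invented repurposing scores from two sources.
ehr = {"metformin": 0.82, "aspirin": 0.40}
text = {"metformin": 0.65, "aspirin": 0.55}
print(ensemble_score(ehr, text))
```

The appeal of an ensemble is that the two sources have complementary blind spots: EHR data reflects real patient outcomes, while the literature captures mechanisms and findings that may not yet appear in clinical records.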

Ron Stewart and his colleagues in the Stewart Computational Biology Group at the Morgridge Institute were key contributors in the quest to identify specific genes that are expressed in human embryonic stem cells but not in other cells. Their work was critical to the Thomson Lab breakthrough of creating embryonic-like stem cells from adult skin cells, thus providing a method for making blank-slate pluripotent cells without harming embryos.
The Regenerative Biology platform of the Morgridge Institute for Research relies on the full integration of the Computational Biology Group, a unique feature of the Institute. The collaborative philosophy among members of the Thomson Lab, which includes its computational biologists, is one of the reasons the laboratory has succeeded in overcoming technical hurdles and achieving groundbreaking results.

A model showing the process of producing and analyzing a dataset.

Principal Investigator Ron Stewart

Ron Stewart, associate director of bioinformatics, hasn’t met a problem he doesn’t like, a critical asset given the sheer amount of data researchers can be confronted with when carrying out an experiment. Due to the exponential growth in data production, bioinformatics and computational biology have become critical tools in conducting biomedical research.

