This blog was written by guest blogger and 2018 Promega Social Media Intern Logan Godfrey.
Only 30 years ago, the polymerase chain reaction (PCR)
was used for the first time, allowing the exponential amplification of a specific
DNA segment. A small amount of DNA could now be replicated until there was
enough of it to study accurately, even allowing sequencing of the amplified DNA.
This was a massive breakthrough that produced immediate effects in the fields
of forensics and life science research. Since these technologies were first
introduced, however, PCR and DNA sequencing have remained largely confined to
the molecular biology research laboratory.
Revolutionary as it is, a technology such as DNA sequencing has been limited
by the size and cost of DNA sequencers, which in turn restricts accessibility.
However, recent breakthroughs are allowing DNA sequencing to take place in
jungles, the Arctic, and even space—giving science the opportunity to reach
further, faster than ever before.
The newfound accessibility of DNA sequencing means a marriage between fields
of science that were previously unacquainted. The disciplines of genomics and
wildlife biology/ecology have largely progressed independently: wildlife
biology is practiced in the field through observations and macro-level
assessments, while genomics has developed mostly in a lab setting. Leading the
charge in the convergence of wildlife biology and genomics is Field Projects
International.
The rapid advancement of next-generation sequencing technology, also known as massively parallel sequencing (MPS), has revolutionized many areas of applied research. One such area, the analysis of mitochondrial DNA (mtDNA) in forensic applications, has traditionally used another method—Sanger sequencing followed by capillary electrophoresis (CE).
Although MPS can provide a wealth of information, its adoption in forensic workflows has been slow. However, the barriers to adoption have been lowered in recent years, as exemplified by the number of abstracts discussing the use of MPS presented at the 29th International Symposium on Human Identification (ISHI 29), held in September 2018. Compared to Sanger sequencing, MPS can provide more data on minute variations in the human genome, particularly for the analysis of mtDNA and single-nucleotide polymorphisms (SNPs). It is especially powerful for analyzing mixture samples or those where the DNA is highly degraded, such as in human remains. Continue reading “Harnessing the Power of Massively Parallel Sequencing in Forensic Analysis”
Today’s post was written by guest blogger Anupama Gopalakrishnan, Global Product Manager for the Genetic Identity group at Promega.
Next-generation sequencing (NGS), or massively parallel sequencing (MPS), is a powerful tool for genomic research. This high-throughput technology is fast and accessible—you can acquire a robust data set from a single run. While NGS systems are widely used in evolutionary biology and genetics, there is a window of opportunity for adoption of this technology in the forensic sciences.
Currently, the gold standard is capillary electrophoresis (CE)-based analysis of short tandem repeats (STRs). These systems continue to evolve, with increasing sensitivity, robustness and inhibitor tolerance and with the introduction of probabilistic genotyping in data analysis—all with the combined goal of extracting maximum identity information from low-quantity, challenging samples. However, obtaining profiles from these samples and interpreting mixtures continue to pose challenges.
MPS systems enable simultaneous analysis of forensically relevant genetic markers to improve efficiency, capacity and resolution—with the ability to generate results on nearly 10-fold more genetic loci than the current technology. What samples would truly benefit from MPS? Mixture samples, undoubtedly. The benefit of MPS is also exemplified in cases where the samples are highly degraded or the only samples available are teeth, bones and hairs without a follicle. By adding a sequencing component to the allele-length component of CE technology, MPS addresses the greatest current challenges in forensic DNA analysis—namely, allele sharing between contributors and PCR artifacts such as stutter. Additionally, single nucleotide polymorphisms in the sequences flanking the repeat can reveal additional alleles, contributing to discrimination power. For example, sequencing of Y chromosome loci can help distinguish between mixed male samples from the same paternal lineage and, therefore, provide valuable information in decoding mixtures that contain more than one male contributor. Also, because MPS technology is not limited by real estate, all primers in an MPS system can target small loci, maximizing the probability of obtaining a usable profile from the degraded DNA typical of challenging samples. Continue reading “Is MPS right for your forensics lab?”
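The way sequence information resolves same-length alleles can be sketched with a toy example (the repeat motifs and counts below are hypothetical, not real locus data): a length-only CE call sees one allele where a sequence-aware MPS call sees two.

```python
# Toy illustration: two STR alleles of identical length look like one allele
# to length-based CE, but are distinct once sequence is considered.
# Repeat sequences are hypothetical, not real locus data.

def ce_allele(seq, repeat_len=4):
    """Length-based call: number of repeat units (all that CE reports)."""
    return len(seq) // repeat_len

def mps_allele(seq, repeat_len=4):
    """Sequence-based call: repeat count plus the underlying sequence."""
    return (len(seq) // repeat_len, seq)

allele_a = "TCTA" * 5 + "TCTG" * 3   # 8 repeat units, mixed motif
allele_b = "TCTA" * 8                # also 8 repeat units, different sequence

print(ce_allele(allele_a) == ce_allele(allele_b))    # True: CE cannot separate them
print(mps_allele(allele_a) == mps_allele(allele_b))  # False: MPS resolves two alleles
```

In a two-contributor mixture, such sequence-level "isoalleles" are exactly where CE undercounts contributors and MPS does not.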
In general, people like to know that their food is what the label says it is. It’s a real bummer to find out that the beef lasagna you just ate was actually horsemeat. Plus, there are many religious, ethical and medical reasons to be cognizant of what you eat. Someone who’s gluten-intolerant or keeps halal probably doesn’t want a bite of that BLT.
Labels don’t always accurately reflect what is in food. So how do we confirm that we are in fact buying crab, and not whitefish with a side of Vibrio contamination?
For the most part, it comes down to separation science. Scientists and technicians use chromatographic methods, such as gas chromatography and liquid chromatography, often coupled with mass spectrometry, to separate the complex mixture of molecules in food into individual components. By first mapping out the molecular profile of reference samples, they can then take an unknown sample and compare its profile to what it should look like. If the two don’t match up, an analyst would conclude that the unknown is not what it claims to be. Continue reading “Of Mice and Microbes: The Science Behind Food Analysis”
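That comparison step can be sketched in a few lines, assuming peak intensities have already been extracted from the instrument (the reference names and intensity values below are made up for illustration): the unknown profile is matched to whichever reference profile it most resembles, here by cosine similarity.

```python
import math

# Sketch of profile matching: compare an unknown sample's peak intensities
# against reference profiles and pick the closest. All values are illustrative.

def cosine(u, v):
    """Cosine similarity between two intensity vectors (1.0 = identical shape)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

references = {
    "crab":      [0.9, 0.1, 0.4, 0.2],
    "whitefish": [0.1, 0.8, 0.2, 0.7],
}

unknown = [0.15, 0.75, 0.25, 0.65]  # peak intensities from the questioned sample

best = max(references, key=lambda name: cosine(unknown, references[name]))
print(best)  # -> whitefish: the "crab" label would not hold up
```

Real workflows use many more peaks, retention-time alignment and validated reference libraries, but the underlying logic is this nearest-profile comparison.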
Imagine you are traveling in your car and must pass through a mountain range to get to your destination. You’ve been following a set of directions when you realize you have a decision to make. Will you stay on your current route, which is many miles shorter but contains a long tunnel that cuts straight through the mountains and obstructs your view? Or will you switch to a longer, more scenic route that bypasses the tunnel ahead and gets you to your destination a bit later than you wanted?
Choosing which route to take illustrates a clear trade-off that has to be considered—which is more valuable, speed or understanding? Yes, the tunnel gets you from one place to another faster. But what are you missing as a result? Is it worth a little extra time to see the majestic landscape that you are passing through?
Considering this trade-off is especially critical for researchers working with human DNA purified from formalin-fixed paraffin-embedded (FFPE) or circulating cell-free DNA (ccfDNA) samples for next-generation sequencing (NGS). These sample types present a few challenges when performing NGS. FFPE samples are prone to degradation, while ccfDNA samples are susceptible to gDNA contamination, and both offer a very limited amount of starting material to work with.
My favorite ice-breaker of all time is: “List one fact about you that no one would guess”. It is my favorite because I have an awesome answer (if I do say so myself). My go-to answer is: I spent a summer working with elephants.
It was the summer before I graduated from college, and it was really only one elephant, a five-year-old African elephant named Connie. Connie was intelligent, curious and mischievous—her favorite game with me was trying to untie my shoelaces (hint: double knotting is important). Working with her was one of the most amazing experiences of my life and left me with an abiding love for these creatures.
Understandably, I was excited last year when one of my fellow bloggers wrote about Promega helping support the work of Virginia Riddle Pearson, who was working to identify and track strains of elephant endotheliotropic herpesvirus (EEHV) in African elephant populations. EEHV is associated with the lethal elephant hemorrhagic disease (EHD) (1). This disease is a serious threat to the captive breeding programs of these endangered creatures. Between 1962 and 2007, it accounted for 58% of the deaths of North American captive-born Asian elephants between 4 months and 15 years of age (1). These deaths include the first Asian elephant calves born at the National, Oakland and Bronx Zoos. EHD also claimed the first live-born Asian elephant calves conceived by artificial insemination in both North America and Europe. Continue reading “Elephant Endotheliotropic Herpesvirus—A Tiny Virus Threatens the World’s Elephants”
One of the most critical parts of a next-generation sequencing (NGS) workflow is library preparation, and nearly all NGS library preparation methods use some type of size-selective purification. This process removes unwanted fragment sizes that would interfere with downstream library preparation steps, sequencing or analysis.
Different applications may involve removing undesired enzymes and buffers, or removing nucleotides, primers and adapters for NGS library or PCR sample cleanup. In dual size selection methods, both large and small DNA fragments are removed to ensure optimal library sizing prior to final sequencing. In all cases, accurate size selection is key to obtaining optimal downstream performance and sequencing results.
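Conceptually, dual size selection amounts to keeping only fragments inside a target window, discarding both short artifacts (primers, adapter dimers) and overly long fragments. A minimal sketch, with illustrative cutoff values:

```python
# Dual (double-sided) size selection, conceptually: keep fragments within a
# target window. The cutoffs and fragment sizes below are illustrative only.

LOWER_BP = 150   # discard primer/adapter-dimer range below this
UPPER_BP = 500   # discard overly long fragments above this

fragments = [85, 120, 180, 260, 310, 420, 640, 900]  # fragment sizes in bp

library = [f for f in fragments if LOWER_BP <= f <= UPPER_BP]
print(library)  # -> [180, 260, 310, 420]
```

In the lab this windowing is done chemically (e.g., with tuned bead-to-sample ratios), not computationally, but the target outcome is the same filtered distribution.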
Current methods and chemistries for the purposes listed above have been in use for several years; however, they often come at the cost of performance and ease of use. Many library preparation methods involve serial purifications, which can result in a loss of DNA; current methods can lose as much as 20–30% with each purification step. Ultimately, this may necessitate more starting material, which may not be possible with limited, precious samples, or additional PCR cycles, which can introduce sequencing bias. Sample-to-sample reproducibility is a daily challenge that is also regularly cited as an area for improvement in size selection.
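The compounding effect of those per-step losses is easy to see with back-of-the-envelope arithmetic (the step counts and recovery rates below are illustrative, not measured values):

```python
# Serial purification losses compound multiplicatively: if each cleanup
# recovers only 70-80% of its input, three steps leave well under half.
# Step counts and recovery rates are illustrative.

def final_yield(recovery_per_step, n_steps):
    """Fraction of input DNA remaining after n serial purification steps."""
    return recovery_per_step ** n_steps

for recovery in (0.70, 0.80):
    remaining = final_yield(recovery, 3)
    print(f"{recovery:.0%} recovery x 3 steps -> {remaining:.0%} of input remains")
```

At 70% recovery per step, three cleanups retain only about a third of the input, which is why workflows with precious samples push for fewer purification steps or higher per-step recovery.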