The Stories in the Bones: DNA Forensic Analysis 20 Years after 9/11

September 11, 2001, is the day that will live in infamy for my generation. On that beautiful late summer day, I was at my desk working on the Fall issue of Neural Notes magazine when a colleague learned of the first plane hitting the World Trade Center. As the morning wore on, we quickly learned that it wasn’t just one plane, and it wasn’t just the World Trade Center.

Two beams of light commemorate the site of the World Trade Center attack. Today, DNA forensic analysis applies new technologies to bring closure to the families of victims.

Information was sparse. The world wide web was incredibly slow, and social media wasn’t much of a thing—nothing more than a few listservs for the life sciences. Someone managed to find a TV with a rabbit-eared, foil-covered antenna, and we gathered in the cafeteria of Promega headquarters—our shock growing as more footage became available. At Promega, conversation immediately turned to how we could bring our DNA forensic analysis expertise to help and support the authorities with the identification of victims and cataloguing of reference samples.

Just as the internet and social media have evolved into faster and more powerful means of communication—no longer do we rely on TVs with antennas for breaking news—the technology that is used to identify victims of a tragedy from partial remains like bone fragments and teeth has also evolved to be faster and more powerful.

Teeth and Bones: Then and Now

“Bones tell me the story of a person’s life—how old they were, what their gender was, their ancestral background.” –Kathy Reichs

Many stories, both fact and fiction, start with a discovery of bones from a burial site or other scene. Bones can be recovered from harsh environments, having been exposed to extreme heat, time, acidic soils, swamps, chemicals, animal activities, water, or fires and explosions. These exposures degrade the sample and make recovering DNA from the cells deep within the bone matrix difficult.

Continue reading “The Stories in the Bones: DNA Forensic Analysis 20 Years after 9/11”

Identifying the Ancestor of a Domesticated Animal Using Whole-Genome Sequencing

What animal can be found around the globe that outnumbers humans three to one? Gallus gallus domesticus, the humble chicken. The human appetite for eggs and lean meat drives demand for this feathered bird, resulting in a poultry population of over 20 billion.

Controversy over the origin of the domestic chicken (when, where and from which species) has led some researchers to look for answers in the genomes of contemporary chicken breeds and of the wild jungle fowl from which chickens are thought to derive. By sequencing over 600 genomes from Asian domestic poultry as well as 160 genomes covering all four wild jungle fowl species and the five red jungle fowl subspecies, Wang et al. set out to identify the relationships and relatedness among these species and to determine where domesticated chickens first arose.
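As a rough illustration of the approach (not the authors’ published pipeline), relatedness can be sketched by comparing genotypes at shared variant sites and summarizing the differences as pairwise distances. The sample names and genotypes below are invented; real analyses use millions of SNPs and maximum-likelihood phylogenetics.

```python
# Minimal sketch: estimate relatedness from genome-wide variants by
# comparing genotypes (coded 0/1/2 copies of the alternate allele).
import numpy as np

samples = ["domestic_1", "domestic_2", "red_junglefowl", "grey_junglefowl"]
genotypes = np.array([
    [0, 1, 2, 2, 1, 0],   # domestic_1
    [0, 1, 2, 1, 1, 0],   # domestic_2
    [0, 0, 2, 1, 1, 0],   # red_junglefowl
    [2, 2, 0, 0, 0, 2],   # grey_junglefowl
])

# Pairwise distance = mean per-site genotype difference, scaled to [0, 1].
n = len(samples)
distance = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        distance[i, j] = np.abs(genotypes[i] - genotypes[j]).mean() / 2

for name, row in zip(samples, np.round(distance, 2)):
    print(f"{name:>16}: {row}")
# In this toy example the domestic birds sit closest to the red jungle
# fowl, the pattern consistent with a red jungle fowl origin.
```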

Continue reading “Identifying the Ancestor of a Domesticated Animal Using Whole-Genome Sequencing”

Using the Power of Technology for Viral Outbreaks

Artist’s rendition of a virus particle.

When the world is experiencing a viral pandemic, scientists and health officials quickly want data-driven answers to understand the situation and better formulate a public health response. Technology provides tools that researchers can use to develop a rapid sequencing protocol. With such a protocol, the data generated can help answer questions about disease epidemiology and clarify the interaction between host and virus. Even better if the protocol is freely available and based on inexpensive, portable sequencing systems.

Continue reading “Using the Power of Technology for Viral Outbreaks”

Small Changes With Large Consequences: The Role of Genetic Variance in Disease Development

Structure of Human Ferrochelatase
Human ferrochelatase crystal structure at 2 Å resolution. Generated from PDB entry 1HRK (RCSB PDB) using PyMOL. Copyright: Sarah Wilson / CC BY-SA

Understanding how disease states arise from genetic variants is important for understanding disease resistance and progression. What can complicate our understanding of disease development is when two people have the same genetic variant, but only one has the disease. To investigate what might be happening with ferrochelatase (FECH) variant alleles that result in erythropoietic protoporphyria (EPP), scientists used next-generation sequencing (NGS) along with RNA analysis and DNA methylation testing to assess the FECH locus in 72 individuals from 24 unrelated families with EPP.

What is FECH and its relationship to EPP?

FECH is the gene encoding ferrochelatase, the last enzyme in the heme biosynthesis pathway. The inherited metabolic disorder EPP is caused when FECH activity drops to less than a third of normal levels, thus increasing the levels of metal-free protoporphyrin (PPIX) in erythrocytes. The consequences of this metal-free PPIX include severe phototoxic skin reactions and hepatic injury due to PPIX accumulation in the liver.

How does FECH expression affect EPP?

The EPP disease state is not simply the lack of two functional FECH genes. Disease occurs when a hypomorphic allele (a FECH variant that reduces, but does not eliminate, function) is in trans to a null FECH allele. The researchers focused on three common variants, together called the GTC haplotype, that are associated with expression quantitative trait loci (eQTL) that reduce FECH activity. Interestingly, these three variants have been found in trans, but the researchers wanted to learn whether there were individuals homozygous for the GTC allele and how EPP manifested in them.
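To make that genetic logic concrete, here is a hypothetical sketch (not the authors’ analysis) of how phased FECH haplotypes map onto expected disease status; the GTC-homozygote case is exactly the open question the study set out to address.

```python
# Hypothetical classification of phased FECH haplotype pairs. Each
# haplotype is labeled 'normal', 'hypomorphic' (e.g., the low-expression
# GTC haplotype) or 'null' (a loss-of-function mutation).
def classify_fech(haplotype_a: str, haplotype_b: str) -> str:
    pair = {haplotype_a, haplotype_b}
    if pair == {"hypomorphic", "null"}:
        return "EPP expected: hypomorphic allele in trans to a null allele"
    if pair == {"hypomorphic"}:
        return "GTC homozygote: phenotype uncertain (the case studied here)"
    if pair == {"null"}:
        return "two null alleles: not the scenario discussed here"
    return "no disease expected"

print(classify_fech("hypomorphic", "null"))          # classic EPP genetics
print(classify_fech("hypomorphic", "hypomorphic"))   # GTC/GTC homozygote
```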

Continue reading “Small Changes With Large Consequences: The Role of Genetic Variance in Disease Development”

Nucleic Acid from FFPE Samples: Effects of Pre-Analytical Factors on Downstream Success

Part one of three

Peer-reviewed publications containing data derived from analysis of nucleic acids isolated from FFPE samples have increased dramatically since 2006.

Formalin-fixed, paraffin-embedded (FFPE) samples have been a mainstay of the pathology lab for over 100 years. Initially, FFPE blocks were sectioned, stained with simple dyes and used for studying morphology, but now a variety of biomolecules can be analyzed in these samples. Over the past 10 years we have discovered that there is a treasure trove of genomic data waiting to be unearthed in FFPE tissue. While viral RNAs and miRNAs were some of the first molecules found to be present and accessible for analysis starting in the 1990s, improvements to DNA and RNA extraction methods have demonstrated that PCR, qPCR, SNP genotyping, exome sequencing and whole-genome sequencing are possible. This has resulted in scientific publications of DNA and RNA data generated from FFPE samples starting in 2006, and today we see immense amounts of data generated from FFPE—with nearly 2,000 citations in 2018 reporting sequencing of FFPE samples.

Depending on the type of project, prospective or retrospective, the genomics scientist has an opportunity to affect the probability of success by better understanding the fixation process. The challenge with FFPE is the host of variables that have the potential to negatively affect downstream assays.

Continue reading “Nucleic Acid from FFPE Samples: Effects of Pre-Analytical Factors on Downstream Success”

Harnessing the Power of Massively Parallel Sequencing in Forensic Analysis

The rapid advancement of next-generation sequencing technology, also known as massively parallel sequencing (MPS), has revolutionized many areas of applied research. One such area, the analysis of mitochondrial DNA (mtDNA) in forensic applications, has traditionally used another method—Sanger sequencing followed by capillary electrophoresis (CE).

Although MPS can provide a wealth of information, its adoption in forensic workflows has been slow. However, the barriers to adoption have been lowered in recent years, as exemplified by the number of abstracts discussing the use of MPS presented at the 29th International Symposium on Human Identification (ISHI 29), held in September 2018. Compared to Sanger sequencing, MPS can provide more data on minute variations in the human genome, particularly for the analysis of mtDNA and single-nucleotide polymorphisms (SNPs). It is especially powerful for analyzing mixture samples or those in which the DNA is highly degraded, such as human remains.

Continue reading “Harnessing the Power of Massively Parallel Sequencing in Forensic Analysis”

Is MPS right for your forensics lab?

Today’s post was written by guest blogger Anupama Gopalakrishnan, Global Product Manager for the Genetic Identity group at Promega. 

Next-generation sequencing (NGS), or massively parallel sequencing (MPS), is a powerful tool for genomic research. This high-throughput technology is fast and accessible—you can acquire a robust data set from a single run. While NGS systems are widely used in evolutionary biology and genetics, there is a window of opportunity for adoption of this technology in the forensic sciences.

Currently, the gold standard is capillary electrophoresis (CE)-based analysis of short tandem repeats (STRs). These systems continue to evolve with increasing sensitivity, robustness and inhibitor tolerance, and with the introduction of probabilistic genotyping in data analysis—all with the combined goal of extracting maximum identity information from low-quantity, challenging samples. However, obtaining profiles from these samples and interpreting mixture samples continue to pose challenges.

MPS systems enable simultaneous analysis of forensically relevant genetic markers to improve efficiency, capacity and resolution—with the ability to generate results on nearly 10-fold more genetic loci than the current technology. What samples would truly benefit from MPS? Mixture samples, undoubtedly. The benefit of MPS is also exemplified in cases where the samples are highly degraded or the only samples available are teeth, bones and hairs without a follicle. By adding a sequence component to the allele-length component of CE technology, MPS addresses the current greatest challenges in forensic DNA analysis—namely, distinguishing alleles shared between contributors and PCR artifacts, such as stutter. Additionally, single-nucleotide polymorphisms in the sequences flanking the repeat region can reveal additional alleles, adding discrimination power. For example, sequencing of Y-chromosome loci can help distinguish mixed male samples from the same paternal lineage and therefore provide valuable information in decoding mixtures that contain more than one male contributor. Also, because MPS technology is not limited by real estate, all primers in an MPS system can target small loci, maximizing the probability of obtaining a usable profile from the degraded DNA typical of challenging samples.
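As a toy illustration of that sequence-level advantage (hypothetical reads and repeat motif, not any particular assay or locus), consider two STR alleles of identical length: CE would report them as a single allele, while sequencing resolves them as two.

```python
# Two reads with the same repeat-region length but different sequence.
# Length-based (CE) typing calls them the same allele; sequence-based
# typing separates them, which matters when resolving mixtures.
import re

def str_allele(read: str, repeat: str = "TCTA") -> tuple[int, str]:
    """Return (repeat count, repeat-region sequence) for one read."""
    match = re.search(f"(?:{repeat}|TCTG)+", read)   # allow a TCTG variant repeat
    region = match.group(0) if match else ""
    return len(region) // len(repeat), region

read_1 = "GGCTTCTATCTATCTATCTAAGGA"   # 4 x TCTA
read_2 = "GGCTTCTATCTGTCTATCTAAGGA"   # TCTA TCTG TCTA TCTA, same length

print(str_allele(read_1))   # (4, 'TCTATCTATCTATCTA')
print(str_allele(read_2))   # (4, 'TCTATCTGTCTATCTA')
```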

Continue reading “Is MPS right for your forensics lab?”

Where Would DNA Sequencing Be Without Leroy Hood?

There have been many changes in sequencing technology over the course of my scientific career. In one of the research labs I rotated in as a graduate student, I assisted a third-year grad student with a manual radioactive sequencing gel because, I was told, “every student should run at least one in their career”. My first job after graduate school was as a research assistant in a lab that sequenced bacterial genomes. While I was the one creating shotgun libraries for the DNA sequencing pipeline, the sequencing reaction was performed using dideoxynucleotides labeled with fluorescent dyes and amplified in thermal cyclers. The resulting fragments were separated by manual loading on tall slab polyacrylamide gels (Applied Biosystems ABI 377s) or, once the lab got them running, capillary electrophoresis of four 96-well plates at a time (ABI 3700s).

Sequencing throughput has only increased since I left the lab. This was accomplished by increasing the well density of plates and the number of capillaries used in capillary electrophoresis but, more importantly, by the advent of short-read, massively parallel next-generation sequencing methods. The next-gen, or NGS, technique decreased the time needed to sequence because many sequences are determined at the same time, significantly increasing sequencing capacity. Instruments have also decreased in size, as has the price per base pair, a measurement we used when I was in the lab. The long-prophesied threshold of $1,000 per genome has arrived. And now, according to a recent tweet from a Nanopore conference, you can add a sequencing module to your mobile device.
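For a rough sense of what that milestone means per base, here is a back-of-the-envelope calculation (the $1,000 figure is the commonly quoted milestone and 3.1 billion base pairs is an approximate haploid genome size):

```python
# Back-of-the-envelope arithmetic only; both numbers are approximations.
genome_cost_usd = 1_000
genome_size_bp = 3.1e9            # ~3.1 billion base pairs, haploid human genome

cost_per_bp = genome_cost_usd / genome_size_bp
print(f"~${cost_per_bp:.1e} per base pair")   # ~$3.2e-07, a tiny fraction of a cent
```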

Continue reading “Where Would DNA Sequencing Be Without Leroy Hood?”

Choosing a Better Path for Your NGS Workflow

Imagine you are traveling in your car and must pass through a mountain range to get to your destination. You’ve been following a set of directions when you realize you have a decision to make. Will you stay on your current route, which is many miles shorter but contains a long tunnel that cuts straight through the mountains and obstructs your view? Or will you switch to a longer, more scenic route that bypasses the tunnel ahead and gets you to your destination a bit later than you wanted?

Choosing which route to take illustrates a clear trade-off that has to be considered—which is more valuable, speed or understanding? Yes, the tunnel gets you from one place to another faster. But what are you missing as a result? Is it worth a little extra time to see the majestic landscape that you are passing through?

Considering this trade-off is especially critical for researchers working with human DNA purified from formalin-fixed paraffin-embedded (FFPE) or circulating cell-free DNA (ccfDNA) samples for next-generation sequencing (NGS). These sample types present a few challenges when performing NGS. FFPE samples are prone to degradation, while ccfDNA samples are susceptible to gDNA contamination, and both offer a very limited amount of starting material to work with.

Continue reading “Choosing a Better Path for Your NGS Workflow”

Better NGS Size Selection

One of the most critical parts of a next-generation sequencing (NGS) workflow is library preparation, and nearly all NGS library preparation methods use some type of size-selective purification. This process removes unwanted fragment sizes that would otherwise interfere with downstream library preparation steps, sequencing or analysis.

Different applications may involve removing undesired enzymes and buffers, or removing nucleotides, primers and adapters for NGS library or PCR sample cleanup. In dual size selection methods, both large and small DNA fragments are removed to ensure optimal library sizing prior to final sequencing. In all cases, accurate size selection is key to obtaining optimal downstream performance and NGS sequencing results.
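As a simple illustration of the dual size selection idea (toy fragment sizes and an assumed target window, not any specific chemistry or kit):

```python
# Dual size selection keeps only fragments inside a target size window,
# discarding both the short and the long ends of the distribution.
fragment_sizes_bp = [95, 150, 180, 240, 320, 410, 650, 900]
lower_bp, upper_bp = 150, 450        # hypothetical window for a ~300 bp library

kept = [size for size in fragment_sizes_bp if lower_bp <= size <= upper_bp]
print(kept)   # [150, 180, 240, 320, 410]
```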

Current methods and chemistries for the purposes listed above have been in use for several years; however, they come at the cost of performance and ease of use. Many library preparation methods involve serial purifications, which can result in a loss of DNA; current methods can result in as much as 20–30% loss with each purification step. Ultimately, this may necessitate more starting material, which may not be possible with limited, precious samples, or the incorporation of more PCR cycles, which can introduce sequencing bias. Sample-to-sample reproducibility is a daily challenge that is also regularly cited as an area for improvement in size selection.
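A quick bit of arithmetic shows why those per-step losses matter, assuming for illustration that a workflow includes three serial cleanups:

```python
# Losses compound across serial purifications: even modest per-step loss
# leaves only a fraction of the input DNA after a few cleanups.
for loss_per_step in (0.20, 0.30):
    recovery = (1 - loss_per_step) ** 3        # three serial cleanup steps (assumed)
    print(f"{loss_per_step:.0%} loss per step -> {recovery:.0%} of input remains")
# 20% loss per step -> 51% of input remains
# 30% loss per step -> 34% of input remains
```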

Continue reading “Better NGS Size Selection”