On October 19, 2020, in a corner of what was once the African American section of the Potter’s Field in Tulsa’s Oaklawn Cemetery, a backhoe begins scraping away layer after layer of red Oklahoma earth. Workers in high-visibility vests and orange hard hats prepare to join the excavation. DeNeen Brown, a reporter with the Washington Post, looks on, bearing witness to a site that could be one of the final, unmarked resting places of victims of a massacre that happened nearly a century earlier.
In the summer of 2000, Promega research scientist Allan Tereba was asked to develop an automated protocol for purifying DNA for forensics. His team had recently launched DNA IQ, the first Promega kit for purifying forensic DNA using magnetic beads. This was before the Maxwell® instruments, and before Promega purification chemistries were widely adaptable to high-throughput automation.
“I had my doubts about being able to do that,” Allan says. “When you’re working with STRs, small amounts of contaminant DNA are going to mess up your results. But I went ahead and tried it, and it was a challenge.”
A little over a year later, Allan was in his office when he heard on the radio that a plane had struck the North Tower of the World Trade Center in New York City. Shortly after, he heard the announcement that a second plane had hit the South Tower.
By that point, Allan and his colleagues had successfully adapted DNA IQ to be used on the deck of a robot. Within days of the attacks, Promega scientists were supporting the New York City Office of Chief Medical Examiner (OCME) and New York State Police in their work to identify human remains that were recovered from Ground Zero.
Thanks to the work of Allan and many other Promega scientists, Promega was prepared to offer unique solutions to urgent needs. In their own words, here are some of those scientists’ reflections.
Today’s post was written by guest blogger Anupama Gopalakrishnan, Global Product Manager for the Genetic Identity group at Promega.
Next-generation sequencing (NGS), or massively parallel sequencing (MPS), is a powerful tool for genomic research. This high-throughput technology is fast and accessible—you can acquire a robust data set from a single run. While NGS systems are widely used in evolutionary biology and genetics, there is a window of opportunity for adoption of this technology in the forensic sciences.
Currently, capillary electrophoresis (CE)-based analysis of short tandem repeats (STRs) is the gold standard. These systems continue to evolve, with increasing sensitivity, robustness and inhibitor tolerance, and with the introduction of probabilistic genotyping in data analysis—all with the combined goal of extracting maximum identity information from low-quantity, challenging samples. However, obtaining profiles from these samples and interpreting mixtures continue to pose challenges.
MPS systems enable simultaneous analysis of forensically relevant genetic markers to improve efficiency, capacity and resolution—with the ability to generate results on nearly 10-fold more genetic loci than the current technology. What samples would truly benefit from MPS? Mixture samples, undoubtedly. The benefit of MPS is also exemplified in cases where the samples are highly degraded or the only samples available are teeth, bones or hairs without a follicle. By adding a sequencing component to the allele-length component of CE technology, MPS addresses the greatest current challenges in forensic DNA analysis—namely, resolving allele sharing between contributors and distinguishing PCR artifacts, such as stutter. Additionally, single nucleotide polymorphisms in the sequences flanking the repeat can reveal additional alleles, adding discrimination power. For example, sequencing of Y-chromosome loci can help distinguish between mixed male samples from the same paternal lineage and, therefore, provide valuable information in decoding mixtures that contain more than one male contributor. Also, since MPS technology is not limited by real estate, all primers in an MPS system can target small loci, maximizing the probability of obtaining a usable profile from the degraded DNA typical of challenging samples.
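The core advantage described above can be sketched in a few lines: two STR alleles of identical length are indistinguishable to CE, but their underlying sequences separate them under MPS. The locus, repeat motifs and allele structures below are invented for illustration only.

```python
# Hypothetical illustration: two same-length STR alleles (which CE cannot
# distinguish) differ in repeat composition, which sequencing resolves
# into two distinct alleles.

def ce_alleles(observed):
    """CE sees only fragment length."""
    return {len(seq) for seq in observed}

def mps_alleles(observed):
    """MPS sees the full sequence, so same-length isoalleles separate."""
    return set(observed)

# Two contributors carrying same-length but different-sequence alleles
# at a hypothetical locus (repeat units written inline for readability).
observed = [
    "TCTA" * 5 + "TCTG" * 5,   # contributor A: 10 repeats total
    "TCTA" * 6 + "TCTG" * 4,   # contributor B: also 10 repeats total
]

print(len(ce_alleles(observed)))   # CE: 1 apparent allele
print(len(mps_alleles(observed)))  # MPS: 2 distinct alleles
```

Under CE both contributors collapse into one peak; under MPS the shared length splits into two alleles, which is exactly the allele-sharing problem the paragraph describes.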
Forensic analysts have long sought precision when determining time of death. On crime scene investigation television shows, the presence of insects always seems to reveal when a person died; in reality, there are many elements to account for, and the probable date may still not be accurate. Insects arrive days after death, if at all (e.g., if the body is found indoors or after burial), and the stage of insect activity is influenced by temperature, weather conditions, seasonal variation, geographic location and other factors. All this makes it difficult to estimate the postmortem interval (PMI) of a body discovered an unknown time after death. One way to make estimating PMI less subjective would be to have calibrated molecular markers that are easy to sample and are not altered by environmental variability.
Bacterial communities called microbiomes have been frequently in the news. The influence of these microbes encompasses living creatures and the environment. Not surprisingly, research has focused on the influence of microbiomes on humans. For example, changes in the gut microbiome seem to affect human health. Intriguingly, microbiomes may also be a key to determining time of death. The National Institute of Justice (NIJ) has funded several projects focused on the forensic applications of microbiomes. One focus involves the necrobiome, the community of organisms found on or around decomposing remains. These microbes could be used as an indicator of PMI when investigating human remains. Recent research published in PLOS ONE examined the bacterial communities found in human ears and noses after death and how they changed over time. The researchers were interested in developing an algorithm using the data they collected to estimate time of death.
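The idea behind such an algorithm can be sketched as a simple calibration: fit the abundance of an indicator taxon against known postmortem intervals, then invert the fit for a new sample. The taxon, sampling site and all numbers below are invented for illustration and are not taken from the PLOS ONE study.

```python
# Toy sketch of a microbiome "clock": ordinary least squares relating the
# relative abundance of a hypothetical decomposer taxon to known PMI
# (in accumulated degree days), then inverted to estimate PMI.

def fit_line(xs, ys):
    """Return (slope, intercept) of an ordinary least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Training data: PMI vs. relative abundance of an invented taxon
# sampled from the ear canal (illustrative values only).
pmi_add = [50, 100, 150, 200, 250]
abundance = [0.05, 0.12, 0.19, 0.27, 0.33]

slope, intercept = fit_line(pmi_add, abundance)

# Invert the calibration to estimate PMI for a new sample.
new_abundance = 0.22
estimated_pmi = (new_abundance - intercept) / slope
print(round(estimated_pmi))
```

Real models use many taxa and machine-learning regressors rather than one line, but the calibrate-then-invert logic is the same.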
“How do you like the name Jack?” the woman on the phone asked.
On April 26, 1964, a nurse came into the hospital room of Dora Fronczak, who had just given birth to her son, Paul. She told Mrs. Fronczak that it was time to take the baby to the nursery (at that time, newborns did not stay in the room with their mothers), took the baby, and left. A few hours later, another nurse came into the room to take young Paul to the nursery. It was then that everyone realized a mother’s worst fear: Her infant had been stolen.
Authorities were able to determine how the woman left the hospital and that she got into a cab, but they were never able to find her. However, in 1965, a toddler-aged boy was found abandoned outside a store in New Jersey. Blood tests were not inconsistent with his being Paul Fronczak (DNA testing was not yet available), and no other missing-children cases in the area matched. The little boy was sent to Chicago as Paul Fronczak, and the case was closed.
However, as an adult, Paul Fronczak began to suspect that the couple who raised him were not his biological parents, and in 2012 Paul underwent DNA analysis to test his suspicions. The results showed that indeed, he was not the biological son of Dora and Chester Fronczak. His next step was to enlist the help of a genetic genealogist to assist him in finding his true biological parents and his identity.
By conducting “familial searches” using commercially available DNA databases such as 23andMe and AncestryDNA, among other resources, the genealogist’s group found a match to his DNA on the East Coast. Further groundwork revealed that this family was indeed Paul’s…now Jack’s.
The knowledge of Jack’s true identity didn’t bring with it the joyous union many might imagine between the adoptive family who had raised and loved Jack (as Paul) and the biological family who had pined for him over the years.
We shared in laughter and tears. We tempered our scientific pursuit of the truth with the story of an unimaginably strong survivor of rape. We witnessed the struggles of a man trying to find his identity and the joy of being reunited with real family members after 30 years of lies. I find it hard to succinctly describe to others what my first ISHI conference was like. There is perhaps nothing more personal than our own genetic identities. This conference didn’t shy away from the raw emotions that encompass the human experience. We define ourselves as employees of this company or researchers at that institution, competing for attention and funding, yet this conference reveals how limiting these preconceptions may be.
The desire to make the world a better place unites us. I spoke with analysts for hours about the challenges of overcoming the sexual assault kit backlog, I made a fool of myself dancing to musical bingo with new friends from the Philippines and Brazil, and I was inspired by the casual musings of a video journalist. We are sure to see countless more ethical debates on how we should be using DNA (or proteins!) for human identification. The field of science relies on the open sharing and exploration of new ideas, and as admittedly biased as I am to the conveniences of the digital age, there has never been a better time to come together in person.
Here at Promega we receive some interesting requests…
Take the case of Virginia Riddle Pearson, elephant scientist. Three years ago we received an email from Pearson requesting a donation of GoTaq G2 Taq polymerase to take with her to Africa for her field work on elephant herpesvirus. Working out of her portable field lab (a tent) in South Africa and Botswana, she needed a polymerase she could count on to perform reliably after being transported for several days (on her lap) at room temperature. Through the joint effort of her regional sales representative in New Jersey/Pennsylvania (Pearson’s lab was based out of Princeton University at the time) and our Genomics product marketing team, she received the G2 Taq she needed to take to Africa. There she was able to conduct her experiments, leading to productive results and the opportunity to continue pursuing her work.
Forensic lab validations can be intimidating, so Promega Technical Services Support and Validation teams shared these tips for making the process go more smoothly.
Prepare Your Lab. Make sure all of your instrumentation (CEs, thermal cyclers, 7500s, centrifuges) and tools (pipettes, heat blocks) requiring calibration or maintenance are up to date.
Start with Fresh Reagents. Ensure you have all required reagents and that they are fresh before beginning your validation. This includes not only the chemistry being validated, but any preprocessing reagents or secondary reagents like polymer, buffers, TE-4 or H2O.
Develop a Plan. Before beginning a validation, take the time to create plate maps, calculate required reagent volumes, etc. This up-front planning may take some time initially, but will greatly improve your efficiency during testing.
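The up-front volume calculations this tip describes can be automated with a small helper like the sketch below. The reagent names and per-reaction volumes are illustrative, not from any specific Promega protocol, and the 10% overage factor is a common rule of thumb, not a requirement.

```python
# Planning helper: scale a per-reaction recipe up to a full plate,
# padding by an overage factor to cover pipetting loss.

def master_mix(recipe_ul, n_reactions, overage=1.1):
    """Return total volume (µL) per reagent for n_reactions plus overage."""
    return {reagent: round(vol * n_reactions * overage, 1)
            for reagent, vol in recipe_ul.items()}

# Hypothetical per-reaction recipe for a 25 µL amplification.
recipe = {"5X master mix": 5.0, "primer pair mix": 2.5, "water": 2.5}

print(master_mix(recipe, n_reactions=24))
# → {'5X master mix': 132.0, 'primer pair mix': 66.0, 'water': 66.0}
```

Pairing a table like this with a plate map before touching a pipette is exactly the kind of planning that pays off during testing.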
Create an Agenda. After a plan is developed, work through that plan and determine how and when samples will be created and run. Creating an agenda will hold you to a schedule for getting the testing done.
Determine the Number of Samples Needed to Complete Your Validation. Look at your plan and see where samples can be used more than once. The more a sample can be used, the less manipulation done to the sample and the more efficient you become.
Select the Proper Samples for Your Validation. Samples should include those you know will yield results and that are similar to the ones you’ll most likely be testing, and your test samples should contain plenty of heterozygotes. When you are establishing important analysis parameters, like thresholds, poor sample choice may cause problems and require troubleshooting after the chemistry is brought online.
Perform a Fresh Quantitation of Your Samples. This will ensure the correct dilutions are prepared. Extracts that have been sitting for a long time may have evaporated or contain condensation, resulting in a different concentration than when first quantitated.
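The arithmetic behind this tip is the standard C1·V1 = C2·V2 dilution equation: re-quantify the extract, then compute how much extract and diluent hit the target concentration. The concentrations and volumes below are illustrative.

```python
# Dilution calculator: solve C1*V1 = C2*V2 for V1, given a freshly
# re-quantified stock concentration and a target concentration.

def dilution(stock_ng_ul, target_ng_ul, final_ul):
    """Return (extract µL, diluent µL) needed to reach the target."""
    v1 = target_ng_ul * final_ul / stock_ng_ul
    return round(v1, 2), round(final_ul - v1, 2)

# An extract re-quantified at 2.4 ng/µL, diluted to 0.5 ng/µL in 20 µL.
print(dilution(2.4, 0.5, 20))  # → (4.17, 15.83)
```

If the extract had evaporated to, say, 3.0 ng/µL since its original quantitation, the same target would need noticeably less extract—which is why a fresh quant matters.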
Stay Organized. Keep the data generated in well-organized folders. Validations can contain a lot of samples, and keeping those data organized will help during the interpretation and report writing phase.
Determine the Questions to Be Answered. While writing the report, determine the questions each study needs to answer. Determining what specifically is required for each study will prevent you from compiling unnecessary data. For example, do you need to calculate allele sizes for your reproducibility study samples when you have already shown precision with your ladder samples?
Have fun! Remember, validations are not scary when approached in a methodical and logical fashion. You have been chosen to thoroughly test something that everyone in your laboratory will soon be using. Take pride in that responsibility and enjoy it.
Need more information about validation of DNA-typing products in the forensic laboratory? Check out the validation resources on the Promega website for the steps required to adopt a new product in your laboratory and recommendations that can help make your validation efforts less burdensome.
That is the prevailing question I’m asked when someone learns of my occupation as Deputy Sheriff Criminalist for the Contra Costa County (CA) Office of the Sheriff. Alas, my life is not quite so glamorous. It actually often entails entering formulas into an Excel spreadsheet while being placed on hold as I order some pipette tips.
But, why does it have to be that way?
I have attended my fair share of professional conferences and workshops and written numerous journal articles. As a forensic scientist I do believe in the importance of sharing data, new techniques, and new methodologies with my colleagues. Yet what I think is not highlighted enough is the one element that differentiates our field from any other scientific field—our involvement with the criminal justice system. Every case we work on involves a mystery, a crime, a victim(s), and a suspect(s). And while scientists in other fields typically only speak to other scientists, in my world, forensic scientists usually interact with a person in a black robe who has the power to strongly influence the outcome of a case. These wildly frustrating, invigorating, and challenging cases are the most interesting things about our field, and yet we hardly share our stories.
The American Academy of Forensic Sciences’ 68th annual conference took place in Las Vegas February 22–27, and those of us who did not attend had to live vicariously through the social media posts of those who did. The question on everyone’s mind: Who was up five hundy by midnight?
Okay, okay, most people who went to AAFS went for scientific purposes, and in fact, @andycyim was the only one to post a tribute to Swingers with a #vegasbaby tweet. Tip of the hat to you, Andy. So what did the Twitterverse look like during the week of the conference? I analyzed nearly 600 tweets and found some interesting patterns of how scientists interact on social media during a conference. More on my methodology is at the end of this article.
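A tally of that sort can be sketched in a few lines: collect tweets, extract hashtags, and count them. The sample tweets below are invented stand-ins; the actual dataset and methodology are the author's own, described at the end of the article.

```python
# Minimal sketch of a conference-tweet tally: pull hashtags out of
# collected tweet text and count their frequency.
from collections import Counter
import re

# Invented sample tweets standing in for the real ~600-tweet dataset.
tweets = [
    "Great talk on probabilistic genotyping #AAFS2016",
    "#vegasbaby #AAFS2016 workshops start tomorrow",
    "Poster session was packed! #AAFS2016 @andycyim",
]

# Case-fold so #AAFS2016 and #aafs2016 count as one tag.
hashtags = Counter(tag.lower() for t in tweets
                   for tag in re.findall(r"#\w+", t))
print(hashtags.most_common(2))
# → [('#aafs2016', 3), ('#vegasbaby', 1)]
```

The same pattern extends to @-mentions, retweet counts, or time-of-day bucketing for a fuller picture of how attendees interact online.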