In general, people like to know that their food is what the label says it is. It’s a real bummer to find out that the beef lasagna you just ate was actually horsemeat. Plus, there are many religious, ethical and medical reasons to be cognizant of what you eat. Someone who’s gluten intolerant or keeps halal probably doesn’t want a bite of that BLT.
Labels don’t always accurately reflect what is in food. So how do we confirm that we are in fact buying crab, and not whitefish with a side of Vibrio contamination?
For the most part, it comes down to separation science. Scientists and technicians use chromatographic and spectrometric methods, such as gas chromatography, liquid chromatography and mass spectrometry, to separate the complex mixture of molecules in food into individual components. By first mapping out the molecular profile of reference samples, they can then take an unknown sample and compare its profile to what it should look like. If the two don’t match, an analyst would conclude that the unknown is not what it claims to be.
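As a toy sketch of that comparison step (the peak intensities and the 0.95 match threshold below are invented for illustration; real authentication workflows rely on validated reference libraries and statistical match criteria), one could score an unknown sample’s peak profile against a reference with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length peak-intensity profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical chromatographic peak intensities at fixed retention times.
reference_crab = [0.9, 0.1, 0.7, 0.3, 0.8]
unknown_sample = [0.1, 0.9, 0.2, 0.8, 0.1]

score = cosine_similarity(reference_crab, unknown_sample)
# If the score falls below a validated threshold, the sample
# likely is not what its label claims.
is_match = score >= 0.95
```

Here the unknown’s profile scores well below the threshold, so this (made-up) sample would be flagged as mislabeled.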
Implementing automated nucleic acid purification or making changes to your high-throughput (HT) workflow can be complicated and time-consuming. There are also many barriers to success, such as challenging sample types and maintaining desirable downstream results, that can add to the stress, not to mention actually getting the robotic instrumentation to do what you want it to. All of this makes it easy to understand why many labs avoid automating or own expensive instrumentation that goes unused.
One of the most critical parts of a Next Generation Sequencing (NGS) workflow is library preparation, and nearly all NGS library preparation methods use some type of size-selective purification. This process removes fragments of unwanted sizes that would interfere with downstream library preparation steps, sequencing or analysis.
Different applications may involve removing undesired enzymes and buffers, or removing nucleotides, primers and adapters for NGS library or PCR sample cleanup. In dual size selection methods, both large and small DNA fragments are removed to ensure optimal library sizing prior to final sequencing. In all cases, accurate size selection is key to obtaining optimal downstream performance and NGS sequencing results.
Current methods and chemistries for the purposes listed above have been in use for several years; however, they come at a cost in performance and ease of use. Many library preparation methods involve serial purifications, and current methods can lose as much as 20–30% of the DNA with each purification step. Ultimately this may necessitate more starting material, which may not be possible with limited, precious samples, or the incorporation of more PCR cycles, which can introduce sequencing bias. Sample-to-sample reproducibility is a daily challenge that is also regularly cited as an area for improvement in size selection.
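To see how quickly those per-step losses compound, here is a minimal sketch, assuming a flat 25% loss per step (the midpoint of the 20–30% range cited above):

```python
def remaining_dna(start_ng, steps, loss_per_step):
    """DNA remaining after serial purifications, each losing a fixed fraction."""
    return start_ng * (1 - loss_per_step) ** steps

# Three serial purifications at 25% loss each leave well under half
# of a 100 ng input: 100 * 0.75**3 = ~42 ng.
left = remaining_dna(100, 3, 0.25)
print(round(left, 1))  # → 42.2
```

Cutting the loss per step, or the number of serial purifications, pays off multiplicatively, which is why precious, low-input samples are so sensitive to cleanup efficiency.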
Keynote speaker David O’Shea kicked off the ISHI 27 conference with his investigation of how DNA profiling is helping to reunite families in Argentina after the military coup of the late 1970s.
We shared in laughter and tears. We tempered our scientific pursuit of the truth with the story of an unimaginably strong survivor of rape. We witnessed the struggles of a man trying to find his identity and the joy of being reunited with real family members after 30 years of lies. I find it hard to succinctly describe to others what my first ISHI conference was like. There is perhaps nothing more personal than our own genetic identities. This conference didn’t shy away from the raw emotions that encompass the human experience. We define ourselves as employees of this company or researchers at that institution, competing for attention and funding, yet this conference reveals how limiting these preconceptions may be.
The desire to make the world a better place unites us. I spoke with analysts for hours about the challenges of overcoming the sexual assault kit backlog, I made a fool of myself dancing to musical bingo with new friends from the Philippines and Brazil, and I was inspired by the casual musings of a video journalist. We are sure to see countless more ethical debates on how we should be using DNA (or proteins!) for human identification. The field of science relies on the open sharing and exploration of new ideas, and as admittedly biased as I am to the conveniences of the digital age, there has never been a better time to come together in person.
Don’t just take my word for it, though.
There were some phenomenal talks each day, and I did my best to capture the essential takeaways from each one.
CRISPR is a hot topic right now, and rightly so: it is revolutionizing research that relies on editing genes. But what exactly is CRISPR? How does it work? Why is everyone so interested in using it? Today’s blog is a beginner’s guide to how CRISPR works, along with an overview of some new applications of this technology for those already familiar with it.
Introduction to CRISPR/Cas9
Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) were discovered in 1987, but it took nearly 20 years before scientists identified their function. CRISPRs are a special kind of repeating DNA sequence that bacteria have as part of their “immune” system against invading nucleic acids from viruses and other bacteria. Over time, the genetic material from these invaders can be incorporated into the bacterial genome as a CRISPR and used to target specific sequences found in foreign genomes.
CRISPRs are part of a system within a bacterium that requires a nuclease (e.g., Cas9), a CRISPR RNA (crRNA) and a trans-activating crRNA (tracrRNA). The tracrRNA recruits Cas9, while the crRNA binds to Cas9 and guides it to the corresponding DNA sequence of the invading genome. Cas9 then cuts the DNA, creating a double-stranded break that disables its function. Bacteria use a Protospacer Adjacent Motif, or PAM, sequence near the target sequence to distinguish between self and non-self and protect their own DNA.
While this system is an effective method of protection for bacteria, CRISPR/Cas9 has been adapted to perform gene editing in the lab (click here for a video about CRISPR). First, the tracrRNA and crRNA are combined into a single guide RNA (sgRNA). Then the sequence of the guide portion of this RNA is changed to match the target sequence. Using this engineered sgRNA along with Cas9 results in a double-stranded break (DSB) in the target DNA sequence, provided the target sequence is adjacent to a compatible PAM sequence.
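The targeting rule can be sketched in a few lines for SpCas9, whose PAM is “NGG” (the sequences below are toy examples, the guide is shortened from the usual 20 nt for readability, and real guide-design tools also search the reverse strand and tolerate mismatches):

```python
import re

def find_cas9_sites(genome, guide):
    """Return positions where the guide sequence is immediately
    followed by an SpCas9 'NGG' PAM on the given strand."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide):i + len(guide) + 3]
            if re.fullmatch(r"[ACGT]GG", pam):  # 'NGG' PAM check
                sites.append(i)
    return sites

# Toy sequence containing the guide followed by a TGG PAM at index 2.
genome = "AAGACGTTACCGGATCTAGCTGGTTT"
guide = "GACGTTACCGGATCTAGC"
print(find_cas9_sites(genome, guide))  # → [2]
```

Without the trailing NGG, the same guide match would be skipped, which mirrors how Cas9 ignores otherwise perfect targets that lack a compatible PAM.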
By Fredy Peccerelli
Guatemala’s method of uncovering human rights violations can help other post-conflict areas, says Fredy Peccerelli.
During Guatemala’s internal armed conflict (1960–1996) almost 200,000 people are thought to have been killed or ‘disappeared’ at the hands of repressive and violent regimes. Those lives matter. Their families’ demands are clear: they want to know what happened to their loved ones and they want their remains returned. They need truth and justice.
Using forensic sciences, the Forensic Anthropology Foundation of Guatemala (FAFG) is assisting families by returning their loved ones’ remains, promoting justice, and setting the historical record straight.
DNA-based evidence has a long history of admissibility in legal proceedings stretching back to 1985, when Sir Alec Jeffreys first used DNA testing to resolve an immigration dispute in the United Kingdom. In 1987 and 1988, DNA made more court appearances in parallel legal cases, helping to convict rapist Tommy Lee Andrews in the United States and serial rapist and murderer Colin Pitchfork in the UK. Since these cases, the admissibility of DNA evidence in US courts has been challenged and upheld numerous times (United States v. Jakobetz and Andrews v. Florida), and DNA evidence has become the gold standard in many court cases. So why are scientists being asked once again to debate the admissibility of DNA evidence, specifically high sensitivity DNA, in the courtroom?
A wanted poster for Jack the Ripper, who was also known as Leather Apron.
Image courtesy of the British Museum
In the late 1800s, Victorian England was mesmerized and horrified by a series of brutal killings in the crowded and impoverished Whitechapel district. The serial killer, who became known as “Jack the Ripper”, had murdered and mutilated at least five women, many of whom worked as prostitutes in the slums around London. None of these murders were ever solved, and Jack the Ripper was never identified, although investigators interviewed more than 2,000 people and named more than 100 suspects. Now, 126 years after the murders, a British author, who coincidentally has just published a book on the subject, is claiming that DNA analysis has revealed the identity of the notorious killer. DNA is often thought to be the “gold standard” of human identification techniques, so why is there so much skepticism surrounding this identification?
Recently, researchers of the SIGMA Type 2 Diabetes Consortium published a paper in Nature identifying a new locus associated with a higher risk of type 2 diabetes (1). Considering the increasing prevalence of this metabolic disease in today’s sugar-filled world, any discovery that helps us understand diabetes is exciting news. However, the most interesting discovery published in this paper might not be this new gene variant but rather the origin of this variant in modern human populations: Neandertals.
At the recent International Symposium on Human Identification, Kevin Davies, the keynote speaker and author of The $1,000 Genome, entertained attendees with a history of human genome sequencing efforts and discussed ways in which the resulting information has infiltrated our everyday lives. Obviously, there is enough material on the subject to fill a book, but I will describe just a few of the high points of his talk here.