Will Artificial Intelligence (AI) Transform the Future of Life Science Research?

Artificial intelligence (AI) is not a new technological development. The idea of intelligent machines has been popular for centuries. The term “artificial intelligence” was coined by John McCarthy in a 1955 proposal for a workshop held at Dartmouth College the following summer (1), and that 1956 workshop is considered the birthplace of AI research. Modern AI owes much of its existence to an earlier paper by Alan Turing (2), in which he proposed the famous Turing Test to determine whether a machine could exhibit intelligent behavior equivalent to—or indistinguishable from—that of a human.

The explosive growth in all things AI over the past few years has evoked strong reactions from the general public. At one end of the spectrum, some people fear AI and refuse to use it—even though they may have unwittingly been using a form of AI in their work for years. At the other extreme, advocates embrace all aspects of AI, regardless of potential ethical implications. Finding a middle ground is not always easy, but it’s the best path forward to take advantage of the improvements in efficiency that AI can bring, while still being cautious about widespread adoption. It’s worth noting that AI is a broad, general term that covers a wide range of technologies (see sidebar).

AI personified looking at a DNA double helix against an abstract cosmic background
Image generated with Adobe Firefly v.2.

For life science researchers, AI has the potential to address many common challenges; a previous post on this blog discussed how AI can help develop a research proposal. AI can help with everyday tasks like literature searches, lab notebook management, and data analysis. It is already making strides on a larger scale in applications for lab automation, drug discovery and personalized medicine (reviewed in 3–5). Significant medical breakthroughs have resulted from AI-powered research, such as the discovery of novel antibiotic classes (6) and assessment of atherosclerotic plaques (7). A few examples of AI-driven tools and platforms covering various aspects of life science research are listed here.

Continue reading “Will Artificial Intelligence (AI) Transform the Future of Life Science Research?”

Can AI Help You Develop a Research Proposal?

Integrating artificial intelligence (AI) into the process of scientific research offers a wealth of efficiency-boosting tools that are transforming the ways scientists can approach their work. Many are already using AI to refine code, automate data processing, and edit papers, presentations, abstracts and more. Personally, I find generative language models like ChatGPT to be invaluable “editorial assistants” in my work as a science writer, helping me work through wonky sentence structures, be more concise and get over writer’s block, to name a few applications.

An AI-generated image of a man in a white lab coat who has a thoughtful look on his face. He is looking off camera with his hand on his chin. The background is a field of light bursts and bright lines against a dark backdrop.
Image generated using Adobe Firefly

But a scientist’s work doesn’t only involve writing or analyzing data, making presentations or keeping up with the literature. An essential component of any research scientist’s skillset is their ability to develop entirely new ideas and novel research proposals. Coming up with research questions and plans is a central component of graduate education and research careers, both in academia and industry.

As AI continues to advance and find broader use, a critical question arises: Can AI play a pivotal role in the creative process of developing entirely new ideas, such as crafting novel research proposals?

Continue reading “Can AI Help You Develop a Research Proposal?”

The Simplex Things In Life: Utilizing Artificial Intelligence Models to Better Understand Autism

Autism Spectrum Disorder, or ASD, is nothing if not unique.

The way ASD manifests itself in people is unique; although it most often presents as some form of variable impairment in social interaction and communication, each individual has behaviors and habits that are as unique to them as snowflakes are to one another.

ASD has also proven itself to be a uniquely challenging disorder to study. In the past decade, de novo (new) mutations have been identified as key contributors to ASD. However, the majority of these identified de novo mutations are located in protein-coding genes, which comprise only 1–2% of the entire human genome.

To date, most research has focused on identifying mutations in the roughly 20,000 identified genes of the protein-coding region, which would seem like a promising approach. Genes are the genetic blueprints for creating proteins, which control and perform crucial tasks in our bodies, such as fighting off infections, carrying chemical messages between organs, tissues and cells, and regulating blood sugar levels. It seems like basic math: Genes + Mutations = Mutated Proteins. Mutated Proteins = Disrupted Protein Function.
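The “only 1–2% of the genome” figure above is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch; the genome size, gene count and average coding length are rough, commonly cited approximations used for illustration, not exact values.

```python
# Back-of-the-envelope: what fraction of the human genome codes for protein?
# All figures are approximate and for illustration only.
GENOME_BP = 3.1e9        # haploid human genome size, in base pairs
CODING_GENES = 20_000    # approximate number of protein-coding genes
AVG_CODING_BP = 1_500    # rough average coding sequence per gene (~500 codons)

coding_bp = CODING_GENES * AVG_CODING_BP   # total protein-coding base pairs
fraction = coding_bp / GENOME_BP

print(f"Protein-coding fraction of the genome: {fraction:.1%}")
```

With these rough inputs the result lands near 1%, consistent with the 1–2% range quoted above; the exact figure depends on how untranslated regions and alternative transcripts are counted.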

However, all the known ASD-associated genes together explain only a minor fraction of new autism cases, and known de novo mutations in the protein-coding region are estimated to account for no more than 30% of cases in individuals with no family history of autism (better known as simplex ASD). This suggests that mutations contributing to autism must also occur elsewhere in the genome.

Continue reading “The Simplex Things In Life: Utilizing Artificial Intelligence Models to Better Understand Autism”

Is Artificial Intelligence a Threat to Mankind?

Technology: We all use it, and some of us couldn’t go an entire day without it. In many ways, digital technology has improved our lives by increasing productivity and communication. Computer technology is everywhere: our homes, offices, phones and even cars. Technology has integrated into our lives so completely that most of us no longer stop to marvel at even the seemingly simplest capabilities, such as the predictive software on our smartphones that guesses which word we are typing from just the first few letters—even when it gets it wrong much of the time. However, digital technology has its dangers and inconveniences: cybercrime, hackers, stolen data, computer crashes and failed Wi-Fi connections at the most inopportune times. In a recent BBC interview, one of modern science’s most brilliant minds highlighted another potential danger: artificial intelligence. Does artificial intelligence pose a threat to mankind?

Continue reading “Is Artificial Intelligence a Threat to Mankind?”

Under the Weather? Twitter Knew Over a Week Ago

I’m not feeling very well today, which stinks because it’s Friday and I had some really fun plans tonight. Instead, I’ll probably end up staying home for a quiet night with my husband and daughter, some takeout food and an early bedtime. I’m not complaining too much, though, because let’s be honest, you enjoy those quiet nights when you have a one-year-old toddler! But a recent article in New Scientist makes me wonder: had I been paying close enough attention to Twitter, could I have known over a week ago that I would be under the weather today, saving me from having to tell all my girlfriends I’m probably pooping out on them tonight?

Continue reading “Under the Weather? Twitter Knew Over a Week Ago”