If your organization creates or curates content, you are aware of the new kid in town: artificial intelligence (AI). Here we lay out the landscape of AI tools as they relate to research.
Tools powered by AI, in particular those using large language models, promise to speed up the often time-consuming work of scientific discovery and innovation. Most researchers spend the majority of their workdays reading and analyzing vast amounts of data and literature—tasks that take AI applications a fraction of the time.
Given the potential time and cost savings, it’s no wonder legacy scientific databases are rolling out AI enhancements for search and discovery tools; formatting datasets and publications alone eats up an estimated 52 hours per year. AI tools for every use case are being released at lightning speed. However, automated solutions are prone to error, bias, and other ethical concerns, and unattributed use of chatbots in research is on the rise.
Evidence shows that power and privilege can be hardwired into search algorithms, and the same is proving true for AI. And the environmental impact of running large language models (LLMs) is cause for concern. So, what are the benefits and drawbacks of using these tools in your research?
AI For Research: The Good News
Writing and presentation tasks are often a challenge and a time-suck for researchers; tools like ChatGPT can get a jump on a draft paper or refine an article in development. Automated translation tools like DeepL are proving valuable, especially for non-English speakers looking to publish in leading journals. Summaries of papers for non-specialist audiences remain rare, given the already time-intensive task of writing articles; tools like WritefullX, SciSpace CoPilot, and SemanticScholar aim to make that an easier step in the publishing process.
Faster, more targeted literature or data search is an obvious area to flex AI muscles, as are analytics tasks like identifying patterns across texts. This is the focus of System, which surfaces relevant content and offers background context to deepen understanding. Services like LookUp promise to shorten the runway for data analysts, and AI is proving useful for speeding up quantitative calculations in some proof-of-concept studies.
AI-facilitated literature reviews, using tools like Elicit, can save researchers a great deal of time, identifying research gaps across large batches of articles faster than any human could. LitMaps also offers visual knowledge graphs, and some platforms, such as Consensus and Iris.ai, include chatbot assistance.
ChatGPT itself has proven helpful to scholars across a range of research tasks, from peer review to manuscript formatting. Data visualization can now be achieved with Olli.ai, publishing tasks can be supported by the likes of Grammarly and Turnitin, and the AI behind Research Rabbit can facilitate research collaboration and networking.
AI For Research: The Drawbacks
It’s not all upside; there are many hazards to navigate when using AI tools in advanced research contexts. Anyone who’s tried ChatGPT has encountered its limitations and maybe even “hallucinations,” where AI conjures plausible-sounding content or citations that are simply incorrect. In its current state, the free version of ChatGPT is trained on superficial content, not the type of highly specialized material required for professional use cases. Even tools trained for research purposes do not produce consistent results, and inaccuracies have been found even in scientific tools like Consensus.
And there are many legal unknowns. The US Congress is looking into regulations, under pressure from the White House, and lawsuits around the world are disputing whether training AI with copyrighted materials should be considered fair use. Debates are raging about whether chatbot outputs should be considered plagiarism—or if publishing chatbot-produced content as your own is legit. As far as scholarly publishing is concerned, the industry came out quickly to affirm that chatbots are not authors—but the jury is still out on how copyright policies apply to generative AI.
Some are concerned about risks to privacy and security, as well as encoded biases that could eventually undermine the credibility and value of research advancements. Similar to the anxieties that arose with the advent of the calculator or the automatic transmission, some worry that regular use of generative AI will degrade researchers’ critical thinking skills in specialty fields. Could AI become a crutch that eventually erodes overall knowledge generation and scientific breakthroughs?
Balancing Risk and Reward
So, how does one mitigate these risks while incorporating the benefits of AI into the research workflow? Here are a few expert tips about how to make the most of these innovations while minimizing the drawbacks.
Keep humans in the loop: All AI outputs should be carefully reviewed and validated, and likely expanded and edited to meet the necessary standards. AI tools must be carefully trained and maintained, which involves human interaction and, therefore, regular opportunities to check for errors or biases. Experts also advise treating engagement with AI as an iterative process: don’t give up after one or two attempts at a chatbot prompt, since refining prompts is a key mechanism for continually improving and training AI tools.
Use AI for quality assurance: AI can generate content, but it can also validate content. AI Reviewer is a startup offering pre-submission checks, and tools like ImageTwin and Penelope.ai can be leveraged to check for errors, gaps, or manipulated data. Some of these validation tasks are better suited to AI than to humans, for example in the speed and volume of memorization and in quickly spotting patterns in large datasets. This can free up human time for things AI cannot do well, like affective, situational, and experiential intelligence, common sense, or rationality. But be sure to heed the advice of experts and don’t assume AI tools are perfect.
Seek pro guidance: Just as we continually train AI models, we must all continually learn how to make the most of these new technologies. Look for best practices and training from experts like librarians and information practitioners. Alongside other professional development goals, we can all hone our AI skills, or human intelligence tasks (HITs), as these will be key to how research is done in the very near future.
Resources & More Reading
- Chan Zuckerberg Science To Build AI GPU Cluster To Model Cell Systems
- Can AI help with the heavy lifting of research communications?
- Setting the Scene: How Artificial Intelligence is reshaping how we consume and deliver research
- AI hype is built on high test scores. Those tests are flawed.
- Artificial-intelligence search engines wrangle academic literature
- Superficial engagement with generative AI masks its potential contribution as an academic interlocuter