
Disruptive forces: Open research and AI

01/07/2024
[Image: a circuit board]

AI! We can’t seem to go a day without hearing about it! Researchers have been using forms of AI for a while now to generate research data. In principle, open research and new forms of AI share common ground: both are disruptive to the status quo. Open research, in its various forms, has been established for much longer than the current wave of generative AI, so its benefits and ethical frameworks are better understood. In both cases, though, neither is likely to disappear, and both, if used responsibly, have the potential to change research practice.

In this post we won't go into all the issues that need to be considered to use AI responsibly, and we aren't endorsing its use; there are clear ethical issues in using generative AI to write articles, theses and research papers. Instead, we will try to summarise some ways that researchers are using AI to achieve open research goals.

How does open research interact with artificial intelligence? There are a few ways that the two areas could come together in the near future.

  • Making research outputs open access means that more robust material is available to large language models (LLMs) like ChatGPT. Early problems with generative AI included hallucinated responses and obvious biases stemming from the material LLMs were using to generate responses. Access to open research should improve the quality of responses to prompts, and some AI products, particularly those from publishing companies, promote the validity of their tools based on the datasets they use. However, to meet open research goals, AI tools should be transparent about what data they use to generate responses, and at the moment many are not.
  • Automation of open research practices. Open research is built on the principles of responsible and reproducible research. AI could improve the efficiency of workflows by generating material from carefully prepared prompts. For example, code for processing data in quantitative research could be generated from other examples of open code, reducing the possibility of human error and the need for extensive training of researchers. Where code is not open, copyright issues would need to be resolved before such tools could be used.
  • Automation of publishing workflows. While there are reasons to be concerned about using generative AI to write papers, AI may be capable of streamlining and improving editorial processes. If such tools become generally available, it could become easier for researchers to set up their own journals.

Of course, AI does not operate in a vacuum, any more than open research does. OpenAIRE has a blog post on potential directions for interaction and what this might require.

And it would be remiss not to mention that there are currently a number of controversies around AI, particularly around the rapid development of generative AI tools: sustainability concerns driven by their power requirements, and effects on the creative industries. These may cause us to consider how we should engage with AI and which tools are ethical and responsible to use. Open research is part of a wider drive towards responsible research, and we need to examine the services we use for both their ethical implications and their impact on research integrity.

So what do you think? Can AI provide benefits for researchers? What do you need to consider before using AI to make sure use is ethical and responsible?

 

Photo by Michael Dziedzic on Unsplash

For more information please contact the Corporate Communications Team.
