There is a lot of talk about Artificial Intelligence (AI), with the release of powerful tools and new initiatives and techniques emerging. But it can be hard to separate the signal from the noise, and figure out what this all means for our work and the future of the library sector.
The debate really kicked off with the public release of various AI-powered image tools such as Midjourney and DALL·E 2 (which create images based on text prompts – we recently took these for a spin).
The company behind some of the best-known tools, OpenAI, has now also released a chatbot called ChatGPT. This is not like anything that has come before: it’s a chatbot built on a Large Language Model (LLM) that can produce responses to complex questions that are indistinguishable from text written by humans, and it has far-reaching consequences.
In this post we aim to provide an introduction to the topic and some of the potential issues, and offer some further reading for library and information workers who are interested in finding out more.
The language of AI
AI is a broad field that involves using computers and software to perform tasks that typically require human intelligence, such as understanding natural language, recognizing images, and making decisions. But some of these new tools have given rise to questions around plagiarism, copyright and accessibility. And some of the future use cases are still blurry.
The first barrier to getting your head around all of these discussions is the specialised jargon, which can make reading about AI quite dense.
The main thing to remember is that AI is an umbrella term that covers different techniques that are constantly developing. So machine learning, for example, is a type of AI. And ChatGPT is an example of generative AI – it can be used to create new content, rather than analysing or reacting to existing data.
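At its core, a language model works by predicting a plausible next word given the words that came before. Real LLMs like ChatGPT do this with enormous neural networks trained on vast corpora, but the underlying idea can be illustrated with a deliberately tiny sketch: a bigram model that learns which word tends to follow which, then "generates" text by sampling from those learned pairs. The corpus and function names below are made up for illustration.

```python
import random

def train_bigrams(text):
    """Build a mapping from each word to the words observed following it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)  # fixed seed so the toy output is repeatable
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no word was ever seen after this one
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("the library lends books and the library hosts events "
          "and the archive preserves records")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The gulf between this toy and a real LLM is enormous – billions of parameters versus a word-pair lookup table – but the generative principle is the same: new text is produced word by word, based on statistical patterns in the training data, with no database of "correct answers" behind it. That is also why such models can produce fluent text that is confidently wrong.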
We asked ChatGPT to summarise the main terminology in AI, and this list is what came back.
We think it did a pretty good job, and it certainly completed the task much more quickly than we could have done. This is quite a straightforward task, and it is capable of much more. But it’s certainly not without issues or faults.
We have concerns
ChatGPT’s frankly mind-boggling ability to answer questions in a human-like way has made it a hot topic in academia since its general release in November 2022, and there are calls for academia to respond immediately. So while people are intrigued by the possibilities of these new tools, concerns about the implications are also growing.
Iris van Rooij comes out squarely against ChatGPT, stating that its use constitutes ‘automated plagiarism’. She warns of dire consequences and calls for academics to take action to resist the hype.
Mozilla’s #internethealth report for 2022 is dedicated to AI, and this year it takes the form of the podcast AI in Real Life. This really captures the paradox of these shiny new tools. Central to the report is the question of who has power over AI and who is shifting that power, and Mozilla warns:
“Amid the global rush to automate, we see grave dangers of discrimination and surveillance. We see an absence of transparency and accountability, and an overreliance on automation for decisions of huge consequence. But we also find champions insisting there is a better way to build, deploy, and comprehend AI’s potential”.
In this summary of a webinar on the implications of AI in the field of scholarly publishing, three experts give their perspective on what AI means to them and whether it will help or hinder the industry. They reveal that AI is already being used extensively in many aspects of publishing, in areas such as recommender systems. Their discussion touches on the topic of bias in machine learning, and the idea that the machines have been fed the biases of the people and data that teaches them. They also consider how these biases might be addressed.
Exercising healthy scepticism
New tech is exciting and fun to experiment with, we can’t deny that. But it is also important to be aware of the pitfalls in how the media reports on this complicated and relatively new field.
Not everyone is a believer, even among those working in the field. In this interview, Arvind Narayanan, author of AI Snake Oil, unsurprisingly has some specific concerns about misinformation.
There are also worries about what these developments will do for accessibility. They are often touted as being universally good for access; however, this is not always the case. This article explains how the new technology has actually made it more difficult for those who are visually impaired to access the internet.
What does this all mean for libraries?
A great place for a curious library professional to begin to understand how and where AI might work for them is in the work of Andrew Cox of the University of Sheffield, who has a particular interest in the field. His 2022 study considers a range of AI applications, and how they might be applied to knowledge discovery in libraries.
As part of the IFLA Artificial Intelligence SIG, Andrew has produced a useful list of 23 resources for library and information workers who want to get up to speed on the subject.
In Sweden, the National Library is already training their own AI models on 500 years of their collections data.
This study, ‘(In)accessibility and the technocratic library: Addressing institutional failures in library adoption of emerging technologies’ published in the First Monday journal, provides an in-depth look at how AI in libraries is failing people with disabilities.
But what about these new AI tools and models specifically?
In this article in School Library Journal, Kara Yorio asks school librarians for their reactions to ChatGPT. The librarians discuss the implications of the software and also some practical, day-to-day uses, including readers’ advisory.
Curtis L. Kendrick also tested out some of the library-specific cases in a guest post for the Scholarly Kitchen, The Efficacy of ChatGPT: Is it Time for the Librarians to Go Home?
One reason that the developments in AI are particularly relevant to information professionals is the potential of chatbots to change (if not replace) how we search online. Microsoft is a significant investor in OpenAI and has plans to introduce ChatGPT into Bing search, for example. Even though it is still early days, the New York Times referred to ChatGPT as a “‘Code Red’ for Google’s Search Business”, and there has been plenty of discussion and theorising about how tools like this might ultimately replace web search as we know it.
Continuing to explore
There is huge potential in these new AI tools, but as we have seen, many experts are advising caution.
Some of the potential use cases for libraries are clear, whether in the form of chat support, or by helping to manage and analyse huge datasets. But so are the valid concerns – trustworthiness and authority are central tenets of libraries and so far, AI tools such as ChatGPT have failed to reliably deliver on either.
Whatever your view, one thing is certain: AI and the debates surrounding its use will continue to evolve. And it’s a topic that doesn’t seem to be going away anytime soon.
If you are interested in getting more hands-on experience in this area, The Carpentries Incubator offers this Intro to AI for GLAM lesson.
And, as always, if you have any questions about AI in your library get in touch for a chat!