Search engines are some of the most popular sites on the internet, acting as a gateway to information for users. Whether you’re looking to remember the URL for your online banking or the answer to a particularly tricky pub quiz question, the first port of call for many is Google.
Traditionally, search engines haven't necessarily answered a question directly, instead filtering results they find on the internet and presenting them as a list of links to websites they believe may be useful to your query. In recent years, Google has rolled out features including 'featured snippets' and 'knowledge panels' that try to give you the information directly, rather than sending you elsewhere.
But that’s all changing, thanks to the rise of AI-powered chatbots, and the technology that underpins them.
Artificial intelligence, or AI, is already used in search engines to rank relevant links or pull out key phrases of text to populate knowledge panels. But the release of ChatGPT, the AI-powered chatbot that opened to the public in late November 2022, has provided an impetus for search engines to overhaul how they summarise and present information to users.
The future of search isn't typing in a robotic query designed to meet the search engine halfway so that it can understand what you're looking for. It's engaging in natural conversation, having a back-and-forth about what you want.
The rise of ChatGPT upended everything, with its revolutionary technology pushing Microsoft to invest $10 billion into the company that developed it, OpenAI, while Google, the incumbent search giant, has asked every employee to spend between two and four hours helping to test Bard, its version of a chat-based search engine.
It's clear both tech giants believe that AI-powered search using a chat interface is likely to be the future of how we find information. However, issues with the bright new future being predicted are already becoming apparent.
“The new features will change the way interactions with search happen – as a chatbot, instead of just entering search queries,” says Aleksandra Urman, a researcher at the University of Zurich in Switzerland who studies how we trust the information we find on search engines. But they also “have a much bigger potential to make things up and output incorrect information”, she adds.
The AI chatbots behind the search engines do simply make things up. When Google unveiled Bard, its response to Microsoft's ChatGPT-powered Bing, the chatbot got a simple fact about the James Webb Space Telescope wrong. Google later said that the error highlighted the importance of rigorous testing of such tools. Microsoft was not immune, either: its AI made up opening times for businesses and fictionalised numbers when asked to summarise a company's financial report. Microsoft likewise said that it was “expecting that the system may make mistakes during this preview period” of its technology.
Making things up is a known issue with artificial intelligence, termed its proclivity to “hallucinate” – but it is especially risky in the context of search, which has, for more than 20 years now, been presented as an attempt to identify absolute truth.
Historically, search engines haven’t always been able to do that. “Google’s snippets were not always perfect and sometimes would summarise things in totally wrong ways,” says Urman, “so these issues existed before ChatGPT and the like. Now they are likely to get supercharged.”
Apart from the fabrication of information that is meant to be unimpeachable fact, AI-powered chat-based search engines have another problem: they’re as petulant as a child. ‘Sydney’, the persona of the ChatGPT-powered Bing, will often resort to calling users liars or trolls. (It has done that to me, after I was given access to the tool and presented it with a query about why it answered a question in a different way between sessions.)
There's also the fear that, even when it isn't making up information, it's presenting bad results. One reason Bing might be so feisty could be the way it's trained. Chat-based search engines are built on large language models: in essence, hoovering up vast volumes of text from the internet and trying to understand how we speak, as well as what we know as a society. But, as anyone who has spent any time online knows, discourse on the internet often devolves into slanging matches – and people often hold abhorrent views. The risk is that AI-powered search engines perpetuate the worst of our foibles.
Nevertheless, I've been trying to use Bing instead of my usual search engine, Google, this week – ever since gaining access to it. I say “trying to use” rather than “replacing”, because there are some things that I feel I can't use it for. And for those questions Bing does give me a straight answer to, I always cross-check it with Google.
AI search is here to stay – a Big Bang moment – but we should still be cautious.