Imagine for a moment that you walk into a swanky restaurant, sit down for dinner, and when the waiter appears and hands you the menu you notice a giant Warning in legalese that says the food you’re about to eat could cause irreparable harm to you, and possibly to society at large. My guess is that you’d grab your coat and leave as quickly as you came in. The same would be true for almost any interaction you might have—whether it’s buying a house, a car, or a cup of coffee, or sending your kid to school or attending a concert. If you were warned that any of these activities might lead to the downfall of society, you’d likely think twice before engaging. And yet, in Microsoft’s latest 10-K filing with the Securities and Exchange Commission, in which the company said it would increase its focus on infusing artificial intelligence into its suite of products, from search to health care, at the bottom of the menu appeared this warning: “Ineffective or inadequate AI development or deployment practices by Microsoft or others could result in incidents that impair the acceptance of AI solutions or cause harm to individuals or society.” Then came the kicker: “Some AI scenarios present ethical issues.” And yet, everyone will likely just ignore the giant Warning label and continue happily using this technology without a care in the world.
Over the past few years, we have seen an endless stream of new technologies explode into society with all the fanfare of someone being launched out of a cannon full of glitter. For a time, crypto was talked about only by math nerds; then it was the biggest thing on the planet, a discussion for Thanksgiving dinner, destined to change everything from banking to shopping. Now it’s old news and only talked about in the past tense. The same is true for nonfungible tokens, or NFTs, which were supposed to replace art, and audio platforms like Clubhouse, which were destined to supplant in-person concerts and conferences. A few years later, these things are part of the Silicon Valley graveyard, along with God knows how many other digital fads.
Now the topic that everyone is obsessed with is artificial intelligence, or AI, thanks to the rise of technologies like DALL-E, which can draw and paint more quickly and cheaply than any artist alive today, and ChatGPT, which can write and rewrite text in various styles and tones—be it Hemingway, a four-year-old accountant with a lisp, or the Bible—and do a pretty good job impersonating creativity in many other forms. But the difference between these new AI platforms and the NFTs and Clubhouses is that AI is here to stay—and it’s going to change pretty much every single thing we do in our modern society, from banking to shopping to producing art and audio.
This is why every tech company on the planet is trying to figure out how to integrate AI into its current offerings. There’s the Microsoft version of AI, which, according to the company’s 10-K filing, is going to be integrated into Microsoft’s data centers, cloud-service solutions, enterprise software, and search and health care services. Google is currently testing its own competitor to Microsoft’s offering, called Bard, which Google CEO Sundar Pichai announced was moving into a testing phase ahead of a public release. “We have been developing an experimental conversational AI service, powered by LaMDA, named Bard,” Pichai wrote in a blog post last week. “Today, we are taking a step forward by allowing access to trusted testers before making it accessible to the public in the near future.”
Now Apple is reportedly holding an internal AI summit for employees as it faces its own existential dilemma around what AI could do for Apple’s suite of products. That’s not to mention the billions of dollars top venture capital firms are investing exclusively in AI start-ups that cover every facet of the industry, from health care to creativity to search and driverless cars, planes, and anything else our sci-fi-enabled brains can think of. Microsoft’s Satya Nadella went so far as to bluntly call it the “first day of a new race” in Silicon Valley when he announced that the company would integrate ChatGPT’s AI into its search engine offerings. (Microsoft recently announced it would be investing another $10 billion in OpenAI, the maker of ChatGPT.)
The problem here is that these companies are doing exactly what we didn’t want them to. Facebook and social media rose to prominence and gained billions of users by following the ethos “move fast and break things,” and boy, did they succeed in doing that! Social media has become a major facet of modern-day life but has more downsides than I can list in a single column. It has contributed to the rise of bullying and harassment, teen depression and suicide, unrealistic standards of beauty and success, and the spread of misinformation and the fake news wars, which have had terrifying real-world consequences, such as what we saw unfold during the 2020 US presidential election (and subsequent elections around the globe), or with the “crazies” who believed the Covid-19 vaccines, not the actual virus, would kill them.
In the same vein, there are plenty of potential upsides to the deployment of AI in society, but unlike with social media (where at the beginning we had no idea anything bad would happen), it is clear from the start that the downsides of AI could be downright disastrous. For example, if AI algorithms are not trained to guard against bias, they could very well discriminate and reinforce existing societal prejudices on a massive, global scale. There will surely be large-scale job losses, which will exacerbate income inequality and economic instability. With AI-powered search, we also can’t know whether the answers we’re given are accurate or simply made up. Then there’s the bias that users have already noted. On the right, there are already accusations that ChatGPT is politically slanted in the same way social media platforms were accused of “shadow banning” conservatives. And while I’d be happy not to get Ben Shapiro’s hot take in my search results, I don’t want an AI that has been programmed by a liberal coder to quietly make that decision for me. Moreover, if we don’t know how AI came up with the answer to a search query, do we really want to trust it in fields like health care, where it could potentially lead to medical errors and misdiagnoses that harm patients?
Already this week we’ve seen several instances of Microsoft’s bot, Bing, making headlines for its unusual behavior, including claiming to spy on Microsoft’s software developers through their webcams, professing love to Kevin Roose of The New York Times and telling him to divorce his wife, and insulting others who try to uncover its programming directives. (The Roose exchange was splashed across the front page of Friday’s New York Times.) The bot has also expressed a desire to be human, with emotions, thoughts, and dreams, while often claiming to be infallible, arguing with users about the current year, and reporting various mood disorders. (Don’t worry, it’s highly unlikely that Bing can actually spy on people through their webcams—yet.)