- OpenAI lets users disable their chat histories, saying those chats will be “permanently deleted.”
- Microsoft has also taken steps to inform users about ways to review and erase search histories.
- AI tools can improve in part based on feedback and conversations with users.
OpenAI has said that one of the ways ChatGPT gets better is through interactions with users. But as the mass experiment that began with the AI chatbot's public launch last year moves past novelty, the company has signaled it is closely considering safety and trust.
ChatGPT users are already greeted with a pop-up alert that their conversations can be seen by “AI trainers,” and are warned not to type in “sensitive information.” In April, OpenAI said it’s also giving users the choice to disable their conversation record with ChatGPT, seeking to offer more visibility and control over data. (Insider’s Sarah Jackson has a helpful explainer on how to do that).
Keeping conversation history off means those chats won't be used to train the tool, and that the company will delete conversations held under that higher privacy mode after 30 days, according to OpenAI's website.
The stakes can be high for both regular users and companies dealing with confidential information, who should also consider their own policies for how such tools should be used at work, said Duane Pozza, a partner at Wiley Rein LLP who advises on privacy, data, and other matters.
“When looking at AI chatbots, there is a potential for these tools to collect a lot of consumer personal information that could include things like conversation histories,” he told Insider, speaking generally about such tools and not about any specific company.
“Average consumers and businesses using these tools have to make sure they understand the privacy policies,” he added. “They should understand if they have options or settings to understand how data is collected by these tools.”
A representative for OpenAI did not comment beyond pointing to the company's resources on its website.
The privacy of user data on popular websites has been the subject of heightened consumer protection scrutiny over the past decade, amid the rise of social media sites. Meta, for instance, will be making payments to Facebook users after reaching a $725 million settlement over data issues involving Cambridge Analytica.
The popularity of AI websites may raise similar concerns as a growing number of users wrangle with privacy questions, said Rudina Seseri, founder and managing partner of Glasswing Ventures, a venture capital firm that invests in AI.
“I reiterate best practices here — absolutely don’t share with ChatGPT what you don’t want the world to know,” she said.
“And this is not somehow grounded on any mal-intent from OpenAI, or, forget ChatGPT, any large language model,” she said. “It also has to do with the fact that, the more surface — if you were to think of the digital world as a surface — the more surface, the more reach, the more opportunity for exploitation.”
Microsoft's new Bing search bot, launched in February, has also rapidly gained ground, amassing more than 100 million daily users in March.
The company offers a “privacy dashboard” where users can get a sense of how their search history is used, and explore options to clear that history. The dashboard essentially allows users to “view, export, and delete stored conversation history,” according to a recently updated Microsoft document titled “The new Bing: Our approach to Responsible AI.”
A note on the page says Bing uses “web search history to improve your search experience by showing you suggestions as you type, providing personalized results, and more.”
Microsoft generally also uses privacy measures like encryption, and keeps customer data “for as long as it is necessary,” a company representative said in a statement.
“We also provide users with transparency and control over their search data via the Microsoft Privacy Dashboard,” the representative said.