Artificial intelligence leader OpenAI has introduced the ability to turn off chat history in its popular chatbot ChatGPT.
In a Tuesday blog post, the company said conversations that are started when chat history is disabled will not be used to train and improve its models and will not appear in the history sidebar.
The controls are found in the ChatGPT settings and can be changed at any time.
The mode has rolled out to all users.
“We hope this provides an easier way to manage your data than our existing opt-out process,” the San Francisco-based startup said. “When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”
In addition, OpenAI is working on a new ChatGPT Business subscription for professionals who need more control over their data, as well as enterprises seeking to manage their end users. It will be available in the coming months, and ChatGPT Business will follow the API's data usage policies. OpenAI says that means end users' data will not be used to train its models by default.
Microsoft Corp., which has invested in the company, already offers ChatGPT to businesses.
Mira Murati, OpenAI’s chief technology officer, told Reuters that service would appeal to the cloud provider’s existing customers.
Lastly, OpenAI detailed a new Export option in settings that makes it easier to export ChatGPT data. Users will receive a file containing their conversations and all other relevant data by email.
While Murati said that user information has helped make the software more reliable, she said there are still challenges to overcome.
Murati told the outlet the new features arose from a months-long effort to put users “in the driver’s seat” regarding data collection.
“We’ll be moving more and more in this direction of prioritizing user privacy,” Murati explained, with the goal that “it’s completely eyes off and the models are super aligned: they do the things that you want to do.”
She told The Associated Press in an interview posted Monday that AI systems should be regulated, and that “a lot more needs to happen.”
Reuters and The Associated Press contributed to this report.