OpenAI has removed an experimental feature in the ChatGPT app that made shared conversations visible in search engines, after it was found that this could inadvertently expose sensitive or personal information.
“We have just removed a feature from the ChatGPT app that allowed users to make their conversations searchable by search engines like Google,” writes Dane Stuckey, CISO of OpenAI, on X. The feature could expose sensitive information or personal data in search engine results. What was initially intended as a way for people to discover relevant conversations turned out, according to critics, to be a privacy risk. The feature has now been removed for all users.
Brief experiment
Stuckey explains that it was a “brief experiment to help people discover useful conversations.” The feature was deactivated after it became clear that sensitive conversations and users’ personal data could also surface in search engines.
To make a conversation searchable, users first had to select a chat to share and then tick a checkbox granting permission to expose that conversation to search engines.
Stuckey states that his team “believes this feature offered too many opportunities for people to accidentally share things they did not intend to.” For that reason, the feature was removed for all users.
“Security and privacy are of utmost importance to us, and we will continue to do everything we can to ensure this is reflected in our products and features,” he concludes.