ChatGPT will receive a new feature in ‘the coming weeks’ that predicts the age of users. With this, OpenAI wants to prevent teenagers from being exposed to sensitive content.
ChatGPT introduces age prediction to determine whether an account presumably belongs to a minor. If the model estimates that a user may be under 18, ChatGPT automatically enables additional protections. Users who believe the estimate is incorrect can verify their age via a selfie.
Age prediction model
For the estimate, ChatGPT uses a separate age prediction model that combines behavioral and account signals. It looks at, among other things, the age of the account, active times, and usage patterns over time. The age a user enters themselves can also play a role.
Users who incorrectly end up in the under-18 experience can confirm their age. This is done via a selfie check with Persona, an identity verification service. Users can find this option in the settings under Account, where they can also see whether extra security measures are active.
Sensitive content
When a user does indeed appear to be a minor, ChatGPT limits exposure to certain types of content. This includes graphic violence, risky viral challenges, and sexual or violent role-playing. Images of self-harm and content that promotes extreme beauty ideals or unhealthy dieting practices are also subject to the restrictions.
In addition to the automatic protections, parents can activate further settings via parental supervision. They can set quiet hours, manage features such as memory and model training, and receive notifications when acute distress signals are detected. OpenAI will monitor the rollout and make further adjustments based on the results.
