ChatGPT Search can provide erroneous information because of so-called “prompt injections,” according to an investigation by The Guardian.
ChatGPT’s new search engine, ChatGPT Search, appears to be susceptible to manipulation, according to an investigation by The Guardian. This is due to so-called prompt injections, where hidden instructions on Web pages influence the answers. ChatGPT Search was made available to all users just a few days ago.
Prompt injections
The Guardian’s investigation indicates that the search engine appears to be susceptible to manipulation. More specifically, the answers could be influenced by “prompt injections. This technique uses hidden instructions on Web pages to manipulate ChatGPT’s responses. This can affect the way ChatGPT summarizes Web pages, and cause incorrect information to be offered.
read also
OpenAI makes SearchGPT available to everyone
But there’s more. According to The Guardian, malicious actors can also use this vulnerability to spread malicious code. The risk with large language models (LLMs) has been highlighted several times by security experts. Responses should be viewed with caution.