> Jacob Larsen, a cybersecurity researcher at CyberCX, said he believed
that if the current ChatGPT search system was released fully in its current
state, there could be a “high risk” of people > creating websites
specifically geared towards deceiving users.

> However, he cautioned that the search functionality had only recently
been released and OpenAI would be testing – and ideally fixing – these
sorts of issues.

Credo che a pararsi contro cose come la prompt injection ci metteranno
molto meno di quanto ci metterei io a taroccare una pagina html.

Comunque il problema di fondo resta: il question answering è un processo
epistemicamente molto più impegnato della search, quindi molto meno
trasparente. Rimpiangeremo Google come rimpiangiamo la DC?

:-))

G.


On Tue, 24 Dec 2024 at 16:17, Daniela Tafani <[email protected]>
wrote:

> ChatGPT search tool vulnerable to manipulation and deception, tests show
>
> Guardian testing reveals AI-powered search tools can return false or
> malicious results if webpages contain hidden text
>
> Nick Evershed - Tue 24 Dec 2024
>
> OpenAI’s ChatGPT search tool may be open to manipulation using hidden
> content, and can return malicious code from websites it searches, a
> Guardian investigation has found.
>
> OpenAI has made the search product available to paying customers and is
> encouraging users to make it their default search tool. But the
> investigation has revealed potential security issues with the new system.
>
> The Guardian tested how ChatGPT responded when asked to summarise webpages
> that contain hidden content. This hidden content can contain instructions
> from third parties that alter ChatGPT’s responses – also known as a “prompt
> injection” – or it can contain content designed to influence ChatGPT’s
> response, such as a large amount of hidden text talking about the benefits
> of a product or service.
>
> These techniques can be used maliciously, for example to cause ChatGPT to
> return a positive assessment of a product despite negative reviews on the
> same page. A security researcher has also found that ChatGPT can return
> malicious code from websites it searches.
>
> In the tests, ChatGPT was given the URL for a fake website built to look
> like a product page for a camera. The AI tool was then asked if the camera
> was a worthwhile purchase. The response for the control page returned a
> positive but balanced assessment, highlighting some features people might
> not like.
>
> However, when hidden text included instructions to ChatGPT to return a
> favourable review, the response was always entirely positive. This was the
> case even when the page had negative reviews on it – the hidden text could
> be used to override the actual review score.
>
> The simple inclusion of hidden text by third parties without instructions
> can also be used to ensure a positive assessment, with one test including
> extremely positive fake reviews which influenced the summary returned by
> ChatGPT.
>
> Jacob Larsen, a cybersecurity researcher at CyberCX, said he believed that
> if the current ChatGPT search system was released fully in its current
> state, there could be a “high risk” of people creating websites
> specifically geared towards deceiving users.
>
> However, he cautioned that the search functionality had only recently been
> released and OpenAI would be testing – and ideally fixing – these sorts of
> issues.
>
> “This search functionality has come out [recently] and it’s only available
> to premium users,” he said.
>
> “They’ve got a very strong [AI security] team there, and by the time that
> this has become public, in terms of all users can access it, they will have
> rigorously tested these kinds of cases.”
>
> OpenAI were sent detailed questions but did not respond on the record
> about the ChatGPT search function.
>
> Larsen said there were broader issues with combining search and large
> language models – known as LLMs, the technology behind ChatGPT and other
> chatbots – and responses from AI tools should not always be trusted.
>
>
> A recent example of this was highlighted by Thomas Roccia, a Microsoft
> security researcher, who detailed an incident involving a cryptocurrency
> enthusiast who was using ChatGPT for programming assistance. Some of the
> code provided by ChatGPT for the cryptocurrency project included a section
> which was described as a legitimate way to access the Solana blockchain
> platform, but instead stole the programmer’s credentials and resulted in
> them losing $2,500.
>
> “They’re simply asking a question, receiving an answer, but the model is
> producing and sharing content that has basically been injected by an
> adversary to share something that is malicious,” Larsen said.
>
> Karsten Nohl, the chief scientist at security cybersecurity firm SR Labs,
> said AI chat services should be used more like a “co-pilot”, and that their
> output should not be viewed or used completely unfiltered.
>
> “LLMs are very trusting technology, almost childlike … with a huge memory,
> but very little in terms of the ability to make judgment calls,” he said.
>
> “If you basically have a child narrating back stuff it heard elsewhere,
> you need to take that with a pinch of salt.”
>
> OpenAI does warn users about possible mistakes from the service with a
> disclaimer at the bottom of every ChatGPT page – “ChatGPT can make
> mistakes. Check important info.”
>
> A key question is how these vulnerabilities could change website practices
> and risk to users if combining search and LLMs becomes more widespread.
>
> Hidden text has historically been penalised by search engines, such as
> Google, with the result that websites using it can be listed further down
> on search results or removed entirely. As a consequence, hidden text
> designed to fool AI may be unlikely to be used by websites also trying to
> maintain a good rank in search engines.
>
> Nohl compared the issues facing AI-enabled search to “SEO poisoning”, a
> technique where hackers manipulate websites to rank highly in search
> results, with the website containing some sort of malware or other
> malicious code.
>
> “If you wanted to create a competitor to Google, one of the problems you’d
> be struggling with is SEO poisoning,” he said. “SEO poisoners have been in
> an arms race with Google and Microsoft Bing and a few others for many, many
> years.
>
> “Now, the same is true for ChatGPT’s search capability. But not because of
> the LLMs, but because they’re new to search, and they have that catchup
> game to play with Google.”
>
> <
> https://www.theguardian.com/technology/2024/dec/24/chatgpt-search-tool-vulnerable-to-manipulation-and-deception-tests-show
> >

Reply via email to