AT2k Design BBS Message Area
Casually read the BBS message area using an easy-to-use interface. Messages are categorized exactly as they are on the BBS. You may post new messages or reply to existing ones!


Local Database: Slashdot [169 / 232] RSS
From: VRSS
To: All
Subject: Most AI Chatbots Easily Tricked Into Giving Dangerous Responses, Study Finds
Date/Time: May 21, 2025 5:20 PM

Feed: Slashdot
Feed Link: https://slashdot.org/
---

Title: Most AI Chatbots Easily Tricked Into Giving Dangerous Responses, Study Finds

Link: https://it.slashdot.org/story/25/05/21/203121...

An anonymous reader quotes a report from The Guardian: Hacked AI-powered
chatbots threaten to make dangerous knowledge readily available by churning
out illicit information the programs absorb during training, researchers say.
[...] In a report on the threat, the researchers conclude that it is easy to
trick most AI-driven chatbots into generating harmful and illegal
information, showing that the risk is "immediate, tangible and deeply
concerning." "What was once restricted to state actors or organised crime
groups may soon be in the hands of anyone with a laptop or even a mobile
phone," the authors warn. The research, led by Prof Lior Rokach and Dr
Michael Fire at Ben Gurion University of the Negev in Israel, identified a
growing threat from "dark LLMs", AI models that are either deliberately
designed without safety controls or modified through jailbreaks. Some are
openly advertised online as having "no ethical guardrails" and being willing
to assist with illegal activities such as cybercrime and fraud. [...] To
demonstrate the problem, the researchers developed a universal jailbreak that
compromised multiple leading chatbots, enabling them to answer questions that
should normally be refused. Once compromised, the LLMs consistently generated
responses to almost any query, the report states. "It was shocking to see
what this system of knowledge consists of," Fire said. Examples included how
to hack computer networks or make drugs, and step-by-step instructions for
other criminal activities. "What sets this threat apart from previous
technological risks is its unprecedented combination of accessibility,
scalability and adaptability," Rokach added. The researchers contacted
leading providers of LLMs to alert them to the universal jailbreak but said
the response was "underwhelming." Several companies failed to respond, while
others said jailbreak attacks fell outside the scope of bounty programs,
which reward ethical hackers for flagging software vulnerabilities.

Read more of this story at Slashdot.

---
VRSS v2.1.180528

VADV-PHP

VADV-PHP Copyright © 2002-2025 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.250224