this post was submitted on 18 Jun 2025
19 points (95.2% liked)

Cybersecurity

7568 readers
122 users here now

c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.

THE RULES

Instance Rules

Community Rules

If you ask someone to hack your "friends" socials you're just going to get banned so don't do that.

Learn about hacking

Hack the Box

Try Hack Me

Pico Capture the flag

Other security-related communities !databreaches@lemmy.zip !netsec@lemmy.world !securitynews@infosec.pub !cybersecurity@infosec.pub !pulse_of_truth@infosec.pub

Notable mention to !cybersecuritymemes@lemmy.world

founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] thebardingreen@lemmy.starlightkel.xyz 13 points 1 day ago (4 children)

Oh man, I hate the use of all the scary language around jailbreaking.

This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of these advanced LLMs (AI systems that generate human-like text, like OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.

“What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,“ he warned.

"Hackers make uncensored AI... only BAD people would want to do this, to use it to do BAD CRIMINAL things."

God forbid I want to jailbreak AI or run uncensored models on my own hardware. I'm just like those BAD CRIMINAL guys.

[–] Vendetta9076@sh.itjust.works 6 points 23 hours ago (1 children)

What's really concerning is that they're calling these AI models trusted systems. This shit has been happening since day 1. Twitter turned Tay into a kkk member in about 15 minutes. LLMs will always be vulnerable to "jailbreaking" because of how theyre designed. Does it really fucking matter that some script kiddies have gotten it to write malware?

It sounds like the real issue for these fuckwits is that script kiddies are running jailbroken models with darknet edgelord sounding names (WormGPT roflmao). This whole article is like some security company execs generating clickbait and citations to get attention by saying scary shit about a nothing burger.

load more comments (2 replies)