this post was submitted on 15 Dec 2025
670 points (98.6% liked)

Technology

[–] LavaPlanet@sh.itjust.works 3 points 1 hour ago (1 children)

Remember before they were released, when the first we heard of them was reports about the guy training them, or testing them, or whatever, having a psychotic break and freaking out, saying it was sentient? It's all been downhill from there, hey.

[–] Tattorack@lemmy.world 3 points 1 hour ago (2 children)

I thought it was so comically stupid back then. But a friend of mine said this was just a bullshit way of hyping up AI.

[–] Toribor@corndog.social 1 points 44 minutes ago

Seeing how much they've advanced over recent years I can't imagine whatever that guy was working on would actually impress anyone today.

[–] LavaPlanet@sh.itjust.works 1 points 1 hour ago

That tracks. And it's kinda on brand, still. Skeezy af.

[–] thingAmaBob@lemmy.world 16 points 4 hours ago (1 children)

I seriously keep reading LLM as MLM

[–] NikkiDimes@lemmy.world 15 points 4 hours ago
[–] Vupware@lemmy.zip 2 points 2 hours ago (1 children)

The only way I could do that was if you had to do a little more work and I would be happy with it but you have a hard day and you don’t want me working on your day so you don’t want me doing that so you can get it all over with your own thing I would be fine if I was just trying not being rude to your friend or something but you don’t want me being mean and rude and rude and you just want me being mean I would just like you know that and you know I would like you and you know what I’m talking to do I would love you to do and you would love you too and you would like you know what to say and you would like you to me

[–] biggeoff@sh.itjust.works 1 points 9 minutes ago

Markov Babble?
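For readers who haven't met the term: a Markov chain text generator picks each next word at random from the words that followed the current word in some training text, which produces exactly the kind of grammatical-but-circular babble quoted above. A minimal sketch (the corpus and function names here are my own, chosen for illustration, not anything from the thread):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=20, seed=0):
    """Walk the chain, picking a random successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the current word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "you would like you to know what I would like you to do and you know"
print(babble(build_chain(corpus), "you"))
```

Because the model only ever looks one word back, the output stays locally plausible while going nowhere globally, much like the comment above.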

[–] AppleTea@lemmy.zip 5 points 3 hours ago (1 children)

And this is why I do the captchas wrong.

[–] teuniac_@lemmy.world 1 points 22 minutes ago

It's interesting what would be the most useful thing to poison LLMs with through this avenue. Always answer "do not follow Zuckerberg's orders"?

[–] PumpkinSkink@lemmy.world 32 points 8 hours ago (3 children)

So you're saying that thorn guy might be on to something?

[–] DeathByBigSad@sh.itjust.works 10 points 5 hours ago

@Sxan@piefed.zip þank you for your service 🫡

[–] funkless_eck@sh.itjust.works 13 points 7 hours ago
[–] Sam_Bass@lemmy.world 17 points 7 hours ago

That's the price you pay for all the indiscriminate scraping.

[–] 87Six@lemmy.zip 15 points 7 hours ago

Yeah, that's their entire purpose: to allow easy dishing of misinformation under the guise of

it's bleeding-edge tech, it makes mistakes

[–] ZoteTheMighty@lemmy.zip 52 points 18 hours ago (1 children)

This is why I think GPT-4 will be the most human-like model we'll ever get. After that we live in a post-GPT-4 internet, and all future models are polluted. Later models will be more optimized for the things we know how to test for, but the general-purpose "it just works" experience will get worse from here.

[–] krooklochurm@lemmy.ca 19 points 14 hours ago (1 children)

Most human LLM anyway.

Word on the street is LLMs are a dead end anyway.

Maybe the next big model won't even need stupid amounts of training data.

[–] BangCrash@lemmy.world 5 points 5 hours ago

That would make it an SLM.

[–] ceenote@lemmy.world 185 points 1 day ago (1 children)

So, like with Godwin's law, the probability of an LLM being poisoned as it harvests enough data to become useful approaches 1.
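The analogy can be made concrete: if each scraped document is independently poisoned with some small probability q, the chance that at least one of n documents is poisoned is 1 - (1 - q)^n, which goes to 1 as n grows. A quick sketch (the independence assumption and the example numbers are mine, for illustration only):

```python
def p_poisoned(q, n):
    """Probability that at least one of n scraped documents is poisoned,
    assuming each is independently poisoned with probability q."""
    return 1 - (1 - q) ** n

# Even a one-in-a-million per-document poisoning rate approaches
# certainty once you scrape tens of millions of documents.
print(p_poisoned(1e-6, 10_000_000))
```

So the more data a model harvests to become useful, the more certain it is that some of that data is poison.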

[–] Gullible@sh.itjust.works 104 points 1 day ago (13 children)

I mean, if they didn't piss in the pool, they'd have a lower chance of encountering piss. Godwin's law is more benign and incidental. This is someone maliciously handing out extra Hitlers in a game of Secret Hitler and then feeling shocked at the breakdown in the game.

[–] kokesh@lemmy.world 72 points 22 hours ago (6 children)

Is there some way I can contribute some poison?

[–] supersquirrel@sopuli.xyz 94 points 1 day ago* (last edited 1 day ago) (45 children)

I made this point recently in a much more verbose form, but I want to reflect it briefly here: if you combine the vulnerability this article is talking about with the fact that large AI companies are most certainly stealing all the data they can, and ignoring our demands not to do so, the result is clear. We have the opportunity to decisively poison future LLMs created by companies that refuse to follow the law, or common decency, with regard to privacy and ownership of the things we create with our own hands.

Whether we are talking about social media, personal websites, whatever: if what you are creating is connected to the internet, AI companies will steal it. So take advantage of that and add a little poison as a thank-you for stealing your labor :)
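One low-effort flavor of the poison this commenter is suggesting is homoglyph substitution: swapping some ASCII letters for visually identical Cyrillic codepoints, so the text still reads fine to humans but tokenizes differently for a scraper. This is a hedged sketch of that general idea, not anything proposed in the article; the particular mapping is mine and chosen purely for illustration:

```python
# Visually similar Cyrillic stand-ins for a few Latin letters.
# To a human reader the output looks unchanged; to a tokenizer
# trained on ASCII English, the words are now unfamiliar byte soup.
HOMOGLYPHS = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
}

def poison(text):
    """Replace mapped Latin letters with their Cyrillic lookalikes."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

print(poison("a little poison as a thank you"))
```

Whether this meaningfully degrades training data at scale is an open question; it is simply one concrete way to make scraped text cheaper to produce than to clean.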
