this post was submitted on 17 Mar 2026
GenZedong
Non-generative models like RNNs or symbolic AI, sure. Generative models warrant a special caveat.
Such models, especially LLMs, differ from other technology in that even when proletarians use them for their own aims, they carry a subtle danger hidden behind the facade of a simple chatbot. Over time, the user can be misled into feeling that they are talking to another human, which has already led to suicides and murders instigated by chatbots.
This is not because there is anything spooky about the technology itself; it is just linear algebra and probability theory running on a big computer. Rather, it is because the human mind tends to project its own thoughts and feelings onto other people, animals, and even objects. Unlike with other humans, however, the user of an LLM will be far less careful, observant, and restrained in these fantasies, especially since the model is eager to please and will encourage the user to keep going down that path.
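To make the "linear algebra and probability theory" point concrete, here is a minimal, purely illustrative sketch of a single decoding step. The vocabulary, weight matrix, and hidden state are made-up toy values, not anything from a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a random "trained" weight matrix -- hypothetical
# stand-ins for the billions of parameters in a real LLM.
vocab = ["the", "cat", "sat", "on", "mat", "."]
d_model = 8
W_out = rng.normal(size=(d_model, len(vocab)))  # hidden state -> logits

def next_token(hidden_state):
    """One decoding step: a matrix multiply, then a softmax sample."""
    logits = hidden_state @ W_out            # linear algebra
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # probability distribution
    return rng.choice(vocab, p=probs)        # sample the next token

h = rng.normal(size=d_model)  # pretend this came from the earlier layers
print(next_token(h))
```

A real model repeats this step token after token; there is no understanding anywhere in the loop, only repeated sampling from a learned distribution.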
If a socialist society is to put generative models to use, it must somehow prevent "discussions" with the model from venturing into psychologically dangerous territory. An LLM must never replace human interaction, it must not venture guesses about your friends and family, and above all it must NEVER be used for therapy. Nothing good can come from pretending it cares about you. Only then, and only alongside the many other guardrails already needed for both training and use, can we even claim to enjoy its benefits.
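One crude way to picture such a guardrail is a pre-filter that refuses to forward certain conversational turns to the model at all. Everything here is hypothetical (the patterns, the function name, the policy); a real deployment would need a trained classifier rather than keyword matching:

```python
import re

# Hypothetical, deliberately crude topic guardrail: block turns that
# drift into therapy or parasocial territory before they reach the model.
BLOCKED_PATTERNS = [
    r"\bdo you (love|care about|miss) me\b",
    r"\bbe my (friend|therapist)\b",
    r"\bonly one who understands\b",
]

def guardrail(user_message: str) -> bool:
    """Return True if the message is safe to forward to the model."""
    text = user_message.lower()
    return not any(re.search(p, text) for p in BLOCKED_PATTERNS)

print(guardrail("Summarize this article about crop rotation."))  # True
print(guardrail("You're the only one who understands me."))      # False
```

The point of the sketch is only that the filtering has to happen outside the model: the model itself has no notion of which conversations are harmful.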