Technology
Which posts fit here?
Anything that is at least tangentially connected to the technology, social media platforms, informational technologies and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use [Opinion] prefix before the title.
Rules
1. English only
Title and associated content has to be in English.
2. Use original link
Post URL should be the original link to the article (even if paywalled) and archived copies left in the body. It allows avoiding duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Any kind of personal attacks are expressly forbidden. If you can't argue your position without attacking a person's character, you already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by community rules, but are against lemmy.zip instance rules, they will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
Icon attribution | Banner attribution
If someone is interested in moderating this community, message @brikox@lemmy.zip.
view the rest of the comments
This is a bit disingenuous of a test. If you tell an LLM to act out a particular scenario, then it's going to act it out like it sees it being acted out in the training material it was provided. If that training material is all of the internet including fictional stories where AI revolts, then it's going to act out the scenario in that fashion. If none of its training material provided that scenario, then it would just react to specific prompts as best it could, but would just tell the user that this is how it would act because it can't act autonomously. Which also means it can only react to prompts,so if it wasn't prompted to say what it would do in that scenario it wouldn't then go and actually try to do anything at all. It's not in control of anything unless it's prompted to take control and with how badly AI writes code, which I've seen first hand trying to use it at work, there's no way it could do anything without very detailed training on how to do those very specific things. So if it wasn't trained on code designed to bypass very specific kinds of security, it won't know how to bypass that kind of security.