all 34 comments
[–] pewgar_seemsimandroid@lemmy.blahaj.zone 1 points 2 days ago* (last edited 2 days ago)

chai: we made an ai chatbot so you can roleplay your wife getting fucked by the bo-bot and a bunch of other customizable things like tentacle por-

[–] angrystego@lemmy.world 2 points 2 months ago

Oh poor baby, do you need the dishwasher to wash your dishes? Do you need the washing machine to wash your clothes? You can't do it?

[–] SaharaMaleikuhm@feddit.org 1 points 2 months ago (2 children)

Had it write a simple shader yesterday, cause I have no idea how those work. It told me about how to use the mix and step functions to optimize for GPUs, then promptly added some errors I had to find myself. Actually not that bad, cause now, after fixing it, I do understand the code. Very educational.
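
For anyone curious, here is a minimal sketch of what that branchless style looks like in GLSL (a hypothetical example, not the shader from the comment above): step() stands in for an if/else and mix() does the blending.

    // Hypothetical GLSL fragment shader: a branchless two-tone split.
    // step(edge, x) returns 0.0 if x < edge, else 1.0.
    // mix(a, b, t) blends linearly: a * (1.0 - t) + b * t.
    // Together they replace an if/else, which can cause thread
    // divergence on a GPU.
    #version 330 core

    in vec2 uv;          // interpolated coordinate in [0, 1]
    out vec4 fragColor;

    void main() {
        vec3 dark  = vec3(0.1, 0.1, 0.2);
        vec3 light = vec3(0.9, 0.8, 0.6);
        float t = step(0.5, uv.x);   // 0.0 on the left half, 1.0 on the right
        fragColor = vec4(mix(dark, light, t), 1.0);
    }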

[–] TheOakTree@lemm.ee 1 points 2 months ago (1 children)

This is my experience using it for electrical engineering and programming. It will give me 80% of the answer, and the remaining 20% is hidden errors. Turns out the best way to learn math from GPT is to ask it a question you know the answer to (but not the process). Then, reverse engineer the process and determine what mistakes were made and why they impact the result.

Alternatively, just refer to existing materials in the textbook and online. Then you learn it right the first time.

[–] Cataphract@lemmy.ml 0 points 2 months ago

thank you for that last sentence because I thought I was going crazy reading through these responses.

[–] Klear@sh.itjust.works 1 points 2 months ago

Shaders are black magic so understandable. However, they're worth learning precisely because they are black magic. Makes you feel incredibly powerful once you start understanding them.

[–] TxzK@lemmy.zip 1 points 2 months ago (1 children)

nah, I want chat gpt to be my wife, since I don't have a real one

/s

[–] TheBat@lemmy.world 1 points 2 months ago (1 children)
[–] Hoimo@ani.social 1 points 2 months ago* (last edited 2 months ago)

I would rather date an operating system than an LLM. The OS can be trusted with elevated rights and access to my files.

Spoiler: Assuming Samantha is Linux-based, of course. I would never date a non-free OS.

[–] orca@orcas.enjoying.yachts 1 points 2 months ago

I just had Copilot hallucinate 4 separate functions, despite me giving it 2 helper files for context that contain every function available to use.

AI iS tHe FuTuRE.

[–] Aggravationstation@feddit.uk 0 points 2 months ago (1 children)

I don't need Chat GPT to fuck my wife, but if I had one, and she and Chat GPT were into it, then I would like to watch Chat GPT fuck my wife.

[–] not_IO@lemmy.blahaj.zone -1 points 2 months ago

good on you!

[–] SaltyIceteaMaker@lemmy.ml 0 points 2 months ago (1 children)

yeah imma keep it real with you, i ain't wasting my time writing an essay or sum when i can just feed one sentence into an AI instead. more free time for me.

however i do not possess the audacity to then claim that i am an "AI writer" or sum

[–] not_IO@lemmy.blahaj.zone -1 points 2 months ago

yeah your precious time

[–] sheetzoos@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (2 children)

People are constantly getting upset about new technologies. It's a good thing they're too inept to stop these technologies.

[–] WrenFeathers@lemmy.world 0 points 2 months ago (1 children)

People are also always using one example to illustrate another, also known as a false analogy.

There is no rule that states all technology must be considered safe.

[–] sheetzoos@lemmy.world 0 points 2 months ago (1 children)

Every technology is a tool - both safe and unsafe depending on the user.

Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.

AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool's argument that has been used since the days of the printing press.

[–] wolframhydroxide@sh.itjust.works 0 points 2 months ago* (last edited 2 months ago) (1 children)

While this may be true for technologies, tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb. In the rifle, the technology of the mechanisms in the gun is the same precision-milled clockwork engineering that is used for worldwide production automation. The technology of the harnessing of a nuclear chain reaction is the same, whether enriching uranium for a bomb or a power plant.

HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose. In these cases, that SOLE purpose is to, in an incredibly short period of time, with little effort or skill, enable the user to end the lives of as many people as possible. You can never use a bomb as a power plant, nor a rifle to alleviate supply shortages (except, perhaps, by a very direct reduction in demand). Here, our problem has never been with the technology of Artificial Neural Nets, which have been around for decades. It isn't even with "AI" (note that no extant "AI" is actually "intelligent")! No, our problem is with the tools. These tools are made with purpose and intent. Intent to defraud, intent to steal credit for the works of others, and the purpose of allowing corporations to save money on coding, staffing, and accountability for their actions, the purpose of having a black box a CEO can point to, shrug their shoulders, and say "what am I supposed to do? The AI agent told me to fire all of these people! Is it my fault that they were all ?!"

These tools cannot be used to know things. They are probabilistic models. These tools cannot be used to think for you. They are Chinese Rooms. For you to imply that the designers of these models are blameless, when their AI agents misidentify black men as criminals in facial recognition software; when their training data breaks every copyright law on the fucking planet, only to allow corporations to deepfake away any actual human talent in existence; when the language models spew vitriol and raging misinformation with the slightest accidental prompting, and can be hard-limited to only allow propagandized slop to be produced, or tailored to the whims of whatever despot directs the trolls today; when everyone now has to question whether they are even talking to a real person, or just a dim reflection, echoing and aping humanity like some unseen monster in the woods, is irreconcilable with even an iota of critical thought. Consider more carefully when next you speak, for your corporate-apologist principles will only help you long enough for someone to train your beloved "tool" on you. May you be replaced quickly.

[–] sheetzoos@lemmy.world 0 points 2 months ago (1 children)

You've made many incorrect assumptions and set up several strawman fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I'll suggest you continue your learning by looking up the Dunning-Kruger effect.

[–] erin@lemmy.blahaj.zone 0 points 2 months ago (1 children)

Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.

[–] sheetzoos@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (1 children)

Dissecting his wall of text would take longer than I'd like, but I would be happy to provide a few examples:

  1. I have "...corporate-apologist principles".

Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).

  1. "...tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb" "HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose"

Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.

There are a ton of invalid assumptions about machine learning as well, but I'm not interested in wasting time on someone who believes they know everything.

[–] erin@lemmy.blahaj.zone 0 points 2 months ago (1 children)

I understand that you disagree with their points, but I'm more interested in where the strawman arguments are. I don't see any, and I'd like to understand if I'm missing a clear fallacy due to my own biases or not.

[–] sheetzoos@lemmy.world 0 points 2 months ago (1 children)

Many of their points are factually incorrect. The first point I refuted is a strawman argument. They created a position I do not hold to make it easier to attack.

[–] erin@lemmy.blahaj.zone 1 points 2 months ago (1 children)

I don't see it, as it seems like you are in fact arguing that tools are neutral. Giving counter examples isn't the same thing as a strawman, it's challenging your argument. Did you mean a different part of their argument?

[–] sheetzoos@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (1 children)

Did you not read my previous post? The first point I refuted is a strawman argument. They created a position I do not hold to make it easier to attack.

If you don't believe this to be a strawman argument, please explain your logic.

[–] erin@lemmy.blahaj.zone 1 points 2 months ago (1 children)

I suppose you're talking about the part about your post history, which seems flimsy. Just because some of your posts agree with the other poster doesn't mean the ones specifically referred to don't exist. A strawman is putting up a version of your ideas that you don't actually support and arguing against that, in order to make a simpler argument. That doesn't appear to be happening, as lacking nuance isn't the same thing as a strawman. You do seem to be making the argument referred to, and having a nuanced position in other posts doesn't make that untrue. It also seems irresponsible to use that one point to discredit the entire argument, which broadly doesn't care about said point.

[–] sheetzoos@lemmy.world 1 points 2 months ago (1 children)

I am not a corporate apologist. I never said I was a corporate apologist. My post history backs up the fact that I am not a corporate apologist. There's nothing "flimsy" about this. It's clear cut if you're willing to objectively look at the logic of the arguments presented.

I'm not using that one point to discredit their entire post. I posted two examples and stated their wall of text was so full of false statements that I wasn't interested in debating every single point with someone who already had their mind made up.

[–] erin@lemmy.blahaj.zone 1 points 2 months ago* (last edited 2 months ago) (1 children)

You claimed they made several strawman arguments. The one you are pointing to is where they called your argument corporate apologia, which isn't a strawman whether you are a corporate apologist or not, as it refers to the beneficiaries of your argument, which they argue to be corporations. The points they are making are sound.

For example (none of this reflects my actual beliefs), I could make an argument for unrestricted gun ownership. Someone, in disagreement with me, could say I need to take my gun lobby apologia and leave, after discussing why my position supports the gun lobby. In actuality, hypothetical me wants easier gun ownership for queer people and other marginalized groups. Me not supporting the gun lobby doesn't make that a strawman. Nor are they arguing that, because my argument supports the gun lobby, it is automatically invalid.

They do this exact same thing with your argument. They argue that your beliefs ultimately support corporations, not that your opinion is automatically invalid because you support corporations. If all they had said was that last line about corporate apologia, you'd have a point, but they said more. You're simply misusing and diluting the strawman fallacy. You also claimed they made several strawman arguments, but failed to demonstrate even the one example you pulled. I don't even really care about your arguments or theirs in regard to my response, as others have covered my beliefs already; I'm only concerned with discussing the improper use of logical fallacies to discredit people you disagree with.

[–] sheetzoos@lemmy.world 1 points 2 months ago* (last edited 2 months ago) (1 children)

"A straw man fallacy occurs when someone distorts or exaggerates another person's argument"

They distorted my argument by making shit up. That's called a straw man fallacy.

You think you're saying a lot, but you've said nothing.

[–] erin@lemmy.blahaj.zone 1 points 2 months ago

Saying "your views support this" is not making the argument you're claiming it does.

[–] not_IO@lemmy.blahaj.zone -1 points 2 months ago

isn't that comic gas company propaganda, or am i remembering it wrong