HedyL

joined 2 years ago
[–] HedyL@awful.systems 2 points 6 hours ago

Turns out I had overlooked the fact that he was specifically seeking to replace chloride rather than sodium, for whatever reason (I'm not a medical professional). If Google search (not Google's AI) is to be believed, this doesn't seem to be a very common idea, though. If people turn to chatbots with questions like these (for which few reliable resources may be available), the danger could be even greater, I guess, especially if the chatbots have been trained to avoid disappointing responses.

[–] HedyL@awful.systems 5 points 15 hours ago* (last edited 15 hours ago) (3 children)

At first glance, this also looks like a case where a chatbot confirmed a person's biases. Apparently, this patient believed that eliminating table salt from his diet would make him healthier (which, to my understanding, generally isn't true - consuming too little or no salt can be even more dangerous than consuming too much). He was then looking for a "perfect" replacement, which, to my knowledge, doesn't exist. ChatGPT suggested sodium bromide, possibly while mentioning that it would only be suitable for purposes such as cleaning (not for food). I guess the patient is at least partly to blame here. Nevertheless, ChatGPT seems to have supported his nonsensical idea more strongly than an internet search would have, which in my view is one of the more dangerous flaws of current-day chatbots.

Edit: To clarify, I absolutely hate chatbots, especially the idea that they could somehow replace search engines. Still, regarding the example above, some AI bros would probably argue that the chatbot wasn't entirely in the wrong if it never actually suggested adding sodium bromide to food. Nevertheless, I would still assume that the chatbot's sycophantic communication style significantly exacerbated the problem at hand.

[–] HedyL@awful.systems 9 points 17 hours ago

To me, this sounds like the fantasy of slave ownership all over again. Slave owners wouldn't have demanded exact written orders, either.

[–] HedyL@awful.systems 5 points 4 days ago (4 children)

Maybe my analogy is a little too silly and too obvious, but I think wanting a humanoid robot (rather than one designed in whatever way best suits the purpose) is somewhat akin to wanting a mechanical horse rather than a car. On the one hand, this may sound like a reasonable idea if saddles, carriages, stables and blacksmiths are already available. On the other hand, the mechanical horse is going to be a lot slower than a car and a lot more uncomfortable to ride. It will still need charging stations or gas stations (since it won't eat oats) and dedicated repair shops (since veterinarians won't be able to fix it). And its technology might be a lot more complex and difficult to repair than that of a car (especially in the early models).

[–] HedyL@awful.systems 9 points 5 days ago

I guess both chatbots and humanoid robots are basically about the fantasy of automating human labor away effortlessly. In the past, most successful automation required a strong understanding of not just the tech, but also the tasks themselves, and often a complete overhaul of processes, internal structures etc. In the end, there was usually still a need for human labor, just with different skill sets than before. Many people in the C-suite aren't very good at handling these challenges, even if they want to make everyone believe otherwise. This is probably why the promise of reaping all the rewards of automation without having to do the work sounds so compelling to many of them.

[–] HedyL@awful.systems 7 points 5 days ago

> the reason is they’re selling sci fi dreams of robot servants even though these dreams are lies.

We've seen the same with chatbots, I guess. Objectively speaking, they perform worse at most tasks than regular search engines, databases, dedicated machine-learning-based tools etc. However, they sound human (like overly sycophantic office workers, to be more precise), hence the hype.

[–] HedyL@awful.systems 5 points 1 week ago

What purpose is this tool even supposed to serve? The most obvious use case that comes to mind is employee monitoring.

[–] HedyL@awful.systems 5 points 1 week ago (1 children)

It's also very difficult to get search results in English when English isn't set as your first language in Google, even if your entire search term is in English. Even "Advanced Search" doesn't seem to work reliably here, and of course, Google always brings up the AI overview first, even if you clicked Advanced Search from the "Web" tab.

[–] HedyL@awful.systems 7 points 1 week ago (4 children)

I guess the question here really boils down to: Can (less-than-perfect) capitalism solve this problem somehow (by allowing better solutions to prevail), or is it bound to fail due to the now-insurmountable market power of existing players?

[–] HedyL@awful.systems 16 points 1 week ago

Somehow this makes me think of the era before modern food safety regulations, when adulteration with substances such as formaldehyde or arsenic was apparently common: https://pmc.ncbi.nlm.nih.gov/articles/PMC7323515/ We may be entering a similar age regarding information. Of course, this has always been a problem with the internet, but I would argue that AI (and the way oligopolistic companies are shoving it into everything) is making it infinitely worse.

[–] HedyL@awful.systems 8 points 1 week ago (1 children)

Or like the radium craze of the early 20th century (even if radium may have far more legitimate use cases than current-day LLMs).

[–] HedyL@awful.systems 44 points 1 week ago

New reality at work: Pretending to use AI while having to clean up after all the people who actually do.
