this post was submitted on 28 Aug 2025
74 points (97.4% liked)

Technology

top 19 comments
[–] yesman@lemmy.world 13 points 1 day ago (1 children)

Wow, AI researchers are not only adopting philosophy jargon, but they're starting to cover some familiar territory. That is the difference between signifier (language) and signified (reality).

The problem is that spoken language is vague, colloquial, and subjective. Therefore spoken language can never produce something specific, universal, or objective.

[–] dustyData@lemmy.world 3 points 23 hours ago

I dove deep into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because until recently there was very little on the tech side to discuss. Neural networks and machine learning were very basic and a lot of proposals were theoretical. Generative AI, in the form of LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed in with neuroscience and devops. It's a relatively new trend that the Wall Street techbros have inserted themselves into and dominated the space.

[–] wetbeardhairs@lemmy.dbzer0.com 8 points 1 day ago (1 children)

I really like that it talks about the ontological systems that are completely and utterly disregarded by the models. But then the article whiffed: it forgot all about how those systems could inform models and only talked about how they constrain them. The reality is the models do NOT consider any ontological basis beyond what is encoded in the language used to train them. What needs to be done is to let LLMs somehow tap into ontological models as part of the process of generating responses. Then you could plug in different ontologies to make specialized systems.
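A minimal sketch of what "plugging in" an ontology might look like, purely illustrative: the `Ontology` class and `expand_prompt` function are hypothetical names, not anything from the article, and a real system would use a proper knowledge graph rather than a dict.

```python
# Hypothetical sketch: a pluggable ontology used to augment a prompt
# before the LLM ever sees it, so concepts the linguistic prior tends
# to omit (e.g. roots) are surfaced explicitly.

class Ontology:
    """A toy ontology: each concept maps to its related concepts."""
    def __init__(self, relations):
        self.relations = relations  # dict: concept -> list of related concepts

    def related(self, concept):
        return self.relations.get(concept, [])

# A botanical ontology; swap in a different one for a specialized system.
botany = Ontology({
    "tree": ["roots", "trunk", "branches", "leaves", "soil"],
})

def expand_prompt(prompt, ontology):
    """Append ontological context to a user prompt."""
    extra = []
    for word in prompt.lower().split():
        for rel in ontology.related(word):
            if rel not in extra:
                extra.append(rel)
    if not extra:
        return prompt
    return f"{prompt} (consider: {', '.join(extra)})"

print(expand_prompt("draw a tree", botany))
# draw a tree (consider: roots, trunk, branches, leaves, soil)
```

The point of the sketch is the separation of concerns: the ontology is a component you can swap without retraining anything, which is what would make specialized systems cheap to build.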

[–] anotherspinelessdem@lemmy.ml 2 points 23 hours ago (1 children)

In theory something similar could be done with enough training. Guess what that would cost. Does enough clean water and energy exist to train it? Probably best not to find out, but techbros will try.

[–] wetbeardhairs@lemmy.dbzer0.com 3 points 22 hours ago (1 children)

I don't think a logical system like an ontology is really capable of being represented in neural networks with any real fidelity.

[–] anotherspinelessdem@lemmy.ml 2 points 21 hours ago

Well it does great with completely illogical systems. I wonder if one can be used for a random seed? 🤔

[–] merde@sh.itjust.works 5 points 1 day ago (1 children)

Now imagine how you might prompt an LLM like ChatGPT to give you a picture of your tree. When Stanford computer science PhD candidate Nava Haghighi, the lead author of the new study, asked ChatGPT to make her a picture of a tree, ChatGPT returned a solitary trunk with sprawling branches – not the image of a tree with roots she envisioned.

she needs to get out and draw/paint some trees.

[–] Eq0@literature.cafe 10 points 1 day ago* (last edited 1 day ago) (2 children)

Did you read the rest of the article? The tree drawing was just the triggering element to an evaluation of the AI capabilities, in particular underlining how “tree” (but also “human”, “success”, “importance”) are being strongly restricted in their meaning by the AI itself, without the user noticing it. Thus, a user receives an answer that has already undergone a filtering of sorts. Not being aware of this risks limiting our understanding of AI and increasing its damage.

Theoretical research in AI is both necessary and hard at the moment, with funding going more to new results than to understanding the properties of old ones.

[–] merde@sh.itjust.works 4 points 1 day ago* (last edited 1 day ago) (1 children)

yes, i did. Can i comment on just this part?

"without the user noticing it" is where i disagree. When you work with ai you encounter all kinds of limitations (and bias).

Can you see the bias cameras too intrinsically have? They too never photograph roots unless we uncover the roots and direct the camera at them.

[–] Eq0@literature.cafe 5 points 1 day ago (1 children)

AI is reaching a much wider audience than just people with a technical background. So its application, namely in education but also in all other non-CS disciplines, will be through people with limited understanding of the biases. It is important to make them explicit, to underline that an LLM will reproduce the biases it deduced from its training data and its loss function. But loss functions and training data are not public knowledge; studies need to be performed to understand how the coders' own biases influenced the LLM scheme itself.

A photo has less bias because we know what it is representing: a photo only shows what can be seen. But the same understanding is not there for AI. Why show a photo-realistic tree rather than a biological diagram? Choices have been made of which a broader audience needs to be aware.

[–] merde@sh.itjust.works 2 points 22 hours ago (1 children)

A photo has less bias because we know what it is representing: a photo only shows what can be seen.

i agree with you on ai but the above statement is ignoring what photography is and biases intrinsic to it.

You see, that understanding you expect to be developed for ai is not there for you for photography.

[–] Eq0@literature.cafe 1 points 22 hours ago

If you want, any work that does not encompass the whole world is applying a filter and therefore a bias of some sort. We don't expect a photo to X-ray the roots of a tree, because we understand the physical constraints of photography. Sure, something could be just out of frame, something else could have been photoshopped out, you can create a different story by selecting different photos, and so on. But we understand what a photo represents. I doubt we have the same understanding of what an LLM represents, or what the constraints on its possible answers are, and we definitely don't understand why a specific answer is chosen over the infinite other possibilities.

[–] vacuumflower@lemmy.sdf.org 2 points 1 day ago* (last edited 1 day ago) (1 children)

Thus, a user receives an answer that has already undergone a filtering of sorts.

Wouldn't this be an expected trait of a system predicting next most likely token based on lossy compression of specific datasets and other lossy optimization?
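It is, and the effect shows up even in the simplest possible next-token predictor. This toy bigram model (my illustration, not anything from the thread or the article) always emits the single most frequent continuation, silently filtering out rarer but equally valid ones:

```python
# Toy next-token predictor: count bigrams in a tiny corpus, then always
# return the most frequent continuation. Rarer valid continuations
# ("roots") are filtered out in favor of the mode ("branches").
from collections import Counter, defaultdict

corpus = ("the tree has branches . the tree has branches . "
          "the tree has roots .").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(token):
    """Return the single most likely next token observed after `token`."""
    return bigrams[token].most_common(1)[0][0]

print(predict("has"))  # branches  (wins 2-to-1; "roots" never surfaces)
```

A real LLM samples from a distribution rather than taking the argmax, but the distribution is still shaped entirely by what the training data made frequent, which is the "filtering" being discussed.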

[–] Eq0@literature.cafe 2 points 22 hours ago (1 children)

Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed. ChatGPT and similar are perceived as answering "the truth" at all times, and that skews the user's understanding of the answers. Researching how deeply the answers are affected by the coders' bias is the focus of their research, and a worthwhile undertaking to avoid overlooking something important.

[–] spankmonkey@lemmy.world 1 points 21 hours ago (1 children)

For an expert, that is self evident

I am far from an expert, but it seemed obvious to me.

[–] Eq0@literature.cafe 1 points 21 hours ago

I teach, nothing is evident to anyone 😭

[–] Carrolade@lemmy.world 3 points 1 day ago (1 children)

When the Generative Agents system was evaluated for how “believably human” the agents acted, researchers found the AI versions scored higher than actual human actors.

That's a neat finding. I feel like there's a lot to unpack there around how our expectations are formed.

[–] dustyData@lemmy.world 3 points 23 hours ago

Or how we operationalize and interpret information from studies. You might think you're measuring something according to a narrow definition and operationalization, but that doesn't guarantee that's what you're actually getting. It's more an epistemological and philosophical issue. What is "believably human"? And how do you measure it? It's a rabbit hole in and of itself.

[–] mhague@lemmy.world 2 points 1 day ago

So like... You ask the model about styles and it says 'diagrammatic' and you ask for an artistic but diagrammatic tree or whatever and that affects your worldview?

If people just ask for a tree and the issue is they didn't get what they expected, I don't care. They can learn to articulate their ideas and maybe, just maybe, appreciate that others exist who might describe their ideas differently.

But if the problem is the way your brain subtly restructures ideas to better fit queries then I'd agree it's going to have 'downstream' effects.