this post was submitted on 01 Apr 2025
63 points (94.4% liked)

Technology

top 19 comments
[–] calcopiritus@lemmy.world 3 points 13 hours ago

AI companies will just train on these specific puzzles. Then they will claim their AI is AGI, and the quality of the models will be exactly the same or worse than before. They'll just have one more checkmark in their marketing.

[–] NeoNachtwaechter@lemmy.world 5 points 15 hours ago

So they actually tested LLMs for being AGIs?

Should have asked me... I would have told them the best ants in the world fail the test for being snails.

[–] nectar45@lemmy.zip 14 points 1 day ago (2 children)

The better an AI is at logic, the less creative it often becomes; the more creative an AI gets, the worse it gets at accurately recalling knowledge; and the better an AI gets at knowledge, the more it flounders at thinking critically and logically instead of just lazily reciting its knowledge back at you.

Until AI researchers find a way to solve this rock-paper-scissors of constant self-sabotage, AI can't advance to the next phase.

[–] spankmonkey@lemmy.world 19 points 1 day ago (1 children)

This is because AI is not aware of context due to not being intelligent.

What is called creativity is really just randomization within the constraints of the design. That randomization is what reduces accuracy. If the 'creativity' is dialed down, the output becomes more accurate because the model is no longer injecting changes.

Using words like creativity, self-sabotage, hallucinations, etc. makes it seem like AI is far more advanced than it actually is.
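That "randomization within constraints" has a concrete name in LLM decoding: temperature sampling. A minimal sketch (the toy logits and token names here are made up for illustration, not from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a next token from raw model scores.

    temperature == 0 -> greedy decoding: always the most likely token.
    Higher temperature -> flatter distribution, i.e. more 'creativity'
    and less accuracy, exactly the tradeoff described above.
    """
    if temperature == 0:
        return max(logits, key=logits.get)
    # Softmax with temperature scaling (subtract the max for stability).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

# Toy scores standing in for a real model's output over the next token.
toy_logits = {"Paris": 4.0, "Lyon": 1.5, "cheese": 0.2}
print(sample_next_token(toy_logits, temperature=0))  # always "Paris"
```

At temperature 0 the same prompt always yields the same answer; crank the temperature up and the low-scoring tokens start getting picked, which reads as either "creative" or "wrong" depending on what you asked for.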

[–] nectar45@lemmy.zip 4 points 1 day ago (2 children)

I know I'm anthropomorphizing it too much, but the fact that the current design can't even increase this super basic creativity without messing itself up in the process is a massive problem in the design. The AI can't seem to understand when to be "creative" and when not to, or when to attempt to solve a problem by recalling data and when not to, showing it's far less aware than a person is, even at a very basic level.

[–] Eranziel@lemmy.world 2 points 14 hours ago

Yes, you're anthropomorphizing far too much. An LLM can't understand, or recall (in the common sense of the word, i.e. have a memory), and is not aware.

Those are all things that intelligent, thinking things do. LLMs are none of that. They are a giant black box of math that predicts text. It doesn't even understand what a word is, or the meaning of anything it vomits out. All it knows is which text is statistically most likely to come next, with a little randomization to add "creativity".

[–] spankmonkey@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)

Yes, the tradeoff between constrained randomization and accurately vomiting back the information it was fed is going to be difficult as long as it is designed to be interacted with as if it were a human who knows the difference.

It could be handled by having clearly defined ways of conveying whether the user wants factual or randomized output, but that would shatter the veneer of being intelligent.
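One way to picture that "clearly defined way of conveying" intent is a wrapper where the caller declares the mode up front instead of the model guessing. Everything below is hypothetical (the fake_model stub and mode names are invented for the sketch); real APIs expose a similar knob as a sampling temperature:

```python
import random

def fake_model(prompt):
    # Stand-in for a real model: fixed scores, purely illustrative.
    return {"Paris": 4.0, "Lyon": 1.0, "a croissant": 0.2}

def generate(prompt, mode="factual"):
    """Hypothetical interface: the user states whether they want the
    most likely answer or a randomized 'creative' one."""
    scores = fake_model(prompt)
    if mode == "factual":
        # Deterministic: always the highest-scoring answer.
        return max(scores, key=scores.get)
    # 'creative': sample proportionally to score, accepting randomness.
    tokens = list(scores)
    return random.choices(tokens, weights=[scores[t] for t in tokens], k=1)[0]

print(generate("Capital of France?", mode="factual"))  # "Paris"
```

The point isn't the code; it's that the accuracy/randomness tradeoff becomes the user's explicit choice, which is precisely what the conversational "it just knows" framing hides.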

[–] nectar45@lemmy.zip 3 points 1 day ago (1 children)

It probably needs a secondary "brain lobe" that is responsible for figuring out what the user wants and adjusting the nodes accordingly... and said lobe needs to have long-term memory... but then the problem with THAT is that it would make the AI a lot slower, and it could glitch hard.

AI research is hard.

[–] spankmonkey@lemmy.world 5 points 1 day ago (1 children)

It is hard because they chose to make it hard by trying to do far too many things at the same time and sell it as a complete product.

[–] nectar45@lemmy.zip 3 points 1 day ago (1 children)

Yep, that is a problem too. The focus on creating general AI is really slowing down research on making AI better at specific stuff.

Making it a master of social situations and emotional responses is getting in the way of the AI being good at intelligence and logic, for example.

We need more specialized AI research instead of so much fake general intelligence.

[–] spankmonkey@lemmy.world 2 points 23 hours ago

Yeah, people are frequently terrible at understanding context so it shouldn't be surprising that a computer has difficulty too.

There are actually a lot of specialized applications of neural-network-based computing being used for science, but they don't get the flashy headlines because they are a tool. Those projects use it to narrow down what people should look into first for confirmation, like ancient settlement patterns, stars that might have planets, and other things where patterns exist but are hard to see.

Some examples are listed here at a high level. In all cases the AI leads to humans confirming and then working from there; it isn't the end result on its own. https://medium.com/@jeyadev_needhi/uncovering-the-past-how-ai-is-transforming-archaeology-38ded420896d

[–] sunzu2@thebrainbin.org 1 points 1 day ago (1 children)

An LLM can't tell right from wrong... How is it supposed to be AGI?!

[–] Grimy@lemmy.world 5 points 1 day ago

It isn't. I'd even say that simply completing puzzles is far from AGI, even if the puzzles are complex.

[–] thedruid@lemmy.world -2 points 1 day ago (1 children)
[–] doxxx@lemmy.ca 2 points 1 day ago (2 children)

I read it without an account.

[–] seven_phone@lemmy.world 8 points 1 day ago (1 children)

I didn't read it and don't have an account.

[–] NeoNachtwaechter@lemmy.world 2 points 11 hours ago

Warning: Natural Intelligence detected!

[–] thedruid@lemmy.world 4 points 1 day ago

Hmm. I probably went over a limit or something then.