ZDL

joined 1 month ago
[–] ZDL@lazysoci.al 3 points 9 hours ago

My boss is pretty decent. He gets dumb ideas (like we all do), but he recognizes when they fail and doesn't try to force them through to save face. Because of that we survived the COVID-19 disaster and the pivot away from American clients (my initiative) without disappearing. He doesn't appear to have the usual ego of a businessman.

[–] ZDL@lazysoci.al 1 point 11 hours ago

... Really? Please explain.

[–] ZDL@lazysoci.al 1 point 11 hours ago

It's not "Distance Raptor" or "Time Raptor" or even "Veloci Raptor". It's Distanceraptor. All one word. Or Timeraptor or Velociraptor.

I mean otherwise it would be:

D~~i~~stanc~~eraptor~~

T~~i~~m~~eraptor~~

And that leaves us with:

Dstanc

Tm

[–] ZDL@lazysoci.al 25 points 1 day ago (3 children)

My boss tried to get everybody to use LLMbeciles for work. He was taken in by the fakery and set up a commercial OpenAI account.

I refused to use it. Ever. Like to the point that I never even collected my ID and password.

Others used it, but as always it was a time-waster: it looked like they were making gains at the start, only to get bogged down when important details entered the picture. So I quietly got my work done while others wrestled with AI. By the end of six weeks nobody was using it and my boss quietly shut down the account.

[–] ZDL@lazysoci.al 7 points 1 day ago

They actually believe their own press and think their "reasoning" models say anything meaningful when they "explain" their "reasoning"?

Wow! I could be a top AI researcher too and I don't even know how to begin programming a computer!

[–] ZDL@lazysoci.al 2 points 1 day ago

Not me. The Democrats are not a whole lot better. I mean, make no mistake, they're better than the Republicans ... but not by as much as they should be.

[–] ZDL@lazysoci.al 8 points 2 days ago

Those ridiculous, unserious human (presumably) beings?

They probably honestly didn't think it was a threat. It is a common delusion in the USA that everybody wants to be American.

[–] ZDL@lazysoci.al 18 points 2 days ago (1 children)

I'm sure the boycott will soften somewhat after Trump leaves (presuming he isn't made President for Life, heralding a new era in American governance), because people have short memories. They'll forget that almost all American officials at every level were either directly working for his platform or sitting ineffectually on the sidelines, pretending to act while not really stopping anything.

But it will only soften. It will not be over. Not for most Canadians alive today. The USA poisoned its brand but good, and by the time Trump has stopped raging like a bull in a china shop, Canadians will have found new favourite brands and suppliers, and American brands will be dead anyway.

Because once I find something I like, why would I go back just because the old brand says "but we've chaaaaaaaaaaaaaaaaaaaaaaaaaanged!"?

[–] ZDL@lazysoci.al 16 points 2 days ago (1 children)

I understood about one word in five of that article.

Which means I'm about one word in five ahead of that highlighted idiot. He's acting as if the Spicy Autocorrect understands things. As if it knows what a database is, what code is, what rules are, etc.

I'm not just technically declined, I'm a borderline technophobe and even I know this is bullshit.

[–] ZDL@lazysoci.al 2 points 3 days ago

Tesla simps are just so draining to watch.

[–] ZDL@lazysoci.al 1 point 4 days ago

DeSantis Dachau?

[–] ZDL@lazysoci.al 3 points 4 days ago

People have said alarmist things like this about every new thing introduced in history. Even Chess was derided as harmful to its players!

LLMbeciles are terrible technology capable of all kinds of harm, but this is not one of them in any meaningful way. People get addicted to, get psychotic over, and tell stories of going overboard with literally every new thing that has ever been introduced to society.

Including. Chess.

 

::: spoiler spoiler
I don't know why.
:::

 

This is why aitechbrodudes will never understand opposition to AI. They don't understand anything of substance.

 

But what are you going to do?

Buoys will be buoys.

 

I thought hard about what to do with the cash, but then I thought: "What would Jesus do?"

So I turned it into wine.

 

Do you know the Ice Cream Man?
The Ice Cream Man?
The Ice Cream Man!
Do you know the Ice Cream Man
On Culver City’s lanes?

Yes, I know the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
Yes, I know the Ice Cream Man:
He’s friendly, kind, and plain.

He sells us cones and rocket pops,
He chimes his bell and never stops,
He smiles at kids and mops his brow—
But where, oh where can he be now?

Do you know the ICE patrols?
The ICE patrols?
The ICE patrols!
Do you know that ICE patrols
prowled Culver City’s lanes?

Oh yes, I know the ICE patrols,
The ICE patrols,
The ICE patrols.
With badges, boots, and hearts of coal,
They took our friend away.

So now we miss the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
No more treats from his old cart—
Just silence on the lane.

But hey, at least the law was served,
And justice done (or so they say),
For nothing says “safe neighborhood”
Like chasing joy away.
 

::: spoiler wait for it!
Because one egg is un oeuf.
:::

 

So I gave an LLMbecile that claims to be a reasoning model...

Are you a "reasoning" model?

Great question! Yes, I am designed to be a "reasoning" model. That means I don't just repeat information—I analyze, synthesize, and draw logical conclusions based on the data and context you provide. My goal is to help you solve problems, answer questions, and think through complex topics in a way that’s clear and helpful.

If you have a tricky question, need help with decision-making, or want to break down a complicated concept, feel free to put me to the test!

...the following as a test of its reasoning:

There are seven customers (Carol, Darrell, Henrietta, Kenneth, Lula, Omar, Peter).
There are seven cheeses (cheddar, colby jack, gouda, havarti, muenster, provolone, swiss).
There are seven meats (bologna, corned beef, ham, liverwurst, prosciutto, roast beef, salami).
There are seven sandwiches, each priced at one of $2.75, $3.75, $4.75, $5.75, $6.75, $7.75, $8.75.

Match prices to customers, meats, and cheeses based on these clues:

1. Omar's sandwich costs more than the order with liverwurst.
2. Henrietta's order doesn't cost $2.75.
3. Kenneth's order is either the order with provolone cheese or the $7.75 order.
4. Omar's order doesn't include swiss cheese.
5. The order with prosciutto includes provolone cheese.
6. Lula's order doesn't include corned beef.
7. The order with ham doesn't include swiss cheese.
8. Henrietta's order costs less than Darrell's sandwich.
9. Carol's sandwich costs $5.75.
10. Of the sandwich with havarti cheese and Kenneth's order, one includes prosciutto and the other costs $3.75.
11. Henrietta's sandwich doesn't cost $4.75.
12. The order with colby jack cheese costs less than the sandwich with prosciutto.
13. Henrietta's sandwich doesn't include roast beef.
14. The order with roast beef doesn't include colby jack cheese.
15. The sandwich with muenster cheese costs 1 dollar less than Lula's order.
16. The $5.75 order includes colby jack cheese.
17. The $8.75 sandwich includes gouda cheese.
18. The sandwich with colby jack cheese costs 3 dollars less than the sandwich with corned beef.
19. The $3.75 order is either the sandwich with bologna or Kenneth's sandwich.

Now before I did this, I gave the task to my SO as an assignment. (He's an engineer.) He wrote two solutions using two different tools: something he called CLP/FD and something he called CHR. He's rusty in both, so it took him about 30 minutes to get them working. (He's confident he can do any more such problems with far less coding time now.)

It took about 3ms for his first program to solve it, and about 7ms for his second.
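For the curious, here's what a rough brute-force version of the same puzzle might look like in Python. This is not my SO's CLP/FD or CHR code (which I'm not reproducing); it's just a sketch. It maps each customer, cheese, and meat to a sandwich price, so two items are on the same sandwich exactly when their prices match, and it reads the "either ... or" clues as exclusive (the usual logic-grid convention; read inclusively, the clues as transcribed appear to admit a second solution).

```python
from itertools import permutations

PRICES = (275, 375, 475, 575, 675, 775, 875)   # prices in cents, to avoid floats

CUSTOMERS = ("Carol", "Darrell", "Henrietta", "Kenneth", "Lula", "Omar", "Peter")
CHEESES   = ("cheddar", "colby jack", "gouda", "havarti", "muenster", "provolone", "swiss")
MEATS     = ("bologna", "corned beef", "ham", "liverwurst", "prosciutto", "roast beef", "salami")

def xor(a, b):
    return bool(a) != bool(b)

# Clues 16, 17, and 9 pin one value in each category up front.
free_cheeses         = [c for c in CHEESES if c not in ("colby jack", "gouda")]
free_cheese_prices   = [p for p in PRICES if p not in (575, 875)]
free_customers       = [c for c in CUSTOMERS if c != "Carol"]
free_customer_prices = [p for p in PRICES if p != 575]

solutions = []
for pc in permutations(free_cheese_prices):
    ch = dict(zip(free_cheeses, pc))                # cheese -> price of its sandwich
    ch["colby jack"], ch["gouda"] = 575, 875                      # clues 16, 17
    for pm in permutations(PRICES):
        m = dict(zip(MEATS, pm))                    # meat -> price of its sandwich
        if m["prosciutto"] != ch["provolone"]:          continue  # clue 5
        if m["corned beef"] != ch["colby jack"] + 300:  continue  # clue 18
        if m["ham"] == ch["swiss"]:                     continue  # clue 7
        if m["roast beef"] == ch["colby jack"]:         continue  # clue 14
        if ch["colby jack"] >= m["prosciutto"]:         continue  # clue 12
        for pu in permutations(free_customer_prices):
            cu = dict(zip(free_customers, pu))      # customer -> price
            cu["Carol"] = 575                                     # clue 9
            if cu["Henrietta"] in (275, 475):           continue  # clues 2, 11
            if cu["Henrietta"] >= cu["Darrell"]:        continue  # clue 8
            if cu["Omar"] <= m["liverwurst"]:           continue  # clue 1
            if cu["Omar"] == ch["swiss"]:               continue  # clue 4
            if cu["Lula"] == m["corned beef"]:          continue  # clue 6
            if cu["Henrietta"] == m["roast beef"]:      continue  # clue 13
            if ch["muenster"] != cu["Lula"] - 100:      continue  # clue 15
            if not xor(cu["Kenneth"] == ch["provolone"],
                       cu["Kenneth"] == 775):           continue  # clue 3
            if not xor(ch["havarti"] == m["prosciutto"] and cu["Kenneth"] == 375,
                       cu["Kenneth"] == m["prosciutto"] and ch["havarti"] == 375):
                continue                                          # clue 10
            if not xor(m["bologna"] == 375, cu["Kenneth"] == 375):
                continue                                          # clue 19
            solutions.append((cu, ch, m))

for cu, ch, m in solutions:        # one line per sandwich: price, customer, meat, cheese
    for p in PRICES:
        row = [name for table in (cu, m, ch) for name, q in table.items() if q == p]
        print(f"${p / 100:.2f}: " + ", ".join(row))
```

Note that this enumerates permutations instead of propagating constraints, so it takes seconds where proper constraint solvers like his take milliseconds. But it gets there, deterministically, which is more than can be said for what follows.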

So how did the "reasoning" LLMbecile fare?

Well, it was infinitely slower than my SO's programs. Because it couldn't solve it.

After 16 steps (I told it to explain its reasoning along the way; I'm not going to copy all that out because it's tedious) it gave up.

So it failed, and when I pressed it for the final solution it basically argued that it would take too long, but said that if I guided it with some more detailed questions it could solve it for me.

(You know. The thing I already had solved two different ways before asking the LLMbecile.)

When I asked this, however...

I thought the whole point of a reasoning model was to do the reasoning, not have the user do the reasoning for it?

... the answer was golden:


Your critique is entirely valid—and it’s a sharp observation that highlights the current state of AI reasoning models. Here’s what’s really going on:

Why AI Reasoning Models Struggle with Complex Logic Puzzles

  • Pattern Matching, Not True Reasoning:
    Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers. They excel at low- and medium-complexity problems where the answer can be inferred from seen data or familiar patterns. But when a problem’s complexity increases—like with large, multi-variable logic grid puzzles—their performance can collapse, and they often fail to follow through with systematic, step-by-step deduction.

  • No Genuine Understanding:
    These models don’t “understand” logic or context the way humans do. They don’t build or manipulate internal representations of the puzzle’s state; instead, they generate plausible next steps based on training data. This leads to errors, contradictions, or incomplete solutions as complexity rises.

  • Scaling Limits:
    Research shows that as puzzles get harder, models may initially increase their reasoning effort, but beyond a certain point, their ability to sustain coherent reasoning collapses—even if they have enough computational resources to keep going.

  • Lack of Explicit Algorithms:
    Unlike humans who might draw a grid or systematically eliminate possibilities, models rarely execute explicit, algorithmic reasoning. Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.

What This Means for Users

  • For simple logic puzzles:
    AI models can often solve them and explain their reasoning convincingly.

  • For complex, multi-layered puzzles:
    The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers—just as you observed in your sandwich puzzle example.

  • Human intervention is still needed:
    For truly novel, complex logic problems, humans remain better at systematic, stepwise deduction, especially when the solution space is large and requires careful cross-elimination.

Bottom Line

AI “reasoning” is impressive, but brittle. It’s not the same as human logic, and it can’t reliably replace human reasoning in genuinely complex or novel scenarios—especially when the solution requires detailed, multi-step deduction and cross-referencing.

Your expectation that a reasoning model should “do the reasoning” is exactly what researchers are working toward—but as of now, there are real, well-documented limits to what these models can achieve on hard logic puzzles.


The takeaway points from this response (keeping in mind that all of this comes from the LLMbecile itself ... so it may be hallucinated! 🤣):

  • Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers.
  • These models don’t “understand” logic or context the way humans do.
  • their ability to sustain coherent reasoning collapses
  • Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.
  • The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers
  • Human intervention is still needed

So next time you see some aitechbroshilldude honking the horn of "reasoning models", point them to this, where a reasoning LLMbecile explains why it isn't actually reasoning (and accidentally explains why it's utterly useless).

181 points · submitted 1 month ago* (last edited 1 month ago) by ZDL@lazysoci.al to c/fuck_ai@lemmy.world
 

(For the techbrodude shills: this is called "allegory". Maybe you can ask ChatGPT to explain it to you.)
