ZDL

joined 1 month ago
 

Inside of Mozart, however, there were, like, eleven or twelve.

…drum roll please…That's why he was called Wolfgang.

 

This is the head of Health and Human Services in America. This moron is in charge of setting the government's health policy.

At least we now know how he got a brain worm. Probably inserted it himself so it could eat part of his brain before curling up and dying in his head.

[–] ZDL@lazysoci.al 2 points 20 hours ago

The issue isn't who owns it but rather who gets the money. Crunchyroll, while it is owned by Sony, is based in the USA, so part of the revenue it takes in goes to its US operations: staff, servers, marketing, customer service, etc.

For people who don't want to give anything to Americans, looking for an alternative is a sound idea.

[–] ZDL@lazysoci.al 0 points 20 hours ago

Here are some waaaaaaaaaaaaaaaaaaaaaaaaaaaaaay more plausible alternative explanations:

  1. You didn't read either the full post or the group it was in, and smarmily listed a bunch of American options.

  2. You intended some kind of sarcastic observation but fumbled it because you are a shit writer.

Me, I'm leaning toward #2, actually. I'll give you the benefit of some doubt. But you're still a shit writer.

[–] ZDL@lazysoci.al 1 points 20 hours ago

No. We read it. We understood it. You were wrong. And you answered in the wrong way for the wrong reason.

Shit, your backpedaling is going to result in lethal whiplash at this rate!

[–] ZDL@lazysoci.al 0 points 22 hours ago (2 children)

So you listed six "options" that are in direct opposition to what the OP wants and what this group is about, and you think you're up on your moral high horse?

Wattamaroon.

[–] ZDL@lazysoci.al 11 points 1 day ago (4 children)

sigh

The sound of jackboots marching in lockstep is something people with an ear for them could hear in the '90s already. Trump is an expression of American culture rot—a symptom—not the cause.

The cause of the USA's decline dates a lot further back than 2015. It dates back to the Southern Strategy that began when I was only beginning to get out of diapers. And the blame falls squarely on the architects of it, as well as those who sleep-walked through the subsequent decades of obvious rising fascism.

[–] ZDL@lazysoci.al 33 points 1 day ago (4 children)

I'll believe that the USA will allow this business to end when I see it. For some bizarre reason the PTBs in the USAnal government have a massive hard-on for this idiot technology.

[–] ZDL@lazysoci.al 42 points 1 day ago* (last edited 1 day ago) (1 children)

Announcer: In the latest news, China has placed one of its television networks under stricter government control, setting up a Party official to monitor its broadcasts for an anti-gove...

(Frantic whispering from off-camera.)

Announcer: (to the off-screen voice) What? The USA? CBS? FCC?

(Continued whispering from off-camera.)

Announcer: (to camera) In these glorious United States of America, the government has decided that the collection of biased traitors at CBS will be monitored by a special office of the FCC to ensure they do not report negatively about our Great Leader.

[–] ZDL@lazysoci.al 5 points 1 day ago

In the Boycott US community, no less.

[–] ZDL@lazysoci.al -1 points 1 day ago

Manosphere whining detected.

Shields engaged.

[–] ZDL@lazysoci.al 1 points 1 day ago

Now all you have to do is provide a convincing business model that shows profitability with these clusterfuck satellite arrays.

(Hint: this is not possible.)

[–] ZDL@lazysoci.al 11 points 1 day ago (1 children)

Dr. Collier is one of the people I love best on YouTube!

[–] ZDL@lazysoci.al 3 points 2 days ago

Yeah, this amazes me everywhere. The Chinese government also wants people to have more children.

But children are fucking EXPENSIVE. Having just one child, once school fees alone are taken into account, is so expensive that most families see a significant drop in their standard of living. And that's before food, clothing, medical care, etc. School fees alone take middle-class families and push them down to an unskilled-labour lifestyle. (You can imagine what that does to actual unskilled labourers...)

If the Chinese government wants people to have more children, they're going to have to address that somehow. Like free schooling. Free healthcare. Maybe also fix the education system so that there's not this perceived need for thousands of hours of expensive cram schools. That kind of thing.

And then we have the Big Brains of Business sounding the same alarm. All while they do everything in their power to make people poorer and poorer so that even in the west people simply can't afford to have children. But they want people to have more children.

This is literally insane to me.

 

I don't know why.

 

This is why the aitechbrodude will never understand opposition to AI. They don't understand anything of substance.

 

But what are you going to do?

Buoys will be buoys.

 

I thought hard about what to do with the cash, but then I thought: "What would Jesus do?"

So I turned it into wine.

 

Do you know the Ice Cream Man?
The Ice Cream Man?
The Ice Cream Man!
Do you know the Ice Cream Man
On Culver City’s lanes?

Yes, I know the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
Yes, I know the Ice Cream Man:
He’s friendly, kind, and plain.

He sells us cones and rocket pops,
He chimes his bell and never stops,
He smiles at kids and mops his brow—
But where, oh where can he be now?

Do you know the ICE patrols?
The ICE patrols?
The ICE patrols!
Do you know that ICE patrols
Prowled Culver City’s lanes?

Oh yes, I know the ICE patrols,
The ICE patrols,
The ICE patrols.
With badges, boots, and hearts of coal,
They took our friend away.

So now we miss the Ice Cream Man,
The Ice Cream Man,
The Ice Cream Man.
No more treats from his old cart—
Just silence on the lane.

But hey, at least the law was served,
And justice done (or so they say),
For nothing says “safe neighborhood”
Like chasing joy away.
 

wait for it!

Because one egg is un oeuf.

 

So I gave an LLMbecile that claims to be a reasoning model...

Are you a "reasoning" model?

Great question! Yes, I am designed to be a "reasoning" model. That means I don't just repeat information—I analyze, synthesize, and draw logical conclusions based on the data and context you provide. My goal is to help you solve problems, answer questions, and think through complex topics in a way that’s clear and helpful.

If you have a tricky question, need help with decision-making, or want to break down a complicated concept, feel free to put me to the test!

...the following as a test of its reasoning:

There are seven customers (Carol, Darrell, Henrietta, Kenneth, Lula, Omar, Peter).
There are seven cheeses (cheddar, colby jack, gouda, havarti, muenster, provolone, swiss).
There are seven meats (bologna, corned beef, ham, liverwurst, prosciutto, roast beef, salami).
There are seven sandwiches, each priced at one of $2.75, $3.75, $4.75, $5.75, $6.75, $7.75, or $8.75.

Match prices to customers, meats, and cheeses based on these clues:

1. Omar's sandwich costs more than the order with liverwurst.
2. Henrietta's order doesn't cost $2.75.
3. Kenneth's order is either the order with provolone cheese or the $7.75 order.
4. Omar's order doesn't include swiss cheese.
5. The order with prosciutto includes provolone cheese.
6. Lula's order doesn't include corned beef.
7. The order with ham doesn't include swiss cheese.
8. Henrietta's order costs less than Darrell's sandwich.
9. Carol's sandwich costs $5.75.
10. Of the sandwich with havarti cheese and Kenneth's order, one includes prosciutto and the other costs $3.75.
11. Henrietta's sandwich doesn't cost $4.75.
12. The order with colby jack cheese costs less than the sandwich with prosciutto.
13. Henrietta's sandwich doesn't include roast beef.
14. The order with roast beef doesn't include colby jack cheese.
15. The sandwich with muenster cheese costs 1 dollar less than Lula's order.
16. The $5.75 order includes colby jack cheese.
17. The $8.75 sandwich includes gouda cheese.
18. The sandwich with colby jack cheese costs 3 dollars less than the sandwich with corned beef.
19. The $3.75 order is either the sandwich with bologna or Kenneth's sandwich.

Now before I did this, I gave the task to my SO as an assignment. (He's an engineer.) He wrote two solutions using two different tools: something he called CLP(FD), constraint logic programming over finite domains, and something he called CHR, Constraint Handling Rules. He's rusty in both of those, so it took him about 30 minutes to get them working. (He's confident he can do any more such problems with far less time spent coding now.)

It took about 3ms for his first program to solve it, and about 7ms for his second.
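I can't reproduce his CLP(FD)/CHR programs here, but for the curious, here's the general idea as a plain brute-force sketch in Python. This is my own rough equivalent, not his code, and nothing like as clever as a real constraint solver: fix the cheeses, then the meats, then the customers, and throw out anything that violates a clue. It prints every price/customer/meat/cheese assignment consistent with all 19 clues.

```python
from itertools import permutations

# Prices in cents so the "+3 dollars" / "-1 dollar" arithmetic stays exact.
PRICES = (275, 375, 475, 575, 675, 775, 875)
CHEESES = ("cheddar", "colby jack", "gouda", "havarti", "muenster", "provolone", "swiss")
MEATS = ("bologna", "corned beef", "ham", "liverwurst", "prosciutto", "roast beef", "salami")
OTHERS = ("Darrell", "Henrietta", "Kenneth", "Lula", "Omar", "Peter")  # Carol is pinned by clue 9
NON_CAROL = tuple(p for p in PRICES if p != 575)

def solutions():
    for cs in permutations(CHEESES):
        cheese = dict(zip(PRICES, cs))            # price -> cheese
        cp = {c: p for p, c in cheese.items()}    # cheese -> price
        if cheese[575] != "colby jack":           # clue 16
            continue
        if cheese[875] != "gouda":                # clue 17
            continue
        for ms in permutations(MEATS):
            meat = dict(zip(PRICES, ms))          # price -> meat
            mp = {m: p for p, m in meat.items()}  # meat -> price
            if mp["corned beef"] != cp["colby jack"] + 300:  # clue 18
                continue
            if mp["prosciutto"] != cp["provolone"]:          # clue 5
                continue
            if mp["prosciutto"] <= cp["colby jack"]:         # clue 12
                continue
            if mp["ham"] == cp["swiss"]:                     # clue 7
                continue
            if mp["roast beef"] == cp["colby jack"]:         # clue 14
                continue
            for order in permutations(OTHERS):
                wp = dict(zip(order, NON_CAROL))  # customer -> price
                wp["Carol"] = 575                 # clue 9
                if wp["Henrietta"] in (275, 475):             # clues 2, 11
                    continue
                if wp["Henrietta"] >= wp["Darrell"]:          # clue 8
                    continue
                if wp["Omar"] <= mp["liverwurst"]:            # clue 1
                    continue
                if cheese[wp["Omar"]] == "swiss":             # clue 4
                    continue
                if meat[wp["Lula"]] == "corned beef":         # clue 6
                    continue
                if cheese[wp["Kenneth"]] != "provolone" and wp["Kenneth"] != 775:  # clue 3
                    continue
                h, k = cp["havarti"], wp["Kenneth"]
                if not ((meat[h] == "prosciutto" and k == 375)
                        or (meat[k] == "prosciutto" and h == 375)):  # clue 10
                    continue
                if meat[wp["Henrietta"]] == "roast beef":     # clue 13
                    continue
                if cp["muenster"] != wp["Lula"] - 100:        # clue 15
                    continue
                if meat[375] != "bologna" and wp["Kenneth"] != 375:  # clue 19
                    continue
                yield wp, cheese, meat

if __name__ == "__main__":
    for wp, cheese, meat in solutions():
        who = {p: name for name, p in wp.items()}
        for p in PRICES:
            print(f"${p / 100:.2f}  {who[p]:<10} {meat[p]:<12} {cheese[p]}")
        print("---")
```

Even this dumb enumerate-and-discard version finishes in seconds on an ordinary laptop. A real constraint solver prunes the search space instead of enumerating it, which is presumably how his versions got down to milliseconds. The LLMbecile, meanwhile, managed neither.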

So how did the "reasoning" LLMbecile fare?

Well, it was infinitely slower than my SO's programs. Because it couldn't solve it.

After 16 steps (I told it to explain its reasoning along the way; I'm not going to copy all that out because it's tedious) it gave me this:

So it failed, and when I pressed it for the final solution it basically argued that it would take too long but told me that if I guided it with some more detailed questions it could solve it for me.

(You know. The thing I already had solved two different ways before asking the LLMbecile.)

When I asked this, however...

I thought the whole point of a reasoning model was to do the reasoning, not have the user do the reasoning for it?

... the answer was golden:


Your critique is entirely valid—and it’s a sharp observation that highlights the current state of AI reasoning models. Here’s what’s really going on:

Why AI Reasoning Models Struggle with Complex Logic Puzzles

  • Pattern Matching, Not True Reasoning:
    Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers. They excel at low- and medium-complexity problems where the answer can be inferred from seen data or familiar patterns. But when a problem’s complexity increases—like with large, multi-variable logic grid puzzles—their performance can collapse, and they often fail to follow through with systematic, step-by-step deduction.

  • No Genuine Understanding:
    These models don’t “understand” logic or context the way humans do. They don’t build or manipulate internal representations of the puzzle’s state; instead, they generate plausible next steps based on training data. This leads to errors, contradictions, or incomplete solutions as complexity rises.

  • Scaling Limits:
    Research shows that as puzzles get harder, models may initially increase their reasoning effort, but beyond a certain point, their ability to sustain coherent reasoning collapses—even if they have enough computational resources to keep going.

  • Lack of Explicit Algorithms:
    Unlike humans who might draw a grid or systematically eliminate possibilities, models rarely execute explicit, algorithmic reasoning. Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.

What This Means for Users

  • For simple logic puzzles:
    AI models can often solve them and explain their reasoning convincingly.

  • For complex, multi-layered puzzles:
    The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers—just as you observed in your sandwich puzzle example.

  • Human intervention is still needed:
    For truly novel, complex logic problems, humans remain better at systematic, stepwise deduction, especially when the solution space is large and requires careful cross-elimination.

Bottom Line

AI “reasoning” is impressive, but brittle. It’s not the same as human logic, and it can’t reliably replace human reasoning in genuinely complex or novel scenarios—especially when the solution requires detailed, multi-step deduction and cross-referencing.

Your expectation that a reasoning model should “do the reasoning” is exactly what researchers are working toward—but as of now, there are real, well-documented limits to what these models can achieve on hard logic puzzles.


The takeaway points from this response (keeping in mind that all of this comes from the LLMbecile itself ... so it may be hallucinated! 🤣):

  • Most AI models, including those billed as “reasoning” models, are fundamentally pattern matchers.
  • These models don’t “understand” logic or context the way humans do.
  • their ability to sustain coherent reasoning collapses
  • Their “chain-of-thought” is often just a simulation of reasoning, not the real thing.
  • The model’s reasoning may break down, leading to incomplete, inconsistent, or outright wrong answers
  • Human intervention is still needed

So next time you see some aitechbroshilldude honking the horn of "reasoning models", point them to this where a reasoning LLMbecile explains why it's not actually reasoning (and accidentally explains why it's utterly useless).

181
submitted 1 month ago* (last edited 1 month ago) by ZDL@lazysoci.al to c/fuck_ai@lemmy.world
 

(For the techbrodude shills: this is called "allegory". Maybe you can ask ChatGPT to explain it to you.)
