[–] FizzyOrange@programming.dev 1 points 3 weeks ago

It has no memory, for one.

It has very short-term memory in the form of its token context. Especially with something like Meta's Coconut.

What makes you think that it does know it's in a conversation?

I don't really. Yet. But I also don't think that it is fundamentally impossible for LLMs to think, like you seem to. I also don't think the definition of the word "think" is so narrow that it requires that level of self-awareness. Do you think a mouse is really aware it is a mouse? What about a spider?

[–] FizzyOrange@programming.dev 10 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

Ah this ancient nonsense. Typescript and JavaScript get different results!

It's all based on

https://en.wikipedia.org/wiki/The_Computer_Language_Benchmarks_Game

Microbenchmarks, which are heavily gamed. Though in fairness the overall results are fairly reasonable.

Still, I don't think this "energy efficiency" result is worth talking about. Faster languages are more energy efficient. Who knew?

Edit: this also has some hilarious visualisation WTFs - using dendrograms for performance figures (figures 4-6)! Why on earth do figures 7-12 include line graphs?

[–] FizzyOrange@programming.dev 1 points 3 weeks ago

Your comment doesn't account for the fact that LLMs can generalise. Often not very well but they can produce outputs for inputs not seen in their training sets. Otherwise what would be the point?

You would not ask a piece of cardboard to solve a math problem, would you?

Uhhh you know LLMs can solve quite complex maths problems? Including novel ones.

[–] FizzyOrange@programming.dev 0 points 3 weeks ago (2 children)

And how do you know LLMs can't tell that they are involved in a conversation?

Unless you think there is something non-computational in the human brain, then you must accept that computers are - in theory - capable of thinking. With the right software and sufficiently powerful hardware.

Given that truth (which I think you can only avoid through religion or quantum quackery), you can't just say "it's only maths; it can't be thinking" because we know that maths can think.

Do LLMs "think"? The definition of "think" is wooly enough and we understand them little enough that it's quite an assertion to say that they definitely don't.

[–] FizzyOrange@programming.dev 2 points 3 weeks ago

This argument makes no more sense than trying to say that a plant is thinking because brains are made of cells and so are plants.

[–] FizzyOrange@programming.dev 1 points 3 weeks ago (5 children)

By that logic we also conclude that the human brain doesn't "think" about what it is saying.

[–] FizzyOrange@programming.dev 4 points 3 weeks ago

It's really amazing the number of people trying to argue that LLMs are useless, while simultaneously so many people are using them successfully. Makes me wonder if they've even tried them.

[–] FizzyOrange@programming.dev -2 points 3 weeks ago

Well… isn’t it? If one’s daily or most frequent back-and-forth journeys don’t exceed 100 ㎞, then a 160 ㎞ range is indeed fine.

Uhm... No. Most people only have one car so if you get one that only works 95% if the time it's going to be super inconvenient when you have to hire a car every time you go on holiday or visit your family or go to a distant concert or whatever.

That's why low range electric cars are not very popular.

[–] FizzyOrange@programming.dev 1 points 3 weeks ago

Ah yes the old pointless vague anecdote.

If your argument is "LLMs can't do useful work", and then I say "no, I've used them to do useful work many times" how is that a pointless vague anecdote? It's a direct proof that you're wrong.

Promoting pseudo-science.

Sorry what? This is bizarre.

[–] FizzyOrange@programming.dev -3 points 3 weeks ago (2 children)

Because most users simply use the browser

This is the same problem as saying "an electric car with 100 mile range is totally fine because most journeys are well under 100 miles".

Most of the time I'm only using a browser (or VSCode). The annoying thing is the 1% of times when I want to print something, create a shortcut, use bluetooth headphones, configure a static IP, etc.

Use Photopea instead. It’s practically a copy-paste of Photoshop but in the browser, created by one person. Or if one has never used Photoshop before, try GIMP first.

Saying Photopea or GIMP is "practically a copy-paste of Photoshop" is laughable. Paint.NET, maybe.

[–] FizzyOrange@programming.dev -3 points 3 weeks ago (6 children)

LLMs can’t think - only generate statistically plausible patterns

Ah still rolling out the old "stochastic parrot" nonsense I see.

Anyway on to the actual article... I was hoping it wouldn't make these basic mistakes:

[Typescript] looks more like an “enterprise” programming language for large institutions, but we honestly don’t have any evidence that it’s genuinely more suitable for those circumstances than the regular JavaScript.

Yes we do. Frankly, if you've used it, it's so obviously better than regular JavaScript that you probably don't need more evidence (it's like looking for "evidence" that film stars are more attractive than average people). But anyway we do have great papers like this one.
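For a concrete flavour of what "obviously better" means in practice, here's a minimal sketch (hypothetical names, not from the paper): the classic bug class where plain JavaScript silently produces `undefined` while TypeScript refuses to compile.

```typescript
// A tiny typed interface; in plain JavaScript there is nothing
// stopping you from misspelling a property name.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  // In plain JavaScript, `user.nmae` would just evaluate to undefined
  // and you'd ship "Hello, undefined!". In TypeScript, tsc rejects it:
  //   error TS2551: Property 'nmae' does not exist on type 'User'.
  return `Hello, ${user.name}!`;
}

console.log(greet({ id: 1, name: "Ada" })); // Hello, Ada!
```

The point isn't that this one bug matters; it's that the compiler catches a whole category of them before the code ever runs.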

Anyway that's slightly beside the point. I think the article is right that smart people are not invulnerable to manipulation or falling for "obviously" stupid ideas. I know plenty of very smart religious people for example.

However I think using this to dismiss LLMs is dumb, in the same way that his dismissal of Typescript is. LLMs aren't homeopathy or religion.

I have used LLMs to get some work done and... guess what, it did the work! Do I trust it to do everything? Obviously not. But sometimes I don't need perfect code. For example recently I asked it to create an example SystemVerilog file for me utilising as many syntax features as possible (testing an auto-formatter). It did a pretty good job. Saved some time. What psychological hazard have I fallen for exactly?

Overall, B-. Interesting ideas but flawed logic.

[–] FizzyOrange@programming.dev -2 points 3 weeks ago (1 children)

No, I said that some important features don't exist. They said "well I don't use them", as if that somehow negated the point that they don't exist. It's typical "works for me" nonsense. You get these replies whenever anyone says anything is suboptimal about Linux. It's so tedious.
