lagrangeinterpolator

joined 6 months ago
[–] lagrangeinterpolator@awful.systems 3 points 2 hours ago (1 children)

Unfortunately, I don't think anyone is ever going to go through all 19,797 submissions and 75,800 reviews (for one conference, in one year) and manually check every one. Then again, with the ultra-advanced, cutting-edge, innovative statistical technique of randomly sampling a few papers/reviews, one can still draw useful conclusions.
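That joke has real statistics behind it, to be fair. The margin of error when estimating a proportion depends on the sample size, not the population size, so a few hundred randomly sampled reviews tell you nearly as much as reading all 75,800. A quick sketch of the standard sample-size formula (the 95%/±5% targets are my illustrative choices, not anyone's actual methodology):

```python
import math

z = 1.96       # z-score for 95% confidence
p = 0.5        # worst-case proportion, maximizes the variance
margin = 0.05  # target margin of error: +/- 5 percentage points

n = math.ceil(z**2 * p * (1 - p) / margin**2)
print(n)  # 385 -- a few hundred reviews out of 75,800 is plenty
```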

[–] lagrangeinterpolator@awful.systems 4 points 3 hours ago (1 children)

After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
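To sketch what I mean (every name here is hypothetical, made up for illustration; this is the shape of the pattern, not any real system): the LLM only proposes, and a boring deterministic checker makes the actual decision.

```python
def propose_patch(bug_report: str) -> str:
    """Stand-in for an LLM call that suggests a candidate fix. Unreliable."""
    ...

def run_test_suite(patch: str) -> bool:
    """The part that actually works: a dumb, deterministic verifier."""
    ...

def fix_bug(bug_report: str, attempts: int = 5) -> str | None:
    for _ in range(attempts):
        candidate = propose_patch(bug_report)  # heuristic: just a guess
        if run_test_suite(candidate):          # decision: made by the verifier
            return candidate
    return None                                # fail closed; never ship a guess
```

The LLM can be wrong most of the time and the system still never does anything the verifier didn't approve.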

Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long-term viable use cases. The problem is that it combines two characteristics that do not go well together: unreliability and expense. Generally, a task where you don't need reliability is not a task worth spending lots of money on.

The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much money hundreds of billions of dollars actually is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT; they do not see the millions of dollars' worth of hardware needed just to run a single instance of it. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history, aimed at an audience that has already drunk the tech hype Kool-Aid. Who else would watch a tool delete their entire hard drive and still consider using it again?
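To put a number on "how much money hundreds of billions is" (the $300 billion is my round illustrative figure, not a sourced tally):

```python
total = 300e9   # $300 billion, a round illustrative figure
per_day = 1e6   # burning a million dollars every single day
print(total / per_day / 365)  # ~822 years to spend it all
```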

The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.

Now I'm even more skeptical of the programmers (and managers) who endorse LLMs.

[–] lagrangeinterpolator@awful.systems 6 points 14 hours ago* (last edited 14 hours ago)

The basilisk now eats its own tail.

[–] lagrangeinterpolator@awful.systems 8 points 22 hours ago* (last edited 22 hours ago) (1 children)

Promptfans still can't get over the Erdős problems. Thankfully, even r/singularity has somehow become resistant to the most overhyped claims. I don't think I need to comment on this one.

Link: https://www.reddit.com/r/singularity/comments/1pag5mp/aristotle_from_harmonicmath_just_proved_erdos/

alt text (original claim): We are on the cusp of a profound change in the field of mathematics. Vibe proving is here.

Aristotle from @HarmonicMath just proved Erdos Problem #124 in @leanprover, all by itself. This problem has been open for nearly 30 years since conjectured in the paper “Complete sequences of sets of integer powers” in the journal Acta Arithmetica.

Boris Alexeev ran this problem using a beta version of Aristotle, recently updated to have stronger reasoning ability and a natural language interface.

Mathematical superintelligence is getting closer by the minute, and I’m confident it will change and dramatically accelerate progress in mathematics and all dependent fields.


alt text (comments): Gcd conditions removed, still great, but really hate the way people shill their stuff without any rigor to explaining the process. A lot of things become very easy when you remove a simple condition. Heck reimann hypothesis is technically solved for function fields over finite fields. But nowadays in the age of hype, a tweet post would probably say “Reimann hypothesis oneshotted by AI” even though that’s not true.

> Gcd conditions removed

So they didn't solve the actual problem?


[–] lagrangeinterpolator@awful.systems 6 points 5 days ago* (last edited 5 days ago) (1 children)

True, it is possible to achieve 100,000x speedups if you dispose of the silly restriction of being correct.

[–] lagrangeinterpolator@awful.systems 15 points 5 days ago* (last edited 5 days ago) (5 children)

We will secure energy dominance by dumping even more money and resources into a technology that is already straining our power grid. But don't worry. The LLM will figure it all out by reciting the Wikipedia page for Fusion Power.

> AI is expected to make cutting-edge simulations run “10,000 to 100,000 times faster.”

Turns out it's not good to assume that literally every word that comes out of a tech billionaire's mouth is true. Now everyone else thinks they can get away with just rattling off numbers whose source is "I made it the fuck up." I still remember Elon Musk saying a decade ago that he could make rockets 1,000 times cheaper, and so many people just assumed it was going to happen.

We need scientists and engineers. We do not need Silicon Valley billionaire visionary innovator genius whizzes with big ideas who are pushing the frontiers of physics with ChatGPT.

[–] lagrangeinterpolator@awful.systems 6 points 6 days ago* (last edited 6 days ago) (2 children)

You'd think peer review would make things better here, but the big ML conferences have to deal with an absurd number of submissions these days; NeurIPS got over 21,000 this year. Their review system requires everyone who submits a paper to review a certain number of other submissions. So yeah, your ML paper is getting reviewed by people who are submitting their own papers. Who are competing with you to get their own papers accepted. Yeah, no problems there.

Just make sure you have a few missile turrets protecting the area if you're playing against Zerg. You don't want the SCV building the SMR to get sniped by a flock of Mutalisks.

[–] lagrangeinterpolator@awful.systems 16 points 1 week ago (5 children)

> In my experience most people just suck at learning new things, and vastly overestimate the depth of expertise. It doesn't take that long to learn how to do a thing. I have never written a song (without AI assistance) in my life, but I am sure I could learn within a week. I don't know how to draw, but I know I could become adequate for any specific task I am trying to achieve within a week. I have never made a 3D prototype in CAD and then used a 3D printer to print it, but I am sure I could learn within a few days.

This reminds me of another tech bro, many years ago, who also thought that expertise is overrated and things really aren't that hard, you know? That belief eventually led him to publicly claim that he could beat Magnus Carlsen at chess after a month of practice. The WSJ picked up on this and sponsored an actual match between him and Carlsen. They wrote a fawning article about it, but that did little to soften the enormous public humiliation he earned in the chess community. Here's a reddit thread discussing the incident: https://www.reddit.com/r/HobbyDrama/comments/nb5b1k/chess_one_month_to_beat_magnus_how_an_obsessive/

As a side note, I found it really funny that he thought his best strategy was literally to train a neural network and ... memorize all the weights, then run inference by mental calculation during the game. Of course, on the day of the match, the strategy failed because his algorithm "ran out of time calculating". How are so many tech bros not even good at tech? Come on, that's the one thing you're supposed to know!
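For a sense of how doomed that plan was, here is a back-of-envelope using toy numbers of my own (not his actual setup, which I never saw published):

```python
inputs = 64 * 12  # one-hot board encoding: 64 squares x 12 piece types = 768
hidden = 128      # a single, very modest hidden layer
outputs = 1       # scalar position evaluation

weights = inputs * hidden + hidden * outputs  # 98,432 numbers to memorize
positions = 1000                              # even a shallow search per move

print(f"{weights:,} weights to memorize")
print(f"{weights * positions:,} mental multiply-adds per move")  # ~98 million
```

Even memorizing the weights is a superhuman feat, and that's before computing a single activation in your head.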

[–] lagrangeinterpolator@awful.systems 20 points 1 week ago (5 children)

Just had a conversation about AI where I sent a link to Eddy Burback's ChatGPT Made Me Delusional video. They clarified that no, it's only smart people who are more productive with AI since they can filter out all the bad outputs, and only dumb people would suffer all the negative effects. I don't know what to fucking say.

[–] lagrangeinterpolator@awful.systems 14 points 3 weeks ago (3 children)

One of the core beliefs of rationalism is that Intelligence™ is the sole determinant of outcomes, overriding resource imbalances, structural factors, or even just plain old luck. For example, since Elon Musk is so rich, that must be because he is very Intelligent™, despite all of the demonstrably idiotic things he has said over the years. So, even in an artificial scenario like chess, they cannot accept the fact that no amount of Intelligence™ can make up for a large material imbalance between the players.

There was a sneer two years ago about this exact question. I can't blame the rationalists, though. The concept of consulting sources outside their own bubble is quite unfamiliar to them.
