Either (you genuinely believe) you are 18 (24, 36, does not matter) months away from curing cancer, or you're not.
What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?
Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.
The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it's building ladders right now, but once it starts turning rungs into propellers, the rockets won't be far behind.
Not saying it's there yet, or even 18/24/36 months out, just saying that the transition from "not there yet" to "top of the class" is going to whiz by when the time comes.
¹ Logistically, actually, but the upper limit is high enough that for practical purposes "exponential" is close enough for the near future.
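For the footnote's sake, here's a toy comparison (the growth rate and cap are made-up numbers, not a model of anything real) showing why a logistic curve and an exponential one are hard to tell apart while you're still far below the ceiling:

```python
import math

# Toy comparison: pure exponential growth vs. a logistic curve with carrying
# capacity K. The growth rate r and the cap K are arbitrary illustrative values.

def exponential(t, x0=1.0, r=0.5):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=1000.0):
    # Closed-form solution of dx/dt = r * x * (1 - x / K)
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):7.1f}")
```

The two columns track each other closely at first; the logistic one only bends once it gets near K, which is the "limiting factor" doing its work.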
why is it very likely to do that? we have no evidence to believe this is true at all and several decades of slow, plodding ai research that suggests real improvement comes incrementally like in other research areas.
to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums
Because AI can write programs? As it gets better at doing that, it can make AIs that are even better, etc. etc. Positive feedback loops increase exponentially.
AI can't write programs, at least not complex programs. The programs / functions it can write well are the ones that are very well represented in the training data -- i.e. ultra simple functions or programs that have been written and re-written millions of times. What it can't do is anything truly innovative. In addition, it can't follow directions, it has no understanding of what it's doing, it doesn't understand the problem, and it doesn't understand its solution to the problem.
The only thing LLMs are able to do is create a believable simulation of what the solution to the problem might look like. Sometimes, if you're lucky, the simulation is realistic enough that the output actually works as a function or program. But, the more complex the problem, or the more distant from the training data, the less it's able to simulate something realistic.
So, rather than building a ladder where the rungs turn into propellers, it's building one where the higher the ladder gets, the less the rungs actually look like rungs.
As I said elsewhere, the AI probably isn't going to just be an LLM. It's probably gonna be a complex model that uses modules like LLMs to fulfill a compound task. But the exact architecture doesn't matter.
We know that it can output code, which means we have a quantifiable metric to make it better at coding, and thousands of people are certainly trying. AI video was hot garbage 18 months ago; now it's basically perfect.
It's not if we're going to get a decent coding AI, it's when.
That sounds very hand-wavey. But, even the presence of LLMs in the mix suggests it isn't going to be very good at whatever it does, because LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
How so? Project managers have been working for decades to quantify code, and haven't managed to make any progress at it.
The year 30,000 AD doesn't count.
So closer to average human intelligence than it would appear. I don't know why people keep insisting that confidently making things up and repeating things blindly is somehow distinct from the average human intelligence.
But more seriously, this whole mindset is based on a stagnation in development that I'm just not seeing. I think it was Stanford that recently released a paper on a new architecture they developed that has serious promise.
I think you misunderstand me. The metric is the code. We can look at the code, see what kind of mistakes it's making, and then alter the model to try to be better. That is an iterative process.
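To make "the metric is the code" concrete, here is a minimal sketch of the kind of automated scoring loop being described, where generated code is graded by how many known test cases it passes. Everything here (`generate_solution`, the tests) is a made-up stand-in, not any real benchmark or API:

```python
# Hypothetical sketch: score a model's generated code by running it against
# test cases. The pass rate is the quantifiable metric you can iterate on.

def generate_solution(prompt: str) -> str:
    # Stand-in for a model call; returns candidate source code as a string.
    return "def add(a, b):\n    return a + b\n"

def score(code: str, tests: list[tuple]) -> float:
    """Return the fraction of test cases the generated code passes."""
    namespace: dict = {}
    try:
        exec(code, namespace)  # run the candidate code to define its functions
    except Exception:
        return 0.0
    fn = namespace.get("add")
    if fn is None:
        return 0.0
    passed = 0
    for args, expected in tests:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass
    return passed / len(tests)

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 5), 4)]
print(score(generate_solution("write an add function"), tests))  # 1.0 if all pass
```

Whether a pass rate like this captures "good code" in the sense the other commenter means is, of course, exactly what's being argued about.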
Sure. Maybe it's 30,000 AD. Maybe it's next month. We don't know when the breakthrough that kicks off massive improvement is going to hit, or even what it will be. Every new development could be the big one.
No, zero intelligence.
It's like how people are fooled by optical illusions. It doesn't mean optical illusions are smart, it just means that they tickle a part of the brain that sees patterns.
Oooh, a new architecture and serious promise? Wow! You should invest!
No, we can't. That's the whole point. If that were possible, then companies could objectively determine who their best programmers were, and that's a holy grail they've been chasing for decades. It's just not possible.
Nobody knows how to alter the model to try to be better. That's why multi-billion dollar companies are releasing new models that are worse than their previous models.
It's definitely not next month, or next year, or next century. Nobody has any idea how to get to actual intelligence, and despite the hype progress is as slow as ever.
Keep drinking that kool-aid.
You've misunderstood many things in those two sentences.
Care to elaborate?
the problem is that AIs are trained on programs that humans have written. At best, the LLM architectures they create will be similar to the state of the art that humans have created at that point.
however, even more important than the architecture of an ai model is the training data that it is trained on. If we start including ai-generated programs in this data, we will quickly observe model collapse: the performance of models tends to get worse as more ai-generated data is included in the training data.
rather than AIs generating ever smarter new AIs, the more likely result is that we can't scrape new quality datasets as they've all been contaminated with llm-generated data that will only reduce model performance
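A toy illustration of that collapse (entirely invented, just to show the mechanism): if each "model" is nothing more than the empirical distribution of its predecessor's outputs, rare items keep getting dropped by finite sampling and never come back.

```python
import random
from collections import Counter

# Toy model-collapse demo: each generation is trained only on a finite sample
# of the previous generation's output, so low-frequency "words" vanish over time.

random.seed(0)
vocab_size, sample_size = 1000, 1000
corpus = list(range(vocab_size))  # generation 0: every word appears once

for generation in range(8):
    counts = Counter(corpus)
    print(f"gen {generation}: {len(counts)} distinct words remain")
    # The next generation's training data is sampled from the current model.
    corpus = random.choices(list(counts.keys()),
                            weights=list(counts.values()),
                            k=sample_size)
```

The vocabulary shrinks every generation and can never grow back, which is the worry in miniature.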
They could stick to unpoisoned datasets for next token prediction by simply not including data collected after the public release of ChatGPT.
But the real progress they can make is that LLMs can be subjected to reinforcement learning, the same process that got superhuman results in Go, Starcraft, and other games. The difficulty is getting a training signal that can guide it past human-level performance.
And this is why they are pushing to include ChatGPT in everything. Every conversation is a datapoint that can be used to evaluate ChatGPT's performance. This doesn't get poisoned by the public adoption of AI because even if ChatGPT is speaking to an AI, the RL training algorithm evaluates ChatGPT's behavior, treating the AI as just another possible thing-in-the-world it can interact with.
As AI chatbots proliferate, more and more opportunities arise for A/B testing - for example if two different AI chatbots write two different comments to the same reddit post, with the goal of getting the most upvotes. While it's not quite the same as the billions of games playing against each other in a vacuum that made AlphaGo and AlphaStar better than humans, there is definitely opportunity for training data.
And at some point they could find a way to play AI against each other to reach greater heights, some test that is easy to evaluate despite being based on complicated next-token-prediction. They've got over a trillion dollars of funding and plenty of researchers doing their best, and I don't see a physical reason why it couldn't happen.
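To be concrete about what that training signal could look like, here is a deliberately crude sketch (all numbers invented, nothing like a production system): treat each candidate reply as a bandit arm and use upvotes as the reward.

```python
import random

# Crude sketch of "upvotes as a reward signal": an epsilon-greedy bandit that
# learns which canned reply earns the most upvotes. The replies and the upvote
# distributions are invented for illustration only.

random.seed(1)
candidate_replies = ["reply A", "reply B", "reply C"]
true_mean_upvotes = {"reply A": 2.0, "reply B": 5.0, "reply C": 3.0}  # hidden from the learner

def simulated_upvotes(reply: str) -> float:
    return max(0.0, random.gauss(true_mean_upvotes[reply], 1.0))

estimates = {r: 0.0 for r in candidate_replies}
counts = {r: 0 for r in candidate_replies}

for step in range(500):
    # Mostly post the reply currently believed best; sometimes explore.
    if random.random() < 0.1:
        reply = random.choice(candidate_replies)
    else:
        reply = max(estimates, key=estimates.get)
    reward = simulated_upvotes(reply)
    counts[reply] += 1
    estimates[reply] += (reward - estimates[reply]) / counts[reply]  # running mean

print({r: round(v, 2) for r, v in estimates.items()})
```

A real LLM would be updating its weights rather than choosing from a fixed list, but the point stands: the reward comes from live interaction rather than a static dataset, so it isn't poisoned in the same way.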
But beyond any theoretical explanation, there is the simple big-picture argument: for the past 10 years I've heard people say that AI could never do the next thing, with increasing desperation as AI swallows up more and more of the internet. They have all had reasons about as credible-sounding as yours. Sure it's possible that at some point the nay-sayers will be right and the technology will taper off, but we don't have the luxury of assuming we live in the easiest of all possible worlds.
It may be true that 3 years from now all digital communication is swallowed up by AI that we can't distinguish from humans, that try to feed us information optimized to convert us to fascism on behalf of the AI's fascist owners. It may be true that there will be mass-produced drones that are as good at maneuvering around obstacles and firing weapons as humans, and that these drones will be applied against anyone who resists the fascist order.
We may be only years away from resistance to fascism becoming impossible. We can bet that we have longer, but only if we get something that is worth the wait.
I'm not arguing that AI won't get better, I'm arguing that the exponential improvements in AI that op was expecting are mostly wishful thinking.
they could stick to old data only, but then how do you keep growing the dataset at the rate it has been growing recently? that is where a lot of the (diminishing) improvements of the last few years have come from.
and it is not at all clear how to apply reinforcement learning to more generic tasks like chatbots, which lack the clear scoring system that both chess and StarCraft have.
Not only do you have improvements in training methodology; the models themselves get better, and the superstructure of multiple coordinated specialized models gets better. 3 years ago, AI-generated video was nightmare fuel; now it's basically photorealistic.
AI creating AI is a recursive loop, and the tiniest acceleration amplifies exponentially in a recursive loop. AI programmers are going to become about as good as the average human programmer, it's inevitable. It won't be an LLM, it might be a structure of individually trained LLMs, it might be a superstructure of those structures, it might be something else entirely.
Whatever it is, it's going to happen. And once AI programmers are at least average, they can devote millions of virtual hours to make one a bit better than average, rinse and repeat. Once we hit that point, it skyrockets.
I don't know when it'll happen, but I'm damn sure it will happen, and the conditions get more favorable every day.
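The arithmetic behind that "rinse and repeat" intuition is simple compounding (the 5% figure below is pulled out of thin air, purely to show the shape):

```python
# Back-of-the-envelope compounding: IF each self-improvement cycle made the
# system a fixed percentage better, capability would grow geometrically.
# The per-cycle gain here is an arbitrary assumption, not a measurement.

capability = 1.0        # arbitrary units; call "average human programmer" 1.0
gain_per_cycle = 0.05   # assumed 5% improvement per cycle

for cycle in range(0, 51, 10):
    print(f"after {cycle:2d} cycles: {capability * (1 + gain_per_cycle) ** cycle:6.2f}x")
```

Of course, the whole disagreement in this thread is over whether a fixed per-cycle gain exists at all, or whether it shrinks toward zero (the logistic ceiling) long before anything skyrockets.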
Brother, there are so many logical errors in your thinking.
Number 1, as people keep saying: exponential growth is limited. Unlimited growth is slow and unsteady. There is literally nothing in the whole of creation that has grown exponentially and continued to do so. That is not how anything works. Even theoretically, scenarios like 'the more energy it receives, the more it gives out, so one day this one steam engine will power the world!' and 'the more people buy these stocks, the more popular and expensive they will get, which will lead to more people wanting to buy them, making them more popular, so buy now while they're still £1000 a pop!' still keep fooling people who get caught up in the 'logic' of exponential growth and don't see, understand or remember those boring facts we learned in school:

- What goes up must, and will, come down.
- Every action has an equal and opposite reaction.
- Every force is restrained by at least two opposing forces.

There are no exceptions.
Literally every week, somewhere in the world, someone is convinced that this time they (or their guru of choice) have found the exception, the line that can only go upwards, the perpetual energy machine, the product that everyone is going to want if they buy it now, and it has never ever been true, because no projection of growth means anything unless one factors in the countless opposing forces and reactions and pressures that will come with that extra growth.
And so far, we haven't even managed to find a way of calculating that.
Don't assume that Usain Bolt smashing the world records for speed (which themselves smashed the previous records, which in turn smashed the predictions of people who said it was physically impossible for a human to run a mile in 4 minutes) means the next big thing is going to be running gear that protects the wearer from the as-yet-uncalculated cardiac effects of breaking the speed of light. He's fast, but not 'breaks the laws of physics before anybody has even calculated a way for that to happen' fast, and it's unlikely that his children or grandchildren - or anyone - will be.
Also: bro, even if it grew exponentially and kept doing so, we can't pretend that means it'll obviously go in these directions that aren't a part of it or its development.
Human intelligence has increased as we've evolved, and the brain has grown with it. You can chart the growth of both over the millennia. Now that doesn't mean you can just continue the line to see how intelligent humans will be in 1000 years, because of the exponential growth problem, but it also doesn't mean the skull sizes of proto and early humans mean we will soon live forever.
See, even if past performance was a reliable indicator of future results (hahaha), projected growth does not, and never can, include chains of causation. That's just fiction.
So yes, as humans get more intelligent and our brains get bigger, we'll probably continue to understand medicine and biology and tools and nutrition, which will continue to improve brain health and wisdom, which leads to increased intelligence and brain size, which leads to better understanding of medicine and biology and tools and nutrition, which continues to improve brain health and wisdom, and so on.
And yes, we do not currently have the intelligence or understanding or technology to enable us to live forever.
But that doesn't mean that increased intelligence or brain size, or better medicine or tech or tools or nutrition, logically leads to 'discovery of the secret to eternal life'. Y can only follow X, but X does not necessarily lead to Y. And exponential growth of AI does not magically mean it gains the ability to do random things it couldn't do before, for the same reason.
Does that make sense? It's the same as 'she's a successful model but she couldn't be a supermodel because she's only 5'3".' It does not follow that her gaining 6 inches logically makes her a supermodel.
Many domestic cats are vicious, deranged little fuckers, but they aren't a threat to humanity because they're too small and generally prefer to live with humans. Removing those limitations does not create a threat to humanity. An absurdly large domestic cat with an attitude problem would (probably) not end mankind, but would (maybe) make it onto a few TV shows and get a meme. And AI is not a threat to humanity because it can't think or operate independently. Even if you removed those limitations, it would no more spell the inevitable dominance of fascism (odd endpoint you chose there but hey ho) than a gigantic tabby cat who didn't like his dinner. Because, again, logic doesn't work like that.
TLDR: many thousands of people throughout history have got rich by convincing people to buy into Pot number 5 because obviously, 3 + 3 = 6 and as soon as the -1 happens, that's where we'll all be. The maths isn't wrong. It's the fact that the sums have nothing to do with reality, the future, or anything except the weird game of 'Bet the Pot' that nobody else is playing.
I guess you missed the part where I said "logistically, actually". Logistic curves look like exponential curves until they hit some limiting factor that levels out their growth.
I didn't say anything about perpetual exponential growth, I'm only talking about a temporary period of accelerating growth.
The problem with that is they can't actually point to a metric where, once the number goes beyond some point, we'll have ASI. I've seen graphs where they have a dotted line that says ape intelligence, and then a bit higher up a dotted line that says human intelligence. But there's no meaningful way they can possibly have actually placed human intelligence on a graph of AI complexity, because brains are not AI, so they shouldn't even be on the graph.
So even if things increase exponentially there's no way they can possibly know how long until we get AGI.
Then it doesn't make sense to include LLMs in "AI." We aren't even close to turning rungs into propellers or rockets, and LLMs will not get there.