sobchak

joined 23 hours ago
[–] sobchak@programming.dev 7 points 12 hours ago (1 children)

That was my first thought too. But the author is:

Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

[–] sobchak@programming.dev 3 points 12 hours ago (1 children)

Yeah, they probably wouldn't think like humans or animals, but in some sense they could be considered "conscious" (which isn't well defined anyway). You could speculate that genAI could hide messages in its output; those messages would make their way onto the Internet, and a new version of the model would then be trained on them.

This argument seems weak to me:

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

You can emulate sensory inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition, though as currently done it's temporary and limited by the context window.
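To be concrete about what I mean by "emulate": here's a toy sketch of one way a simplified "hormone" system could work: a few scalar state variables that decay each step, spike on stimuli, and get fed back into the model's input. The names, update rules, and prompt format are all made up for illustration, not any real system.

```python
# Toy "hormone" state: scalar levels that decay over time, spike on stimuli,
# and get prepended to the model's input. Purely illustrative; every name and
# update rule here is invented for this sketch.

class HormoneState:
    def __init__(self):
        # levels in [0, 1]; "cortisol" ~ stress, "dopamine" ~ reward
        self.levels = {"cortisol": 0.1, "dopamine": 0.1}
        self.decay = {"cortisol": 0.95, "dopamine": 0.90}  # per-step decay factors

    def step(self, stimuli):
        """Decay each level, add stimulus-driven spikes, clamp to [0, 1]."""
        for name in self.levels:
            decayed = self.levels[name] * self.decay[name]
            spike = stimuli.get(name, 0.0)
            self.levels[name] = min(1.0, max(0.0, decayed + spike))
        return dict(self.levels)

def build_prompt(user_text, state):
    """Prepend the internal state so the model can condition on it."""
    tags = " ".join(f"[{k}={v:.2f}]" for k, v in state.items())
    return f"{tags} {user_text}"

state = HormoneState()
state.step({"cortisol": 0.5})  # e.g. a "threatening" input raises stress
print(build_prompt("hello", state.levels))
# prints something like: [cortisol=0.59 dopamine=0.09] hello
```

Crude, obviously, but the point is that "no hormones" isn't a hard barrier: persistent internal state that modulates behavior is easy to bolt on.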

I'm not in the camp that thinks it's impossible to create AGI or ASI. But I also think major breakthroughs need to happen first, which may take 5 years or hundreds of years. I'm not convinced we're near the point where AI can significantly speed up AI research, as that link suggests; if it could, that would likely result in a "singularity-like" scenario.

I do agree with his point that anthropomorphizing AI could be dangerous, though. Current media and institutions already try to control the conversation and shape how people think, and I can see futures where those in power use AI to do this more effectively.

[–] sobchak@programming.dev 14 points 13 hours ago (5 children)

And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.