Comparing these two technologies seems somewhat silly
Imperial media has to push some kind of garbage to distract from their ceaseless support for genocide.
It is much more nefarious than that. It is disingenuous and misleading, on purpose. These AI techbros are going to use whatever means it takes to spread their mind-deleting slop.
The two aren't equivalent. One of them is an actual proven technology that definitively exists; the other has yet to prove itself.
It helps write emails and reviews and edits resumes. I have very little other use for it.
Hope you proofread those emails, lest you send part of a romance novel that the AI hallucinated into being.
Those can be sent to me.
I do. Always. I’ve been playing with VEO. Creating a CocoPanda character. It’s been a lot of fun.
It also helps with tons of complex tasks in the sciences like finding new protein folding algorithms.
That right there is the problem with this discussion. They're not even remotely similar technologies.
The ones doing protein folding are specialised limited capability AI. They are absolutely useful and very good at their jobs, but they are not the kind of AI that the public are using.
The public are using large language models and diffusion-based image generators. Not the narrow AI that you're talking about.
AI is a superset of transformers, which in turn are a superset of LLMs. I think I'm making the same point as you: in the broader sense, "AI" can be useful.
Nothing like comparing a technology that took more than 10 years to get "released in the wild", had several "killer apps" built on it very early (email, instant messaging, web pages, online games), and which many companies had no idea how to make money with, vs. a "content generator" that runs almost entirely on promises of increased productivity and profit.
If AI gave you an accurate answer 99% of the time, would you use it to find answers to questions quickly?
I would. I absolutely would; the natural-language search of AI feels amazing for finding the answer to a question you have.
The current problem is that it's not accurate or correct at a high enough percentage. As soon as that reaches a certain point, we're cooked and AI becomes undeniable.
Yeah, but the internet was for the people for decades.
(And it didn't really cost nature as much. Or steal from people as much; even under current laws, LLM companies do that illegally.)
"AIs" are getting their enshittification & monopolies pre-baked into their core business models from the start.
Not to mention that AIs will definitely worsen inequalities all over the world (like assembly robots that replaced people but aren't owned by people, and people still need to work 8h/day for decades for some reason).
(The same, but with AI. I'm not saying there aren't or won't be other jobs, just pointing out how this reshapes & concentrates wealth, which in turn allows for slave wages with no prospects of full-time jobs.)
If AIs will affect the world as much as the internet (and do so with people's data), then they should be seen as core infrastructure, and be government or non-profit owned.
Monetisation of all the things is killing us.
Also the AI could automate away that man's hobby...
Search sucks now; LLMs are useful. Not as useful as tech companies claim, but yeah, most people will use them at some point.
That's because search engines have reached the stage of enshittification where they no longer need to be good. Instead, they want you to spend as much time there as possible.
LLMs are still being sold as "the better option" - including by the exact same search giants who intentionally ruined their own search results. And many of them are already prioritizing agreeableness over "truthfulness." And we're still in the LLM honeymoon phase, where companies are losing billions of dollars on a yearly basis and undercharging their users.
Exactly. It will be ruined eventually when the shareholders come knocking wanting profitability.
This is something I think the 'you have to use LLMs or you're falling behind' crowd are missing. Of course these companies want you to become dependent on their product, and unable to complete basic tasks without it, because then when they slap you with monthly fees and ads and tokens you won't have a choice but to pay.
Use them if they're useful, but don't outsource your brain. You'll need it when the enshittification begins.
The tech itself, maybe. But the money, the copyright, and the politics? AI is filthy.
Ah yes, 1998, the last year before The Matrix.
Huh, I thought Matrix was pretty new, and people used IRC back then.
No it doesn't. Fuck this fake news from these genocidal scumbags.
One worked, though. 🤷♂️
The internet is actually useful and serves a purpose.
AI isn't useful at all and has no purpose.
It is very useful, just only in very few circumstances. In 99% of the places people are shoving it, it has no business being there, but there are some things it legitimately just does better.
AI is very useful and powerful as a propaganda device and a system to generate and disseminate disinformation, misinformation and non-information very quickly and very efficiently.
It was thought that the internet would do the same, but that system only moves at the speed of humans and is regulated by humans ... so as a propaganda tool it has worked, but not as well as predicted. Humans saw the potential for abuse and fought back against it.
AI is like propaganda on cocaine ... and there is very little to stop it other than our awareness of it ... but most people in the world don't care whether what they are watching is real or not. What that means is that AI is set to reshape how everyone thinks and how we all see the world.
We already saw Grok being used for this, though rather clumsily by stuffing the prompt.
If an AI company were fine-tuning on specific political sentiment behind the scenes, you would never know.
In fact, there's some evidence that later ChatGPT models are more right-wing biased than early models (which were accused of being left-wing).
It's also important to note how much social media gets fed into these things and how astroturfed social media is these days, so even if they're not explicitly biased, the well has been poisoned.