Generally I find people’s positivity about LLMs tracks inversely with their technical ability. The more they say AI is good, the worse a programmer they are.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Technical literacy in general. My friend thinks it's the greatest thing ever, and is an idiot with technology (and life in general).
Might be some Dunning-Kruger curve there. Not tooting my own horn, but I know my way around, and I only use AI for programming when I more or less already know how it works. Which means I verify and fix any problems before committing any code. It does speed up the process; it's a tiny bit simpler than checking stuff on Stack Overflow, IMO.
Now, if you don't know your way around and "trust" the output of an LLM, boy are you in trouble 😵💫.
It also says a lot about their inability to identify bullshit
Every time I use it, I waste hours.
Yes, exactly this. It looks good, I ask it to tweak something. It tweaks, but now something else needs adjustment. Then it comes back unusable.
It ends up taking the same time as doing it myself. There’s some value perhaps in either the novelty or engagement that keeps me focused but it’s not more efficient.
When it does work, I’m always worried it’s an illusion and I’ve missed something. Like how you send an email and immediately see the typo.
People who love it, love it because they don’t need to or care about having accuracy and precision in their work. Sales and marketing, management, etc. Business idiots.
Ed ed ed!!!!
you're obviously prompting it wrong, and/or not using the latest models ~/s~
There are right tools and wrong tools depending on the application.
There are right ways to use said tools and wrong ways... like you wouldn't use a Phillips head screwdriver on a flathead screw.
I guarantee your company's provided tool is Copilot or OpenAI based, which is already bottom of the barrel for usefulness.
Haha, yes it is
My experience is that it can work reasonably well, but you have to burn an absurd amount of tokens and have the 1M-token context window.
I only do gamedev, which means somewhat simpler scripts, but it could handle even more involved systems, if I force it to first document and explain the whole architecture and data flow.
Did it help? Yes, but it was still only slightly faster. I could have done it myself in maybe a day more.
It also cost something like $50 in tokens at today's prices, where every AI company is losing trillions of dollars, so the costs will get a lot worse. And if I tried to conserve tokens, it's shit. You have to feed it $10 of data for it to be useful.
Add to that the fact that it also causes skill attrition, so once the expensive future arrives, you probably won't be able to afford it, and good luck getting your skills back after that.
Our company wants us to use it, and the average token consumption is something like $100 per day at consumer prices. How is that even worth considering for such a minor gain?
So, I'll pass.
Especially once they kill the "old net" and you won't be able to browse anything to learn skills at all.
Kagi is the only way I can stay sane on web 3.0.
I'd say you don't know how to use the tool. "Write tests here, develop feature X" is not a good way to use LLMs. Burning 80k tokens and keeping the same session going is context rot. There are a lot of boring, everyday tasks in my job that got faster; many others, meh. Use AI, don't be driven by AI.
I know this community is all about fuck AI, but this is just straight echo chambering.
But honestly your post sounds like you're just not using it right? You can get pretty good results with it with enough guardrails. Just because you can't get the results you want doesn't mean that no one can.
That said, fuck AI. It's all a bunch of bullshit, but denying real results just means you're sticking your head in the sand and that's not how you fix this problem.
I agree... Saying LLMs are good at nothing is just plain ignorance... One can disagree with the philosophy or dislike hallucinations, but they are definitely good at some things.
pretty good results with it with enough guardrails
examples?
For a research project, I had to convert 20+ projects from a dataset into a new format. The old format was simply a single script per project that builds it, but I needed a format with a Dockerfile and a script. It would've taken me around a week to do it all one by one.
I got Claude to do it in 2 hours.
I know people hate AI in this community, but to say it doesn't do anything good or to insult all people who use it is just pure negativity.
Or that it's not right for their use case.
Like someone throwing a bunch of data into an LLM and trying to get it to turn the data into a chart or something. It can work, but it was never designed to be used in that manner.
I've got an acquaintance who does that, despite the fact that Python would be a better tool for the job.
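For comparison, the deterministic version of that chart pipeline is a few lines of Python. This is just a sketch; the CSV columns and values here are made up for illustration, and the terminal bars stand in for a real matplotlib call:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sales data; in practice this would be a real CSV file on disk.
raw = """month,amount
jan,120
jan,80
feb,200
mar,50
mar,150
"""

# Aggregate amounts per month deterministically: same input, same chart, every time.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["month"]] += float(row["amount"])

# Quick terminal "chart"; swap in matplotlib for an actual image.
for month, total in totals.items():
    print(f"{month:>3} | {'#' * int(total // 20)} {total:.0f}")
```

No hallucinated numbers, no token costs, and you can rerun it on next month's data unchanged.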
Personally, I sometimes run a few saved images through a multimodal 8B-parameter local model on my computer, so I can automate giving them more descriptive names than randomnumbers.png, and that seems to work fine. I could do it by hand, but it would take hours or days, compared to minutes, and since it's not too important, it doesn't matter if it's occasionally wrong. The resource usage is also less of an issue, since it's my own computer.
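As a sketch, that workflow is basically the loop below. The `describe_image` stub stands in for whatever local multimodal model you run (e.g. a llava-class model via Ollama); only the filename cleanup is concrete:

```python
import re
from pathlib import Path

def caption_to_filename(caption: str, max_len: int = 60) -> str:
    """Turn a free-text caption into a safe snake_case filename stem."""
    words = re.findall(r"[a-z0-9]+", caption.lower())
    return "_".join(words)[:max_len].rstrip("_") or "unnamed"

def describe_image(path: Path) -> str:
    # Placeholder: call your local multimodal model here
    # (e.g. a llava-class model through the Ollama API).
    raise NotImplementedError

def rename_images(folder: Path) -> None:
    # Rename every randomnumbers.png-style file to a descriptive name.
    for img in folder.glob("*.png"):
        stem = caption_to_filename(describe_image(img))
        img.rename(img.with_name(f"{stem}.png"))
```

Since a bad caption just means a bad filename, a wrong answer costs nothing, which is exactly why this is a reasonable use.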
holy fucking shit man. this community has a clear astroturfing problem.
I recently used it to install NVIDIA L40S drivers on Red Hat 9 and pass the GPU through to my Frigate instance. Took me a few minutes. It would have been a lot of reading to find the exact answers manually.
I cut my LLM usage to almost zero because of environmental and political reasons, but it was helpful enough to wish it could be sustainable and not another tool in the dystopian take on the world.
Local models are advanced enough that you can run them as needed without a datacenter.
The datacenter craze is basically just an excuse to get the banks (and eventually the American taxpayer, via bailouts when they fail) to fund your local nepotistic infrastructure rollout.
The entire US economy is built around a purposeful boom/bust system, as it's very efficient at "bagging" people who don't know the rules.
An idiot intern is better.
Well, 100%, because the intern WILL eventually learn. That's the entire difference. It won't be about adjusting the prompt, or adding yet another layer of "reasoning", or waiting for the next "version" with a different code name and a .1% larger dataset. No, you'll point out to the intern that they made a mistake, try not to call them an idiot, explain WHY it's wrong, optionally explain how to do it right, and THEN, the next time, they'll avoid it or fix it themselves.
That's the entire point of having an intern: initially they suck, BUT as you train them, they don't! Meanwhile LLMs, despite the technical jargon hijacked by the marketing department, don't "learn" (from "machine learning") or train (from "training dataset") or have "neurons" (from "artificial neural networks"); rather, it's just statistics on the next most probable word, sounding right with 0 "reasoning".
Oooh buddy, it isn't even just young engineers using these to destroy their designs. I was at a building construction conference recently where one of the presentations was about how AI is going to "give us so much time back" as designers. He then told us how the AIs still hallucinate math, and that the AI companies are not liable for their output. After the presentation, another person and I asked him who exactly the liability will lie with, and how someone could protect themselves from it without spending all the time we "save" meticulously checking the outputs. His response was to generate thousands of outputs for the same task and then only check "the best versions." Okay, so how will we know which are the "best" without meticulously checking thousands of them?
Anyway, afterwards, I asked my colleagues from all around the country who were at the conference for their opinion on AI and the presentation, and most of these 50-60-year-old men told me they regularly use it in their work already. So be prepared for things constructed in the past few years to be incredibly dangerous facilities to be in or near.
Yeah, don’t generate code with it. Treat it like StackOverflow. It does pretty good at that.
This is the only way I use it, and I do it grudgingly only because AI has ironically also ruined the web and web search. It’s also a last resort for when Kagi isn’t helping.
For programming, at least it's a good way to speed up things that you know how to do but take some time to type, or you don't remember the syntax of. But relying on AI any more than that usually means you'll be adding free technical debt and debugging time or becoming dependent on it.
Claude with superpowers / planning has shifted my opinion on AI feature development. Iterating on the spec and making it as unambiguous as possible gives good results when you clear the context and have it implement the plan. Even if it starts to stray, you can just do a git reset and start a new session with the spec, adjusting it a bit, because time-wise you probably haven't invested much.
It also depends on the code base, if the code base has very clear separation of concerns, good documentation, and good contracts between layers then claude can handle it pretty well. If the code base is full of spaghetti code with multiple ways to do the same thing then AI will struggle with it. In our large legacy monolith repo it doesn't do well, in our micro service repos it does great.
Also, time-wise it may not seem like a benefit if you just set it going and wait for it to complete; the productivity advantage comes from running a couple of sessions in parallel.
Also, context is key: having a good claude.md file in the repo to explain patterns helps it avoid pitfalls. If its only context is the prompt you gave it, and you tell it to implement a feature without a plan / spec outlined, it will generate shit code.
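For reference, here's a minimal claude.md sketch. The sections, file paths, and type names are just examples of what's worked for me, not an official format or a real repo:

```markdown
# Project notes for Claude

## Architecture
- API layer in `src/api/`, business logic in `src/services/`.
- No DB calls outside `src/repos/`.

## Conventions
- New endpoints follow the pattern in `src/api/users.ts`.
- One way to do things: use the existing `Result<T>` error type; never throw across layers.

## Workflow
- Write the plan to `PLAN.md` first; implement only after the plan is approved.
```

The point is to encode the "clear separation of concerns" and "good contracts" up front, so the model doesn't have to infer them from spaghetti.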
Making it as unambiguous as possible
If only we had a way to communicate with machines in a reliable, deterministic and unambiguous way.
For every post I see of people complaining, I have to imagine there are 100 other people that get value out of LLMs quietly.