ebu

joined 2 years ago
[–] ebu@awful.systems 10 points 3 months ago (1 children)

ah, yes, i'm certain the reason the slop generator is generating slop is because we haven't gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i'm certain this model, unlike literally every other model of the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm

[–] ebu@awful.systems 6 points 3 months ago

no worries -- i am in the unfortunate position of very often needing to assume the worst in others, and maybe my reading of you was harsher than it should have been; for that i am sorry. but...

"generative AI" is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.

LLMs are inherently unfit for every purpose. they might be "useful", in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception is when you need a lot of text in a hurry and don't care about its quality or accuracy -- in other words, spam and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.

so when ostensibly-smart people, especially ones running public information systems, propose using LLMs for things LLMs cannot do, such as explaining species identification procedures, it means either 1) they've been suckered into believing LLMs are capable of those things, or 2) they're being paid to propose them. sometimes it's a mix of both. either way, it very much indicates those people should not be trusted.

furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the industry is openly hostile to the idea of "consent", actively trying to undermine it at every turn. that hostility made it all the way into the supposedly-reassuring statement on that forum post about the mystery demo LLMs -- note the use of the phrase "making it opt-out". why not "opt-in"? why not "with consent"?

it's no wonder that people are leaving -- the writing is more or less on the wall.

[–] ebu@awful.systems 13 points 3 months ago (2 children)
  1. no one is assuming iNaturalist is being malicious; saying otherwise is just well-poisoning.
  2. there is no amount of testing that can ever overcome the inherently stochastic output of LLMs (see the toy sampler sketch after this list). the "best-case" scenario is text-shaped slop that is more convincing, but not any more correct, which is an anti-goal for iNaturalist as a whole
  3. we've already had computer vision for ages. we've had google images for twenty years. there is absolutely no reason to bolt a slop generator of any kind to a search engine.
  4. "staff is very much connected with users" obviously should come with some asterisks given the massive disconnect between staff and users on their use and endorsement of spicy autocorrect
  5. framing users who delete their accounts in protest of machine slop being put up on iNaturalist -- which is the actual point of contention here -- as over-reacting to the mere mention of AI, and thus as basically the same as the AI boosters? well, it's gross. iNat et al. explicitly signaled that they were going to inject AI garbage into their site. users who didn't like that voted with their accounts and left. you don't get to post-hoc ascribe them a strawman rationale and declare them basically the same as the promptfans, fuck off with that
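
to make point 2 concrete: here's a toy sampler -- a minimal sketch, not any real model's API, with the vocabulary, logits, and temperature all invented for illustration. the last step of generation is a probability distribution over next tokens, and decoding *draws* from it, so the same prompt yields different answers across runs. testing can measure that error rate; it cannot remove it.

```python
import math
import random

# hypothetical next-token logits for a prompt like
# "the scientific name of the monarch butterfly is ..."
logits = {
    "Danaus": 2.0,   # the right genus -- merely the *likeliest* token
    "Papilio": 1.4,  # convincing-sounding wrong genus
    "Vanessa": 1.1,  # another convincing wrong genus
}

def sample_next_token(logits, temperature=0.8):
    """softmax over the logits, then draw one token at random."""
    weights = [(tok, math.exp(l / temperature)) for tok, l in logits.items()]
    r = random.random() * sum(w for _, w in weights)
    for tok, w in weights:
        r -= w
        if r <= 0:
            return tok
    return weights[-1][0]  # floating-point fallback

# ten runs of the same "test": a mix of right and wrong answers.
# no amount of re-running converges on "always correct".
print([sample_next_token(logits) for _ in range(10)])
```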
[–] ebu@awful.systems 13 points 4 months ago

"emotional"

let me just slip the shades on real quick

"womanly"

checks out

[–] ebu@awful.systems 8 points 4 months ago

don't post slop, nobody wants to read any of that

[–] ebu@awful.systems 1 points 5 months ago (1 children)

your third sentence here is a non-sequitur -- do you mean to say that disposable razors work better on longer hair than safety razors do?

[–] ebu@awful.systems 1 points 8 months ago* (last edited 8 months ago) (1 children)

i can admit it's possible i'm being overly cynical here, and that it's just sloppy journalism on the part of Raffaele Huang, his editor, or the WSJ. but i still think it's a little suspect, on the grounds that we have no idea how many times they had to restart training due to the model borking, or what other experiments and hidden costs there were -- and that's before things like the necessary capex, which goes unmentioned in the original paper (though they note using a 2048-GPU cluster of H800s, which would put them down around $40m). i'm thinking in the mode of "the whitepaper exists to serve the company's bottom line"
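
for what it's worth, that $40m is just naive arithmetic -- a back-of-envelope sketch where the per-GPU price is my assumption, not a number from the paper:

```python
# rough capex estimate for the cluster noted in the paper.
GPU_COUNT = 2048         # H800 cluster size from the DeepSeek paper
UNIT_PRICE_USD = 20_000  # assumed per-H800 price; street prices vary a lot

capex = GPU_COUNT * UNIT_PRICE_USD
print(f"hardware alone: ~${capex / 1e6:.0f}M")
# ~$41M -- before networking, power, failed runs, salaries, etc.
```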

btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn

[–] ebu@awful.systems 0 points 8 months ago* (last edited 8 months ago) (3 children)

consider this paragraph from the Wall Street Journal:

DeepSeek said training one of its latest models cost $5.6 million, compared with the $100 million to $1 billion range cited last year by Dario Amodei, chief executive of the AI developer Anthropic, as the cost of building a model.

you're arguing to me that they technically didn't lie -- but it's pretty clear that some people walked away with a false impression of the cost of their product relative to their competitors' products, and they financially benefitted from people believing in this false impression.

[–] ebu@awful.systems 0 points 8 months ago (9 children)

i think you're missing the point that "Deepseek was made for only $6M" has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.

to stretch your metaphor, it's like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), when, come to find out, that number is just how much it costs to fill the gas tank once
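
to put rough numbers on that metaphor -- a sketch in which everything except the reported $5.6m training-run figure is either an assumption or an explicit unknown:

```python
# the headline number is one fill-up (the final training run), not
# the car (everything it took to get there).
FINAL_RUN_USD = 5.6e6  # the figure DeepSeek actually reported

# everything below is assumed or undisclosed -- not from the paper
hidden_costs = {
    "gpu cluster capex": 40e6,      # see the H800 estimate upthread
    "failed/restarted runs": None,  # not disclosed
    "prior experiments": None,      # not disclosed
    "salaries, data, infra": None,  # not disclosed
}

known = FINAL_RUN_USD + sum(v for v in hidden_costs.values() if v)
print(f"known lower bound: ~${known / 1e6:.0f}M; the true total isn't recoverable from the paper")
```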

[–] ebu@awful.systems 1 points 1 year ago* (last edited 1 year ago) (1 children)

"blame the person, not the tools" doesn't work when the tools' marketing team is explicitly touting said tool as a panacea for all problems. on the micro scale, sure, the wedding planner is at fault, but if you zoom out even a tiny bit it's pretty obvious what enabled them to fuck up for as long and as hard as they did
