this post was submitted on 11 Aug 2025
22 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

(page 3) 50 comments
[–] BlueMonday1984@awful.systems 7 points 2 months ago (1 children)

Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.

Me, two months ago

Looks like I was on the money - Reddit's begun limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit's pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon - Reddit's given them a pretty solid excuse to use.

[–] Soyweiser@awful.systems 7 points 2 months ago

It sucks how much of the usefulness of the internet is being trashed by this.

[–] V0ldek@awful.systems 7 points 2 months ago* (last edited 2 months ago) (5 children)

Can anyone explain to me why tf do promptfondlers hate GPT5 in non-crazy terms? Actually I have a whole list of questions related to this, I feel like I completely lost any connection to this discourse at this point:

  1. Is GPT5 "worse" in any sensible definition of the word? I've long complained that there is no good scientific metric to grade those on but like, it can count 'r's in "strawberry" so I thought it's supposed to be nominally better?
  2. Why doesn't OpenAI simply allow users to use the old model (4o, I think)? It sounds like the simplest thing to do.
  3. Do we know if OpenAI actually changed something? Is the model different in any interesting way?
  4. Bonus question: what the fuck is wrong with OpenAI's naming scheme? 4, then 4o? And there's also o4 that's something else??
[–] corbin@awful.systems 7 points 2 months ago (1 children)

Oversummarizing and using non-crazy terms: The "P" in "GPT" stands for "pirated works that we all agree are part of the grand library of human knowledge". This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for "real life hate first", which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.

Counting letters in words is something that GPT will always struggle with, due to maths. It's a good example of why Willison's "calculator for words" metaphor falls flat.
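The letter-counting failure is easy to see once you remember the model never receives characters, only subword tokens. A minimal Python sketch (the token split shown is illustrative, not any real vocabulary):

```python
# A model never sees "strawberry" as eleven characters; a subword
# tokenizer hands it over as a few opaque token IDs instead.
# The split below is illustrative -- real BPE vocabularies differ.
tokens = ["str", "aw", "berry"]

# Counting characters is trivial when you actually have characters:
assert "strawberry".count("r") == 3

# But from the token IDs alone, per-character structure is gone;
# the model would have had to memorize the spelling of every token.
word = "".join(tokens)
assert word == "strawberry"
```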

  1. Yeah, it's getting worse. It's clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI's products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google's offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
  2. I think that they've done that? I hear that they've added an option to use their GPT-4o product as the underlying reasoning model instead, although I don't know how that interacts with the rest of the frontend.
  3. We don't know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can't make models better! It can only force a model to be locked into one particular biased worldview.
  4. Bonus sneer! OpenAI's founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
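For anyone who wants the Futamura reference unpacked: the first projection says that specializing an interpreter to one fixed program yields a compiled version of that program. A toy sketch, using partial application as a stand-in for a real partial evaluator (the tiny language here is made up for illustration):

```python
# First Futamura projection, sketched: fix the interpreter's
# "program" argument and you get something that behaves like a
# compiled artifact. functools.partial is only a stand-in for a
# genuine partial evaluator, which would also optimize the result.
from functools import partial

def interpret(program, x):
    """Toy interpreter: a program is a list of (op, arg) pairs."""
    acc = x
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc

double_and_inc = [("mul", 2), ("add", 1)]

# "Compiling" = specializing the interpreter to one program.
compiled = partial(interpret, double_and_inc)

assert interpret(double_and_inc, 10) == compiled(10) == 21
```

The second and third projections specialize the specializer itself, which is why exactly three of them exist.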
[–] scruiser@awful.systems 6 points 2 months ago

After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.

That's actually more batshit than I thought! Like I thought Sam Altman knew the AGI thing was kind of bullshit and the hesitancy to stick a GPT-5 label on anything was because he was saving it for the next 10x scaling step up (obviously he didn't even get that far because GPT-5 is just a bunch of models shoved together with a router).
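For a sense of what "models shoved together with a router" means in practice, here's a hypothetical sketch. Nothing in it reflects OpenAI's actual routing logic, which is not public; the model names and heuristics are invented:

```python
# Hypothetical model router: dispatch each prompt to a cheap or
# expensive backend based on crude heuristics. All names invented.
def route(prompt: str) -> str:
    """Pick a (made-up) backend model for a prompt."""
    reasoning_markers = ("prove", "step by step", "debug")
    if any(m in prompt.lower() for m in reasoning_markers):
        return "expensive-reasoning-model"
    if len(prompt) < 40:
        return "cheap-small-model"
    return "default-model"

assert route("hi") == "cheap-small-model"
assert route("Prove that the sum of two odd numbers is even.") \
    == "expensive-reasoning-model"
```

The economic appeal is obvious: most traffic goes to the cheap branch. The catch, as discussed downthread, is that routing also breaks assumptions like prompt caching.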

[–] fullsquare@awful.systems 6 points 2 months ago* (last edited 2 months ago)
  1. from what i can tell, people who roleplayed bf/gf with the idiot box aka grew a parasocial relationship with the idiot box did that on 4o, and now they can't make it work on 5 so they got big mad
  2. i think it's only if they pay up $200/mo, previously it was probably available at lower tiers
  3. yeah they might have found a way to blow money faster somehow https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-5-power-consumption-could-be-as-much-as-eight-times-higher-than-gpt-4-research-institute-estimates-medium-sized-gpt-5-response-can-consume-up-to-40-watt-hours-of-electricity ed zitron also says that while some of the prompt could be cached previously, it looks like that can't be done now because there's a fresh new thing that chooses the model for the user, while some of these new models are supposedly even heavier. ironic, since openai's intention seemed to be compute savings, because some of that load presumably was to be handled by smaller models
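Back-of-envelope on the figures in that Tom's Hardware headline (up to 40 Wh per medium-sized GPT-5 response, up to eight times GPT-4). The daily request volume below is a made-up round number purely for scale:

```python
# Arithmetic sketch from the headline estimates only; the daily
# response count is hypothetical, chosen just to show the scale.
gpt5_wh_per_response = 40                      # upper estimate, Wh
gpt4_wh_per_response = gpt5_wh_per_response / 8  # implied ~5 Wh

assumed_daily_responses = 1_000_000_000        # hypothetical
daily_gwh = gpt5_wh_per_response * assumed_daily_responses / 1e9

assert gpt4_wh_per_response == 5.0
assert daily_gwh == 40.0  # 40 GWh/day at the upper estimate
```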
load more comments (3 replies)
[–] blakestacey@awful.systems 7 points 2 months ago (4 children)

Dan Olson finds a cursed subreddit:

R/aitubers is all the entitlement of NewTubers but exclusively for people openly churning out slop.

“I’ve automated 2-4 videos daily, zero human intervention, I spend a half hour a week working on this, why am I not getting paid yet?”

The original reddit post:

I’ve been running my YouTube channel for about 3 months. It’s focused on JavaScript and React tutorials, with 2–4 videos uploaded daily. The videos are fully automated (AI-generated with clear explanations, code demos, and screen recordings).

Right now:

  • Each video gets only a few views (1–10 views).

  • I tried Google Ads ($200 spent) → got ~20 subscribers and ~20 hours of watch time.

  • The Google campaigns brought thousands of uncounted views, and the number of Likes was much higher than dislikes.

  • Tried Facebook/Reddit groups → but most don’t allow video posting, or posts get very low engagement.

My goal is to reach YPP within 6 months, but the current pace is not enough. I’m investing about $300/month in promotion and I can spend 30 minutes weekly myself.

👉 What would you suggest as the most effective strategy to actually get there?
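A quick sanity check on the poster's own numbers against YouTube's published Partner Program thresholds (1,000 subscribers and 4,000 public watch hours), assuming ad performance stays flat:

```python
# The arithmetic the poster skipped, using their reported results.
spent = 200           # dollars of Google Ads
subs_gained = 20
hours_gained = 20

cost_per_sub = spent / subs_gained     # $10 per subscriber
cost_per_hour = spent / hours_gained   # $10 per watch hour

# YPP needs 1,000 subscribers AND 4,000 watch hours, so the
# binding constraint is whichever costs more to buy.
cost_to_ypp = max(1_000 * cost_per_sub, 4_000 * cost_per_hour)
months_at_budget = cost_to_ypp / 300   # at $300/month promotion

assert cost_to_ypp == 40_000.0
assert round(months_at_budget, 1) == 133.3  # ~11 years, not 6 months
```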

load more comments (4 replies)
[–] BlueMonday1984@awful.systems 6 points 2 months ago (1 children)

New piece from Brian Merchant, about the growing power the AI bubble's granted Microsoft, Google, and Amazon: The AI boom is fueling a land grab for Big Cloud

load more comments (1 replies)
[–] o7___o7@awful.systems 6 points 2 months ago

Palantir's public relations team explains how it helped America win the Global War on Terror

https://news.ycombinator.com/item?id=44894910

[–] o7___o7@awful.systems 6 points 2 months ago* (last edited 2 months ago)

"usecase" is a cursed term. It's an inverted fnord that lets the reader know that whatever follows can be safely ignored.

[–] bitofhope@awful.systems 6 points 2 months ago

If I ever get the urge to start a website for creatives to sell their media, please slap me in the face and remind me it will absolutely not be worth it.

[–] scruiser@awful.systems 6 points 2 months ago (3 children)

This was discussed last week, but I looked at the comments and noticed someone getting slammed for... checks notes... noting that Eliezer wasn't clear on what research paper he was actually responding to (multiple other comments are kind of confused, because they assume he means one paper, then other comments correct them that he obviously meant another). The commenter of course edits to back-pedal.

load more comments (3 replies)