this post was submitted on 02 Jun 2025

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 4 comments
[–] Moonrise2473@feddit.it 1 points 2 weeks ago (1 children)

The insane part is that since the website is powered by WordPress, the scrapers could access all the posts in a single JSON file.

I was also exasperated by the fucking scrapers reading the same fucking page 20 times a second on posts that hadn't gotten new content in a decade, so I migrated my blog to Hugo, and I was completely shocked to discover that by default every WordPress blog comes with an unauthenticated API that lets literally everyone get the whole blog content as JSON. Why the fuck are you wasting my server power scraping the HTML when you can get easy JSON??????? Take that fucking JSON and subscribe to the RSS feed to get the next post, which will be published sometime next decade. If you refresh that fucking URL 1000 times a day you will get the same fucking stuff, not a new magical article.
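For context, the endpoint in question is the WordPress REST API that ships enabled by default since WordPress 4.7 (`/wp-json/wp/v2/posts`). A minimal sketch of how little work a scraper actually needs, assuming a hypothetical site at `https://example.com` (WordPress reports the page count in the `X-WP-TotalPages` response header, and caps `per_page` at 100):

```python
# Sketch: fetch every post from a WordPress site via its built-in REST API.
# "https://example.com" is a placeholder; /wp-json/wp/v2/posts is the
# standard endpoint WordPress exposes without authentication.
import json
import urllib.request


def posts_url(site, page=1, per_page=100):
    """Build the REST endpoint URL (WordPress caps per_page at 100)."""
    return f"{site}/wp-json/wp/v2/posts?per_page={per_page}&page={page}"


def fetch_all_posts(site):
    """Page through the API using the X-WP-TotalPages response header."""
    posts, page, total_pages = [], 1, 1
    while page <= total_pages:
        with urllib.request.urlopen(posts_url(site, page)) as resp:
            total_pages = int(resp.headers.get("X-WP-TotalPages", 1))
            posts.extend(json.load(resp))
        page += 1
    return posts
```

A couple of requests like this and the entire archive is gone, which is the commenter's point: there is no need to hammer the rendered HTML at all.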

I mitigated the issue with Wordfence by setting a rule like "more than 2 pages requested within a second = IP banned for a year".
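The rule described above is a sliding-window rate limit with a long ban attached. A minimal sketch of the idea (this is an illustration of the general technique, not Wordfence's actual internals; the class and parameter names are made up):

```python
# Sketch of "more than `limit` requests within `window` seconds = banned":
# track recent request timestamps per IP and ban on the first violation.
import time
from collections import defaultdict, deque


class BanningRateLimiter:
    def __init__(self, limit=2, window=1.0, ban_seconds=365 * 24 * 3600):
        self.limit = limit                # max requests allowed per window
        self.window = window              # window length in seconds
        self.ban_seconds = ban_seconds    # ban duration (a year, per the rule)
        self.hits = defaultdict(deque)    # ip -> recent request timestamps
        self.banned_until = {}            # ip -> time the ban expires

    def allow(self, ip, now=None):
        """Return True if this request may proceed, False if it is blocked."""
        now = time.monotonic() if now is None else now
        if self.banned_until.get(ip, 0) > now:
            return False
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.limit:
            self.banned_until[ip] = now + self.ban_seconds
            return False
        return True
```

The third request inside one second trips the ban, and every request from that IP is refused until the ban expires.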

Now, why WordPress would include an unauthenticated API that lets anyone make a full unauthorized copy of the site in literally seconds is beyond me. There's no valid reason to have it public without any authentication. That API shit doesn't make sense; why should a website accept user signups from bots via an API by default?

[–] jlow@discuss.tchncs.de 1 points 2 weeks ago

Ohhhh, thanks for the mention of Wordfence. I'd love for Anubis to be available for WordPress, but I'll take this in the meantime!

[–] HedyL@awful.systems 1 points 3 weeks ago* (last edited 3 weeks ago)

Even if it's not the main topic of this article, I'm personally pleased that RationalWiki is back. And if the AI bots are now getting the error messages instead of me, then that's all the better.

Edit: But also - why do AI scrapers request pages that show differences between versions of wiki pages (or perform other similarly complex requests)? What's the point of that anyway?

[–] RVGamer06@sh.itjust.works 0 points 1 week ago