this post was submitted on 26 Nov 2025
265 points (96.8% liked)

Selfhosted


Got a warning for my blog going over 100GB in bandwidth this month... which sounded incredibly unusual. My blog is text and a couple images and I haven't posted anything to it in ages... like how would that even be possible?

Turns out it's possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for 'Unknown robot'? This is actually bonkers.

Edit: As Thunraz points out below, there's a footnote that reads "Numbers after + are successful hits on 'robots.txt' files" — so it's a hit count, not scientific notation.

Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, burning through 100GB of bandwidth in the first 12 days of November. That's when my account was suspended for exceeding bandwidth (an artificial limit I put in place a while back and forgot about...). That's also why the 'last visit' for all the bots is November 12th.

you are viewing a single comment's thread
[–] panda_abyss@lemmy.ca 16 points 19 hours ago (2 children)

I don’t really get those bots.

Like, there are bots that are trying to scrape product info, or prices, or scan for quantity fields. But why the hell do some of these bots behave the way they do?

Do you use Shopify by chance? With Shopify, the bots could be scraping the product.json endpoint unless it's disabled in your theme. Shopify seems to expose the updated-at timestamp from the database in its headers and product data, so inventory quantity changes result in a timestamp change that can be used to estimate your sales.

There are companies that do that and sell sales numbers to competitors.

No idea why they keep inventory info on their products table; it's probably a performance optimization.
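Roughly, the estimation trick looks like this. A hypothetical Python sketch — the `/products.json` path and `updated_at` field are Shopify's public product feed, but `poll_products` and `changed_products` are made-up names, and real scrapers would handle pagination and auth:

```python
# Sketch: estimate a store's sales cadence by diffing successive
# snapshots of the public /products.json feed a Shopify store exposes.
import json
from urllib.request import urlopen

def poll_products(shop_url):
    """Fetch the store's public product feed once."""
    with urlopen(f"{shop_url}/products.json?limit=250") as resp:
        data = json.load(resp)
    # Map product id -> updated_at; inventory changes bump this timestamp.
    return {p["id"]: p["updated_at"] for p in data["products"]}

def changed_products(before, after):
    """Products whose updated_at moved between two polls --
    a rough proxy for inventory (i.e. sales) activity."""
    return sorted(pid for pid, ts in after.items() if before.get(pid) != ts)

# Offline illustration with two fake snapshots:
snap1 = {1: "2025-11-01T10:00:00Z", 2: "2025-11-01T10:00:00Z"}
snap2 = {1: "2025-11-01T10:05:00Z", 2: "2025-11-01T10:00:00Z"}
print(changed_products(snap1, snap2))  # product 1 likely sold or restocked
```

Poll that every few minutes and count timestamp bumps per product, and you have a crude sales estimate to sell to competitors.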

I haven’t really done much scraping work in a while, not since before these new stupid scrapers started proliferating.

[–] dual_sport_dork@lemmy.world 18 points 19 hours ago (3 children)

Negative. Our solution is completely home grown. All artisanal-like, from scratch. I can't imagine I reveal anything anyone would care about much except product specs, and our inventory and pricing really doesn't change very frequently.

Even so, you'd think someone bothering to run a botnet against our site would distribute page loads across all of our products, right? Not just one. It's nonsensical.

[–] Nighed@feddit.uk 3 points 14 hours ago (1 children)

Can you just move that product to a new URL? What happens?

[–] dual_sport_dork@lemmy.world 6 points 14 hours ago (1 children)

It doesn't quite work that way, since the URL is also the model number/SKU which comes from the manufacturer. I suppose I could write an alias for just that product but it would become rather confusing.

What I did experiment with was temporarily deleting the product altogether for a day or two. (We barely ever sell it. Maybe 1 or 2 units of it a year. This is no great loss in the name of science.) This causes our page to return a 404 when you try to request it. The bots blithely ignored this, and continued attempting to hammer that nonexistent page all the same. Puzzling.

[–] DoGeeseSeeGod@lemmy.blahaj.zone 3 points 12 hours ago (1 children)

This is far beyond my limited coding experience, but I do enjoy a good puzzle. In your opinion, do you think it could be some gen AI scraper? Like the AI is deciding what page to scrape, and because it's stupid it keeps selecting your page?

Alternatively, I wonder if the product page just happens to have an unusual combination of keywords the bot is looking for. Maybe it's looking for cheap RAM prices and the page has some keywords related to RAM?

Good luck, I hope you're able to get them to stop hammering that page.

[–] dual_sport_dork@lemmy.world 2 points 12 hours ago

In my case the pattern appears to be some manner of DDoS botnet, probably not an AI scraper. The request origins are way too widespread, and none of them resolve down to anything that's obviously a datacenter or any sort of commercial enterprise. It seems to be a horde of devices in consumer IP ranges that have probably been compromised by some malware package or another, and whoever controls it directed it at our site for some reason. It's possible that some bad actor is using a similar malware/bot-farm arrangement to scrape for AI training, but I doubt it. It doesn't fit the pattern of that sort of thing, from what I've seen.

Anyway, my script's been playing automated whack-a-mole with their addresses and steadily filtering them all out, and I geoblocked the countries where the largest numbers of offenders were. ("This is bad practice!" I hear the hue and cry from specific strains of bearded louts on the Internet. To that I say: maybe, but I don't ship to Brazil or Singapore or India, so I don't particularly care. If someone insists on connecting through a VPN from one of those regions for some reason, that's their own lookout.)

They seem to have more or less run out of compromised devices to throw at our server, so now I only see one such request every few minutes rather than hundreds per second. I shudder to think how long my firewall's block list is by now.
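For anyone wanting to try something similar: the whack-a-mole idea can be sketched in a few lines of Python. This is not the commenter's actual script — the log format (common/combined access log), the hit threshold, and the nftables set name `blocklist` are all assumptions to adapt to your own setup:

```python
# Sketch: scan an access-log sample, count hits per IP, and emit
# firewall commands for anything that crosses a threshold.
import re
from collections import Counter

LOG_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}) ")  # leading IP field

def offenders(log_lines, threshold):
    """Return IPs that appear more than `threshold` times in the sample."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            hits[m.group(1)] += 1
    return sorted(ip for ip, n in hits.items() if n > threshold)

def block_commands(ips):
    """Emit firewall commands (nftables shown; assumes a 'blocklist' set exists)."""
    return [f"nft add element inet filter blocklist {{ {ip} }}" for ip in ips]

sample = [
    '203.0.113.7 - - [12/Nov/2025] "GET /product/SKU123 HTTP/1.1" 200 -',
    '203.0.113.7 - - [12/Nov/2025] "GET /product/SKU123 HTTP/1.1" 200 -',
    '203.0.113.7 - - [12/Nov/2025] "GET /product/SKU123 HTTP/1.1" 200 -',
    '198.51.100.2 - - [12/Nov/2025] "GET / HTTP/1.1" 200 -',
]
print(block_commands(offenders(sample, 2)))  # blocks only the repeat offender
```

Run it from cron against the last few minutes of logs and you get roughly the automated filtering described above; in production you'd also want an expiry so blocks age out.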

[–] panda_abyss@lemmy.ca 9 points 19 hours ago

Yeah, that’s the kind of weird shit I don’t understand. Someone on the other end is paying for servers and a residential proxy to send that traffic. Why?

[–] lka1988@lemmy.dbzer0.com 2 points 15 hours ago (1 children)

Could it be a competitor for that particular product? Maybe they hired some foreign entity to hit anything related to their own product?

[–] dual_sport_dork@lemmy.world 5 points 15 hours ago (1 children)

Maybe, but I also carry literally hundreds of other products from that same brand including several that are basically identical with trivial differences, and they're only picking on that one particular SKU.

[–] DoGeeseSeeGod@lemmy.blahaj.zone 1 points 12 hours ago (1 children)

Have you googled the SKU to see if anything else happens to share the number?

[–] dual_sport_dork@lemmy.world 1 points 12 hours ago

I have and there's nothing noteworthy, other than tons of other retailers selling the same thing of course.

[–] porcoesphino@mander.xyz 2 points 15 hours ago* (last edited 10 minutes ago)

Have you ever tried writing a scrapper? I have for offline reference material. You'll make a mistake like that a few times and know but there are sure to be other times you don't notice. I usually only want a relatively small site (say a Khan Academy lesson which doesn't save text offline, just videos) and put in a large delay between requests but I'll still come back after thinking I have it down and it's thrashed something