Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
I run a modest Lemmy instance (lemmy.blehiscool.com). It’s not on the scale of lemmy.world or anything, but it’s been around long enough that I’ve had to deal with some real growth and scaling issues. I’ll try to focus on what actually matters in practice rather than theory.
Infrastructure
I’m running everything via Docker Compose on a single VPS (22GB RAM, 8 vCPU). That includes Postgres, Pictrs, and the Lemmy services.
This setup is great right up until it suddenly isn’t.
The main scaling issue I hit was federation backlog. At one point, the queue started piling up badly, and the fix was increasing federation worker threads (I’m currently at 128).
If you run into this, check your `lemmy_federate` logs. If you start seeing `Waiting for X workers` messages, that's your early warning sign.
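If you want to alert on this rather than eyeball the logs, a few lines of Python can count backlog warnings in whatever you pipe out of `docker compose logs`. The exact log wording is an assumption here; match the pattern to what your `lemmy_federate` container actually prints:

```python
import re

# The wording below is an assumption: adjust the pattern to match the
# warning your lemmy_federate container emits when the send queue backs up.
BACKLOG_PATTERN = re.compile(r"waiting for \d+ workers", re.IGNORECASE)

def count_backlog_warnings(log_lines):
    """Count log lines that look like federation backlog warnings."""
    return sum(1 for line in log_lines if BACKLOG_PATTERN.search(line))
```

Feed it the last few hundred log lines on a cron and page yourself when the count is nonzero for several runs in a row.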
What Actually Takes Time
Once your infrastructure is stable, the technical side becomes pretty low-effort.
The real time sink is moderation and community management. Easily 90% of the work.
On the technical side, my setup is pretty straightforward:
- `pg_dump` + VPS-level backups

Backups are boring right up until they aren't. Test your restores. Seriously.
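On the "test your restores" point: the most common silent failure is backups simply stopping, and even a dumb freshness check catches that. This is a sketch with assumed paths and thresholds; point it at wherever your `pg_dump` output actually lands:

```python
import time
from pathlib import Path

def check_backup(path, max_age_hours=26, min_bytes=1024):
    """Return a list of problems with the newest backup file; empty means OK.

    Thresholds are assumptions for a daily pg_dump job: anything older
    than ~a day or suspiciously tiny gets flagged.
    """
    backups = sorted(Path(path).glob("*.sql*"), key=lambda p: p.stat().st_mtime)
    if not backups:
        return ["no backup files found"]
    newest = backups[-1]
    problems = []
    age_hours = (time.time() - newest.stat().st_mtime) / 3600
    if age_hours > max_age_hours:
        problems.append(f"{newest.name} is {age_hours:.1f}h old")
    if newest.stat().st_size < min_bytes:
        problems.append(f"{newest.name} is suspiciously small")
    return problems
```

This only proves a file exists and is recent; the real test is still periodically restoring the dump into a scratch database and poking at it.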
Where the Gaps Are
The main gaps I’ve run into:
Pictrs storage growth: images from federated content add up fast. Keep an eye on disk usage.
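"Keep an eye on disk usage" can be as simple as walking the pict-rs data directory on a schedule. The mount path and the 50 GB threshold below are assumptions for a typical Docker volume; substitute your own:

```python
from pathlib import Path

def pictrs_usage_gb(root):
    """Total size in GiB of all files under the pict-rs data directory."""
    total = sum(p.stat().st_size for p in Path(root).rglob("*") if p.is_file())
    return total / 1024**3

def over_limit(root, limit_gb=50):
    """True once the pict-rs volume has grown past limit_gb."""
    return pictrs_usage_gb(root) > limit_gb
```

Graphing the number over time is more useful than the raw value, since the failure mode is the growth rate, not any one snapshot.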
Postgres tuning: as tables grow, the default configuration starts to fall behind.
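The usual starting heuristics are `shared_buffers` around 25% of RAM and `effective_cache_size` around 75%. These are generic rules of thumb, not Lemmy-specific advice, but they're a big step up from the stock defaults; a sketch that turns RAM into suggested `postgresql.conf` values:

```python
def suggest_pg_settings(ram_gb):
    """Rule-of-thumb postgresql.conf values for a box mostly running Postgres.

    These are generic starting points, not tuned numbers: measure with
    your own workload before locking anything in.
    """
    return {
        "shared_buffers": f"{ram_gb // 4}GB",            # ~25% of RAM
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # ~75% of RAM
        "work_mem": "16MB",  # per sort/hash; size against max_connections
    }
```

Note `work_mem` is allocated per sort or hash per connection, so keep it modest unless you've capped connections.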
Federation queue visibility: there's no great built-in "at a glance" view, so you end up relying on logs.
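One partial workaround is polling `/api/v3/federated_instances` and summarizing it yourself. The `federation_state` / `fail_count` field names below are an assumption based on what 0.19-era responses looked like for me; verify them against what your own instance actually returns before relying on this:

```python
def lagging_instances(resp, max_fail_count=5):
    """List linked instance domains whose outgoing queue looks unhealthy.

    NOTE: the federation_state / fail_count field names are assumptions;
    check them against your instance's /api/v3/federated_instances output.
    """
    lagging = []
    for inst in resp.get("federated_instances", {}).get("linked", []):
        state = inst.get("federation_state") or {}
        if state.get("fail_count", 0) >= max_fail_count:
            lagging.append(inst.get("domain", "?"))
    return lagging
```

Run it against the JSON response on a schedule and you get a crude "which peers am I failing to deliver to" view without grepping logs.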
My Actual Workflow
Nothing fancy, just consistent habits:
Daily (quick check):
Weekly:
Monthly:
As needed:
What I’d Do Differently
If I were starting over:
TL;DR
Happy to answer specifics if you’re planning a setup—there’s a lot of small gotchas that only show up once you’ve been running things for a while.
Hey, super helpful comment.
A few of the details you mentioned are exactly the kind of practical stuff I’m trying to collect, so I wanted to ask a bit more:
- Was it the `Waiting for X workers` log message that first tipped you off to the federation backlog?
- Is your backup strategy just `pg_dump` + VPS backups, or are you also separately backing up pictrs, configs, secrets, and the proxy setup?

I'm mostly interested in the boring operational side of running Lemmy long-term: backup/restore, federation lag, storage growth, and early warning signs before things get messy.
Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.