Selfhosted

45718 readers
506 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router but it's a computer that is upstream of my whole network and has two ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyways, has anyone had a chance to look at his guide? It's accompanied by two YouTube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWRT seems to be the most frequently recommended thing here, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it since it was kinda pricey; I got it when I upgraded to fiber because it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
submitted 32 minutes ago* (last edited 30 minutes ago) by ippokratis@lemmy.ml to c/selfhosted@lemmy.world

A Python script that reads your services from a YAML file, reads your auth key from a .env file, generates the Go binaries, creates the systemd units, and exposes the Tailscale subdomains and their funnels.
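For a rough idea of the input shape, a services.yml for this kind of setup might look something like the following (field names and values here are purely illustrative, not the script's actual schema):

    # illustrative services.yml - field names are examples only
    services:
      - name: immich
        port: 2283        # local port to expose on the tailnet
        funnel: true      # also expose publicly via Tailscale Funnel
      - name: jellyfin
        port: 8096
        funnel: false     # tailnet-only

    # illustrative .env
    TS_AUTHKEY=tskey-auth-xxxxxxxxxxxx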

Have a look

https://ippocratis.github.io/tailscale/

Thanks

4

I have a couple of Docker containers that use email as an alert system or just for info like completed jobs. The server I would like to host them on has, for whatever reason, blocked email ports, and you have to pay extra to have them turned on.

It seems to me, though, that I should be able to route all email through Tailscale to a local or even remote email client. For instance, in the case of setting the parameters in the Docker compose, it would look something like this:

SITE_NAME: mycoolwebsite
DEFAULT_FROM_EMAIL: email address
EMAIL_HOST: smtp
EMAIL_HOST_USER: email address
EMAIL_HOST_PASSWORD: email password
EMAIL_PORT: 100.x.x.x:587

Then, configure the local email client to listen on 100.x.x.x:587.
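One thing worth noting: most images expect the host and the port in separate variables, so the same idea would probably end up looking more like this (a sketch, assuming 100.x.x.x is the Tailscale IP of the machine running the SMTP relay):

    SITE_NAME: mycoolwebsite
    DEFAULT_FROM_EMAIL: email address
    EMAIL_HOST: 100.x.x.x        # Tailscale IP of the SMTP relay
    EMAIL_HOST_USER: email address
    EMAIL_HOST_PASSWORD: email password
    EMAIL_PORT: 587              # port only; the host goes in EMAIL_HOST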

Would this be doable, or is there a better way?

5

I would like to start managing ebooks and manga properly. I don't have many, but I plan on increasing my collection. My requirements are not so strict: I don't mind getting the books/manga myself, but I am also curious about setting up LazyLibrarian at some point - is it worth it? (I already have other *arrs installed on my server.) I had similar thoughts about Suwayomi.

My confusion starts from the accessories around all this: Calibre, CalibreWeb/Automated, Komga, Kavita, Audiobookshelf, etc. Does having a Kindle as my reading device limit my options to use any of these? Is setting up e.g. both CalibreWeb and Kavita redundant?

I guess my question is: how is everyone using these services for their own library? :)

6

My Jellyfin collection has finally become large enough that I have been able to cancel all my streaming services. My issue now is that I want to get rid of the Rokus that are hooked up to each TV.

Is there a good alternative? It MUST be family approved, meaning:

  1. It is not visible (no desktop/laptop hooked up)
  2. It is low power
  3. It has a simple remote control
  4. It supports Jellyfin
  5. It is relatively cheap (< $150)

I am sure I could build something out of a raspberry pi, but:

  1. I don't need another project I have to fiddle with
  2. It MUST support new codecs (H.265/AV1/AAC/...) as I want direct play from my server
  3. If it stutters/buffers once, it goes into the trash!

I've generally been mostly happy with my Roku, and my Pi-hole blocks most of their analytics, but last week I pressed the home button on my Roku and it started playing a video ad with audio. Completely unacceptable (that has happened twice in the last week). And in general, the more of this crap I can get out of my life the better!

7

I do a lot of my monitoring via MQTT, for my solar system etc. I currently use MQTT Alert and have set up my alerts to ring my phone at top volume until silenced. But I have missed more than one alert because I don't think the background agent is always active, and it doesn't necessarily start when I reboot the phone. While the application does "monitor" the MQTT connection, it only makes a short sound if it drops, with no follow-up until you notice that there was a notification and go back into it to figure out why the connection is down.

Does anyone have a foolproof way of getting things like security alerts to always trigger on the phone, without having to check the phone 10 times a day to be sure the application is on and the connection is active?
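To illustrate the kind of thing I'm after: something on the server side that watches the topic and pushes to the phone no matter what the app is doing, roughly along these lines (broker, topic, and ntfy URL are placeholders):

    # sketch: forward every MQTT alert to an ntfy topic as an urgent push
    # broker address, topic, and ntfy URL below are placeholders
    mosquitto_sub -h 192.168.1.10 -t 'solar/alerts/#' | while read -r msg; do
        curl -s -H "Priority: urgent" -d "$msg" https://ntfy.example.com/solar-alerts
    done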

8
9
submitted 20 hours ago* (last edited 20 hours ago) by No_Bark@lemmy.dbzer0.com to c/selfhosted@lemmy.world

Hello most excellent Selfhosted community,

I'm very new to this and am confused about how vulnerable my server and/or home network is with my current setup.

I just got a basic server up and running on a machine with Proxmox and a DAS for 10 TB of storage. I've got two LXCs running: one for a Docker-deployed arr stack and one for a Jellyfin + Jellyseerr stack. The Proxmox server is connected to a router attached to a fiber ONT. Everything is accessed over the home LAN and that's it.

Everything is working correctly and my containers are all talking to each other via IP addresses (gluetun network on the arr stack container). I've been reading up on reverse proxies and Tailscale to connect to the server from outside my LAN, and it's mostly gone over my head, but it did make me concerned about my network security.
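For anyone unfamiliar, by "gluetun network" I just mean the arr containers share gluetun's network namespace, roughly like this in compose (trimmed sketch, env vars and ports omitted):

    # trimmed sketch - real stack has more services, env vars, and ports
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
      sonarr:
        image: lscr.io/linuxserver/sonarr
        network_mode: "service:gluetun"   # all of sonarr's traffic leaves via the VPN tunnel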

Is my current setup secure, assuming strong passwords were used for everything? I think it is for my current uses, but I could use a sanity check; I'm tired. I'm open to any suggestions or advice.

I own a domain that I don't use for anything, so it would be cool to get reverse proxy working, but my attempts so far have failed and I learned I'm behind a double NAT (ONT and router) - and attempts to bypass that by setting the ONT into bridge mode have also failed. I don't really need to access anything from outside my home network right now - but I would like to in the future.

10
submitted 1 day ago* (last edited 1 day ago) by dengtav@lemmy.ml to c/selfhosted@lemmy.world

As Nextcloud has advanced to the point of being competitive in fully integrated government and corporate workflows, OpenCloud is getting more and more attention.

The fact that both are collaborative cloud platforms, designed to be self-hosted and mainly developed in and around Berlin out of the FOSS community, makes one ask about the differences.

The main difference I see is the software stack:

  • Nextcloud, as a fork of ownCloud, kept the PHP code base and is still mainly developed in PHP
  • OpenCloud, also a fork of ownCloud, did a complete rewrite in Go

Until now, Nextcloud is far more feature-complete than OpenCloud (yes, I know, people complain they should fix more bugs instead of adding new features), if we compare them with commercial competitors like MS Teams.

I like Nextcloud!

I deploy it for various groups, teams and associations whenever they need file sharing, calendar, contacts and tasks in one place. Almost every time I show them the functionality of Nextcloud Groups and the sharing possibilities, people are thrilled, because they didn't expect such a feature-rich tool. Although I sometimes wish it were more performant and easier to maintain, so non-tech people could take care of their hosting themselves.

Why OpenCloud?

Now, with OpenCloud, I am asking myself: why not just contribute to the existing collaborative cloud project Nextcloud? Why do your own thing?

Questions

So here I expect Go to be something of a game-changer (?). As you may have noticed, I am not a developer or programmer, so maybe there are obvious advantages I'm missing.

  • Will OpenCloud, at some point, surpass Nextcloud's feature completeness and performance, thanks to a more modern approach with Go?
  • Will Nextcloud with its huge PHP stack run into problems in the future because it can't compete with more modern architectures?
  • If you had to deploy a self-hosted cloud environment for a ~500-person organization for the long term: would you stick to the good old working PHP stack, or do you see possible advantages in the OpenCloud approach?

Thanks :)

11

It was recently announced that FTTH will soon (finally) be available in my market. The provider coming to town offers rates up to 8 Gbps.

I'm upgrading from DSL at <100 Mbps - really exciting! However, I will then face a bit of an issue.

I self host many services over my DSL, and use custom firmware on my router. My DSL modem is in a transparent bridging mode. I like the flexibility and customizability this setup provides.

The new service includes a WiFi 7 router, but that means I'll also potentially be subject to all the weird things providers like to do, like adding backdoors, opening shared WiFi networks, force deploying different firmware, etc. Plus I won't be running any kind of service on the router itself, which I do have today (transparent proxy etc). The router I have today is not going to enable me to touch the peak bandwidth available.

What are the best options to upgrade my LAN components so that I can support multi-gig internal networking speeds, ensure my self-hosted services all function normally, and take advantage of the bandwidth the ISP upgrade offers? In your personal opinion, is it worth it to invest in upgraded LAN components?

Anyone have experience converting from 1G LAN to 2.5G or even 10G?

Do I really need 8G FTTH? Of course not, but if I ever wanted to get the max out of it, what would that take?

12
13

Hi!

For the past few years I've been hosting most of my services on Contabo VPSs (Immich, Nextcloud, and a few others).

They used to offer a great price to specs ratio, but have lately been awful to me as a customer. (drastically increasing the price to force me to change product, and then doing the same weeks after migrating to their new product).

I don't really trust them anymore, especially since the only way to make them react to my problem was to give them a bad Trustpilot review.

So anyway, I am looking for advice on a different VPS provider based in the EU. I mostly need a lot of SSD space (800 GB minimum), at least 4 vCPU cores, and a decent amount of RAM. The maximum I'm willing to pay per month is around 20€.

Anything you could recommend? Ideally with a good track record of maintaining their prices.

Have a good weekend!

14

Happy Friday, r/selfhosted! Linked below is the latest edition of This Week in Self-Hosted, a weekly newsletter recap of the latest activity in self-hosted software and content.

This week's features include:

  • Hoarder's new name change
  • New round of Tailscale funding (cue the enshittification?)
  • Software updates and launches
  • A spotlight on Streamystats -- a self-hosted statistics-tracking platform for Jellyfin
  • A ton of great guides, videos, and content from the community

Thanks, and as usual, feel free to reach out with feedback!

15

Hello everyone! I recently got Mastodon running on my server and everything seems to be running fine.

The trouble I'm having now, though, is with my username being shown differently on different remote instances. I set my local domain to example.com and my web domain to mastodon.example.com, following the guide from Mastodon:
masto-docs

It mentions adding this to the nginx config of example.com:

location /.well-known/webfinger {
  add_header Access-Control-Allow-Origin '*';
  return 301 https://mastodon.example.com$request_uri;
}

So I did that, and it seems to be working: when I search my username from another Mastodon instance, my user shows up how I expect, user@example.com.

But when I search for my user on Pixelfed, it shows up as user@mastodon.example.com.

I'm using Nginx Proxy Manager instead of plain nginx, so maybe I added the webfinger rule in the wrong place? Also, from searching around online, it sounds like there might be other things I need to add for a broader federation fix rather than just Mastodon, but I'm not really sure what all that means.
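In case it helps with answers: the redirect itself can be checked with something like this (swap in the real domain and username); per the block above it should come back as a 301 pointing at mastodon.example.com:

    # replace the domain and user with the real ones
    curl -i 'https://example.com/.well-known/webfinger?resource=acct:user@example.com'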

fediverse custom domains

16

cross-posted from: https://lemmy.world/post/28083834

Windows 10/11 is a requirement because it will be a shared console, sorry y'all. I'm considering a refurb'ed model on eBay because some include Win10 Pro licenses, which would save ~$80 versus building from scratch.

The PC would be used for streaming media through Plex and maybe some light gaming via SteamLink, so specs aren't very important as long as it can handle basic web browsing and graphic output.

Has anyone here had any experience with these? I would imagine they're pretty no-fuss boxes given that they're business-class units, but I also understand that they're several years old. What's the upgradeability of them like? What specific models/SKUs would be recommended for this use, and are there any other SFF PCs that are comparable?

17

Is there a good solution for an entirely off-grid server?

Is it possible to use a smartphone hotspot/USB tethering for internet connection?

I have some solar panels & batteries and an old laptop (or I might get a Raspberry Pi), and I'm curious whether I could selfhost literally in the middle of nowhere, without a residential internet connection.

18

I am making this post in good faith

In my last post I asked about securely hosting Jellyfin given my specific setup. A lot of people misunderstood my situation, which caused the whole thread to turn into a mess, and I didn't get the help I needed.

I am very new to selfhosting, which means I don't know everything. Instead of telling me that I don't know something, please help me learn and understand. I am here asking for help, even if I am not very good at it, which I apologize for.

With that said, let me reoutline my situation:

I use my ISP's default router, and the router is owned by Amazon. I am not the one managing the router, so I have no control over it. That alone means I have significant reason not to trust my own home network, and it means I employ the use of ProtonVPN to hide my traffic from my ISP and I require the use of encryption even over the LAN for privacy reasons. That is my threat model, so please respect that, even if you don't agree with it. If you don't agree with it, and don't have any help to give, please bring your knowledge elsewhere, as your assistance is not required here. Thank you for being respectful!

Due to financial reasons, I can only use the free tier of ProtonVPN, and I want to avoid costs where I can. That means I can only host on the hardware I have, which is a Raspberry Pi 5, and I want to avoid the cost of buying a domain or using a third party provider.

I want to access Jellyfin from multiple devices, such as my phone, laptop, and computer, which means I'm not going to host Jellyfin on-device. I have to host it on a server, which is, in this case, the Raspberry Pi.

With that, I already have a plan for protecting the server itself, which I outlined in the other post, by installing securecore on it. Securing the server is a different project, and not what I am asking for help for here.

I want help encrypting the Jellyfin traffic in transit. Since I always have ProtonVPN enabled, and Android only allows one active VPN at a time, I cannot use something such as Tailscale for encryption. There is some hope in doing manual ProtonVPN configuration, but I don't know how that would work, so someone may be able to help with that.

All Jellyfin clients I have used (on Linux and Android) do not accept self-signed certificates. You can test this yourself by configuring Jellyfin to only accept HTTPS requests, using a self-signed certificate (without a domain), and trying to access Jellyfin from a client. This is a known limitation. I wouldn't want to use self-signed certificates anyways, since an unknown intruder on the network could perform a MITM attack to decrypt traffic (or the router itself, however unlikely).

Even if I don't trust my network, I can still verify the security and authenticity of the software I use in many, many ways. This is not the topic of this post, but I am mentioning it just in case.

Finally, I want to mention that ProtonVPN in its free tier does not allow LAN connections. The only other VPN providers I would consider are Mullvad VPN or IVPN, both of which are paid. I don't intend to get rid of ProtonVPN, and again that is not the topic of this post.

Please keep things on-topic, and be respectful. Again, I am here to learn, which is why I am asking for help. I don't know everything, so please keep that in mind. What are my options for encrypting Jellyfin traffic in transit, while prioritizing privacy and security?

19
TIL - Caddy (lemmy.world)
submitted 3 days ago* (last edited 3 days ago) by irmadlad@lemmy.world to c/selfhosted@lemmy.world

Today I gained a little more knowledge about Caddy, and I thought I'd share in case someone is having the same issue.

I've been biting my nails worrying about Caddy updating certificates. Everything I had read told me not to sweat it, that Caddy had my back and wouldn't let any certs expire. Well, two did today. So I set about, after I got all my chores done, to see if I could figure out wtf.

Long story short, I had an inconsistency in the format of my Caddyfile. It didn't affect the function of the file to the extent that it would not serve the certificate in daily use, but apparently it confused Caddy enough that it couldn't determine when certs were expiring and reissue them.
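I won't paste my real config, but the inconsistency was along these lines: mixed indentation that still parses and serves fine, yet trips up the formatter (made-up example, not my actual sites):

    # one site block indented with a tab, another with two spaces
    example.com {
            reverse_proxy localhost:8080
    }
    other.example.com {
      reverse_proxy localhost:9090
    }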

If you run the following:

caddy reload --config /etc/caddy/Caddyfile 

And you get something like this:

2025/04/09 21:49:03.376 WARN    Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies    {"adapter": "caddyfile", "file": "/etc/caddy/Caddyfile", "line": 1}

It's a warning that something is askew. Not to worry tho, you can fix it thusly:

Make a backup, assuming /etc/caddy/Caddyfile is where your Caddyfile is:

cp /etc/caddy/Caddyfile /etc/caddy/Caddyfile.bak

Next we'll ask Caddy nicely to please reformat it in an acceptable form:

sudo caddy fmt --overwrite /etc/caddy/Caddyfile

Trust but verify:

caddy validate --config /etc/caddy/Caddyfile

Now run:

caddy reload --config /etc/caddy/Caddyfile

You should be golden at this point.

Cheers

20
submitted 3 days ago* (last edited 3 days ago) by someacnt@sh.itjust.works to c/selfhosted@lemmy.world

I am currently looking into Ansible to store my configurations and deploy services more easily.

I have a couple of iptables rules in /etc/iptables/rules.v4, which I can easily restore. Meanwhile, Ansible has an iptables role for configuration - hence, I am confused about what approach to take.

How do I persist these rules, especially across reboots? Should I rerun Ansible every time on each reboot? I am at a loss on how to best manage iptables, as other services can interact with it. How do you folks handle this? Thanks in advance!
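For what it's worth, one approach I've seen documented is to let Ansible own /etc/iptables/rules.v4 and rely on the iptables-persistent package to restore it at boot, roughly like this (untested sketch; file layout and handler name are just assumptions):

    # assumes rules.v4 sits in the role's files/ dir
    - name: Install iptables-persistent so rules are restored at boot
      ansible.builtin.apt:
        name: iptables-persistent
        state: present

    - name: Deploy the rules file
      ansible.builtin.copy:
        src: rules.v4
        dest: /etc/iptables/rules.v4
        owner: root
        group: root
        mode: "0644"
      notify: Reload netfilter-persistent

    # handler elsewhere in the play:
    # - name: Reload netfilter-persistent
    #   ansible.builtin.service:
    #     name: netfilter-persistent
    #     state: restarted

That way nothing has to rerun at boot; Ansible only needs to run when the rules actually change.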

21

Hi all, I have my home lab set up as a single git repo. I've got all infrastructure as OpenTofu / Ansible configs, and I'm using git-crypt to protect secret files (tofu state, Ansible secret values, etc).

How would you back up such a system? Keeping it on my self hosted git creates a circular dependency. I’m hesitant to use a private codeberg repo in case I leak secrets. Just wondering what the rest of you are doing.
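As an example of what a non-circular backup could look like (just to anchor the question, not something I've actually set up): a periodic git bundle copied to dumb storage, where git-crypt keeps the secret blobs encrypted inside the bundle:

    # bundle every ref into one file, then ship it somewhere outside the homelab
    f="homelab-$(date +%F).bundle"
    git bundle create "$f" --all
    rclone copy "$f" remote:backups/homelab/    # "remote:" is a placeholder rclone target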

22

For context, I created a video search engine last year; I shut it down and put the data online. You can read about it here: https://www.bendangelo.me/2024/07/16/failed-attempt-at-creating-a-video-search-engine/

I put that project on hold because of scaling issues; anyway, I'm back with another idea. I've been frustrated with how AI slop is ruining the internet, and recently it's been hitting YouTube pretty hard with AI videos. I'm brainstorming a tool for people to selfhost:

  • Self-hosted crawler: pick which sites/videos to index (blogs, forums, YT channels, etc.).
  • AI chat interface: ask questions like, “Show me Rust tutorials from 2023” or “Summarize recent posts about homelab backups.”
  • Optional sharing: pool indexes with trusted friends/communities.

Why?

  • No Google/YouTube spam, only content you choose.
  • Works offline (archive forums, videos, docs).
  • Local AI (Mistral) or cloud (paid) for smarter searches.

Would this be useful to you? What sites would you crawl? Any killer features I’m missing?

Prototype in progress—just testing interest!

23

The problem is simple: consumer motherboards don't have that many PCIe slots, and consumer CPUs don't have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 computers for cheap, slot a GPU into each of them, and use the 4 of them in tandem. I imagine this will require some sort of agent running on each node, with the nodes connected through a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard with CPU is going to get expensive very quickly, but this would be a great alternative.

Thanks

24

Please take this discussion to this post: https://lemmy.ml/post/28376589

Selfhosting is always a dilemma in terms of security for a lot of reasons. Nevertheless, I have one simple goal: selfhost a Jellyfin instance in the most secure way possible. I don't plan to access it anywhere but home.

TL;DR

I want the highest degree of security possible, but my hard limits are:

  • No custom DNS
  • Always-on VPN
  • No self-signed certificates (unless there is no risk of MITM)
  • No external server

Full explanation

I want to be able to access it from multiple devices, so it can't be a local-only instance.

I have a Raspberry Pi 5 that I want to host it on. That means I will not be hosting it on an external server, and I will only be able to run something light like securecore rather than something heavy like Qubes OS. Eventually I would like to use GrapheneOS to host it, once Android's virtual machine management app becomes more stable.

It's still crazy to me that 2TB microSDXC cards are a real thing.

I would like to avoid subscription costs such as the cost of buying a domain or the cost of paying for a VPN, however I prioritize security over cost. It is truly annoying that Jellyfin clients seldom support self-signed certificates, meaning the only way to get proper E2EE is by buying a domain and using a certificate authority. I wouldn't want to use a self-signed certificate anyways, due to the risk of MITM attacks. I am a penetration tester, so I have tested attacks by injecting malicious certificates before. It is possible to add self-signed certificates as trusted certificates for each system, but I haven't been able to get that to work since it seems clients don't trust them anyways.

Buying a domain also runs many privacy risks, since it's difficult to buy domains without handing over personal information. I do not want to change my DNS, since that risks browser fingerprinting if it differs from the VPN provider. I always use a VPN (currently ProtonVPN) for my devices.

If I pay for ProtonVPN (or other providers) it is possible to allow LAN connections, which would help significantly, but the issue of self-signed certificates still lingers.

With that said, it seems my options are very limited.

25

Hello everybody, Daniel here!

Today, we're excited to announce the release of Linkwarden 2.10! 🥳 This update brings significant improvements and new features to enhance your experience.

For those who are new to Linkwarden, it's basically a tool for preserving and organizing webpages, articles, and documents in one place. You can also share your resources with others, create public collections, and collaborate with your team. Linkwarden is available as a Cloud subscription or you can self-host it on your own server.

This release brings a range of updates to make your bookmarking and archiving experience even smoother. Let’s take a look:

What’s new:

⚡️ Text Highlighting

You can now highlight text in your saved articles while in the readable view! Whether you’re studying, researching, or just storing interesting articles, you’ll be able to quickly locate the key ideas and insights you saved.

highlight.jpeg

🔍 Search Is Now Much More Capable

Our search engine got a big boost! Not only is it faster, but you can now use advanced search operators like title:, url:, tag:, before:, after: to really narrow down your results. To see all the available operators, check out the advanced search page in the documentation.

For example, to find links tagged “ai tools” before 2020 that aren’t in the “unorganized” collection, you can use the following search query:

tag:"ai tools" before:2020-01-01 !collection:unorganized

This feature makes it easier than ever to locate the links you need, especially if you have a large number of saved links.

🏷️ Tag-Based Preservation

You can now decide how different tags affect the preservation of links. For example, you can set up a tag to automatically preserve links when they are saved, or you can choose to skip preservation for certain tags. This gives you more control over how your links are archived and preserved.

tag_based_preservation.jpeg

👾 Use External Providers for AI Tagging

Previously, Linkwarden offered automated tagging through a local LLM (via Ollama). Now, you can also choose OpenAI, Anthropic, or other external AI providers. This is especially useful if you’re running Linkwarden on lower-end servers to offload the AI tasks to a remote service.

🚀 Enhanced AI Tagging

We’ve improved the AI tagging feature to make it even more effective. You can now tag existing links using AI, not just new ones. On top of that, you can also auto-categorize links to existing tags based on the content of each link.

ai_tagging.jpeg

⚙️ Worker Management (Admin Only)

For admins, Linkwarden 2.10 makes it easier to manage the archiving process. Clear old preservations or re-archive any failed ones whenever you need to, helping you keep your setup tidy and up to date.

worker_page.jpeg

✅ And more...

There are also a bunch of smaller improvements and fixes in this release to keep everything running smoothly.

Full Changelog: https://github.com/linkwarden/linkwarden/compare/v2.9.3...v2.10.0

Want to skip the technical setup?

If you’d rather skip server setup and maintenance, our Cloud Plan takes care of everything for you. It’s a great way to access all of Linkwarden’s features—plus future updates—without the technical overhead.


We hope you enjoy these new enhancements, and as always, we'd like to express our sincere thanks to all of our supporters and contributors. Your feedback and contributions have been invaluable in shaping Linkwarden into what it is today. 🚀

Also a special shout-out to Isaac, who's been a key contributor across multiple releases. He's currently open to work, so if you're looking for someone who’s sharp, collaborative, and genuinely passionate about open source, definitely consider reaching out to him!
