nah you're probably not going to get any benefits from it. The best way to make your setup more maintainable is to start putting your compose/kubernetes configuration in git, if you're not already.
jlh
First they came for the Palestinian supporters, and I did not speak out, because I was not a Palestinian supporter.
85C could be throttling
Ah, no, Kopia uses a shared bucket.
Seems like a good way to do it.
Keep in mind Kopia has some weirdness when it comes to transferring repos between filesystem and S3, so you'd probably want to only keep one repo.
https://kopia.discourse.group/t/exported-s3-storage-backup/3560
Backblaze B2 is a cheap S3 provider. Hetzner Storage Box is even cheaper, but it doesn't support S3 natively, so you're likely to run into the Kopia repo compatibility issue I mentioned.
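If you go the B2 route, pointing Kopia straight at the S3 endpoint avoids the filesystem-to-S3 transfer weirdness entirely, since there's only ever one repo. Rough sketch (bucket name, region endpoint, and keys are placeholders, not real values):

```shell
# Create the repo directly on B2's S3-compatible endpoint
kopia repository create s3 \
  --bucket=my-backups \
  --endpoint=s3.us-west-004.backblazeb2.com \
  --access-key=KEY_ID \
  --secret-access-key=APPLICATION_KEY

# From any other machine, connect to the same repo
kopia repository connect s3 \
  --bucket=my-backups \
  --endpoint=s3.us-west-004.backblazeb2.com \
  --access-key=KEY_ID \
  --secret-access-key=APPLICATION_KEY
```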
Who made the graphic?
Check out this website for information about multiplayer games on Linux:
https://areweanticheatyet.com/
There are only about 700 games with broken anticheat, and Hunt: Showdown's anticheat officially supports running on Linux.
In terms of industrial applications, the abstract states
We have realized all-optical wavelength conversion for a more than 200-nm-wide wavelength span at 100 Gbit s−1 without amplifying the signal and idler waves. As the 32-GBd 16-QAM is the dominant modulation format of current optical-fibre communication systems connecting the continents on Earth, the Si3N4-chip high-efficiency wavelength conversion demonstrated has a bright future in the all-optical reconfiguration of global WDM optical networks by unlocking transmission beyond the C and L bands of optical fibres and increasing the capacity of optical neuromorphic computing for artificial intelligence.
From the abstract: "we obtained a continuous-wave gain bandwidth of 330 nm in the near-infrared regime. [...] Furthermore, we realized wide all-optical wavelength conversion of single-wavelength signals beyond 100 Gbit s−1 without amplifying the signal and idler wave."
Here is the paper: https://www.nature.com/articles/s41586-025-08824-3
I think figure 4 from the PDF shows it the best. Their amplifier covers 1400 nm to 1700 nm infrared lasers.
Yeah, what you're talking about is called GitOps. Using git as the single source of truth for your infrastructure. I have this set up for my home servers.
https://codeberg.org/jlh/h5b
- `nodes` has NixOS configuration for my 5 Kubernetes servers and a script that builds a flash drive for each of them to use as a boot drive (same setup for `porygonz`, but that's my dedicated DHCP/DNS/NTP mini server)
- `mikrotik` has a dump of my Mikrotik router config and a script that deploys the config from the git repo.
- `applications` has all my Kubernetes config: containers, proxies, load balancers, config files, certificate renewal, databases, clustered raid, etc.

It's all super automated. A pretty typical "operator" container to run in Kubernetes is ArgoCD, which watches a git repo and automatically deploys any changes or desyncs back to the Kubernetes API so it's always in sync with git. I don't use any GUI or console commands to deploy or update a container, I just edit git and commit.

The Kubernetes cluster runs about 400 containers, most of them just automatic replicas of services for high availability. Of course there's always some manual setup outside of git, like partitioning drives, joining the nodes to the cluster, writing hardware-specific config, and bootstrapping ArgoCD to watch git. But overall, my house could burn down tomorrow and I would have everything I need to redeploy using this git repo, the secrets git repo, and my backups of my databases and container `/data` dirs.

I think Portainer supports doing GitOps on Docker Compose? Never used it.
https://docs.portainer.io/user/docker/stacks/add
ArgoCD is really the gold standard for GitOps though. I highly recommend trying out k3s on a server and running ArgoCD on it, it's super easy to use.
https://argo-cd.readthedocs.io/en/stable/getting_started/
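To give you an idea of what the "ArgoCD watches git" part actually looks like: each thing you deploy is just an `Application` object pointing at a repo path. A minimal sketch (the repo URL, path, and names here are placeholders, not my actual config):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://codeberg.org/you/your-infra.git
    targetRevision: main
    path: applications/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from git
      selfHeal: true   # revert manual changes back to the git state
```

With `selfHeal` on, even `kubectl edit` changes get reverted, so git really is the single source of truth.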
Kubernetes is definitely different from Docker Compose, and tutorials are usually written for Docker `compose.yml` files, not Kubernetes `Deployments`, but it's super powerful and automated. Very hard to crash once you have it running. I don't think it's as scary as a lot of people think, and you definitely don't need more than one server to run it.
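For comparison, a single compose service roughly maps to a `Deployment`. A minimal sketch (name and image are arbitrary examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2            # Kubernetes keeps two copies running for HA
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - containerPort: 80
```

More boilerplate than compose, but that `replicas: 2` line is doing work compose can't: if a container or node dies, the cluster restarts or reschedules it automatically.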