[–] suicidaleggroll@lemm.ee 4 points 2 months ago

Yeah, unfortunately it's no guarantee that blue states are better. Many are, but not all. Colorado, California, and New York all score a 'C': still not great, but much better.

[–] suicidaleggroll@lemm.ee 1 point 2 months ago (1 child)

They likely streamed from some other Plex server in the past, and that's why they're getting the email. The email specifically states that if the server owner has a Plex Pass, you don't need one.

I got the email earlier today and it couldn't be clearer:

> As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits.

[–] suicidaleggroll@lemm.ee 2 points 2 months ago* (last edited 2 months ago)

I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers they run). Each VM is given about double the RAM needed for everything it runs, and enough cores that it's never (or very, very rarely) CPU-limited. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that a VM is creeping up on its load or memory limits, I'll investigate which container is driving the usage, then either bump the VM's limits up or modify that service's settings to bring it back down.
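If you just want a quick one-off view of which container is the culprit, without the full monitoring stack, something like this works too (a rough sketch; the format fields are just the ones I find useful):

```bash
# One-shot snapshot of per-container usage, heaviest memory users first
docker stats --no-stream --format "{{.MemPerc}}\t{{.CPUPerc}}\t{{.Name}}" \
    | sort -rn | head
```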

Theoretically I could implement per-container resource limits, but I've never found the need. I've heard some people complain about containers leaking memory and creeping up over time, but I have an automated backup script that stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers run for more than 24 hours continuously anyway.
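The nightly job is roughly this shape (a simplified sketch, not my actual script; /srv/docker and /backups/docker are placeholder paths):

```bash
#!/usr/bin/env bash
# Simplified nightly backup: stop containers, rsync mapped volumes to an
# incremental snapshot, then bring everything back up.
set -euo pipefail

containers=$(docker ps -q)
echo "$containers" | xargs -r docker stop

# --link-dest hardlinks unchanged files against last night's snapshot, so
# each day is a browsable full tree but only changed files use new space
rsync -a --delete \
    --link-dest="/backups/docker/$(date -d yesterday +%F)" \
    /srv/docker/ "/backups/docker/$(date +%F)/"

echo "$containers" | xargs -r docker start
```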

[–] suicidaleggroll@lemm.ee 3 points 2 months ago* (last edited 2 months ago) (2 children)

People always say to let the system manage memory and not interfere, since it'll always make the best decisions, but personally, on my systems, whenever significant data starts moving into swap the machine gets laggy, jittery, and slow to respond. Every time I try to use a system that's been sitting idle for a while and it feels sluggish, I check the stats and find that, sure enough, it's decided to move some of its memory into swap, and responsiveness doesn't pick up until I manually empty the swap so it's operating fully out of RAM again.

So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that's been my experience after ~25 years of using Linux daily.
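For reference, the relevant knobs (the sysctl.d filename is arbitrary):

```bash
# Apply immediately (lasts until reboot)
sudo sysctl vm.swappiness=0

# Persist across reboots
echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-swappiness.conf

# "Manually empty the swap": disable and re-enable it, which forces
# swapped pages back into RAM (needs enough free RAM to hold them)
sudo swapoff -a && sudo swapon -a
```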

[–] suicidaleggroll@lemm.ee 61 points 2 months ago (16 children)

Civil Asset Forfeiture. It's legal in most states, but some are better than others. Oklahoma is dog shit:

https://ij.org/report/policing-for-profit-3/?state=OK

Seriously people, don't move to Oklahoma, or really most southern states.

[–] suicidaleggroll@lemm.ee 2 points 2 months ago (1 child)

The bottom hits when all (or most) of the bad news is on the table and people know what's happening and what the future looks like. It doesn't happen when the pain is gone, just when people know what that pain will look like for the foreseeable future. In 2022, for example, the bottom came when rate increases started to slow down, not when they stopped completely: inflation was starting to level off, the Fed dropped from 0.75-point hikes to 0.5, and people could see a path forward.

We are not at that point yet in the current crash. Nobody has any idea how bad it's going to get, none of the indicators show the problems yet because they're all lagging, and consumers haven't yet been hit by the higher prices and supply chain disruptions because manufacturers and retailers are still running off of back stock.

I could be wrong of course, but I don't think I am.

[–] suicidaleggroll@lemm.ee 5 points 2 months ago* (last edited 2 months ago) (3 children)

Way too early. We haven't even begun to see the results of these policies. Inflation numbers don't yet take tariffs into account, the mass layoffs currently happening don't show up in unemployment stats yet, the massive GDP shrinkage isn't showing up yet, and the supply chains that are in the process of crashing haven't yet affected consumers. This is a dead cat bounce, which literally every crash in history has had, and every time there are people shouting that the pain is over and now is the time to buy back in, right before the bottom drops out.

[–] suicidaleggroll@lemm.ee 10 points 2 months ago

> Market self regulation assumes informed consumers that are smart enough to know what things mean

Not just smart enough, but informed enough. That would mean every person spending hundreds of hours researching every single aspect of every purchase they make: investigating supply chains, performing chemical analysis on their food and clothing, etc. It's not even remotely realistic.

So instead, we outsource and consolidate that research and testing by paying taxes to a central authority that verifies manufacturers keep things safe, so we don't have to worry about accidentally buying Cheerios laced with lead. AKA: the government and regulations.

[–] suicidaleggroll@lemm.ee 3 points 2 months ago (2 children)

> No names? On what? People just go around saying “no names”?

It says "no mames". I'm not sure what on earth that means, but I suspect it isn't a typo (writeo?)

[–] suicidaleggroll@lemm.ee 6 points 2 months ago* (last edited 2 months ago)

I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It's perfect for me. If you're going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they'll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don't get with the official cloud-hosted version.

Downtime isn't an issue, as clients just cache the database. Unless your server goes down for days at a time you'll never even notice, and even then it'll only be an issue if you try to create or modify an entry while the server is down.

Just make sure you make and maintain good backups. Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault, which are synced to all devices. Those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
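The export itself can be scripted with the official Bitwarden CLI (a sketch; it assumes you've already run `bw login`, plus `bw config server <url>` if self-hosting, and the output path is a placeholder):

```bash
# Unlock the vault; --raw prints just the session key
export BW_SESSION=$(bw unlock --raw)

# encrypted_json keeps the export encrypted at rest; this is the format
# that can later be imported into KeePassXC or back into Bitwarden.
# Note: bw prompts for your master password to confirm the export.
bw export --format encrypted_json \
    --output "$HOME/vault-backups/bitwarden-$(date +%F).json"

bw lock
```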

[–] suicidaleggroll@lemm.ee 2 points 2 months ago

That’s a complicated question. More memory can be split across more banks, which can mean more precharge penalties if the memory you need to access is spread out between them.

But big memory systems generally use workstation or server processors, which means more memory channels, which means the system can access multiple regions of memory simultaneously. Mini-PCs and laptops generally only have one memory controller, higher end laptops and desktops usually have two, workstations often have 4, and big servers can have 8+. That’s huge for parallel workflows and virtualization.
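If you're curious what your own machine has, the DIMM locator strings usually encode the channel (names vary by board, so treat this as a rough sketch):

```bash
# List populated DIMM slots; locators like "ChannelA-DIMM0" reveal channels
sudo dmidecode -t memory | grep -E 'Locator|Size|Speed'

# Rough peak-bandwidth math: each DDR channel is 64 bits (8 bytes) wide, so
# dual-channel DDR5-4800 ~= 2 x 4800 MT/s x 8 B ~= 76.8 GB/s
```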

[–] suicidaleggroll@lemm.ee 11 points 2 months ago* (last edited 2 months ago) (8 children)

I'd be in trouble, since between ZFS and my various VMs, my system idles at ~170 GB of RAM used. With only 32 GB I'd have to shut basically everything down.

My previous system had 64 GB, and while it wasn't great, I got by. Then one of the motherboard slots died and dropped me to 48 GB, which seriously hurt. That's when I decided to rebuild and went to 256 GB.
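For what it's worth, a chunk of that idle usage is the ZFS ARC, which you can inspect and cap if RAM ever gets tight (a sketch; the 64 GiB cap is just an example value):

```bash
# How much RAM the ARC is holding right now
awk '/^size/ {printf "%.1f GiB\n", $3 / 2^30}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 64 GiB (value in bytes, applied at module load)
echo 'options zfs zfs_arc_max=68719476736' | sudo tee /etc/modprobe.d/zfs.conf

# Or change it live, no reboot needed
echo 68719476736 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```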
