blackstrat

joined 2 years ago
[–] blackstrat@lemmy.fwgx.uk 2 points 1 hour ago

I had Slackware running on a couple of 386 machines with 200MB hard disks. It was almost impossible to get anything done because everything was compiled from source, and I didn't have the disk space for both the compiler toolchain and whatever I was actually trying to run. I was originally going to use them as part of a distributed system for my degree, but in the end I didn't use them and did something different instead.

I used CentOS at work a lot for several years and liked it, but only fully switched from Windows at home 10 years ago, when I went to Ubuntu. Installed KDE on it, messed around with i3 and had a great time. I then went distro hopping and landed on EndeavourOS, which I've been really enjoying for many years now and have no intention of moving from. All my servers still run Ubuntu LTS Server as it has been unbelievably solid.

[–] blackstrat@lemmy.fwgx.uk 4 points 2 hours ago (5 children)

1950 is around the birth of rock and roll, which seems a long time ago to me. Seems crazy that my kids might see 2100 though.

[–] blackstrat@lemmy.fwgx.uk 5 points 2 hours ago

Personally, 6 months. Sounded great on paper and even today it sounds great, but I really didn't like it. Now I'm somewhere that sounds rubbish on paper and in many ways is, but I'm pretty happy.

Quickest I ever saw was when I did a 2 week school placement at an IT support company. The whole company was like 4 people including me. Back in the late 90s it was all reinstalling Windows, ISDN lines, that sort of basic IT provision to companies. They hired a new guy and sent him off to install a couple of Windows PCs for some company. The next day he left as he was out of his depth.

[–] blackstrat@lemmy.fwgx.uk 1 points 12 hours ago

I've got a great Plexi sound from my Helix and Les Paul for playing Led Zeppelin numbers.

[–] blackstrat@lemmy.fwgx.uk 1 points 14 hours ago

Sounds like something aimed at the likes of Manjaro, which differs enough from plain Arch for it to be problematic.

To be honest, with EOS the point is moot - they have their own excellent forums and if you do insist on going to the Arch forums, just say you're using Arch.

[–] blackstrat@lemmy.fwgx.uk -1 points 15 hours ago

Just install EndeavourOS. You'll get the wallpaper and the best distro to boot.

[–] blackstrat@lemmy.fwgx.uk 4 points 1 day ago

Can confirm EOS works beautifully with Steam and has done for all the years I've used it.

[–] blackstrat@lemmy.fwgx.uk -4 points 4 days ago

The pros are that it's hip and trendy and almost complete (and has been for the past 15 years).

The cons are it doesn't work & has insane failure modes that maximise downtime.

[–] blackstrat@lemmy.fwgx.uk 9 points 4 days ago

Yes. Small socks.

[–] blackstrat@lemmy.fwgx.uk 3 points 1 week ago (1 children)

Up there with your retirement planning being to win the lottery.

[–] blackstrat@lemmy.fwgx.uk 4 points 1 week ago (1 children)

Found the medium fake boob enjoyer.

 

I have a ZFS RAIDZ2 array made of 6x 2TB disks with power on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.
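For anyone wanting to check their own, the hours come straight out of SMART - something like this per disk, assuming smartmontools is installed (the attribute name can vary a bit between drives):

$ sudo smartctl -A /dev/sda | grep -i power_on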

I'm considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent number of hours on them, they'd be better quality drives, and fewer disks means less chance of any one failing (I have good backups).

I don't think staying with my current setup will be worth it the next time one of these drives dies, so I may as well change over now before it happens?

Also the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see it, the case is solid and sits in a rack in the garage), it'll be nicer.

 

I previously asked here about moving to ZFS. So a week on I'm here with an update. TL;DR: Surprisingly simple upgrade.

I decided to buy another HBA that came pre-flashed in IT mode and without an onboard BIOS (so that server boot-ups would be quicker - I'm not using the HBA-attached disks as boot disks). For £30 it seemed worth the cost to avoid the hassle of flashing it myself, plus if it all goes wrong I can revert to the old card.

I read a whole load about Proxmox PCIe passthrough, most of it seemingly out of date. I am running an AMD system and there are many suggestions online to set the grub parameter amd_iommu=on, which, when you read into the kernel parameters for the 6.x kernel Proxmox uses, isn't a valid value. I think I also read that there's no need to set iommu=pt on AMD systems. But it's all very confusing, as most wikis that should know better are very Intel specific.

I eventually saw a YouTube video of someone running Proxmox 8 on AMD wanting to do the same as I was, and they showed that if IOMMU isn't set up, you get a warning in the web GUI when adding a device. Well that's interesting - I don't get that warning. I am also lucky that the old HBA is in its own IOMMU group, so it should pass through easily without breaking anything. I hope the new one will be the same.
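For reference, you can check this by hand on the Proxmox host with something like the below - the first line shows whether AMD-Vi/IOMMU actually came up, the second lists which IOMMU group each PCI device landed in (standard sysfs paths, the exact grep pattern isn't important):

$ sudo dmesg | grep -i -e iommu -e amd-vi
$ for d in /sys/kernel/iommu_groups/*/devices/*; do g=${d%/devices/*}; echo "group ${g##*/}: $(lspci -nns ${d##*/})"; done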

Worth noting that there are a lot of bad YouTube videos with people giving bad advice on how to configure a VM for ZFS/TrueNAS use - you need the disks passed through properly so the VM's OS has full control of them. Which is why an IT HBA is required over an IR one, but that alone doesn't mean you can't still set the config up wrong.

I also discovered along the way that my existing file server VM was not set up to handle PCIe passthrough. The default machine type that Proxmox suggests - i440fx - doesn't support it, so that needs changing to q35, and it also has to be set up with UEFI. Well that's more of a problem, as my VM is using BIOS. At this point it became easier to spin up a new VM with the correct settings and re-do its configuration. Other options to be aware of: memory ballooning needs to be off and the CPU type set to host.
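For reference, the relevant bits of the VM config (/etc/pve/qemu-server/<vmid>.conf) end up looking roughly like this - the hostpci address below is just an example, yours is whatever the HBA shows up as in lspci:

machine: q35
bios: ovmf
cpu: host
balloon: 0
hostpci0: 0000:01:00.0,pcie=1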

At this point I haven't installed the new HBA yet.

Install a fresh copy of Ubuntu Server 24.04 LTS and it all feels very snappy. Makes me wonder about my old VM - I think it might be an original install of 16.04 that I've upgraded every 2 years and migrated over from my old ESXi R710 server a few years ago. Fair play to it, I have had zero issues with it in all that time. Ubuntu Server is just absolutely rock solid.

Not too much to configure on this VM - SSH, NFS exports, etckeeper, a couple of users and groups. I use etckeeper, so I have a record of the /etc of all my VMs that I can look back to, which has come in handy on several occasions.
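The NFS side is nothing fancy, roughly this sort of thing in /etc/exports (the path and subnet are just examples for my LAN), followed by an exportfs to apply it:

/storage    192.168.0.0/24(rw,sync,no_subtree_check)

$ sudo exportfs -ra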

Now almost ready to swap the HBA after I run the final restic backup, which only takes 5 mins (I bloody love restic!). Also update the fstabs of the VMs so they don't try to mount the file server, and stop a few of them from auto starting on boot, just temporarily.
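The backup itself is just plain restic to an external disk, roughly this (the repo path is an example):

$ restic -r /mnt/backup/restic-repo backup /storage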

Turn the server off and get inside to swap the cards over. Quite straightforward other than the SAS ports being in a worse place for ease of access. Power back on. Amazingly it all came up - last time I tried to add an NVMe on a PCIe card it killed the system.

Set up the PCIe passthrough for the HBA on the new VM. Luckily the new HBA is in its own IOMMU group too (maybe that's somehow tied to the PCIe slot?). Make sure to tick the PCIe flag so it's not treated as PCI - remember PCI cards?!

Now the real deal. Boot the VM, SSH in. fdisk -l lists all the attached disks. Well this is good news! Try to create the zpool: zpool create storage raidz2 /dev/disk/by-id/XXXXXXX ...... Hmmm, can't do that, as it knows they're RAID disks and mdadm has tried to assemble them, so they're in use. Quite a bit of investigation later, with a combination of wipefs -af /dev/sdX, umount /dev/md126, mdadm --stop /dev/md126 and shutdown -r now, the RAIDness of the disks is gone and I can re-run the zpool command. And that worked! Note: I forgot to add ashift=12 to my zpool creation command. I have only just noticed this as I write, but thankfully ZFS was clever enough to pick the correct one.

$ zpool get all | grep ashift
storage  ashift                         0                              default

Hmmm, what's 0?

$ sudo zdb -l /dev/sdb1 | grep ashift
ashift: 12

Phew!!!
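For anyone doing the same, the tidied-up sequence, including the ashift I forgot, comes out roughly as below - the md device number and disk IDs are whatever yours happen to be:

$ sudo mdadm --stop /dev/md126
$ sudo wipefs -af /dev/sdX    # repeat for each member disk
$ sudo zpool create -o ashift=12 storage raidz2 /dev/disk/by-id/XXXXXXX ......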

I also passed through the USB backup disks I have, mounted them and started the restic restore. So far it's 1.503TB in after precisely 5 hours, which seems OK.
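The restore itself is just the obvious restic command against the USB repo (paths are examples again):

$ restic -r /mnt/usb-backup/restic-repo restore latest --target /storage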

I'll set up monthly scrub cron jobs tomorrow.
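Probably nothing more than a line like this in root's crontab (3am on the 1st of each month, as an example schedule):

0 3 1 * * /usr/sbin/zpool scrub storage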

P.S. I tried TrueNAS out in a VM with no disks to see what it's all about. It looks very nice, but I don't need any of that fanciness. I've always managed my VMs over SSH, which I've felt is lighter weight and less open to attack.

Thanks for stopping by my Ted Talk.

1
Anyone running ZFS? (lemmy.fwgx.uk)
submitted 6 months ago* (last edited 6 months ago) by blackstrat@lemmy.fwgx.uk to c/selfhosted@lemmy.world
 

At the moment I have my NAS set up as a Proxmox VM with a hardware RAID card handling 6x 2TB disks. My VMs run on NVMe drives, with the RAIDed volume passed directly through to the NAS VM in Proxmox, which handles the data storage. I'm running it as one large ext4 partition. Mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS. I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I'll need to IT flash the HBA, or get another. I'm guessing it's best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
