homelab

erock's devlog


This is a living document that I update periodically about my "homelab."

status (2025-09-04) #

I'm tired of dealing with proxmox. It's fun, and it has mostly just worked, with a few hiccups along the way.

With the recent announcement of proxmox v9.0, I'm not particularly excited about making that upgrade.

Further, after all these grand ideas of hosting more services and creating more VMs, I've basically settled on 3 (dev, gpu, cpu), and cpu only hosts my unifi controller, so it could easily be deprecated.

I'm also not using any of the clustering features. I have a single install and don't have any plans of expanding it.

So really I just need 2 VMs, one of which could just be the host and the other could be running inside qemu. As such, I've decided to ditch proxmox and move back to archlinux.

I'm also planning to move from a mostly containerized deployment strategy to a more hybrid approach, leveraging systemd both as my OS supervisor and as my container supervisor using quadlet. I also plan to lean more into the rich and vibrant arch ecosystem.
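For anyone unfamiliar with quadlet: you drop a `.container` unit file into systemd's search path and it gets translated into a regular service at daemon-reload time. A minimal sketch (the image, paths, and port here are hypothetical, not my actual config):

```ini
# /etc/containers/systemd/jellyfin.container -- hypothetical example
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
Volume=/tank/media:/media:ro
PublishPort=8096:8096
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl daemon-reload`, this shows up as `jellyfin.service` and is supervised like any other unit, which is the whole appeal over a separate container supervisor.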

For example, instead of trying to use the nvidia container toolkit for gpu-enabled containers, I'm going to opt for arch packages. I think it'll dramatically reduce the amount of tinkering needed to get services like jellyfin or ollama to properly use my gpu.
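The idea is roughly this: install the driver and the services from the repos and let the packages handle the gpu wiring. A sketch, with package names worth double-checking against `pacman -Ss` before trusting:

```shell
# nvidia kernel module + userspace driver
pacman -S nvidia nvidia-utils

# services straight from the arch repos instead of containers
pacman -S jellyfin-server jellyfin-web
pacman -S ollama-cuda   # ollama variant built against CUDA
```

No device mounts, no container runtime hooks, no toolkit version skew between host driver and container userspace.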

I already have all my important data hosted on my zfs nas pool, so I just need to export it before installing a new OS. I also plan on swapping out one of my nvme drives since one of them seems to be causing zfs slowness. On that topic, I don't plan on using zfs for my root disk. I don't need it, everything important is on my nas. I'm also going to place my 2 nvme root disks in a raid 0 array so I can install video games with 2TB of fast storage.
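The migration itself is a short sequence. A hedged sketch; the pool name `tank` and the device names are assumptions, so check `zpool list` and `lsblk` first:

```shell
# before the reinstall: cleanly export the nas pool so the new OS can import it
zpool export tank

# after the arch install: stripe the two nvme drives (raid 0 = no redundancy,
# which is fine here since nothing important lives on the root disks)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0
```

Then a plain `zpool import tank` on the new system picks the nas pool back up.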

Next steps:

status (2024-07-31) #

I'm finally in a place where I think things are mostly working properly. Let's start from the beginning.

To start, I built a new services VM, cpu, where all of my containers live for various web services. It's also where I'm going to be doing some self-hosting of public-facing services.

I was having a bunch of issues getting LXC to connect to my nas VM and tried a bunch of techniques before eventually getting something working. However, in doing that I realized that it was kind of silly to have a nas VM -- that's just running zfs -- when I could just have my proxmox host -- that's also running zfs -- do it instead. They were both running zfs+debian, so it felt silly to deal with a nas VM and hdd passthrough.

So I exported my nas zfs pool and imported it into my proxmox host, which worked great! Except I then had another problem: ashift was misconfigured. Darn, that means I need to rebuild my zfs pool.
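For context: ashift is the pool's sector-size exponent, it's fixed per-vdev at creation time, and getting it wrong (e.g. 9 for 512-byte sectors on 4K-sector disks) costs you performance forever. Hence the rebuild. A sketch, with `tank` and the device names as placeholder assumptions:

```shell
# check what the existing pool was created with
zpool get ashift tank

# recreate with ashift=12 (2^12 = 4096-byte sectors, right for modern hdds)
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
```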

So I got my old synology out and did data exfiltration / infiltration onto a fresh zfs pool. There, I thought I had everything set up -- until I rebooted proxmox after a kernel update: the zfs datasets were empty. wtf? After hours of messing around with it -- even rebuilding the zfs pool again from scratch -- I figured out what was wrong: I had encrypted the disks and didn't mount with the correct flag, zfs mount -l -a. Yea, I felt like an idiot.
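The gotcha, for anyone else who hits it: on an encrypted pool, a plain `zfs mount -a` silently skips any dataset whose key isn't loaded, so the mountpoints just look empty. The `-l` flag loads the keys first:

```shell
# load encryption keys and mount everything in one go
zfs mount -l -a

# equivalent two-step form
zfs load-key -a
zfs mount -a
```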

Now everything is working. I'm still noticing occasional high load / high io when saving a bunch of files to the fs (e.g. package updates). Let's hope I can figure it out. My current suspicion is that it might be caused by using proxmox with zfs in a 2-disk mirror pool? Not sure.

status (2024-06-25) #

I decided to convert my gaming rig into a virtualization server using proxmox.

Everything is set up and working really well. I knew GPU passthrough was going to be challenging, but I eventually got it to work after about 8 hours of trial-and-error.
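The broad strokes of what that trial-and-error converges on, sketched below. The PCI ids are assumptions for illustration (they're an RTX 4090's gpu + audio function); find yours with `lspci -nn`:

```shell
# 1. enable IOMMU via the kernel cmdline: intel_iommu=on (or amd_iommu=on)

# 2. bind the gpu to vfio-pci so the host driver never claims it
echo "options vfio-pci ids=10de:2684,10de:22ba" > /etc/modprobe.d/vfio.conf

# 3. blacklist nouveau/nvidia on the host, rebuild the initramfs, reboot,
#    then add the gpu as a raw PCI device on the VM
```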

Overall really excited about having a system and UI dedicated to virtualization. I plan on distro-hopping a lot more now that it's trivial for me to spin up new VMs.

I'll keep updating this post with more status updates.

Here's a pic of my rack:


current #


I have no idea what I'm doing. Subscribe to my rss feed to read more posts.