Jul 30, 2024

Summer 2024: Self-Hosted Update


Back in March I shared my spring self-hosted update. I had a new ZimaBlade to play with. I bought a cheap GeForce GT 710 just to try out the PCIe slot. It worked. That graphics card is terrible (more on that later). The ZimaBlade is also, frankly, terrible. The CPU thermal throttles itself to death. It gets dangerously hot. Add it to the list of crowdfunding campaigns I regret.

So I still use my Raspberry Pi and Mac Mini as home servers. Until now!

Mini-ITX Build

I built this thing based around the ASRock N100DC-ITX motherboard.

[Photo: the Mini-ITX PC build]

This motherboard has the same N100 CPU my Beelink EQ12 uses for my forbidden router. The Beelink is now running Proxmox with OPNsense and a few other critical services (DNS and backups) I never want offline.

The new Mini-ITX PC will replace my Mac Mini as my main self-hosted server. I’ve been running Proxmox on the Mac for years but always struggled with stable T2-patched kernels. Despite being six years older, the Mac still has better multi-core performance. I might use it as a desktop Mac again!

Inside the new mini PC I’ve squashed the ASRock board, RAM, an SSD, tiny fans, and the GT 710 on a riser cable.

The fans are tiny but silent and do shift some air. The SSD is blocking a lot of the intake and the RAM is blocking the exhaust. I’m not sure there is a better configuration. If I add a second SSD I may glue both drives to the ceiling panel.

The case is around 230×185 mm in footprint (9×7.5″ in freedom units). There is no power supply; the motherboard takes a 19 V power adapter like a laptop. There are eight USB ports in total.

The graphics card is mostly for laughs; see below. (The riser cable is missing from the picture.)

Proxmox Banter

I’ve played with PCI passthrough in Proxmox before. I was hoping that with the N100DC-ITX motherboard I could leave the iGPU alone and pass through a discrete graphics card.

Alas, no luck. If a discrete GPU is plugged into PCIe, the iGPU just vanishes. The Proxmox host starts using the graphics card and lspci will not report the integrated graphics. No amount of kernel config worked. I’ve read there can be a BIOS setting to enable both, but the ASRock BIOS on this board has no such option that I can find.
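
For the curious, this is roughly how I checked, assuming lspci from the pciutils package (addresses will differ by board):

# list graphics devices visible to the host
lspci -nn | grep -iE 'vga|display'

With only the iGPU fitted this prints a single Intel entry at 00:02.0. With the GT 710 in the slot, only the NVIDIA entry shows up.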

Disappointing! I’ll have to go back to splitting the iGPU into 7 horcruxes. As mentioned, the GeForce GT 710 is laughably bad and likely worse than the Intel UHD graphics. I’d like to get this working regardless. Any ideas?
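
For reference, the horcrux split relies on SR-IOV virtual functions, the usual trick on Alder Lake-N chips like the N100. A rough sketch of the host setup, assuming the third-party i915-sriov-dkms module is installed (the i915 parameters come from that project, not the stock kernel):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_guc=3 i915.max_vfs=7"

# apply, reboot, then expose the virtual functions
update-grub
echo 7 > /sys/devices/pci0000:00/0000:00:02.0/sriov_numvfs

Each virtual function can then be passed to a different VM as if it were its own GPU.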

Curiously, adding a PCIe card shifts the existing device IDs. The ethernet controller moved from 01:00.0 to 02:00.0. This broke Proxmox networking because the bridge is hardcoded for enp1s0 in /etc/network/interfaces.

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
	address 192.168.1.10/24
	gateway 192.168.1.1
	bridge-ports enp1s0
	bridge-stp off
	bridge-fd 0

I could edit the two enp1s0 references to read enp2s0, but I found a better solution.

Create this file:

vim /etc/systemd/network/10-eth0.link

Add the contents below, using the ethernet controller’s MAC address (found with ip link).

[Match]
MACAddress=1a:2b:3c:4d:5e:6f

[Link]
Name=eth0

Then edit /etc/network/interfaces, replacing enp1s0 with eth0. Reboot and the Proxmox bridge remains online regardless of PCI device IDs.
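
One caveat: .link files are applied by udev early in boot, so I believe it’s worth rebuilding the initramfs to make sure the rename sticks, with a quick sanity check after the reboot:

# rebuild initramfs so the .link file applies at early boot
update-initramfs -u -k all

# after reboot: eth0 should exist and carry the bridge
ip -br link show eth0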

I learnt something else fun too! I was trying to pass through the GPU at 01:00.0 to a VM but later removed the card. Proxmox will happily attempt to pass whatever device has shifted to that ID. In my case, the ethernet controller. Unlike with GPUs, Proxmox is apparently more than happy to relinquish control and bring down the host link. When I booted the VM the entire host went offline. I feel like Proxmox is mocking me.
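
That behaviour makes sense once you see how passthrough is recorded. The VM config pins the raw PCI address, something like this (hypothetical VM ID 100):

# /etc/pve/qemu-server/100.conf
hostpci0: 01:00.0

Proxmox hands over whatever sits at 01:00.0 when the VM boots; it has no idea the device behind that address has changed.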

Future Projects

So my graphics card is in the bin with the ZimaBlade for now. Ironically, passthrough on the ZimaBlade worked perfectly.

I’ll probably buy a 10 GbE NIC, or one with multiple ports, to make use of the PCIe slot. I could move my router from the Beelink to this machine in future.

Or I could get an additional SATA controller. It’s only PCIe 3.0 x4 but that should be fast enough for multiple hard drives. I’d have to buy a larger case though. There’s no power supply, but the motherboard does have a SATA power connector I’m using for one SSD. Can this be extended to power multiple HDDs? Can I add a real PSU that isn’t connected to the motherboard?

Anyway, projects for another day! I have Git infrastructure to set up.
