Last September, I posted progress on the construction of our domestic mainframe. To recap, the intent is to build a dedicated home server that's as awesome as possible - meaning it's reliable, safe, and easy to maintain. That rules out "desktop tower PC in a cupboard" (accumulates dust bunnies, gets too hot, easily stolen, prone to children poking it); "put a 19" rack somewhere in your house" is better, but consumes a lot of floor space and doesn't fix the dust bunny problem. So I've made my own custom steel chassis: fed cold air at pressure through a filter, incorporating a dedicated battery backup system, locked and anchored to the wall, and with lots of room inside for expansion and maintenance.
Since that blog post, I've finished the metalwork, painted it with automotive paint using a spray gun (which was a massive job in itself!), fixed it to the wall, and fitted nearly all of the electronics into it.
A significant delay was caused by the motherboard not working. I sent it back to the shop, and they said it was fine; so I sent the CPU back, and they said THAT was fine; so I sent both back together and it turned out that the two of them weren't compatible in some way that was solved by the motherboard manufacturer re-flashing my BIOS. That's now up and running; I was able to use the HDMI and USB ports on the outside of the chassis to connect up and install NetBSD from a USB stick, then connected it to the network and installed Xen so I can run all my services in virtual machines. It's now running fine and everything else can be done via SSH, but the HDMI and USB ports are there so I can do console administration in future without having to open the case (unless I need to press the reset button, which is inside).
The one thing it's lacking is the management microprocessor. I've prototyped this thing on a breadboard and written the software, but I still need to finish off the PCB and cabling. It will have an AVR controlling three 10mm RGB LEDs on the front panel, and three temperature/humidity sensors in the inlet and outlet air (and one spare for more advanced air management in future). The idea is that the three LEDs on the front panel will display useful system status, and the environment sensor data will be logged.
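I haven't settled on the exact protocol between the AVR and the host yet, but here's a rough sketch of what the dom0 side could look like - a little daemon that logs the sensor reports and sets the LED colours. The device node, the line format, and the "outlet air too hot" threshold are all invented for illustration (and it assumes pyserial is installed):

```python
#!/usr/bin/env python3
# Hypothetical dom0-side companion to the management AVR: log the environment
# sensor readings and set the front-panel LED colours. The serial device, the
# line protocol and the threshold are invented for this sketch - the real
# board and firmware aren't finished yet.

import time
import serial  # pyserial

PORT = "/dev/ttyU0"              # assumed USB-serial device node for the AVR
LOGFILE = "/var/log/enviro.log"  # assumed log location

def set_leds(ser, top, middle, bottom):
    """Send an invented LED command, e.g. 'LED green green amber'."""
    ser.write(f"LED {top} {middle} {bottom}\n".encode("ascii"))

def main():
    with serial.Serial(PORT, 9600, timeout=10) as ser, open(LOGFILE, "a") as log:
        while True:
            # Invented report format: "ENV <inlet C> <inlet %RH> <outlet C> <outlet %RH>"
            line = ser.readline().decode("ascii", "replace").strip()
            parts = line.split()
            if len(parts) != 5 or parts[0] != "ENV":
                continue
            log.write(f"{int(time.time())} {line}\n")
            log.flush()
            # Crude example policy: go amber if the outlet air is running hot.
            colour = "amber" if float(parts[3]) > 45.0 else "green"
            set_leds(ser, colour, colour, colour)

if __name__ == "__main__":
    main()
```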
Here's what it looks like from the outside; note the air inlet hose at the top left:
The socket panel on the left-hand side worked out pretty well - 240V inlet at the bottom, then, on the aluminium panel, three Ethernet ports, HDMI, and USB (my console cable is still plugged into the HDMI and USB in the photo, which won't usually be the case):
And here's the inside, with lots of space for more disks or other extra hardware; the big black box at the bottom is the battery backup system:
Now that I have Xen installed, I'm working on a means of building VMs from scripts, so any VM's disk image can be rebuilt on demand. Any data that needs keeping will be mounted from a separate disk partition, so the boot disk images of the VMs themselves are "disposable" and entirely created by the script (the one slightly tricky thing being the password file in /etc/). This will make upgrades safe and easy - I can tinker with a build script for a new version of a VM, testing it out and destroying the VMs when I'm done; then, when it's good, remount the live data partition onto it and point the relevant IP address at it. If the upgrade goes bad, I can roll it back by resurrecting the old VM, which I'll only delete when I'm happy with its replacement. This is the kind of thing NixOS does, but that's for Linux rather than NetBSD, so I'm rolling my own that's a little more basic (in that it builds entire VM filesystems from a script, rather than individual packages, with all the complexities of coupling them together nicely).
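To make the "disposable root, persistent data" split concrete, here's a stripped-down sketch of the sort of thing I mean - rebuild a VM's boot image with a per-VM build script, write an xl config that attaches both the fresh image and the persistent data LV, and start it. The paths, the build-script interface, and the xl config details are placeholders rather than the finished tool:

```python
#!/usr/bin/env python3
# Sketch of the disposable-domU idea: the boot image is rebuilt from scratch
# by a per-VM build script, while persistent data stays on its own logical
# volume. Names, paths and the build-script interface are illustrative only.

import subprocess
from pathlib import Path

IMAGE_DIR = Path("/home/xen/images")   # assumed home for boot images
CONF_DIR = Path("/usr/pkg/etc/xen")    # assumed xl config directory

XL_CONF = """\
name   = "{name}"
kernel = "/netbsd-XEN3_DOMU"     # PV domU kernel (assumed path)
memory = {memory}
vcpus  = 1
disk   = [ "{boot_img},raw,xvda,rw", "/dev/{data_vg}/{data_lv},raw,xvdb,rw" ]
vif    = [ "bridge=bridge0" ]
"""

def build_vm(name, memory, data_vg, data_lv, build_script):
    """(Re)build a domU's disposable boot image and write its xl config."""
    boot_img = IMAGE_DIR / f"{name}-boot.img"
    # The build script does the real work: it creates a fresh NetBSD root
    # filesystem inside the image file. Hypothetical interface.
    subprocess.run([build_script, str(boot_img)], check=True)
    conf = CONF_DIR / f"{name}.conf"
    conf.write_text(XL_CONF.format(name=name, memory=memory, boot_img=boot_img,
                                   data_vg=data_vg, data_lv=data_lv))
    return conf

def start_vm(conf):
    subprocess.run(["xl", "create", str(conf)], check=True)

def destroy_vm(name):
    subprocess.run(["xl", "destroy", name], check=True)

if __name__ == "__main__":
    # Rebuild and boot a hypothetical web-server VM; its data stays on vg0/www.
    start_vm(build_vm("www", 512, "vg0", "www", "./build-www-image.sh"))
```

All the interesting work happens in the build script that populates the root filesystem; the wrapper just ties the result to the data volume, so destroying and rebuilding the boot image never touches the live data.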
I'm using NetBSD's excellent logical volume manager to make it easy to manage those partitions across the four disks. There are two volume groups, each containing two physical disks, so I can arrange for important data to be mirrored across different physical disks (not in the RAID sense, which the LVM can do for me, but in the sense of having a live nightly snapshot of things on separate disks, ready to be hot-swapped in if required). I still have SATA ports and physical bays free for more disks, and the LVM will allow me to add them to the volume groups as required, so I can expand the disk space without major downtime.
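The nightly mirroring doesn't need to be anything clever - a cron job that refreshes a copy of each important logical volume onto its counterpart in the other volume group is enough. A rough sketch, where the LV names, mount points, and the use of rsync are all placeholders for whatever I end up doing:

```python
#!/usr/bin/env python3
# Sketch of the nightly mirror job: for each important logical volume on vg0,
# refresh a same-named copy mounted from vg1 (which lives on different
# physical disks), so a recent copy is always ready to be swapped in.
# LV names, mount points and the use of rsync are assumptions.

import subprocess

MIRRORED = ["www", "mail", "home"]   # hypothetical data LVs on vg0

def mirror(lv):
    src = f"/data/{lv}"     # assumed mount point of /dev/vg0/<lv>
    dst = f"/mirror/{lv}"   # assumed mount point of the vg1 copy
    # rsync (from pkgsrc) only copies what changed since last night; --delete
    # keeps the copy an exact mirror rather than an ever-growing archive.
    subprocess.run(["rsync", "-a", "--delete", f"{src}/", f"{dst}/"], check=True)

if __name__ == "__main__":
    for lv in MIRRORED:
        mirror(lv)
```

Run daily from cron, that leaves yesterday's copy of everything sitting on physically separate spindles, ready to be hot-swapped in if a disk dies.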
So for now it's just a matter of making VMs and migrating existing services onto them; then I can take down the noisy, struggling, cranky old servers in the lounge! This project has been a lot of work - but when I ssh into it from inside the house (over the cabling I put in between the house and the workshop) and see all that disk space free in the LVM, and all the RAM waiting to be assigned to domU VMs that I can migrate my current services to, it's all worth it!