Xen (by alaric)
I'm seriously considering becoming a big user of Xen. As in, making all of my servers run Xen (with NetBSD as the host OS), with everything of import then running in Xen "domains" (virtual servers) beneath.
There are a number of advantages to this.
Firstly, it's easy to move domains between physical servers, without even stopping them running (although I assume you have to sort out your own trickery to relocate the IP address). This means I could upgrade hardware by bringing the new machine in, transferring the domains to it, then taking the old machine down. Or upgrade the host OS by shifting everything to a temporary second machine while I upgrade, then shifting back - although I might as well upgrade the OS and the hardware together.
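For the curious, a live migration with the Xen 3 tools is a one-liner; a minimal sketch, assuming xend's relocation service is enabled on the receiving machine and the domain's storage is reachable from both sides (the domain and host names here are invented):

    # Push the running domain "mail" to the new machine without stopping it.
    # Needs (xend-relocation-server yes) in /etc/xen/xend-config.sxp on the target.
    xm migrate --live mail newhost.example.com

The domain keeps its IP address through the move, hence the trickery mentioned above to make the network deliver traffic to its new location.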
Secondly, I can compartmentalise things better. I could have different virtual servers for mail, DNS, Apache, Jabber, and shell logins on love, my primary server. Then I'd have a single domain (perhaps the host domain) connected to the 'live' IP address and running port forwarding of services to the appropriate internal IPs, with ssh forwards to all of the IPs for administrative logins. This makes things a little more secure in some ways, a lot easier for me to manage, and decreases the chance of one thing breaking and knocking everything else over.
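The forwarding itself could be done with NetBSD's ipnat(8) in the host domain; a rough sketch, with an invented interface name and example addresses throughout:

    # /etc/ipnat.conf in the host domain: forward the live IP's services
    # to the internal per-role domains (all addresses are examples).
    rdr wm0 192.0.2.10/32 port 25 -> 10.0.1.2 port 25 tcp      # mail domain
    rdr wm0 192.0.2.10/32 port 53 -> 10.0.1.3 port 53 tcp/udp  # DNS domain
    rdr wm0 192.0.2.10/32 port 80 -> 10.0.1.4 port 80 tcp      # Apache domain
    rdr wm0 192.0.2.10/32 port 5222 -> 10.0.1.5 port 5222 tcp  # Jabber domain
    # One ssh forward per domain, on distinct outside ports, for admin logins:
    rdr wm0 192.0.2.10/32 port 2202 -> 10.0.1.2 port 22 tcp
    rdr wm0 192.0.2.10/32 port 2203 -> 10.0.1.3 port 22 tcp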
Thirdly, if I wanted to upgrade the OS or server software in any of them, I could create a new virtual server, configure and test it, then bring it up while bringing the old one down (since all the data is stored on a different machine and accessed via NFS, PostgreSQL, or MySQL over the LAN, this is very easy).
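The swap could then be as simple as the following, assuming the standard Xen 3 xm tools and made-up domain names; since the data lives elsewhere, no storage needs to move:

    # Boot the replacement domain alongside the old one and test it
    # on its own internal IP.
    xm create /etc/xen/www-new

    # Once happy, repoint the port forwards at the new domain's IP
    # (or swap the IPs over), then retire the old one.
    xm shutdown www-old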
Fourthly, on my home server (which I use for providing workgroup services to the LAN, and for development) I can have separate domains for the internal workgroup services, externally accessible interfaces (so I can contact my home system from afar, and people can VoIP-call in), and development domains running various different operating systems so I can use whatever's appropriate to the task at hand. I'd set all of these machines up with different IPs, and have the host domain do Ethernet bridging between them and the vlans for internal LAN, wifi LAN, or external LAN, depending on the role of the virtual server.
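On the NetBSD dom0 side, that bridging might look something like this; a sketch with invented interface names, using bridge(4) and Xen's per-domain vif setting:

    # One bridge per LAN segment in the host domain:
    ifconfig bridge0 create            # internal workgroup LAN
    brconfig bridge0 add wm0 up        # wm0 = NIC on the internal LAN
    ifconfig bridge1 create            # externally accessible LAN
    brconfig bridge1 add wm1 up        # wm1 = NIC facing the outside

    # Each guest's Xen config file then picks its segment, e.g. for an
    # externally accessible domain:
    #   vif = [ 'bridge=bridge1' ]

The vif-bridge hotplug script (or NetBSD's equivalent of it) adds each domain's backend interface to the named bridge as the domain comes up.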
The downside is, I'm going to need more RAM, since there'll be more kernels and copies of libc and so on lying about consuming space. But not by a terrible amount. And, of course, the fact that I'll then have rather a lot of virtual servers to configure and remember to patch - although that just might motivate me to better automate things...
By @ndy, Tue 9th Jan 2007 @ 8:51 am
Xen has been looking quite cool for a while now. I've never used it tho'. As you mention, it takes more RAM. What's the cost associated with an "intermachine context switch"? Is it the same as a regular context switch?
By Charlee, Tue 9th Jan 2007 @ 9:12 am
Ahha, I know very little about Xen, but I know a man who does.
By alaric, Tue 9th Jan 2007 @ 10:47 am
I think an inter-domain context switch should take time of the same order of magnitude as a 'regular' context switch - after all, it still just involves saving and loading CPU state, much as when switching between two processes. The differences are that guest kernels run in ring 1 rather than ring 0, and that they make hypercalls to Xen (much like syscalls) to access hardware rather than doing it themselves.
By alaric, Tue 9th Jan 2007 @ 4:38 pm
I downloaded a Xen 3 Linux LiveCD and had a play with it last night, which was interesting. Found out after MUCH searching that, when you're connected to a domain's console from the root domain, you have to hit Ctrl+] to disconnect from the console (until I figured that out, I had to shut down the virtual server to get back to the root domain...).
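For anyone else hunting for it, the round trip is (domain name made up; substitute whatever 'xm list' shows):

    xm console mydomain    # attach to a domain's console from the root domain
    # ...log in, poke around...
    # Ctrl+] detaches and drops you back at the root domain's shell.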
I started two domains, then in one of them set off a shell script fork bomb as root, and the other continued to run without batting an eyelid. That's what we like to see.
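For the record, the test was along these lines - a classic shell fork bomb, strictly for disposable domains only:

    #!/bin/sh
    # forkbomb.sh - run as root in a throwaway domain ONLY.
    # Spawns copies of itself until the domain's process table and
    # memory are exhausted; the domain will need a reboot afterwards.
    while :; do sh "$0" & done

The other domain (and the root domain) stayed responsive throughout, since each domain only gets its own memory allocation and its scheduled share of the CPU.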
I think I'll start my Xen adventures by fitting a bunch more RAM into my home server and a bigger hard disk, then Xening it so I have a non-critical playground to get the hang of things like how to partition the disks and set up the networking. Then I'll wait until I'm buying a new frontend server, and set it up to do what love does (from home) but with multiple Xen domains - then swap it out for love (which will be easy, since love's data all comes via NFS/SQL from infatuation); and when that's all working OK, I'll fit the old love with large SATA disks (it has an unused SATA controller on the motherboard) and replace infatuation with it (which will be a slower, more laborious, process since it'll involve taking love and infatuation down then transferring all those gigs of data, although clever use of rsync may make it possible to reduce the downtime period to an hour or less). I'm thinking that the NIS, NFS, PostgreSQL, and MySQL services infatuation provides could run in separate domains, too, for compartmentalisation of failures and ease of upgrade, but it'll mean having to split the disk partitions up carefully.
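The rsync trick would be roughly a two-pass affair, sketched here with invented host and path names: one bulk copy while everything is still up, then a short outage for the final pass:

    # Pass 1: bulk copy while infatuation is still serving (slow, but live).
    rsync -aHx --delete /data/ newbox:/data/

    # Stop the services, then pass 2 transfers only what changed since
    # pass 1, keeping the downtime to the size of the delta.
    rsync -aHx --delete /data/ newbox:/data/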
Which will leave the current infatuation's hardware free - I can then install the latest NetBSD and Xen on it, and have it replace pain, my third server (which runs RFC.net and backup MX/NS/HTTP services for love and infatuation). Pain has an uptime of 1,083 days to date, since the 21st of Jan 2004, which is both amazing (wow! No power outages! No kernel panics!) and also worrying (no OS upgrades!); it'll be a shame to take it down, but it'll also be very good to have it running a new OS and in manageable Xen domains.
Which will then leave me pain's old hardware free. Hmmm. I'll have to think of a use for that.