Virtual machines (by alaric)
But the benefits of virtualising your servers aren't restricted to those who can afford mainframes. One of the goals of cluster computing is to provide all the benefits of a mainframe, but with off-the-shelf hardware. Rather than having special CPU, RAM, disk, and network interface modules joined by a special interconnect so they can be combined into virtual servers, a cluster consists of a number of conventional servers - each with its own CPU, RAM, disk, and network interfaces - running software that lets them share their resources. A transparently distributed database or file system can tie all of those disks into a single logical store, with replication so that the loss of a server is tolerated because copies of its data reside elsewhere. Any given piece of software can then run on any server in the cluster, since it sees the same network resources and the same data everywhere. The loss of a server is tolerated by simply migrating its functions to other servers automatically, and the cluster can be expanded by just adding more servers.
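To make that concrete, here's a minimal sketch in Python of the replication idea - a toy, not any real system's design, with all the names invented for illustration. Each value is written to a fixed number of replica nodes chosen by hashing the key, so when a server dies, a surviving replica can still serve the data:

```python
import hashlib

REPLICATION_FACTOR = 2  # each value lives on this many nodes

class ClusterStore:
    def __init__(self, node_names):
        # Membership is fixed in this toy; each "node" is just a dict,
        # standing in for a server's local disk.
        self.all_names = sorted(node_names)
        self.nodes = {name: {} for name in node_names}  # live nodes only

    def _replicas(self, key):
        # Deterministically pick REPLICATION_FACTOR nodes for this key
        # by hashing it onto the (fixed) node list.
        start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(self.all_names)
        return [self.all_names[(start + i) % len(self.all_names)]
                for i in range(REPLICATION_FACTOR)]

    def put(self, key, value):
        for name in self._replicas(key):
            if name in self.nodes:  # skip replicas that have failed
                self.nodes[name][key] = value

    def get(self, key):
        # Any surviving replica will do.
        for name in self._replicas(key):
            if name in self.nodes and key in self.nodes[name]:
                return self.nodes[name][key]
        raise KeyError(key)

    def fail(self, name):
        # Simulate losing a server outright.
        del self.nodes[name]

store = ClusterStore(["node-a", "node-b", "node-c"])
store.put("config", "listen=8080")
store.fail(store._replicas("config")[0])  # kill one of its replicas
print(store.get("config"))                # still readable from the other
```

A real distributed store also has to re-replicate data after a failure to restore redundancy, and agree on cluster membership - the hard parts this toy skips - but the basic shape is the same: no single server is special, so no single server is irreplaceable.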
Using off-the-shelf servers to form a cluster offers many of the advantages of a mainframe, but there are a few key differences:
- Lots of individual servers will consume more power and space than a mainframe - however, this can be mitigated by using blade servers, which are stripped-down servers sharing a common enclosure and power supply, bringing back the economies of scale
- Off-the-shelf servers, and an off-the-shelf network to connect them, are a lot cheaper than special mainframe modules; for a start, different manufacturers compete to provide equivalent parts
- Mainframe-based virtual servers look just like ordinary dedicated servers to the software running on them and can run existing applications, while clusters generally need the software rewritten to use the distributed database/filesystem. However, this does give clusters a slight efficiency advantage: a cluster-aware application can happily run on thousands of servers at once and get the full benefit of them, while a mainframe with a thousand CPUs all dedicated to a single giant virtual server won't necessarily run an existing application anywhere near a thousand times faster unless that application has been written to take advantage of a large number of CPUs. Also, a cluster may (in addition to running special cluster-aware applications) use virtualisation software like Xen to run several virtual servers on each of the cluster's physical servers - but with the ability to migrate snapshots of the virtual servers between nodes to deal with failures, or onto new nodes to increase capacity (see the sketch after this list)
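Here's the promised sketch of that last point: a toy failover scheduler, again in Python with invented names, that places guests on the least-loaded physical node and re-homes them when a node dies. A real cluster would use Xen's migration support rather than a print statement, but the bookkeeping has the same shape:

```python
class Cluster:
    def __init__(self, node_names):
        self.placement = {name: [] for name in node_names}  # node -> guest VMs

    def _least_loaded(self):
        # Crude load metric: guest count. Real schedulers weigh CPU/RAM.
        return min(self.placement, key=lambda n: len(self.placement[n]))

    def start_guest(self, guest):
        self.placement[self._least_loaded()].append(guest)

    def node_failed(self, name):
        # Re-home every guest that was running on the dead node.
        orphans = self.placement.pop(name)
        for guest in orphans:
            target = self._least_loaded()
            print(f"migrating {guest} -> {target}")
            self.placement[target].append(guest)

    def add_node(self, name):
        # Growing the cluster is just adding another migration target.
        self.placement[name] = []

cluster = Cluster(["host1", "host2"])
for g in ["web", "mail", "db", "dns"]:
    cluster.start_guest(g)
cluster.node_failed("host1")  # its guests move to host2
cluster.add_node("host3")     # new capacity for future placements
print(cluster.placement)
```

Note that this only works because of the shared storage described earlier: a guest's disk image lives in the replicated store, so any surviving node can pick it up and carry on.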
If ARGON takes off as a clustering solution, I may consider merging virtualisation support into the HYDROGEN hardware abstraction layer, so it can host Xen-style paravirtualised machines, with an infrastructure to automatically migrate them around the cluster. That would let the same cluster host cluster-aware ARGON software while also absorbing legacy UNIX servers, which might be cool.