Virtual machines (by alaric)
Once upon a time, computers were generally mainframes - mainly because we hadn't yet learnt to make small computers.
As technology progressed, computers became smaller, and more ubiquitous.
However, at the same time, the role of the network became more and more important. At first, the model du jour was that there'd be a PC on every desk, and, as a bit of a hack, networks were designed so that you could share files between the PCs. But each file still sat on one machine, and the others just accessed it over the network.
As the number of PCs in an organisation grew, it became apparent that this sucked; it was hard to find the files you needed, and when a computer broke, the files on it were lost. So the workgroup server was born; all the important files in a given office would be placed on a centralised shared server, which could then be easily backed up, and it was cost effective to spend a bit extra giving it RAID disks to make it more reliable in the first place.
But as the software industry slowly learnt to make more and more use of the network, there was an increasing desire for application servers, web servers, mail servers, directory servers, and other such centralised systems, providing their services to the network of PCs. After a while, managing all those servers became more and more of a headache.
So we're starting to see an interesting shift back to the mainframe. It's becoming more and more cost effective for an organisation to abandon its rooms full of servers, and instead get a single very large computer that does the job of many, centralising the maintenance workload and benefitting from economies of scale to get higher overall reliability and lower costs.
A key technology driving this is the virtual machine. The difference between a mainframe (lots of CPUs, lots of RAM, and lots of disks, connected by an internal network) and a room full of servers (lots of CPUs, each connected to their own RAM and disks, then connected to each other by a network) is that the mainframe can group any number of CPUs, any amount of RAM, and any amount of disk space together into a virtual server, isolated from the resources of other virtual servers. With a room full of servers, when a CPU breaks, the server containing that CPU is down until the CPU is replaced - even though there's idle CPU time going spare on adjacent servers; but when a CPU in a mainframe breaks, it is disconnected and an idle CPU is patched in to replace it, with little or no disruption. And whereas adding more RAM to a real server involves switching it off, new RAM (or disks or CPUs) can be added to a mainframe while it's running, then made available to virtual servers.
In other words, the virtual server isn't tied to a particular set of hardware resources the way a real server is. It can be moved around between the available hardware without disruption, and this agility allows for more efficient allocation of resources (rather than overspeccing each server to allow for growth, one can just slightly overspec the mainframe, and add the idle resources to whichever virtual server needs them first - then add more resources to the mainframe to make sure there's enough spare capacity for future growth). It also allows for fast reaction to hardware failures. And above a certain size it's even cheaper, particularly when you include power consumption, server room space, and administrative manpower.
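To make that contrast concrete, here's a minimal sketch in Python of a shared resource pool being carved up into isolated virtual servers. It's purely illustrative: the ResourcePool class and its methods are invented for this post, not the interface of any real mainframe or hypervisor, but they capture the idea of patching in a spare CPU and hot-adding RAM without taking a virtual server down.

```python
# Illustrative model only - the names here are hypothetical, not a real
# management API. Real systems expose equivalents through their own tools.

class ResourcePool:
    def __init__(self, cpus, ram_gb, disk_gb):
        self.free_cpus = set(range(cpus))   # physical CPU ids not yet assigned
        self.free_ram_gb = ram_gb
        self.free_disk_gb = disk_gb
        self.servers = {}                   # virtual server name -> allocation

    def create_server(self, name, cpus, ram_gb, disk_gb):
        """Carve an isolated virtual server out of the shared pool."""
        if (len(self.free_cpus) < cpus or self.free_ram_gb < ram_gb
                or self.free_disk_gb < disk_gb):
            raise RuntimeError("not enough spare capacity in the pool")
        assigned = {self.free_cpus.pop() for _ in range(cpus)}
        self.free_ram_gb -= ram_gb
        self.free_disk_gb -= disk_gb
        self.servers[name] = {"cpus": assigned, "ram_gb": ram_gb, "disk_gb": disk_gb}

    def replace_failed_cpu(self, name, failed_cpu):
        """Swap a broken CPU for an idle spare without stopping the server."""
        if not self.free_cpus:
            raise RuntimeError("no spare CPUs left in the pool")
        server = self.servers[name]
        server["cpus"].discard(failed_cpu)          # detach the broken CPU
        server["cpus"].add(self.free_cpus.pop())    # patch in an idle spare

    def grow(self, name, extra_ram_gb):
        """Hand a running virtual server more RAM from the shared pool."""
        if self.free_ram_gb < extra_ram_gb:
            raise RuntimeError("add more RAM to the chassis first")
        self.free_ram_gb -= extra_ram_gb
        self.servers[name]["ram_gb"] += extra_ram_gb


pool = ResourcePool(cpus=64, ram_gb=512, disk_gb=10_000)
pool.create_server("mail", cpus=4, ram_gb=16, disk_gb=500)
pool.create_server("web", cpus=8, ram_gb=32, disk_gb=200)
pool.replace_failed_cpu("mail", failed_cpu=2)   # the mail server never goes down
pool.grow("web", extra_ram_gb=16)               # RAM hot-added from the spare pool
```

The point of the sketch is simply that the pool, not any individual virtual server, owns the hardware: failures and upgrades are handled by shuffling the mapping, which is exactly what a room full of discrete servers can't do.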
IBM have started a new line of business - renting virtual servers on their mainframes. Under their Linux Virtual Services offering, they will rent you any number of virtual Linux servers, joined to each other and to the Internet by a virtual router you can configure yourself. They just charge you monthly for CPU/RAM/disk/bandwidth usage. It's not very cheap compared to getting a server from Rackspace, but it offers the other benefits of a mainframe-hosted virtual server: extreme reliability, and on-the-fly expandability.