What is Virtualization?
In computing, virtualization is the presentation of a set of computing resources, or a logical grouping of them, in a way that offers some advantage over the original configuration. This virtual view of the resources is not tied to the implementation, geographic location, or physical configuration of the underlying components. That may sound too complicated for the untrained reader, so let's translate it into plain language.
Many vendors develop solutions based on virtualization, I would say practically all of them. After all, logical drives that are simply partitions on a single physical hard drive are also a form of virtualization. So is SMP technology, which presents two or more physical processors to programs as a single virtual one.
Three aspects of virtualization are usually distinguished: server virtualization, presentation virtualization, and application virtualization. Almost all system administrators are more or less familiar with presentation virtualization: the most striking example is Microsoft Windows Terminal Services. Application virtualization is the creation of a special, isolated environment within the operating system for running individual applications; it is a big and interesting topic that deserves a separate long article. From here on, by "virtualization" we will mean server virtualization. Windows Server 2008 added built-in virtualization support (a hypervisor) called Hyper-V. In Windows Server 2008 R2 the hypervisor was substantially reworked and became known as Hyper-V 2.0.
Let's take a closer look at server virtualization. What is it? In plain language, it is the creation of a software-emulated environment that imitates the hardware of a physical computer: CPU, RAM, hard disk, and I/O devices. An operating system (called the guest OS) can then run on this virtual server, along with applications. All of this works like a full server, with one difference: it exists virtually, inside the OS running on the physical server (that OS is called the host OS). Moreover, a single physical server can simultaneously run two or more, sometimes even dozens of, virtual servers.
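The host/guest relationship described above can be modeled as a simple data structure. This is a toy Python sketch with made-up names (`PhysicalHost`, `VirtualMachine`, `start_vm` are illustrations, not any real hypervisor API); real hypervisors are of course far more involved:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    guest_os: str   # the OS running inside the VM (the "guest OS")
    ram_gb: int

@dataclass
class PhysicalHost:
    host_os: str    # the OS on the physical server (the "host OS")
    ram_gb: int
    vms: list = field(default_factory=list)

    def start_vm(self, vm):
        # One host can run many guests, as long as resources allow.
        used = sum(g.ram_gb for g in self.vms)
        if used + vm.ram_gb > self.ram_gb:
            raise RuntimeError("not enough RAM on the host")
        self.vms.append(vm)

host = PhysicalHost(host_os="Windows Server 2008 R2", ram_gb=32)
host.start_vm(VirtualMachine("web01", "Windows Server 2003", ram_gb=4))
host.start_vm(VirtualMachine("db01", "Linux", ram_gb=8))
print(len(host.vms))  # 2
```

The key point the sketch captures is that the guests are ordinary objects living inside the host, sharing its physical resources.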
What might this be useful for? Initially, virtual machines were used only for testing: experimenting with them is much easier, faster, and, most importantly, cheaper than with a real server. I am sure many sysadmins have at some point tried something out in a virtual machine. But virtualization is now used more and more in production. There are substantial reasons for this, although, as with any solution, there are also disadvantages. More on that below.
Advantages and disadvantages of virtualization
The most important point: virtualization allows the hardware resources of servers to be allocated more rationally. Indeed, most servers use at best 10% of their resources: processing power, memory, and so on. Virtualization lets you consolidate several lightly loaded servers onto a single server, which will simply be loaded a little harder. Clearly, one server, even a somewhat more powerful one, will be cheaper than several separate ones.
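The consolidation arithmetic above can be put in a back-of-the-envelope calculation. This hypothetical Python sketch is not part of any product: the 10% utilization figure comes from the text, while the greedy packing and the 20% headroom reserved on each host are assumptions for illustration:

```python
def hosts_needed(guest_utilizations, host_capacity=1.0, headroom=0.2):
    """Estimate how many physical hosts are needed to run the given
    guests, packing greedily and keeping some spare capacity per host."""
    budget = host_capacity - headroom  # usable fraction of one host
    hosts, current = 1, 0.0
    for load in sorted(guest_utilizations, reverse=True):
        if current + load > budget:    # guest does not fit, open a new host
            hosts += 1
            current = 0.0
        current += load
    return hosts

# Eight servers, each running at ~10% utilization, fit on a single host.
print(hosts_needed([0.10] * 8))  # -> 1
```

Even this crude model shows why consolidation pays: eight barely-loaded physical boxes collapse into one host running at 80%.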
It is likewise logical to assume that one server will consume far less power and occupy less space in the rack (or on the desk, or under it: to each his own). Another very important advantage is ease of administration. Every administrator has at some point had to walk over to a server and perform some manipulation directly at its console, for example when it went down. Virtualization lets you access the consoles of virtual servers right from the administrator's workstation, all but eliminating trips to the server room. In addition, backup and disaster recovery of servers are greatly simplified. Every administrator knows how difficult it is to make a working backup of a server's system partition: it often requires buying additional software and, in some cases, restarting the server. With virtualization you can back up a virtual server's disk on the fly, transparently to users, and recovery is reduced to simply copying a few files.
But, unfortunately, every stick has two ends, and alongside all the advantages, virtualization-based solutions have a significant drawback: reduced overall system reliability. Indeed, since multiple virtual machines run simultaneously on the same physical server, a failure of that server (say, a burned-out processor or RAID controller) brings down all the virtual machines running on it at once, and with them all the services they provide. Therefore, virtualization-based solutions are best combined with fault-tolerant solutions, in particular those based on fault-tolerant clusters. More on this question below.
There is one more drawback, specific to virtualization on Windows Server 2008: the hardware requirements include a 64-bit processor with hardware-assisted virtualization and DEP (Data Execution Prevention). So many older servers with 32-bit processors simply will not do. Nevertheless, it is currently difficult to buy a server that does not meet Hyper-V's requirements, because servers with older processor models have recently been taken out of production by all major manufacturers.
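Whether a processor meets these prerequisites can be read off its advertised feature flags. Below is a minimal, hypothetical Python sketch that inspects a space-separated flags string of the kind exposed in Linux's /proc/cpuinfo; the flag names vmx (Intel VT), svm (AMD-V), nx (DEP/No-eXecute), and lm (64-bit long mode) are real x86 feature flags, but the helper itself is an illustration, not part of Hyper-V or any Microsoft tool:

```python
def supports_hyper_v(cpu_flags):
    """Rough check of Hyper-V's CPU prerequisites from a space-separated
    feature-flag string (the format used in /proc/cpuinfo on Linux)."""
    flags = set(cpu_flags.split())
    is_64bit = "lm" in flags                    # x86-64 "long mode"
    has_hv = "vmx" in flags or "svm" in flags   # Intel VT-x or AMD-V
    has_dep = "nx" in flags                     # No-eXecute bit (DEP)
    return is_64bit and has_hv and has_dep

# A modern 64-bit CPU with VT-x and NX passes; an old 32-bit CPU does not.
print(supports_hyper_v("fpu lm vmx nx sse2"))  # True
print(supports_hyper_v("fpu sse mmx"))         # False
```

On Windows itself, the same capabilities are reported by tools such as Sysinternals Coreinfo rather than /proc/cpuinfo.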