Myths and Misconceptions about Cloud Computing and Hosting

Cloud computing and cloud hosting are becoming more popular by the day, and at the same time, myths and misconceptions about “cloud computing and cloud hosting” are spreading among ordinary end users, many of whom still regard the network primarily as a means of transmitting information.

The “Cloud” is Based Only on “Software”

In theory, it is quite possible to build a cloud out of industry-standard (x86) servers and intelligent software that combines multiple virtual devices. In practice, however, this is far from the whole story. For various reasons, such as maintaining adequate performance (specialized ASICs or dedicated hardware resources), ensuring compatibility (installing drivers for every new x86 platform), or meeting control requirements (HIPAA, PCI-DSS, departmental isolation, etc.), not all system developers have abandoned dedicated hardware resources for specific elements of their data centers. At the same time, it is obvious that virtualizing some components of the computing environment is inevitable.

That is why market leaders produce equipment and software for this purpose, for example the Nexus 1000v, a virtual switch that provides visibility into virtual machine traffic and, with built-in VN-Link technology, supports network mobility, along with a virtual security gateway and vWAAS. In some cases, clients choose a purely virtual appliance; in other cases, they prefer a combination of hardware and software resources, such as running control functions on a Nexus 1010 appliance. All of these unified network services give system designers a standard set of building blocks that span both hardware and software resources.

The “Cloud” and “Flexible Resources”

Many network equipment vendors have not fully embraced the idea of flexible resources. The concept of “flexible resources” should not be restricted to servers and storage systems; it should extend to the entire cloud, including its network elements.

The “Cloud” and “Combined Resources”

This is another concept that many associate only with servers and storage systems. Yet for many years, long before the “cloud”, the network has been a shared resource that delivers services (bandwidth, security, segmentation or isolation, QoS functions, etc.) to separate groups of systems. As networks become more virtual and more automated, and as the number of tenants they serve grows, the companies offering solutions in this market keep extending their products with the intelligence required for modern cloud systems to function. These solutions enable customers to deploy modular virtual systems (for example, Vblock and Cisco SMT).

“Virtualization” Makes the “Cloud Server” More “Flexible”

On the one hand, this is absolutely true: virtualization allows dynamic resource allocation and the rapid provisioning of new VMs from templates and clones. On the other hand, it requires some structural changes; for example, when using vMotion, Live Migration, or XenMotion, the starting and ending points must be in the same logical domain.
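
To make this constraint concrete, here is a minimal sketch of the kind of pre-check an orchestrator could run before a vMotion, Live Migration, or XenMotion style move. The host names, VLAN labels, and helper function are hypothetical illustrations, not part of any vendor API; the sketch simply refuses a migration whose source and destination hosts do not share the same logical Layer 2 domain.

```python
# Hypothetical pre-migration check: the endpoints of a live migration
# must share the same logical Layer 2 domain, otherwise the VM would
# lose its network identity when it lands on the destination host.

from dataclasses import dataclass


@dataclass
class Host:
    name: str
    l2_domain: str  # e.g. a VLAN or logical segment identifier


def can_live_migrate(src: Host, dst: Host) -> bool:
    """Allow the move only when both hosts sit in the same logical domain."""
    return src.l2_domain == dst.l2_domain


if __name__ == "__main__":
    esx01 = Host("esx01", l2_domain="vlan-110")
    esx02 = Host("esx02", l2_domain="vlan-110")
    esx09 = Host("esx09", l2_domain="vlan-220")

    print(can_live_migrate(esx01, esx02))  # True  - same logical domain
    print(can_live_migrate(esx01, esx09))  # False - would break the constraint
```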

A survey of administrators of virtual systems showed that they perform thousands of virtual machine migrations per month. To take full advantage of this flexibility, they need to do away with the traditional tiers of the network hierarchy; in other words, they should be able to build a large, two-tier network free of the persistent problems of deeply hierarchical networks.

It would be even better if this concept covered not just one network but many, which would greatly simplify application mobility and help prevent outages. Just as the network once had to adapt to carrying various kinds of multimedia data in real time, it must now adapt to the changes brought by virtualization and dynamic computing environments.

A “Virtual Machine” is the “Same Server, Only in Software”

That is not quite true. The application and the guest operating system probably will not notice any difference, but network administrators and the security team are far from indifferent. A virtual machine no longer starts and ends at one end of an Ethernet cable. In fact, its traffic may never touch a cable at all if it communicates with other virtual machines on the same host. Or it may move from one host to another several times a day, dragging gigabytes of traffic (between the virtual machine and the storage system) along with it. It is therefore far more sensible to introduce “cloud” functions that register all these migrations, bind virtual machines to specific policies (security, QoS, role-based access, etc.), restrict their movement to certain places, and thereby simplify technical support. Most virtual machine administrators would be glad to have such capabilities, especially now that they are being asked for data on virtual machines, backups, guest operating systems, storage systems, disaster recovery services, and everything else that is in one way or another associated with virtualization.
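
As an illustration only, the following sketch shows the idea of binding security, QoS, and access policies to the virtual machine itself and registering every migration, so that the policy and its audit trail travel with the VM rather than with a physical port. The class names, policy fields, and migration log are assumptions for this sketch, not tied to any particular hypervisor or switch.

```python
# Hypothetical sketch: policies bound to a VM instead of a physical port,
# with every migration registered so the policy and audit trail follow the VM.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class VmPolicy:
    security_zone: str       # e.g. "pci" or "dmz"
    qos_class: str           # e.g. "gold"
    allowed_hosts: set[str]  # where this VM is permitted to run


@dataclass
class VirtualMachine:
    name: str
    policy: VmPolicy
    current_host: str
    migrations: list[tuple[str, str, str]] = field(default_factory=list)

    def migrate(self, target_host: str) -> None:
        """Move the VM only to a host allowed by its policy, and log the move."""
        if target_host not in self.policy.allowed_hosts:
            raise ValueError(f"{self.name}: policy forbids running on {target_host}")
        stamp = datetime.now(timezone.utc).isoformat()
        self.migrations.append((stamp, self.current_host, target_host))
        self.current_host = target_host


if __name__ == "__main__":
    vm = VirtualMachine(
        name="billing-db",
        policy=VmPolicy("pci", "gold", allowed_hosts={"esx01", "esx02"}),
        current_host="esx01",
    )
    vm.migrate("esx02")    # allowed, and recorded in vm.migrations
    # vm.migrate("esx09")  # would raise: outside the policy's allowed hosts
```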

The “Cloud” is a “Planned and Administered Network”

It is quite possible that this is true, but it means that your computing environment is still isolated from your network. You can look at it differently: technologies such as virtualization require not only closer integration of computing and network operations, but also a higher level of automation and control, one in which every autonomous system and functional module is accessible under prescribed policies.

We could continue this list, but we hope you are already beginning to see the big picture. While the “cloud” offers incredible opportunities to end users and applications, it also places certain demands on the network and its security. These requirements can be met effectively with a combination of software and hardware platforms. If you limit your architecture to nothing more than high-bandwidth networks, you will exhaust its resources and room for growth much faster.
