The History of Cloud Computing


We are still at the beginning of this story; new innovations appear every day.

From a Single Computer to the Grid

Two decades ago, computers began to be grouped together to form a single larger machine with greater processing capacity. This technique, known as clustering, is still common in many IT departments: the computers are configured with a special protocol so that each machine can communicate with the others. The goal was to balance the processing load across multiple machines, dividing the work into units and distributing them among the available processors.
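The idea of splitting a workload into units and spreading them across nodes can be sketched in a few lines. This is a hypothetical illustration (none of the names below come from the article), using local worker processes in place of physical cluster machines:

```python
# Sketch of cluster-style load balancing: divide a workload into work
# units and distribute them across workers. A multiprocessing pool
# stands in here for the cluster's separate machines.
from multiprocessing import Pool


def process_unit(unit):
    """Stand-in for real work: square each number in one work unit."""
    return [n * n for n in unit]


def split_into_units(data, unit_size):
    """Divide the full workload into fixed-size work units."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]


if __name__ == "__main__":
    workload = list(range(10))
    units = split_into_units(workload, unit_size=3)
    with Pool(processes=2) as pool:  # two "nodes" in our toy cluster
        results = pool.map(process_unit, units)
    # Gather the partial results from all nodes back into one answer.
    flat = [n for part in results for n in part]
    print(flat)
```

A real cluster adds what this sketch omits: a communication protocol between machines, failure handling, and the data-residency question discussed next.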

To the user, running an application on a cluster is little different from running it on a single CPU. The cluster management software ensures that the greatest available processing capacity is applied when an application or piece of code runs. The key to effective management is the engineering decision of where the data is actually processed, a concern that became known as data residency. Clustered computers were usually physically connected to the magnetic disks that stored the data, so the CPUs could perform input/output (I/O) operations quickly and efficiently.

In the early ’90s, Ian Foster and Carl Kesselman presented the concept of ‘The Grid’, drawing an analogy with the electrical grid, where users could ‘plug in’ and consume a metered service. They reasoned that since most companies do not generate their own electricity, they instead buy it from a third party able to provide a constant supply. They then asked: “Why not apply the same concept to computing resources?”.

If a computer could plug into a grid of computers and pay only for the resources it used, the result would be a more cost-effective solution for companies than buying and managing their own infrastructure. Grid computing builds on the techniques of the clustered computing model: multiple independent clusters appear to act as a single grid because they share the same domain.

The biggest hurdle in migrating from the cluster model to grid computing was data residency, because given the distributed nature of a grid, compute nodes could be anywhere in the world.

Issues of storage management, data migration, and security are key to the success of any proposed grid solution. A toolkit called Globus was created to address these issues, but the available hardware infrastructure had not progressed to the level that true grid computing could achieve.

The Globus Toolkit, developed and maintained by the Globus Alliance and other community organizations for the grid technology model, is open-source software used for building grid systems and applications. The toolkit allowed people to share databases, tools, and other resources securely online across geographical, institutional, and corporate boundaries without sacrificing local autonomy.

Entities related to cloud computing, such as data-center providers, have applied the concept of grid computing in service offerings to organizations that do not want to carry their own infrastructure but still want the capabilities these data centers provide. One of the best-known cloud computing providers in India is ESDS, whose eNlight Cloud platform offers storage over the Internet through a simple service interface that can be used to store and retrieve data sets anytime, anywhere.

