In computing, a server is a computer system that provides centralized services to a network of computers. These services can be varied in nature, e.g., serving websites, applications, files, and mail. Computers that access the services of a server are called clients. Networks built around servers follow the client-server model, used in medium and large networks (and in networks where security plays a major role). The term is widely applied to complete computers, but a server may also be a piece of software, part of a computer system, or even a machine that is not necessarily a computer.
The history of servers is naturally tied to computer networks. Networks allow communication between different computers, and as they grew, so did the idea of dedicating some computers to providing a service to the network while others consumed those services. Servers took on the first role.
With the advent and growth of networks, the need for mainframes diminished, which eventually contributed to their decline. The growth of corporate networking and the spread of the Internet among professionals and ordinary users were the major impetus for the development and improvement of server technologies.
Clients and servers communicate through protocols, as do any two or more networked computers.
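At its core, a protocol is just an agreed-upon message format that both sides understand. A minimal sketch, using a connected socket pair to stand in for a client-server connection (the one-line "TIME?" command format is invented here purely for illustration):

```python
# Minimal sketch of a request-response protocol: the "protocol" is a
# one-line ASCII command terminated by a newline (an illustrative format,
# not a real standard).
import socket

client, server = socket.socketpair()  # a connected pair on one machine

client.sendall(b"TIME?\n")            # client sends a request

request = server.recv(1024)
if request == b"TIME?\n":
    server.sendall(b"12:00\n")        # server answers in the agreed format

response = client.recv(1024)
print(response.decode().strip())      # prints: 12:00

client.close()
server.close()
```

Real protocols such as HTTP or SMTP work the same way in principle, only with richer message formats.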
A single computer can provide more than one service simultaneously. A network may contain a computer that acts as both a web server and a database server, for example, or one that acts as a file, mail, and proxy server at the same time. A computer that provides only a single type of service is called a dedicated server; dedicated servers have the advantage of being able to respond to client requests more efficiently.
With the exception of the database server (a type of application server), most servers only store information, leaving the processing of that information to the client. In application servers, the roles are reversed: the client receives the result of data processing performed on the server machine.
Hardware and Software of a Server
Dedicated servers that handle a high volume of data per client or run critical applications use server-specific hardware. Servers without such performance demands can use the hardware of an ordinary computer.
For starters, many server workloads are dominated by the input and output of information (mostly file writes and deletions), which calls for input/output interfaces and hard drives with high performance and reliability. The most widely used hard drive standard has been SCSI, which allows multiple devices to be interconnected, often arranged in RAID arrays.
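One reason RAID arrays improve reliability is parity: in RAID 5, for instance, a parity block (the XOR of the data blocks) lets the array reconstruct any single failed drive. A minimal sketch of that arithmetic:

```python
# RAID 5-style parity sketch: XOR the data blocks to produce a parity
# block; XOR the survivors with the parity to rebuild a lost block.
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data "drives"
parity = xor_blocks(data)            # stored on a fourth drive

# Simulate losing drive 1: rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])            # prints: True
```

Real RAID controllers do this at the block-device level in hardware or in the kernel, but the underlying XOR property is the same.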
Because they handle so much input and output of information, servers require high-speed processors; some servers are multiprocessed, i.e., they have more than one processor. Servers also have a large amount of RAM, which is commonly used for caching data.
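Caching in RAM means keeping recently used results close at hand and evicting the least recently used ones when memory runs out. A minimal sketch of an LRU cache, using only the Python standard library (the capacity of 2 and the file paths are illustrative):

```python
# Minimal LRU (least-recently-used) cache sketch: a server might use
# something like this to keep hot files in RAM instead of re-reading disk.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("/index.html", b"<html>...</html>")
cache.put("/logo.png", b"\x89PNG...")
cache.get("/index.html")                    # touch: now most recently used
cache.put("/style.css", b"body{}")          # evicts /logo.png
print(cache.get("/logo.png"))               # prints: None (evicted)
```

Production servers use far more sophisticated cache hierarchies, but the eviction idea is the same.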
Having to operate for long periods (often without interruption), some servers are connected to electrical generators. Others use backup power systems (for example, a UPS) that continue to feed the server if there is a voltage drop.
Also, because they operate for long periods with one or more high-speed processors, servers need efficient heat dissipation, which means coolers that are more expensive and noisier but offer greater efficiency and reliability.
There is other server-specific hardware as well, notably boards that support hot swapping, which allows components to be exchanged while the computer is running, something critical for keeping the network in operation.
There is much discussion about whether to use an ordinary personal computer (PC) as a server or to purchase more robust, purpose-built server equipment. The answer depends on how the equipment will be used and on the "criticality" of the service the server is running. In a non-critical environment, an ordinary computer can be used as a server. Note that network size does not determine criticality. For example, a company with three online instructors may have three computers and one server providing Internet access; if that server fails, the company's business stops.
Anticipating this kind of need, manufacturers of computer components develop more robust boards, applying more elaborate ventilation engineering, redundancy, and expanded expandability, so that servers can guarantee the availability and reliability of the service.
Server development is usually concentrated in the large market manufacturers, which maintain dedicated teams and laboratories for this purpose.
To run a client-server network, an operating system that supports this type of network must be installed on the server. Windows 95, Windows 98, and Windows ME recognize only peer-to-peer (point-to-point) networks, and the DOS operating system does not support networking at all.
Among server operating systems, Unix-based systems (Linux and Solaris) are the most used for applications such as firewalls and web servers, while Windows systems are preferred for user management services (Active Directory) because of their ease of operation and maintenance.
The Internet, the largest computer network in the world, uses the client-server model. Many servers around the world are interconnected and process information simultaneously.
Some services offered by Internet servers are web pages, email, file transfer, remote access, instant messaging, and others. It is interesting to note that almost any action taken by a user involves the work of many servers around the world.
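The client-server exchange behind a web page can be demonstrated on a single machine. A minimal sketch using only the Python standard library, where the same computer plays both roles (the response text and use of port 0, which lets the OS pick a free port, are illustrative choices):

```python
# One computer acting as both web server and client, on localhost.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0: OS picks one
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side: fetch the page over HTTP, just as a browser would.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    reply = resp.read()

server.shutdown()
print(reply.decode())  # prints: hello from the server
```

On the real Internet the client and server are different machines, often on different continents, but the request-response exchange is the same.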
Companies need to adopt a set of four best practices aimed at ensuring the disaster recovery and availability of their technologies.
Technology supports disaster recovery policies, but it is not a solution by itself. Managers need to keep in mind that the strategy for handling this kind of problem should be a continuous process, one that keeps the initiatives constantly updated and is tested on a regular basis.
Furthermore, it is important to periodically update the business impact analysis and risk assessment as part of a specific disaster recovery strategy. This proactive approach, besides preparing the company for possible problems, tends to save organizations substantial money in the long term. Companies that adopt this model can replace the term "disaster recovery" with "continuity of IT services."
In any case, companies become more dependent on IT every day, and in return, business managers demand a higher level of availability from equipment and systems. This forces technology teams to reevaluate their strategies, both for the continuity of IT services and for improving service to users in the organization.
In addition, the CIO needs to balance another concern: maintaining or reducing costs. Any new investment in availability must be justified, with systems separated according to their criticality to the business.
Below, I list the best practices to ensure the continuity of IT services:
1. Rank the systems according to their criticality
2. Develop classes of services that ensure both the availability and continuity
3. Assess the availability of services from the perspective of the user
4. Include considerations of availability and continuity in the application development and testing phases
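Ranking systems by criticality usually translates into availability targets ("nines"), and each extra nine shrinks the allowed downtime dramatically. A quick sketch of the arithmetic (the specific percentage tiers shown are common examples, not a standard):

```python
# Map an availability target ("nines") to allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (non-leap year)

def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime permitted per year at a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    mins = downtime_minutes_per_year(pct)
    print(f"{pct}% uptime -> about {mins:.0f} minutes of downtime per year")
```

Running this shows why service classes matter: 99% allows roughly 5,256 minutes (about 3.6 days) of downtime a year, while 99.99% allows only about 53 minutes.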