It is difficult to imagine work in a modern organization without a server. Without one, data is scattered across different computers and, even on a network, is hard to pull together when it is needed for a job. And what about a database that is in daily use? That is impossible to run properly without a server.
Previously, many organizations saved money by using an ordinary computer as a server. Such machines did not meet the basic requirements for servers, but this was ignored for the sake of economy. As a result, when the computer failed, information was lost, and the loss cost far more than a dedicated server would have. And if the capacity of the system had to be increased, a new computer had to be bought, so once again the savings turned out to be doubtful.
Even when performing the simplest office server functions, such as mail server, file/print server, shared Internet access, applications for small workgroups, or a server for small databases, a personal computer is subjected to stress it was never designed for. Running around the clock without ever powering down, a mode it is not intended for, physically wears out all the parts of the machine, and it can be effectively destroyed in one or two years. Its speed is often a serious impediment to business. And the moment the personal computer dies, all the activities of the firm freeze with it: the entire database, all the contacts, everything that was on the server. For the company this is nothing short of a small disaster.
To solve this problem, a specialized computer, a server, designed for exactly this purpose can be used. But first it is important to understand how a server differs from a personal computer.
First of all (and this is probably the most important point), a server differs greatly in reliability. Server failures come in two forms: planned, when technical work is carried out (maintenance, upgrades), and unplanned, when the server simply goes down. In the first case the server is shut down intentionally, at the most convenient time, and in such cases it causes virtually no inconvenience to users.
The second case is the real subject of this conversation. Most components of a server can be replaced while it is running, so if a single element fails, the person looking after the machine can swap out the dead part on the spot. The parts that fail most often are fans and power supplies (if they are not backed by redundancy), followed by hard drives and expansion cards. Handled this way, there is no downtime at all. In other words, unplanned downtime can occur only through the fault of the IT specialist, if he did not find the problem in time and did not eliminate it before one or more hard drives or other parts failed. Moreover, there are still companies where management sees nothing wrong with the server doubling as a programmer's workstation. This leads to additional installed software conflicting with the system and causing disruptions.
In addition, the server may stop working if it is subjected to a deliberate remote attack, which can be launched either from the local area network or from the Internet. Therefore, you must take care of server security.
In recent years databases have developed greatly, changing the formats for storing and processing data, and database servers have appeared. File servers are now used only for storing user files and archives.
Increasingly, networks are now built around terminal servers. This arrangement saves money on user workstations, since all software and files are located on the server and the terminal can be a fairly simple machine. In addition, significant savings are achieved on installing and maintaining software. For serious work this matters, and purchased servers are maintained far more carefully.
Thus, by buying a server and using it for its intended purpose rather than setting up an ordinary personal computer, a company invests in its future: the investment pays for itself through increased productivity, time saved on maintenance, and improved stability.
The server includes:
Hardware. It is hosted in large office buildings or data centers, where racks of dedicated servers stand in climate-controlled rooms. Most servers can boast of being in working order 99% of the time or more. This impressive figure is one more reason to use a hosted server. Remember how often you had to reboot your own computer?
Software (operating system). There are different operating systems on the market. Your server may well run open source software such as Linux, often with the Apache web server. Dedicated servers must be protected, and open source applications in most cases contain fewer bugs and are more secure than their counterparts from Microsoft.
These servers provide an interface for accessing and configuring them, the so-called control panel. Some panels are more complex than others, so depending on your knowledge and skills in the technical field, you should take this into account when choosing a host.
To send a page to the server you need to upload it using the file transfer protocol (FTP). FTP is a client (for you) to server application. If you have created your web page on your own desktop, then obviously, to open the page for general use, you must upload it to your server. The FTP protocol is also used to exchange files between your computer and others.
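As a minimal sketch, a single page can be uploaded over FTP from the command line with curl; the host name, directory, and credentials below are hypothetical placeholders for whatever your hosting provider assigns you:
# upload index.html into the web directory on the hosting server
curl -T index.html ftp://ftp.example.com/public_html/ --user username:password
Graphical FTP clients perform the same transfer behind a friendlier interface.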
I have tried to explain in a very simplified form what a hosting server is, and I hope you now understand the basic idea of how such a system is organized.
Before starting, I'll tell you what we can do with the screen command.
As a Linux sysadmin, you will very often need to run several commands at once. You will probably open several ssh sessions for this; however, there is a better, safer, and recommended way.
Another need that arises frequently is running a command or script that takes hours to complete, such as an rsync. This normally requires keeping the ssh connection open, because if you close it, the script or command dies with it, which is a problem if your Internet connection gets interrupted.
Given these problems, let me introduce you to the screen command.
Screen makes it possible to run several pseudo-terminals, to manipulate and save your screen input and output, and to copy and paste between windows.
I'll show you how to run a command that takes hours inside a single screen session.
We will first install screen using yum, apt-get, or whatever package manager you have available. In our example we will use yum.
yum install screen
Once screen is installed, simply run:
screen
This command will start a new window within screen for you. You also have the option of launching the window with a name (the -S parameter). Example:
screen -S gracious1
In this case, the window will be called "gracious1".
A screen window works like any other ssh session. You can now enter any command and it will run normally.
If you want to exit the screen session while leaving it active, keeping the command running, do:
Ctrl-a d (press Ctrl and a together, release, and then press d)
Then, when you want to return to your screen window, type:
screen -r
Other options you have while inside screen are:
1. To create a new window inside the session: Ctrl-a c
2. To switch between windows: Ctrl-a n (next window) and Ctrl-a p (previous window)
There are other possibilities as well. For more information and help, you can run in your terminal:
man screen
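Putting the pieces together, here is a minimal sketch of the long rsync scenario mentioned earlier; the session name and the source and destination paths are hypothetical:
# start a named session for the transfer
screen -S backup
# inside the session, run the long command
rsync -av /var/www/ backupserver:/backups/www/
# detach with Ctrl-a d; rsync keeps running even if your connection drops
# later, list the sessions and reattach
screen -ls
screen -r backup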
The advantages of server virtualization can be so tempting that it is easy to overlook this method's limitations. Before attempting to design a solution, the administrator should get acquainted with the peculiarities of the various server virtualization methods, as well as with the architecture and needs of his own network.
For servers running applications with high demands on computing power, virtualization is contraindicated. The reason is that virtualization divides the server's processing power among the virtual servers. If the computing power left to an application is not enough for it to function normally, all its processes slow down: a task whose run time was never an issue before can now take hours. Perhaps even worse, when the server cannot meet the processing requirements placed on it, there is a risk of system failure. Before dividing a physical server into multiple virtual machines, network administrators should pay particular attention to the available CPU processing power.
It would also be unwise to overload the server's CPU by creating an excessively large number of virtual servers on one machine. The more virtual machines a physical server must maintain, the less processing power each virtual server gets. Also keep in mind that physical servers have limited disk space; too many virtual servers can exhaust a server's ability to store the required amount of data.
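Before carving a physical machine into virtual servers, it is worth checking its spare capacity from the shell; a minimal sketch with standard Linux tools (these commands only inspect the host and change nothing):
# number of CPU cores available to share among guests
nproc
# total and free memory
free -h
# remaining disk space for guest images
df -h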
Another limitation concerns migration. At present, migrating a virtual server from one physical machine to another is possible only if both physical machines use processors from the same manufacturer. If your network has one server with an Intel processor and another with an AMD processor, it is impossible to move a virtual server from one machine to the other.
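On a Linux host you can check the processor manufacturer before planning a migration; a small sketch:
# prints GenuineIntel on Intel machines and AuthenticAMD on AMD machines
grep -m1 vendor_id /proc/cpuinfo
If the two hosts print different vendors, this migration limitation applies.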
Why would an administrator be interested in migrating a virtual server in the first place? When a physical server needs servicing, moving its virtual servers to other machines can reduce application downtime. If migration is not possible, all applications running on the virtual servers hosted on that physical machine will be unavailable for the duration of the maintenance.
Many companies have invested in virtualization in spite of its limitations. As server virtualization technology progresses, the need for large data centers will decrease. Together with reduced power consumption and heat dissipation, this makes virtual servers attractive not only financially but also in terms of environmental protection. If network servers were used at a level close to their potential, we could expect the appearance of larger, more efficient computer networks. Without exaggeration, one can argue that virtual servers could cause an entire revolution in the computer industry. We can only wait and watch this process develop.
There are three ways to create virtual servers: full virtualization, para-virtualization, and OS-level virtualization. They have little in common beyond the basic terminology: the physical server is called the host, and the virtual servers are called guests. The virtual servers behave in the same way as physical machines, each using its own resources, but each approach allocates the physical server's resources to the virtual servers in its own way.
Full virtualization uses special software called a hypervisor. The hypervisor interacts directly with the physical server's CPU and disk space and serves as a platform for the virtual servers' operating systems. The hypervisor keeps each virtual server completely independent of, and unaware of, the other virtual servers running on the same physical machine. Each guest server runs its own operating system; it may even happen that one guest server runs Linux and another runs Windows.
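As a concrete illustration, here is a minimal sketch using KVM/QEMU, one widely used full-virtualization hypervisor on Linux, chosen here purely as an example and not named in the text; the disk image name, memory size, and installer ISO are hypothetical:
# create a 20 GB virtual disk for the guest
qemu-img create -f qcow2 guest1.qcow2 20G
# boot the guest with 2 GB of RAM and 2 virtual CPUs from an installer ISO
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda guest1.qcow2 -cdrom install.iso
Each guest started this way carries its own complete operating system, which is exactly why one guest can run Linux while another runs Windows.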
The hypervisor monitors the physical server's resources and, as the virtual servers' applications run, distributes the physical machine's resources among them. The hypervisor itself needs processing capacity for this, which means that part of the computing power of the physical server and related resources must be reserved for the hypervisor program. This can have a negative impact on overall server performance and slow down applications.
Para-virtualization takes a slightly different approach. Unlike full virtualization, the guest systems under para-virtualization are aware of each other's presence. A para-virtualization hypervisor does not require large computational resources to manage the guest operating systems, since every operating system receives information about the needs of the other operating systems hosted on the same physical server. The whole system functions as a unified whole.
The method that uses virtualization at the OS level generally does not involve a hypervisor at all. Instead, the virtualization capability is part of the host operating system, which performs all the functions a hypervisor performs under full virtualization. The biggest limitation of this approach is that all guest servers must run the same operating system. Each virtual server remains independent of all the others, but you cannot mix different operating systems among them. Since all the guest operating systems must be the same, such an environment is called homogeneous.
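For contrast, here is a sketch of OS-level virtualization with LXC, which again is not mentioned in the text and is used only as an illustration; the container name and the distribution arguments are placeholders:
# create a container that shares the host's Linux kernel
lxc-create -n guest1 -t download -- -d ubuntu -r focal -a amd64
# start the container and get a shell inside it
lxc-start -n guest1
lxc-attach -n guest1
Because the container reuses the host's kernel, every guest created this way necessarily runs the same operating system as the host, which is the homogeneity limitation described above.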
Which method is best? That depends primarily on the network administrator's needs. If all of the administrator's physical servers run the same operating system, OS-level virtualization may work best; compared with the other methods, it is faster and more efficient. On the other hand, if the administrator runs dedicated hosting servers with several different operating systems, para-virtualization may be the better choice. One potential drawback of para-virtualization is limited software support: the method is relatively new, and few companies offer programs that implement it. Many companies use full virtualization today, but interest in para-virtualization is growing, and eventually the latter could displace full virtualization.