Setting up a data center is a complicated and lengthy undertaking. Would standardizing the process help to reduce the risk of unreliable equipment and eliminate the need for additional spending on correcting defects?
According to research, the average volume of user data per company is 120 terabytes. In turn, IDC estimates that last year some 35 billion messages were sent worldwide every hour. Dividing one figure by the other suggests that an average message carries roughly 3.4 KB of information. Of course, most e-mails are not accompanied by voluminous reports, but a fair number of attachments contain extracts from corporate databases or presentations with graphic slides and video, and such an attachment can run to tens of megabytes. Preparing those reports requires powerful computing resources, and the databases themselves are handled by dozens of different applications covering every aspect of the company's operations.
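Taking the article's own numbers at face value, the per-message figure follows from simple division; a quick sketch:

```python
# Back-of-the-envelope check of the figures quoted above
# (the article's own numbers, decimal units).
TOTAL_DATA_BYTES = 120e12    # 120 terabytes
MESSAGES_PER_HOUR = 35e9     # 35 billion messages

avg_message_bytes = TOTAL_DATA_BYTES / MESSAGES_PER_HOUR
print(f"~{avg_message_bytes / 1e3:.1f} KB per message")  # ~3.4 KB
```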
The intensive data streams circulating in the information systems of many companies require a particular organization of the IT infrastructure. It must adapt to changing business requirements and, in particular, ensure a steady increase in the performance of deployed solutions while maximizing the efficiency of their operation.
Concentration of resources
One solution to this problem is the concentration of computing resources and the distribution of their functions among applications, an approach known as "virtualization". Resource virtualization has revived interest in the concept of centralized computing, and the result of implementing this concept is the data processing center (DPC): a group of premises whose main function is to house processing and storage equipment.
Data center solutions are offered by all the leading manufacturers of computer and communications equipment, as well as by vendors of the software that runs and manages information systems. Typically, these solutions incorporate the companies' latest developments. Within this centralization of computing resources, special attention should be paid to the IT infrastructure.
The need for a universal and productive infrastructure for data centers can be illustrated by an analogy with the automotive industry.
The first cars repeated the design of horse-drawn carriages; only after some time were universal principles for the layout of components worked out, and these have remained unchanged since. On the basis of these principles, car design was continuously improved, the goal being to create better-functioning and more economical models.
At the same time, a vehicle consisting of a cart fitted with the most modern car engine would obviously not meet today's performance and economic requirements. The same is true of data centers: the approaches used to build the infrastructure of the computing centers of the sixties and seventies, or even of server rooms, do not suit the equipment of a modern data center.
Data center standards adhere to the basic principle of constructing the cable infrastructure as a "hierarchical star" topology
Thus, there was a need to develop a special standard defining the design and construction of data center infrastructure. The modern concept of the DPC is based on defining its functionality and the services implemented by the information system within the enterprise, or the services provided to users.
According to their purpose, data centers can be divided into two groups. The first comprises private (or corporate) data centers, which operate exclusively within a particular company. The second group comprises shared data centers, often owned by Internet service providers and used to deliver services such as web hosting, colocation, application rental, e-commerce deployment, etc.
Both kinds of centers are, first of all, premises in which computers, network equipment and storage systems are installed and in which conditions are created for the continuous and reliable operation of this equipment. The obligatory prerequisites for the normal functioning of a DPC include uninterrupted power supply, temperature control, and network connections both within the data center and to the networks of telecommunications service providers.
Currently, there are two standards defining the principles of data center infrastructure: the American TIA/EIA-942 and the European EN 50173-5. Both contain many similar provisions, but the scope of the American standard is much broader, since it covers more than the organization of cabling alone.
The number of the European standard indicates that it belongs to the family of cabling standards, while the number after the hyphen indicates the application area. Thus EN 50173-5 is a European standard that largely defines cabling solutions for data centers.
Standard TIA/EIA-942 considers several options for building the cable infrastructure
The American standard considers the structure as a whole: it contains not only general guidance on organizing the cable infrastructure, installation, mounting hardware and cable-routing pathways, but also covers network design, access provision, rules for siting the data center, the architectural features of buildings, power, lighting, climate control to ensure smooth equipment operation, fire safety and protection from moisture.
An important component of the standard is the requirement to ensure the high operational readiness of data center equipment, which must service requests from a large number of users. Given this broad scope of infrastructure issues, the discussion below focuses primarily on the American standard.
Approved by the U.S. standards bodies in April 2005, TIA/EIA-942 "Telecommunications Infrastructure Standard for Data Centers" defines the requirements and basic rules for the design and implementation of data centers and server rooms.
The standard's "starting point" is the beginning of design work, prior to construction or reconstruction. Only at this stage can all the architectural features of the data center premises be fully taken into account and the coordination of all technical systems ensured. The standard should therefore be used first of all by designers, who have to align the building's architecture, its technical systems and the cabling infrastructure with the operation of a large amount of densely packed computer equipment.
List of main components
Standard TIA/EIA-942 mandates the provision of specialized facilities and the organization of work areas: in particular, space for terminating external telecommunications systems (Entrance Room), for computer hardware (Computer Room), for telecommunications equipment (Telecommunications Room), and for engineering systems such as electrical plant and industrial air-conditioning and ventilation.
To monitor and manage the data center (especially one that is mission-critical), a Network Operations Center (NOC) is organized. Its function is to identify faults and take action to prevent consequences such as downtime of computer equipment. The NOC houses equipment that monitors the thermal regime, detects and tracks equipment malfunctions, and supports subsequent diagnosis of the modules and blocks that have failed.
In addition to space for computer equipment, a data center building may also house offices and support services, such as customer service centers or data-entry training facilities. These areas include the horizontal-wiring distribution points that serve the office and support spaces.
Within the computer room, the standard defines the Main Distribution Area (MDA), the Horizontal Distribution Area (HDA), the Zone Distribution Area (ZDA) and the Equipment Distribution Area (EDA).
The Entrance Room (ER), MDA, HDA, ZDA and EDA broadly correspond to, but are not identical with, the premises and areas specified in standard TIA/EIA-568-B.1 (Entrance Facility, Equipment Room, Telecom Room, Consolidation Point and Work Area). The European standard EN 50173-5 uses different names for the elements of the cable infrastructure: external telecommunications services are connected at an External Network Interface (ENI), which is linked to the Main Distributor (MD) through the network access cabling subsystem. In the zone subsystem, wiring runs to Equipment Outlets (EO), either directly or through Local Distribution Points (LDP); the LDP is an optional element.
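The rough correspondence between the two vocabularies can be summarized in a small lookup table (an orientation aid only; as noted above, the elements are similar but not one-to-one):

```python
# Approximate terminology map between TIA/EIA-942 areas and
# EN 50173-5 cabling elements (illustrative simplification only;
# the standards' elements correspond loosely, not exactly).
TIA_TO_EN = {
    "Entrance Room (ER)":            "External Network Interface (ENI)",
    "Main Distribution Area (MDA)":  "Main Distributor (MD)",
    "Zone Distribution Area (ZDA)":  "Local Distribution Point (LDP, optional)",
    "Equipment Distribution Area (EDA)": "Equipment Outlet (EO)",
}

for tia, en in TIA_TO_EN.items():
    print(f"{tia:36} -> {en}")
```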
The location of these premises and areas is determined by the size of the data center, as well as by the need to install additional equipment and migrate to more advanced communication technologies.
The entrance room houses the interfaces that connect the data center's structured cabling with campus cable routes and with the cabling of telecommunications service providers. The standard recommends a separate room for security reasons, but combining it with the computer room is allowed. In general, the equipment that terminates external telecommunications services is consolidated in the MDA.
A data center may have several entrance rooms, which makes it possible to observe line-length restrictions and to accommodate different service providers. The Main Distribution Area (MDA) is the location of the main cross-connect of the data center cabling. The MDA is the most suitable place to install the core routers and switches of the data center's local area network and storage network. Distribution points of the horizontal wiring that serve equipment in the immediate vicinity of the MDA can also be integrated into this area.
Horizontal distribution areas accommodate the distribution points of the horizontal subsystem, whose cable lines run to the equipment distribution areas. The HDA is therefore regarded as the location of LAN and storage network switches, as well as KVM switches (which allow multiple servers to be controlled through a single "keyboard-video-mouse" set), serving equipment in the corresponding EDAs.
The additional zone cross-connects that correspond to the zone distribution area are an optional element. They are placed between the HDA and EDA where frequent reconfiguration of the cabling is required, or are used to provide additional flexibility in the horizontal subsystem. The horizontal cables that run to the ZDA terminate in a zone outlet or consolidation point; onward connections are made with patch cords.
Installing active equipment in the zone distribution area is not recommended, with the exception of solutions for delivering power over twisted pair.
The network connections needed by the computer equipment are implemented in the EDA.
Additional connections between HDAs are allowed (including for redundancy), as are connections to the cabling of the entrance rooms (when several such rooms are provided).
Standard TIA/EIA-942 considers several options for constructing the cable infrastructure. The basic topology suits any data center: both a corporate center located on a single site and a shared-use center dispersed across multiple sites.
In distributed data centers, several entrance rooms are provided. This is done for security reasons, or when the distances to be covered exceed the maximum communication range.
The simplified topology combines the HDA with the MDA. In smaller data centers, these cross-connects can also be combined with the TR and ER equipment. This scheme allows the length of optical-fiber horizontal lines to be increased to 300 m.
A data center with a centralized topology implements an optical system with centralized administration: all the electronics are concentrated in the MDA and EDA, and the horizontal subsystem is absent.
The cabling of a data center comprises the horizontal wiring, the backbone wiring and the cross-connect equipment in the corresponding areas.
The backbone wiring connects the Entrance Room, the MDA and the HDAs. The horizontal wiring is the part of the structured cabling system (SCS) that runs from the termination point in the EDA to the horizontal distribution point in an HDA. Since the backbone wiring aggregates the traffic coming from the horizontal lines, it must have adequate bandwidth.
According to the standards, horizontal and backbone wiring in a data center is implemented as a permanently installed SCS with a hierarchical-star topology. Only one level of hierarchy is allowed in the backbone wiring, which implies a single main distribution point.
However, the standards do provide for redundant wiring. For this purpose, a secondary distribution area is introduced, whose cross-connect is linked to the horizontal distribution points, duplicating their connections to the main wiring.
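The "hierarchical star with a single backbone level" rule can be expressed as a simple structural check. The following is a hypothetical model, not part of either standard; the area names and tree representation are assumptions for illustration:

```python
# Hypothetical model of a data center cabling topology as a tree:
# each distribution area records its parent cross-connect.
topology = {
    "MDA":   None,     # main cross-connect, root of the star
    "HDA-1": "MDA",
    "HDA-2": "MDA",
    "EDA-1": "HDA-1",  # equipment areas hang off an HDA
    "EDA-2": "HDA-2",
}

def backbone_is_single_level(tree):
    """Return True if every HDA connects directly to the MDA,
    i.e. the backbone has only one level of hierarchy."""
    hdas = [name for name in tree if name.startswith("HDA")]
    return all(tree[h] == "MDA" for h in hdas)

print(backbone_is_single_level(topology))  # True
```

Cascading one HDA off another (two backbone levels) would make the check fail, which is exactly what the standard's single-level rule forbids.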
Standard EN 50173-5 covers cabling over both twisted pair and optical fiber. The minimum performance of copper systems is defined by Class E; the minimum for optics is Class OF-300, implemented with OM-2 or OM-3 fiber. The standard also specifies the permissible lengths of optical channels over OM-1, OM-2 and OM-3 fiber.
Standard TIA/EIA-942 provides for cabling, cross-connect equipment and patch cords in accordance with ANSI/TIA/EIA-568-B.2 and B.3: twisted pair with a characteristic impedance of 100 ohms, multimode fiber (62.5/125 or 50/125) and single-mode fiber.
In addition, a number of applications allow the use of coaxial cable with an impedance of 75 ohms.
To ensure the longevity of solutions implemented in newly developed projects, certain cable types are specified: TIA/EIA-942 recommends Category 6 twisted pair and laser-optimized fiber. Given that no Category 6 twisted-pair standard yet existed when TIA/EIA-942 was being drafted, this can be read as an indication of the cable type preferred for use in data centers.
The length of a cable channel (including patch cords) in the horizontal wiring must not exceed 100 m for cable of any type. For single-cross-connect solutions, in which the HDA is combined with the MDA, the length of an optical channel may reach 300 m.
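These limits translate into a simple check; the sketch below encodes only the two figures quoted in the text (100 m for any horizontal channel including cords, 300 m for optical channels when the HDA is combined with the MDA):

```python
# Channel-length limits as quoted above; lengths include patch cords.
MAX_HORIZONTAL_M = 100        # any media, horizontal channel
MAX_OPTICAL_COMBINED_M = 300  # optical channel, HDA combined with MDA

def channel_ok(length_m, optical=False, hda_combined_with_mda=False):
    """Check a channel length against the limits cited in the text."""
    if optical and hda_combined_with_mda:
        return length_m <= MAX_OPTICAL_COMBINED_M
    return length_m <= MAX_HORIZONTAL_M

print(channel_ok(90))                                         # True
print(channel_ok(250, optical=True, hda_combined_with_mda=True))  # True
print(channel_ok(120))                                        # False
```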
Standard TIA/EIA-942 also puts forward a number of requirements and recommendations concerning the organization of cabling: all computer and cross-connect equipment is housed in cabinets and racks.