Sunday, June 24, 2018


A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices. Large data centers are industrial-scale operations that use as much electricity as a small town.


History

The data center has its roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power, and had to be cooled to avoid overheating. Security became important - computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The advent of Unix from the early 1970s led to the proliferation of freely available, Unix-like PC operating systems such as Linux during the 1990s. These were called "servers", as timesharing operating systems like Unix rely heavily on the client-server model to facilitate sharing unique resources between multiple users. The availability of inexpensive networking equipment, coupled with new standards for structured network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center", as applied to specially designed computer rooms, started to gain popular recognition about this time.

Data centers boomed during the dot-com bubble of 1997-2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide commercial clients with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers, and were adopted largely because of their practical results. Data centers for cloud computing are called cloud data centers (CDCs). Nowadays, the distinction between these terms has almost disappeared and they are being merged into the single term "data center".

With an increase in the uptake of cloud computing, business and government organizations scrutinize data centers to a higher degree in areas such as security, availability, environmental impact and adherence to standards. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can serve to evaluate the commercial impact of a disruption. Development continues in operational practice, and also in environmentally friendly data center design. Data centers typically cost a great deal to build and to maintain.

Requirements for modern data centers

IT operations are a crucial aspect of most organizational operations around the world. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of mechanical cooling and power systems (including emergency backup power generators) serving the data center, along with fiber optic cables.

The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for the telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:

  • Operate and manage a carrier's telecommunication network
  • Provide data center based applications directly to the carrier's customers
  • Provide hosted applications for a third party to provide services to their customers
  • Provide a combination of these and similar data center applications

Effective data center operations require balanced investments in both facilities and equipment. The first step is to build a basic facility environment suitable for equipment installation. Standardization and modularity can result in savings and efficiency in the design and construction of telecommunications data centers.

Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over-construction, or perhaps worse - under-construction that fails to meet future needs.

The "lamp-out" data center, also known as a dark or dark data center, is a data center that ideally has all but eliminates the need for direct access by personnel, except in exceptional circumstances. Due to the lack of staff needs to enter the data center, it can be operated without lighting. All devices are accessed and maintained by remote systems, with automation programs used to perform unattended operations. In addition to energy savings, staff cost reductions and the ability to find sites further away from population centers, implementing data centers that light up reduces the threat of malicious attacks on infrastructure.

There is a tendency to modernize data centers to take advantage of improved performance and energy efficiency of equipment and newer IT capabilities, such as cloud computing. This process is also known as data center transformation.

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.

In May 2011, data center research organization Uptime Institute reported that 36 percent of the large companies it surveyed expected to exhaust IT capacity within the next 18 months.

Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from the traditional method of data center upgrades, which takes a serial and siloed approach. Typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.

  • Standardization/consolidation: The purpose of this project is to reduce the number of data centers a large organization may have. It also helps to reduce the number of hardware and software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer equipment that provides increased capacity and performance. Computing, networking and management platforms are standardized so they are easier to manage.
  • Virtualization: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. Virtualization helps to lower capital and operational expenses, and reduces energy consumption. Virtualization technologies are also used to create virtual desktops, which can then be hosted in data centers and rented out on a subscription basis. Data released by investment bank Lazard Capital Markets estimated that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
  • Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance. As enterprises suffer from a shortage of skilled IT workers, automating tasks makes data centers run more efficiently.
  • Securing: In modern data centers, the security of data on virtual systems is integrated with the existing security of physical infrastructure. The security of a modern data center must take into account physical security, network security, and data and user security.

Carrier neutrality

Today many data centers are run by Internet service providers solely for the purpose of hosting their own and third-party servers.

Traditionally, however, data centers were either built for the sole use of one large company, or as carrier hotels or network-neutral data centers.

These facilities enable interconnection of carriers and act as regional fiber hubs serving local businesses in addition to hosting content servers.

Data center levels and tiers

The Telecommunications Industry Association is a trade association accredited by ANSI (American National Standards Institute). In 2005 it published ANSI/TIA-942, Telecommunications Infrastructure Standard for Data Centers, which defined four levels of data centers in a thorough, quantifiable manner. TIA-942 was amended in 2008, 2010, 2014 and 2017. TIA-942: Data Center Standards Overview describes the requirements for the data center infrastructure. The simplest is a Level 1 data center, which is basically a server room, following basic guidelines for the installation of computer systems. The most stringent level is a Level 4 data center, which is designed to host the most mission-critical computer systems, with fully redundant subsystems and the ability to continuously operate for an indefinite period of time during primary power outages.

The Uptime Institute, a data center research and professional services organization based in Seattle, WA, defines what is commonly referred to today as "Tiers" or, more accurately, the "Tier Standard". The Uptime Institute's Tier Standard levels describe the availability of data processing from the hardware at a location: the higher the Tier level, the greater the expected availability. The Tier Standard defines four tiers.

For the 2014 revision of TIA-942, the TIA and the Uptime Institute mutually agreed that TIA would remove any use of the word "Tier" from its published TIA-942 specification, reserving that terminology for the Uptime Institute alone to describe its system.

Other classifications also exist. For example, the German Datacenter Star Audit program uses an auditing process to certify five levels of "satisfaction" that affect the criticality of a data center.

While the industry's data center resiliency systems were proposed at a time when availability was expressed theoretically, as a number of "nines" to the right of the decimal point, it has generally been agreed that this approach was somewhat misleading or too simplistic, so vendors today usually discuss availability in terms of details that they can actually affect, and in much more specific terms. Hence, the leveling systems available today no longer define their results as a percentage of uptime.
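
The relationship between "nines" and permitted downtime is simple arithmetic. The short Python sketch below is illustrative only (it is not part of any tier standard) and simply converts an availability figure into the downtime it allows per year:

```python
# Illustrative sketch (not from the article): converting an availability
# figure expressed in "nines" into the downtime it allows per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime_minutes(availability: float) -> float:
    """Maximum annual downtime (in minutes) implied by an availability
    ratio such as 0.999 ("three nines")."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("two nines", 0.99), ("three nines", 0.999),
                            ("four nines", 0.9999), ("five nines", 0.99999)]:
    print(f"{label:>12}: {allowed_downtime_minutes(availability):8.1f} min/year")
```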

Note: The Uptime Institute also classifies the Tiers for each of three phases of a data center: its design documents, the constructed facility, and its ongoing operational sustainability.

Design considerations

Data centers can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows with corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers to large freestanding storage silos that occupy many square meters of floor space. Some equipment such as mainframe computers and storage devices are often as big as the racks themselves, and are placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each; when repairs or upgrades are needed, whole containers are replaced rather than individual servers.

Local building codes can set the minimum ceiling height.

Design programming

Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project. Other than the architecture of the building itself, there are three elements to design programming for data centers: facility topology design (space planning), engineering infrastructure design (mechanical systems such as cooling, and electrical systems including power) and technology infrastructure design (cable plant). Each will be influenced by performance assessments and modeling to identify gaps pertaining to the owner's performance wishes for the facility over time.

Various vendors who provide data center design services define the steps of data center design slightly differently, but all address the same basic aspects, as given below.

Modeling criteria

Modeling criteria are used to develop future-state scenarios for space, power, cooling, and costs in the data center. The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations. The purpose is to allow efficient use of the existing mechanical and electrical systems, and also growth in the existing data center, without the need to develop new buildings and further upgrade the incoming power supply.

Design recommendations

Design recommendations/plans generally follow the modeling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities, overall data center power requirements using an agreed-upon PUE (power usage effectiveness), mechanical cooling capacities, kilowatts per cabinet, raised floor space, and the resiliency level for the facility.
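
As a rough illustration of how these planning criteria relate to one another, the Python sketch below derives an overall facility power requirement from a planned IT load; the cabinet count, design density per cabinet and target PUE are assumed example figures, not values from the article:

```python
# Illustrative planning sketch (assumed figures, not from the article):
# overall facility power estimated from the planned IT load and an
# agreed-upon target PUE, as described above.
cabinets = 200            # planned number of cabinets (assumption)
kw_per_cabinet = 6.0      # agreed design density in kW per cabinet (assumption)
target_pue = 1.4          # agreed-upon PUE for the design (assumption)

critical_it_load_kw = cabinets * kw_per_cabinet
overall_facility_power_kw = critical_it_load_kw * target_pue

print(f"Critical IT load:       {critical_it_load_kw:,.0f} kW")
print(f"Overall facility power: {overall_facility_power_kw:,.0f} kW at PUE {target_pue}")
```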

Conceptual design

The conceptual design embodies the design recommendations or plans and should take into account "what-if" scenarios to ensure that all operational outcomes are met in order to future-proof the facility. The conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability. Future-proofing will also include expansion capabilities, often provided in modern data centers through modular designs. These allow more raised floor space to be fitted out in the data center while using the existing major electrical plant of the facility.

Detailed design

Detailed design is undertaken once the appropriate conceptual design has been determined, typically including a proof of concept. The detailed design phase should include detailed architectural, structural, mechanical and electrical information and the specification of the facility. At this stage, facility schematics and construction documents are developed, along with schematic and performance specifications and the specific detailing of all technology infrastructure, detailed IT infrastructure design and IT infrastructure documentation.

Design of mechanical engineering infrastructure

The mechanical engineering infrastructure design addresses the mechanical systems involved in maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on. This phase of the design process should be aimed at saving space and costs, while ensuring business and reliability objectives are met as well as achieving PUE and green requirements. Modern designs include modularizing and scaling IT loads, and making sure capital spending on the building construction is optimized.

Design of electrical engineering infrastructure

Electrical engineering infrastructure design is focused on designing electrical configurations that accommodate various reliability requirements and data center sizes. Aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptible power supply (UPS) systems; and more.

This design must comply with energy standards and best practices while also meeting business objectives. The electrical configuration should be optimized and operationally compatible with the data center user's capabilities. The modern electrical design is modular and scalable, and is available for low and medium voltage requirements as well as DC (direct current).

Design of technology infrastructure

The technology infrastructure design addresses the telecommunications cabling systems that run throughout the data center. There are cabling systems for all data center environments, including horizontal cabling, voice, modem and facsimile telecommunications services, premises switching equipment, computer and telecommunications management connections, keyboard/video/mouse connections and data communications. Wide area, local area, and storage area networks should link with other building signaling systems (e.g. fire, security, power, HVAC, EMS).

Availability expectations

The higher the availability needs of a data center, the higher the capital and operational costs of building and managing it. Business needs should dictate the level of availability required, and it should be evaluated based on the criticality of the IT systems and an estimated cost-benefit analysis of modeled scenarios. In other words, how can an appropriate level of availability best be met by design criteria to avoid financial and operational risk as a result of downtime? If the estimated cost of downtime within a specified time unit exceeds the amortized capital and operational costs, a higher level of availability should be factored into the data center design. If the cost of avoiding downtime greatly exceeds the cost of downtime itself, a lower level of availability should be factored into the design.
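
The trade-off described above can be sketched in a few lines of Python. All figures here (the cost of an hour of downtime, the availability levels compared, and the annual cost of the higher-availability design) are placeholder assumptions, not values from the article:

```python
# Illustrative sketch with assumed figures: weighing the expected cost of
# downtime against the extra annual cost of a higher-availability design.
HOURS_PER_YEAR = 8766  # 365.25 days

def expected_downtime_cost(availability: float, cost_per_hour: float) -> float:
    """Expected annual downtime cost implied by an availability ratio."""
    return (1.0 - availability) * HOURS_PER_YEAR * cost_per_hour

cost_per_hour_of_downtime = 50_000.0      # assumption for the business in question
baseline = expected_downtime_cost(0.999, cost_per_hour_of_downtime)   # current design
upgraded = expected_downtime_cost(0.9999, cost_per_hour_of_downtime)  # higher level
extra_annual_cost_of_upgrade = 300_000.0  # assumed amortized capital + operating delta

if (baseline - upgraded) > extra_annual_cost_of_upgrade:
    print("Avoided downtime outweighs the upgrade cost: design for higher availability.")
else:
    print("Upgrade costs more than the downtime it avoids: a lower level is acceptable.")
```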

Site selection

Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines and emergency services can affect costs, risk, security and other factors to be taken into consideration for data center design. Although a wide array of location factors is considered (e.g. flight paths, neighboring uses, geological risks), access to suitable available power is often the longest lead-time item. Location also affects data center design because the climatic conditions dictate which cooling technologies should be deployed. In turn this affects uptime and the costs associated with cooling. For example, the topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate.

Modularity and flexibility

Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can easily be configured and moved as needed.

A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. But it can also be described as a design style in which components of the data center are prefabricated and standardized so that they can be constructed, moved, or added to quickly as needs change.

Environmental control

The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments" recommends a temperature range of 18-27 °C (64-81 °F), a dew point range of -9 to 15 °C (16 to 59 °F), and an ideal relative humidity of 60%, with an allowable range of 40% to 60%, for data center environments. The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, causing electronic equipment to malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. Too much humidity, and water may begin to condense on internal components. In a dry atmosphere, ancillary humidification systems may add water vapor if the humidity is too low, which could otherwise cause static electricity discharge problems that damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs.
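
As a minimal sketch of how the ASHRAE recommended ranges quoted above might be applied, the Python snippet below checks a pair of sensor readings against them. Only the 18-27 °C and 40-60% thresholds come from the text; the function, sample readings and alert strings are illustrative assumptions:

```python
# Minimal sketch: checking a sensor reading against the ASHRAE recommended
# ranges quoted above (18-27 °C, 40-60% relative humidity). Sample readings
# and the alerting style are assumptions.
def check_environment(temp_c, relative_humidity_pct):
    alerts = []
    if not 18.0 <= temp_c <= 27.0:
        alerts.append(f"temperature {temp_c:.1f} C outside recommended 18-27 C")
    if not 40.0 <= relative_humidity_pct <= 60.0:
        alerts.append(f"relative humidity {relative_humidity_pct:.0f}% outside 40-60%")
    return alerts

print(check_environment(temp_c=29.5, relative_humidity_pct=35.0))
# ['temperature 29.5 C outside recommended 18-27 C',
#  'relative humidity 35% outside 40-60%']
```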

Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. At least one data center (located in Upstate New York) cools servers using outside air during the winter. It does not use chillers/air conditioners, which creates potential energy savings in the millions. Indirect air cooling is increasingly being deployed in data centers globally, and has the advantage of more efficient cooling, which lowers power consumption costs in the data center. Many newly constructed data centers also use indirect evaporative cooling (IDEC) units as well as other environmental features such as sea water to minimize the amount of energy needed to cool the space.

Raised floor

Telcordia GR-2930, NEBS: Raised Floor Generic Requirements for Network and Data Centers, presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.

There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringered, stringerless, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.

  • Stringered raised floor - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, a tubular upright, and a head) uniformly spaced and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright, and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
  • Stringerless raised floor - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads, and are not recommended.
  • Structural platforms - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.

Data centers typically have raised floors made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80-100 cm (31-39 in) void to allow better and more uniform air distribution. The void provides a plenum for air to circulate below the floor, as part of the air conditioning system, as well as space for power and data cabling.

Metal whiskers

Raised floors and other metal structures such as cable trays and ventilation ducts have caused many problems with zinc whiskers in the past, and these are likely still present in many data centers. Zinc whiskers occur when microscopic metallic filaments form on metals such as zinc or tin that protect many metal structures and electronic components from corrosion. Maintenance on a raised floor or the installation of cable, etc., can dislodge the whiskers, which then enter the airflow and may short-circuit server components or power supplies, sometimes through a high-current metal vapor plasma arc. This phenomenon is not unique to data centers; it has also caused catastrophic failures of satellites and military hardware.

Power

Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel/gas turbine generators.

To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both an "A-side" and a "B-side" power feed. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
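
To make the N+1 idea concrete, here is a small Python sketch; the critical load and UPS module rating are assumed example values, not figures from the article:

```python
# Illustrative N+1 sizing sketch (assumed figures): enough UPS modules to
# carry the load (N), plus one spare so any single module can fail or be
# serviced without dropping the load.
import math

critical_load_kw = 480.0      # assumed critical IT load
ups_module_rating_kw = 200.0  # assumed rating of a single UPS module

n = math.ceil(critical_load_kw / ups_module_rating_kw)  # modules needed for the load
modules_installed = n + 1                               # N+1 redundancy

print(f"N = {n} modules carry the load; install {modules_installed} for N+1")
# N = 3 modules carry the load; install 4 for N+1
```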

Low voltage cable routing

Data cabling is typically routed through overhead cable trays in modern data centers. But some still recommend routing cables under the raised floor for security reasons, and considering the addition of cooling systems above the racks in case this enhancement is necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles for the flooring surface. Computer cabinets are often organized into a hot-aisle arrangement to maximize airflow efficiency.

Fire protection

Data centers feature fire protection systems, including passive and active design elements, as well as the implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. This allows investigation, interruption of power, and manual fire suppression using hand-held fire extinguishers before the fire grows to a large size. An active fire protection system, such as a fire sprinkler system or a clean agent gaseous fire suppression system, is often provided to control a full-scale fire if it develops. High-sensitivity smoke detectors, such as aspirating smoke detectors, activate clean agent gaseous fire suppression systems earlier than fire sprinklers would.

  • Sprinklers = structure protection and building life safety.
  • Clean agents = business continuity and asset protection.
  • No water = no collateral damage or clean-up.

Passive fire protection elements include the installation of fire walls around the data center, so that a fire can be restricted to a portion of the facility for a limited time if the active fire protection systems fail. Fire wall penetrations into the server room, such as cable penetrations, coolant line penetrations and air ducts, must be provided with fire-rated penetration assemblies, such as fire stopping.

Security

Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including a layered security system often starting with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint recognition mantraps is starting to become commonplace.

Documenting access is required by some data protection regulations. To do so, some organizations use access control systems that provide a logging report of accesses. Logging can occur at the main entrance, at the entrances to mechanical rooms and white spaces, as well as at the equipment cabinets. Modern access control at the cabinet allows for integration with intelligent power distribution units, so that the locks are powered and networked through the same appliance.

Energy usage

Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center. By 2012, the cost of power for the data center was expected to exceed the cost of the original capital investment.

GHG emissions

In 2007, the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint. The US EPA estimates that servers and data centers were responsible for up to 1.5% of total US electricity consumption, or roughly 0.5% of US greenhouse gas emissions, in 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020.

Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and plenty of renewable electricity is available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway and Switzerland, are trying to attract cloud computing data centers.

In an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions were projected to more than triple by 2020.

Energy efficiency

The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.

PUE = Total Facility Power / IT Equipment Power

Total facility power consists of the power used by the IT equipment plus any overhead power consumed by anything that is not considered a computing or data communication device (i.e. cooling, lighting, etc.). An ideal PUE is 1.0, for the hypothetical situation of zero overhead power. The average data center in the US has a PUE of 2.0, meaning that the facility uses two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2. Some large data center operators like Microsoft and Yahoo! have published projected PUE figures for facilities under development; Google publishes quarterly actual efficiency performance for data centers in operation.
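
A minimal sketch of the PUE ratio defined above, in Python; the power figures are assumed example values, not measurements from the article:

```python
# Minimal PUE sketch (assumed example figures).
it_equipment_power_kw = 1_000.0   # power delivered to IT equipment (assumption)
overhead_power_kw = 700.0         # cooling, lighting, losses, etc. (assumption)

total_facility_power_kw = it_equipment_power_kw + overhead_power_kw
pue = total_facility_power_kw / it_equipment_power_kw

print(f"PUE = {pue:.2f}")  # 1.70: 0.70 W of overhead per watt of IT load
```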

The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities. The United States passed the Energy Efficiency Improvement Act of 2015, which requires federal facilities - including data centers - to operate more efficiently. In 2014, California enacted Title 24 of the California Code of Regulations, which mandates that every newly constructed data center must have some form of airflow containment in place as a measure to optimize energy efficiency.

The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.

Analysis of energy use

Often, the first step toward controlling energy use in a data center is to understand how energy is being used in the data center. Multiple types of analysis exist to measure data center energy use. Aspects measured include not just the energy used by the IT equipment itself, but also by the data center facility equipment, such as chillers and fans. Recent research has shown the substantial amount of energy that could be conserved by optimizing IT refresh rates and increasing server utilization.

Power and cooling analysis

Power is the largest recurring cost to the user of a data center. A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.
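
In the spirit of the thermal assessment described above, the short Python sketch below flags hot spots and over-cooled zones from per-zone inlet temperature readings. The zone names, readings and thresholds are all illustrative assumptions, not values from the article:

```python
# Illustrative sketch (assumed readings and thresholds): flagging hot spots
# and over-cooled zones from per-zone inlet temperatures.
zone_inlet_temps_c = {"aisle-1": 21.0, "aisle-2": 29.5, "aisle-3": 16.0}

HOT_THRESHOLD_C = 27.0    # above this, treat as a hot spot (assumption)
COLD_THRESHOLD_C = 18.0   # below this, treat as over-cooled (assumption)

hot_spots = [z for z, t in zone_inlet_temps_c.items() if t > HOT_THRESHOLD_C]
over_cooled = [z for z, t in zone_inlet_temps_c.items() if t < COLD_THRESHOLD_C]

print("Hot spots:", hot_spots)      # ['aisle-2']
print("Over-cooled:", over_cooled)  # ['aisle-3']
```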

Energy efficiency analysis

Energy efficiency analysis measures the energy use of data center IT and facility equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that would not be possible otherwise.

Computational fluid dynamics (CFD) analysis

This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center - predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or shutdown for scheduled maintenance.

Thermal zone mapping

Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.

This information can help identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is served by redundant AC units.

Green data center

Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and then the power required to cool the equipment. The first category is addressed by designing computers and storage systems that are increasingly power-efficient. To bring down cooling costs, data center designers try to use natural ways to cool the equipment. Many data centers are located near good fiber connectivity, power grid connections and concentrations of people to manage the equipment, but there are also circumstances where a data center can be miles away from the users and does not need much local management. Examples of this are the "mass" data centers like those of Google or Facebook: these are built around many standardized servers and storage arrays, and the actual users of the systems are located all around the world. After the initial build, the number of staff required to keep a data center running is often relatively low, especially for data centers that provide mass storage or computing power and do not need to be near population centers. Data centers in arctic locations, where outside air provides all the cooling, are becoming more popular, as cooling and electricity are the two main variable cost components.

Energy recovery

Data center cooling practice is a topic of discussion. It is very difficult to reuse the heat that comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout the data center. Different liquid cooling techniques are mixed and matched to allow a fully liquid-cooled infrastructure that captures all heat in water. The liquid technologies are categorized into three main groups: water-cooled racks, direct-to-chip cooling, and total liquid cooling (complete immersion in liquid). This combination of technologies allows the creation of a thermal cascade, as part of temperature chaining scenarios, to produce high-temperature water output from the data center.

Network infrastructure

Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and the outside world, connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see multihoming).

Some servers in the data center are used to perform basic Internet and intranet services that are required by internal users in the organization, for example, email servers, proxy servers, and DNS servers.

Network security elements are also commonly used: firewalls, VPN gateways, intrusion detection systems, etc. Also common are monitoring systems for networks and some applications. Additional offsite monitoring systems are also typical, in case of communication failure within the data center.

Data center infrastructure management

Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize intelligent monitoring, management and capacity planning of critical data center systems. Achieved through the application of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT infrastructure and facilities.

Depending on the type of implementation, DCIM products can help data center managers identify and eliminate sources of risk to improve the availability of critical IT systems. DCIM products can also be used to identify interdependencies between facilities and IT infrastructure to alert facility managers to gaps in system redundancy, and provide dynamic and holistic benchmarks on power consumption and efficiency to measure the effectiveness of "green IT" initiatives.

It is important to measure and understand data center efficiency metrics. A lot of the discussion in this area has focused on energy issues, but other metrics beyond PUE can give a more detailed picture of data center operations. Server, storage, and utilization metrics can contribute to a more complete view of an enterprise data center. In many cases, disk capacity goes unused, and in many instances organizations run their servers at 20% utilization or less. More effective automation tools can also increase the number of servers or virtual machines that a single administrator can handle.
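
As a back-of-the-envelope illustration of what such utilization metrics can reveal, the sketch below estimates how many hosts a fleet running at low average utilization could in principle be consolidated onto. All figures are assumptions, not data from the article:

```python
# Illustrative consolidation sketch (assumed inventory and targets).
import math

physical_servers = 500
average_utilization = 0.20    # "20% or less", as noted above
target_utilization = 0.60     # assumed safe consolidation target

useful_work = physical_servers * average_utilization
servers_needed = math.ceil(useful_work / target_utilization)

print(f"{servers_needed} servers at {target_utilization:.0%} utilization could carry "
      f"the work of {physical_servers} servers at {average_utilization:.0%}")
# 167 servers at 60% utilization could carry the work of 500 servers at 20%
```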

DCIM providers are increasingly linking with computational fluid dynamics providers to predict complex airflow patterns in the data center. The CFD component is necessary to quantify the impact of planned future changes on cooling resilience, capacity and efficiency.

Managing data center capacity

Several parameters may limit the capacity of a data center. For long-term usage, the main limitations will be the available area, then the available power. In the first stage of its life cycle, a data center will see the space it occupies grow faster than the energy it consumes. With the constant densification of new IT technologies, the need for energy becomes dominant, equaling and then overtaking the need for area (the second, then third phase of the cycle). The development and multiplication of connected objects, and the growing needs for storage and data processing, lead data centers to grow at an ever faster pace. It is therefore important to define a data center strategy before being cornered.

The decision, design and build cycle lasts several years, so it is imperative to initiate this strategic consideration when the data center reaches about 50% of its power capacity. The maximum occupation of a data center should be stabilized at around 85%, whether in power or occupied area. The resources thus reserved allow a rotation zone for managing hardware replacement and the temporary cohabitation of old and new generations of equipment. If this limit is exceeded for a prolonged period, it becomes impossible to proceed with hardware replacement, which invariably leads to smothering the information system. The data center is a resource of the information system in its own right, with its own time and management constraints (a lifespan of 25 years); it therefore needs to be taken into account within the medium-term planning framework of the information system (between 3 and 5 years).
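
The 50% planning trigger and the roughly 85% maximum occupation described above lend themselves to a trivial check. The Python sketch below is only illustrative; the capacity and usage figures are assumptions:

```python
# Minimal sketch of the capacity thresholds described above (the ~50% planning
# trigger and ~85% maximum stabilized occupation); measured values are assumed.
power_capacity_kw = 2_000.0
power_in_use_kw = 1_150.0

utilization = power_in_use_kw / power_capacity_kw

if utilization >= 0.85:
    print("Above ~85%: no rotation margin left for hardware replacement.")
elif utilization >= 0.50:
    print("Past 50%: start the strategic planning cycle for the next data center.")
else:
    print("Below 50%: capacity planning trigger not yet reached.")
# utilization = 0.575 -> "Past 50%: start the strategic planning cycle ..."
```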

Applications

The main purpose of a data center is to run the IT systems applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common applications of this kind are ERP and CRM systems.

A data center may be concerned only with operations architecture, or it may provide other services as well.

Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are databases, file servers, application servers, middleware, and various others.

Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center. This is often used in conjunction with backup tapes. Backups can be taken off servers locally onto tapes. However, tapes stored on site pose a security threat and are also susceptible to fire and flooding. Larger companies may also send their backups off site for added security. This can be done by backing up to a data center. Encrypted backups can be sent over the Internet to another data center where they can be stored securely.

For quick deployment or disaster recovery, several large hardware vendors have developed mobile or modular solutions that can be installed and made operational in a very short amount of time. Companies such as

  • Cisco Systems,
  • Sun Microsystems (Sun Modular Datacenter),
  • Bull (mobull),
  • IBM (Portable Modular Data Center),
  • Schneider Electric (Portable Modular Data Center),
  • HP (Performance Optimized Datacenter),
  • ZTE Corporation,
  • FiberHome Technologies Group (FitMDC modular data center solution),
  • Huawei (Container Data Center Solution), and
  • Google (Google Modular Data Center) have developed systems that can be used for this purpose.
  • BASELAYER holds a patent on the software-defined modular data center.

US wholesale and retail colocation providers

According to data provided in the third quarter of 2013 by Synergy Research Group, "the scale of the wholesale colocation market in the United States is significant relative to the retail market, with Q3 wholesale revenues reaching almost $700 million. Digital Realty Trust is the wholesale market leader, followed at a distance by DuPont Fabros." Synergy Research also describes the US colocation market as the most mature and well-developed in the world, based on revenue and the sustained adoption of cloud infrastructure services.

Estimates from Q3 2013 data from Synergy Research Group.



External links

  • Lawrence Berkeley Lab - Research, development, demonstration, and implementation of energy-efficient technologies and practices for data centers
  • DC Power For Future Data Centers - FAQ: 380VDC Testing and demonstration in Sun data center.
  • White Book - Property Tax: New Challenges for Data Centers
  • European Commission H2020 EURECA Data Center Project - Data center energy efficiency guidelines, extensive online training materials, case studies/lectures (below the event page), and tools.

Source of the article : Wikipedia
