Planning for a Data Center Infrastructure Management Solution

Traditionally, organizations separate data center functionality into two groups:

  • Physical assets controlled by facility management
  • IT domains maintained by the information technology (IT) department

Analysts have found that this split overlooks many opportunities for increased efficiency in the functions that overlap between the two groups. Data center infrastructure management (DCIM) combines the two groups into a single system. In OSI (Open Systems Interconnection) terms, management of the physical layer (layer 1) is folded into the supervision of the remaining six layers for a “full stack” solution. Complete management systems centralize monitoring, maintenance, and expansion activities.

Components of DCIM

The components of a DCIM system include hardware, specialized software, and sensors. The hardware specifications remain the same as for a traditional implementation. The sensors provide a bridge between the hardware and software, converting physical information such as ambient temperature, humidity, and power supply integrity into digital signals suitable for a computer interface. DCIM software integrates traditional management functionality with this sensor monitoring capability, allowing the software to supervise the hardware.
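
As a rough illustration of how sensor readings might feed DCIM software, the sketch below checks a few hypothetical readings against acceptable ranges and flags out-of-spec values. The sensor names, thresholds, and readings are illustrative assumptions, not the interface of any particular DCIM product.

    # Minimal sketch of a sensor-to-software bridge (illustrative only).
    # The sensor names, thresholds, and readings below are hypothetical.
    THRESHOLDS = {
        "ambient_temp_c": (18.0, 27.0),    # acceptable range, degrees Celsius
        "humidity_pct":   (40.0, 60.0),    # acceptable relative humidity
        "supply_volts":   (228.0, 252.0),  # acceptable supply voltage
    }

    def check_reading(sensor, value):
        """Return an alarm string if the value is out of spec, else None."""
        low, high = THRESHOLDS[sensor]
        if value < low or value > high:
            return f"ALARM: {sensor}={value} outside [{low}, {high}]"
        return None

    # Example readings as they might arrive from the sensor network.
    readings = {"ambient_temp_c": 29.3, "humidity_pct": 47.0, "supply_volts": 231.5}
    for sensor, value in readings.items():
        alarm = check_reading(sensor, value)
        print(alarm or f"OK: {sensor}={value}")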

As companies shop the DCIM marketplace, they will find that providers fall into two categories:

  • Suite vendors – sell complete DCIM solutions
  • Specialists – sell enhancements to suite solutions that are often viable stand-alone products

Performance of DCIM Systems

Performance metrics for DCIM are evolving as quickly as the systems themselves. Some metrics derive from existing industry standards, including the following (a brief calculation sketch follows the list):

  • PUE (power usage effectiveness) – a metric for energy efficiency for a data center
  • CUE (carbon usage effectiveness) – relates the carbon emissions caused by the data center’s energy consumption to the energy used by the IT equipment
  • DCeP (Data Center Energy Productivity) – relates the net value of the data center service to the consumption of energy resources
  • PAR4 – Server Power Usage – tracks server power during four states: system off, idle, loaded, peak
  • DCPM (Data Center Predictive Modeling) – a model to predict future performance, including energy use, energy efficiency, and cost
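
To make the first three metrics concrete, the short sketch below computes PUE, CUE, and DCeP from hypothetical annual meter readings. The input numbers are assumptions for illustration, and the “useful work” figure in DCeP is expressed in whatever unit the business chooses (transactions, queries, and so on).

    # Illustrative metric calculations from hypothetical annual readings.
    total_facility_energy_kwh = 8_000_000   # IT load plus cooling, lighting, losses
    it_equipment_energy_kwh   = 5_000_000   # servers, storage, network gear
    total_co2_emissions_kg    = 3_400_000   # emissions attributable to that energy
    useful_work_units         = 1.2e9       # e.g. transactions served (business-defined)

    pue  = total_facility_energy_kwh / it_equipment_energy_kwh   # dimensionless; 1.0 is ideal
    cue  = total_co2_emissions_kg / it_equipment_energy_kwh      # kg CO2 per IT kWh
    dcep = useful_work_units / total_facility_energy_kwh         # useful work per kWh

    print(f"PUE  = {pue:.2f}")
    print(f"CUE  = {cue:.3f} kg CO2 / kWh")
    print(f"DCeP = {dcep:.1f} work units / kWh")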

Many companies contract an industry expert to come into the firm, analyze the existing systems, and compile the business requirements. This exercise often concludes with a series of seminars to educate management and the IT department. Armed with this knowledge, the company is ready to work with the contractor to choose the appropriate DCIM solution.

DCIM Tracking Parameters

DCIM systems manage resources through the following capabilities:

  • Visual representation of the physical structure – some advanced systems offer a 3D virtual fly-through of the data center
  • Capacity planning
  • Modeling and Simulation – allows managers to model proposed changes in order to analyze performance and cost impact
  • Change management – controls the change order process
  • IP camera management – captures motion alarms and coordinates physical access control
  • Environment monitoring
  • Power management – monitors the entire power chain, from the server chip sets to the data center power generators
  • Asset management (cost containment)
  • Monitoring – the system monitors all electrical, mechanical, and IT equipment, including servers, routers, switches, and virtual machines (VMs).
  • Dashboards – quick-view screens display a summary of system integrity
  • Reports – systems record data to document tracking, identify trending, and perform predictive analysis

Two of these capabilities, capacity planning and cost containment, are examined in more depth.

Capacity Planning

Capacity planning for DCIM allows managers to allocate resources against present and future needs. Following is a suggested outline of steps to generate an effective capacity plan:

  • Step 1: Compile a complete inventory of the data center resources. Identify which are the critical infrastructure assets. List the mission interdependencies for each.
  • Step 2: Generate a comprehensive monitoring report for each of the critical assets in the data center. Perform an analysis of the critical performance parameters. Use the analysis to identify bottlenecks in the system.
  • Step 3: Ensure the DCIM solution will include data for all of the space, power, and cooling attributes of the environment. Verify that snapshot data is readily available to the supervisor in real-time, and that alarms are in place to indicate out-of-spec conditions.
  • Step 4: Monitor daily workloads for a month or another suitable period. Make sure the system provides the flexibility to deal with on-demand needs as they arise.
  • Step 5: Validate system performance against service level agreements (SLAs).
  • Step 6: Use DCIM trend analysis to complete capacity planning against future needs (a minimal projection sketch follows this list).
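
As one simple way to turn the trend data from Step 6 into a forecast, the sketch below fits a straight line to hypothetical monthly power-draw readings and estimates when the load would hit a capacity ceiling. The readings and the 500 kW ceiling are illustrative assumptions, not output from any particular DCIM product, and a real plan would use the richer models a DCIM suite provides.

    # Naive linear trend projection for capacity planning (illustrative only).
    monthly_load_kw = [310, 318, 325, 334, 341, 352]   # hypothetical last six months
    capacity_kw = 500                                   # assumed facility ceiling

    n = len(monthly_load_kw)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_load_kw) / n

    # Least-squares slope (kW added per month) and intercept.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_load_kw)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    months_until_full = (capacity_kw - intercept) / slope - (n - 1)
    print(f"Load is growing ~{slope:.1f} kW per month; at that pace the "
          f"{capacity_kw} kW ceiling is roughly {months_until_full:.0f} months away.")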

These steps will provide the current state of the system, and allow scenario planning for future expected expansions.

Cost Containment

As part of a critical business analysis, companies use a DCIM system to provide the information necessary to calculate the ROI (return on investment) of data center performance. This ROI is based on a careful analysis of all cost contributors. Cost monitoring starts by analyzing the power and other resource consumption of all key assets from the capacity plan. It continues by monitoring dynamic use on a real-time basis.

One of the strategies of cost containment is to avoid chasing uptime at any price. This pursuit can be a very expensive undertaking. Like many business processes, spending on uptime tends to show diminishing returns at high levels. Companies should perform a marginal cost analysis to find the optimal operating point.

To perform a marginal analysis, the company can start by considering price points for each of a Tier-1, Tier-2, Tier-3, and Tier-4 solution. These points would be calculated for a fixed power budget (in megawatts) and an available data center floor space (in square feet), yielding one cost value for each of the four performance tiers. Then, the company should complete pro forma calculations of these values over a suitable period, perhaps 30 years. This will yield a marginal cost per year for each tier. Dividing each tier’s marginal cost by its marginal uptime yields the marginal cost per hour of uptime. These values climb steeply as performance increases. This analysis presents a clear cost-versus-benefit result to the company.
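
The sketch below runs that arithmetic for four hypothetical tier price points. The capital costs and the 30-year horizon are illustrative assumptions, the availability figures are the commonly cited tier targets, and a real analysis would also fold in operating costs and discounting.

    # Illustrative marginal-cost-per-uptime-hour comparison across tiers.
    # Capital costs are hypothetical; availabilities are commonly cited tier targets.
    HOURS_PER_YEAR = 8766
    YEARS = 30

    tiers = {                       # name: (assumed total cost in $, availability)
        "Tier-1": (12_000_000, 0.99671),
        "Tier-2": (14_000_000, 0.99741),
        "Tier-3": (22_000_000, 0.99982),
        "Tier-4": (34_000_000, 0.99995),
    }

    prev_cost_per_year = prev_uptime_hours = None
    for name, (cost, availability) in tiers.items():
        cost_per_year = cost / YEARS
        uptime_hours = availability * HOURS_PER_YEAR
        if prev_cost_per_year is None:
            print(f"{name}: ${cost_per_year:,.0f}/yr, {uptime_hours:,.1f} uptime h/yr (baseline)")
        else:
            marginal_cost = cost_per_year - prev_cost_per_year
            marginal_hours = uptime_hours - prev_uptime_hours
            print(f"{name}: an extra ${marginal_cost:,.0f}/yr buys {marginal_hours:.1f} "
                  f"more uptime h/yr (~${marginal_cost / marginal_hours:,.0f} per extra hour)")
        prev_cost_per_year, prev_uptime_hours = cost_per_year, uptime_hours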

The Power of DCIM

Data center infrastructure management solutions offer powerful control to data center enterprises. DCIM allows demonstrable improvements in efficiency, performance, and cost control.


Energy-Efficient Big Data Businesses See a Better Bottom Line



One of the biggest costs for a big data center is power. In fact, according to Greenpeace, data centers typically require close to 100 megawatts of energy. Put in simpler terms, that’s enough energy to power 80,000 homes. While high-capacity systems are inevitable, there are ways for these businesses to reduce their energy consumption and improve their bottom line.


The Cold, Hard Facts


The Internet is energy-intensive. One of the biggest areas where data centers stand to improve their carbon footprint, according to Greenpeace, is where that energy comes from. Some companies are working on just that, branching out to solar power options or using air from the great outdoors to cool data centers.


There is still much that can be done. For example, there are still states in which Internet companies largely rely on fossil fuels for energy. The Greenpeace report noted that in North Carolina, only 4 percent of the energy powering the state’s data centers comes from renewable resources. When looking at the big picture, data centers account for more than 2 percent of all of the power that is generated in the U.S. Therefore, improvements in data centers can have a ripple effect, boosting efforts across the country.


Why It Matters


How does that 100 megawatts of energy translate into dollars? Think millions. These centers can consume millions of dollars’ worth of energy every year. It makes sense, as there are servers that need to be running around the clock. Therefore, simply powering down every night is not an option for most companies. The fact remains that preserving fossil fuels and utilizing cleaner sources can actually help grow revenue by minimizing expenses. It goes without saying that when you can maximize efficiencies in terms of power, utility bills shrink.


Where Issues Arise


The ideal power usage effectiveness (PUE) for a data center is 1.0, which would indicate that all of the power the center uses goes toward processing and storing data, with no energy spent cooling servers. For smaller companies, the ideal PUE is not out of reach. In fact, most start with stellar ratings.


What happens is that the business starts to grow, adding more computers and more racks. To keep things cool, it increases air conditioning to accommodate the additional equipment. The business grows again, and it cools more intensely. The cycle continues and can easily spiral out of control, especially if no one is keeping tabs on how efficiently the power is (or is not) being used. So what can be done?


  1. Take Stock


Experts note that the companies that succeed in reducing expenses are the ones that view power as more than just a cost; it is something that can be controlled. The first step any data center should take is to do an audit of how energy is put to use. Most utility companies can provide this service, and some will even do it for free. The trick is to balance reducing power needs with still maintaining the quality output you need. There are energy-efficiency firms that will partner with companies to piggy-back on top of what a utility company can do.


Depending on the result of the audit, big data centers may find that making small changes can produce big results. For example, in 2013, Forbes Magazine reported on one retailer that was consuming $7 million of power annually and wanted to make a change. It brought in experts, made changes to things like the racks it used, and selected a smart back-up power supply that would keep it running if something happened.


  2. Give Air Conditioning the Boot


Many data centers rely on a traditional method of cooling servers, which involves air conditioning. These units typically run around the clock to ensure equipment does not get overheated. There is an alternative way to generate cool air inside the center, however, known as evaporative cooling. The process involves installing a rooftop tower that cools the water that keeps servers at a safe temperature. The aforementioned retailer now uses air conditioning only a very small fraction of the time, reducing the power needed to cool the center by 93 percent.


  3. Rethink the Configuration


One of the most compelling aspects of what the retailer did was to put the focus on the equipment itself. Typically, a data center relies on an uninterruptible power supply (UPS). An inefficient system will produce wasted heat, which means energy is being used to generate unnecessary warmth that in turn needs more cool air to prevent overheating.


The solution? Combine racks to shrink the amount of space that is used. The uninterruptible power supply load is then reduced, minimizing the energy that is used and the bill that arrives every month.


The Cost of Energy Efficiency


Upgrading equipment and software comes with a cost. Depending on the size and scope of the center, the outlay can run to six or seven figures. However, the savings can kick in almost instantly. In the Forbes report, the retailer noticed a return on its energy investment in less than a year, and it now operates at the energy levels it was using half a decade ago despite having added more than 30 stores.


Big data centers will inevitably devote a decent part of their budgets toward power, but the kind of power they use and the cost are flexible. By taking the right steps, companies that make the investment can notice a return on their investment in a short period of time, thus improving their bottom line for years to come.


Helping Data Centers Survive a Cold Snap



For all of their power and capabilities, data centers are still susceptible to the disastrous effects of water, fire, lightning, earthquakes and extreme winter cold. Winters seem to have grown harsher, which makes it more important than ever to ensure that data is well protected while it’s housed in a data center.


The Frigid Fallout


Should a data center succumb to an arctic chill, it can cause problems with cooling and fuel while making it more difficult for equipment to work at peak efficiency. There’s also the fact that data center employees might have difficulty getting to work on time if they’re able to make it there at all, which can create an all new set of problems and setbacks.


It’s beneficial to have contingency plans in place that can mitigate any damage and make sure that operations proceed as normally as possible should something go wrong with the data center. It’s always better to be acting in these situations as opposed to reacting.


Know Your Enemy


In order to properly prepare a data center for the cold, you need to understand how the cold can affect a data center. The cold can add undue stress to the data system, and that’s especially true if the frigid air outside is being used to cool the center. The center’s drain lines can freeze over, as can heating coils, fuel systems, humidification units and cooling towers. You never realize how much effort goes into keeping data centers at the proper temperature until a cold snap blows through town.


Frozen air conditioning units might spring leaks, and snow might find its way into the intake vents, which can make it all but impossible for air to properly circulate and lead to a system-wide shutdown. One of the best ways to prevent these mishaps from occurring, or at least keep them from completely crippling a data center, is to be diligent when it comes to maintenance and upkeep. Make sure all battery warmers, block heaters and engine oil heaters are in working condition.


Location, Location, Location


Prevention and contingency plans will also depend on where the data center is located. If the data center is located in an area that receives especially bitter winters, it might be necessary to set aside the funds necessary to make sure that the center is always being well-heated. It’s a good idea to start saving this money up as soon as possible beforehand so that finances don’t take a bigger blow than necessary. You know that winter is coming, so you might as well prepare for it in the spring.


Generators should be insulated and heated, and you’ll want to keep humidity in mind whenever you’re dealing with electricity and cold.


It’s not unusual for anti-static wrist straps to be used in order to make sure that equipment is properly protected during the winter. The use of ultrasonic humidification can result in much better energy savings when compared to the use of conventional humidification methods.


The weather forecast should also be monitored so that there’s ample time to call in extra employees and help if necessary and so that there’s enough time to gather any necessary extra supplies.


Consider Employees


Employees should be just as well-protected from the extreme winter cold as the data center itself. Even if a data center is being built in an area that isn’t known for its punishing winters, there should still be plenty of room in storage for salt in case things take a turn for the worse. It’s also a good idea for there to be stairs instead of ladders for employees to use to move between levels, since ladders can be a huge hazard if they ever ice over.


The areas employees frequent for extended periods of time should be kept warm so that employees remain comfortable enough to be able to focus on their jobs. Snow drifts are something else that should be taken into account whenever a data center is being built since they can damage the structure of the center and lead to numerous other complications.


Aside from keeping employees comfortable, there’s also the fact that employees might not be able to make it in to work due to road closings. Individuals who are at the data center when inclement weather hits need to be well-supplied so that they can continue working as normal and so that productivity doesn’t dip.


There should also be proper employee protection, such as gloves and heavy coats, for when maintenance needs to be done on outside equipment. Even with the right winter protection, there’s still a chance that maintenance might take longer than it usually does since employees might need more breaks and have to deal with equipment that could be iced over.


Proper Preparation


In order to protect a data center from inclement winter weather, certain steps need to be taken, including:


  • Making sure that all alarms and remote monitoring panels are operational
  • Keeping a close and constant eye on outside equipment
  • Stocking up on all necessary parts in case they should fail
  • Ensuring that there are plenty of de-icing products on hand and that temperature probes have been calibrated
  • Keeping an eye on the forecast so that maintenance can be postponed if necessary


The worst case scenario always needs to be taken into account when it comes to preparing data centers for a cold snap. Be proactive and make sure that employees are fully aware of what they should do if the weather takes a frigid turn.


Hack Your Way to the Truth about Hacking


It doesn’t matter how big or how small your business is, there’s always a possibility that your website could be hacked. And the worst part is that several weeks or months might go by before you even realize that something is amiss with your site. No matter what business you’re in or how small of a budget you have, you always want to make sure that you, your website and your customers are properly protected against digital attacks. Besides firewalls and antivirus programs, knowledge is another effective tool in protecting yourself against hackers.


The Cost of Hacking


One thing that you have to always keep in mind about your business website is that it’s always open, so there’s always the possibility that it could become hacked. If you have a small- to medium-sized business, you can expect to pay more than $180,000 if your website is attacked, and that’s just on average.


Not only is there the financial cost that comes with hacking, there’s also the fact that future jobs could be lost as well. It’s estimated that at least 500,000 American jobs are lost every year because of the costs of hacking and cyber espionage. So not only do businesses have to pay for a hacker’s misdeeds, potential future employees do as well.


Other than businesses and potential future employees, there’s also the possibility that the web designer will have to pay the price of a hack, because they were the one who designed the website that was hacked in the first place. Should clients spread the word, the designer could lose out on business while the same happens to the company the website was originally designed for.


The Ripples Caused By Hacking


Should your site ever become compromised, you can immediately expect a loss in profit and revenue because you have to shut down while you get everything taken care of. Even if you open up a temporary website while you’re building a new website, there’s still the possibility that your customers will be uneasy about doing further business with you, and that’s especially true if their credit or debit card information was stolen during the hack.


Just like you have to do everything that you can to protect your business reputation after a cyber-attack, search engines also have to protect their reputation as well as anyone who uses their search engine. What this means is that your search engine rankings can take a serious hit after it’s been discovered that you’ve been hacked. Should your site be blacklisted by search engines, you might show up significantly lower on search engine results than you did before the hack.


Businesses that have been hacked also have to consider the very real possibility that some of their customers might take legal action if their financial information was stolen during the hack. Now you have to take time away from rebuilding your business in order to go to court and you might even have to spend money to hire a lawyer and pay for legal fees.


And to add insult to injury, if your business has a credit card issuer, they might fine you hundreds of thousands of dollars for the breach in security.


After the Dust Settles


You can still be feeling the effects of getting hacked several months and possibly even several years after it happened. Should the media get ahold of the information, they might always throw in the fact that you were hacked whenever they mention your name in the future.


If you have sensitive employee information that was also hacked, social security numbers, health care information and even home addresses might have been compromised. Now your employees have to worry about their identities being stolen at any time in the future, which might cost them additional frustration on top of being associated with a business that’s been hacked.


How it Happens


There are several ways that your website can be hacked, including:


  • Content management systems (CMS). A CMS is often vulnerable through back doors created by the permissions it needs to operate. Old or un-updated CMS versions are a very common entrance for a hacker.
  • Plugins. Plugins are often used in a CMS to make adding content, custom code, or images easier for the user. Some plugins, however, can leave you vulnerable to a hack.
  • Insecure passwords. There are computer programs designed to churn through combinations of passwords until they find the one that gives access to your site (see the sketch after this list).
  • Old code. Outdated or poorly written code can also act as a gateway for hackers. You might want to think about updating any old plug-ins or themes on your website.
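
As a back-of-the-envelope illustration of why weak passwords fall to those automated guessing programs, the sketch below compares the search space of a short lowercase password with a longer mixed-character one. The guess rate is an assumed figure for an offline attack on a weakly hashed password database, so treat the numbers as illustrative only.

    # Rough brute-force search-space comparison (illustrative assumptions only).
    GUESSES_PER_SECOND = 1e10   # assumed rate for an offline attack on a weak hash

    def worst_case_seconds(alphabet_size, length):
        combinations = alphabet_size ** length
        return combinations / GUESSES_PER_SECOND

    # 8 lowercase letters vs. 14 characters drawn from upper/lower/digits/symbols.
    print(f"8 lowercase letters : ~{worst_case_seconds(26, 8):.0f} seconds to exhaust")
    print(f"14 mixed characters : ~{worst_case_seconds(94, 14) / (3600 * 24 * 365):.2e} years to exhaust")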


Some of the warning signs that your site might have been hacked are:


  • Sudden surges in traffic from odd locations
  • Massive uploads of spam
  • Malware warnings
  • Several 404 error messages
  • A sluggish site


If you even suspect that your site might have been hacked, act as quickly as possible in order to mitigate any damage that might have been done. Get in touch with your hosting provider and inform them of your suspicions before you change all of your passwords. You can also hire someone to professionally “scrub” your website, which might cost upwards of $200.


To protect yourself from being hacked, it’s a good idea to change your passwords often, install security plug-ins and make sure that your website is always up to date. At the end of every month you’ll want to take a close look at your website and get rid of any themes or plug-ins that you aren’t using so that they don’t become a security liability in the future.


Think of business website protection like insurance—while you might not ever need it, it’s still a good thing to have and one of the best ways to save yourself time and money in the future. There’s no way to make your site completely hack-proof, but you can most definitely make it hard for hackers to make off with ill-gotten gains.




Considering the Fire Concerns of a Data Center

Data centers have become the new warehouse of the 21st century. Enormous volumes of information flow through their servers daily, and corporations and organizations rely on them for optimal performance. A corporation’s data is among its most valuable capital, and should be protected accordingly. When considering data protection, the old adage that “it’s better to be safe than sorry” certainly applies.

Any data loss or network interruption can be catastrophic to an organization. Yet many often think that once they have their data flowing through a server farm, concerns about its safety are unwarranted. That attitude can be dangerous, as there are still a number of ways outside of conventional methods through which data can be lost. Often a single layer of protection isn’t sufficient, and there are some things that even a firewall can’t stop. Thus, it’s encouraged to consider any and all aspects of your data’s safety, even when it’s under the care of a data center. This is certainly one area of business where overkill is underrated.

A Threat from the Center Itself?

When considering the security of their data, most focus on protecting it from intrusion. And with good reason; cyber crime and the theft of intellectual property can be disastrous. Yet many overlook concerns about where their data is being stored, and the potential risk that the actual data center can pose.

Try leaving a refrigerator door open for more than 5 minutes and see what happens. Most are surprised to find out that the room actually gets warmer. That’s due to the extra energy the refrigerator has to use to keep its contents cool once outside air is introduced. That energy is given off by the unit as heat, and its effects are noticeable.

The same principle of thermodynamics applies to a data center. Servers that are in constant use expend a lot of energy. This energy causes the machine housing to heat up. Now, imagine many of those servers stacked one atop the other from floor to ceiling, all arranged in row after narrow row, filling the room. Couple that with the miles of wiring, comprised mostly of copper and other alloys that are terrific conductors of heat, that connect the machines, and one can imagine the immense heat that builds up inside the rooms of a data center. The introduction of even a small amount of combustible material into such a hot environment could easily produce a literal fireball.

Most data centers have measures in place to combat these potential hazards. The rooms housing the servers must be well-ventilated to help transfer some of the heat outside of the room. Yet, in the event of a fire, these ventilation systems can also serve to delay response. The rapid airflow caused by a ventilation system can actually carry smoke away from the source before a smoke detector can trigger any sort of fire suppression system.

Aside from fire, water is the next most harmful substance to a server. Yet most data centers employ some form of sprinkler system as part of their fire suppression system. Water raining down upon the servers in the event of a fire can often cause more damage to a server farm than the fire itself. Often, the suppression systems themselves are the actual hazard. Accidents have happened at data centers where fire alarms, triggered incorrectly or by an inconsequential amount of smoke, accidentally damaged or destroyed a center’s assets when the servers were doused by the sprinkler systems, causing data loss and greatly limiting the operational effectiveness of entire companies and organizations.

How to Mitigate These Risks

Corporations are faced with a bit of a dilemma: finding data centers to house their information and run their servers that don’t in and of themselves pose a threat. The answer comes only from investing extensive time in researching how a data center handles the risks of fire, and what methods it employs to extinguish one should it occur.

Because the mission-critical nature of data centers requires that extended downtime be avoided at all costs, simply turning servers off to allow them to cool isn’t a practical way to mitigate the risk of fire. Given that a center can’t totally eliminate the risk, the key to better protecting a center and its servers from fire is to find a way to immediately identify fire at its source and to suppress it without damaging the servers in the area.

Recent advances in fire suppression systems have turned such ideas into possibilities. In order to better identify the source of fires, photoelectric smoke detectors have been created that better detect combustion particles in the air. The detectors can be placed on ceilings or within air ducts. These new systems also help prevent false alarms from causing any potential sprinkler damage, as they use microprocessing devices that work in real time to determine whether whatever the alarm is sensing is real.

New suppression solutions have also been created to replace conventional sprinkler systems. Rather than using water, these systems deploy gas-based, waterless agents aimed at extinguishing the fire in its incipient stage. These agents work by either absorbing the heat from the source of the fire or by depleting the oxygen in the area and essentially choking the fire out. Once discharged, they leave no residue on the machines, and performance is unhindered.

Data centers have helped countless organizations increase their operational effectiveness and protect their data. Yet there are safety trade-offs to be made when considering renting space within a center. A little research into the safety practices of a data center can help one determine which providers offer that effectiveness without the safety risks.



Looking Into The Future Automation Of Data Centers

Data storage is as demanding as ever, pushing server providers to seek innovative solutions that are still barely able to keep up with the rapid growth. Server providers have the delicate task of maintaining fast, safe, and reliable user access—not to mention simple concerns such as preventing equipment from overheating. Automation appears to be the future of servers, as purely manual control of the incredible amount of data will likely no longer be reasonable. The solution is expected to be a mix of high-tech artificial intelligence used in both computers and equipment that is best described as robots. While current technology and engineering are still not quite up to the task, the vast resources being poured into research and development will soon bring this tech revolution to reality.

What is Fueling the Incredible Growth of Data Storage and Server Demands?

In 2012 the estimated stored digital data in the world was just under three zettabytes, a number that is expected to soar to over eight zettabytes by 2015. A zettabyte is one billion terabytes; many new computers feature one or two terabytes of storage, an amount that was at one time limited to supercomputers. Data storage has grown at such a fast clip that the tech world has had to rush to develop names to quantify storage amounts. As a frame of reference, a zettabyte is capable of storing two billion years of music! So what is fueling this frantic data growth?

  • Growing Worldwide Access to the Internet. Roughly 40% of the global population now has access to the web, representing 250% growth since the mid-2000s. Continued economic development in the third world will add more and more users over the next decade.
  • Vast Amounts of Video. By the year 2016 over half of web traffic will involve internet video, further creating a massive storage demand. YouTube users alone upload somewhere in the area of fifty hours of video a minute, 24 hours a day.
  • Increase in Mobile Device Use. Smartphones, tablets, and other mobile devices are quickly becoming internet users’ primary method of web surfing, dramatically raising the average amount of time spent on the web.
  • Booming Online Commerce. Online retail purchases are set to exceed in-store purchases over the next ten years, further boosting data demands.

The above are just a few of the many contributors to increasing data demands. Government and business internet usage are also increasingly adding to total digital data, leading both to develop their own storage server facilities.

How Will Servers Keep Up With Storage Demands?

Google, Facebook, and other organizations that handle massive amounts of data continue to build bigger and more advanced data centers throughout the globe, and are still struggling to keep up. Building more and more data centers is not a feasible long-term solution to data demand; the development of new technology is a necessity. The following are some of the current and anticipated trends that will move data storage forward:

  • Automated Monitoring and Fixing of Processes. Identifying the root of server performance problems can be a time-intensive task, as is fixing them. Newer technologies that automatically find and fix faulty processes are growing in popularity. Developing programming and artificial intelligence technology will build on current systems to allow for more complex fixes and for performing daily tasks that currently have to be done manually (a minimal watchdog sketch follows this list).


  • Increased Energy Efficiency. Energy use has been a consistent thorn in the side of server operators, with some experiencing as much as a 90% waste rate. This problem has been not only expensive but has drawn the ire of environmental advocates as well, who are also upset that most data centers rely on gas-powered generators in the event of a power outage. Google, Facebook, and other internet giants have turned to building servers in areas like Sweden and Greenland to take advantage of natural cooling and nearby hydroelectric power.


  • All-in-One, Portable Data Centers. AOL and other online operations are experimenting with small, portable, unmanned data centers that can supplement high-demand areas or step in if a main data center is damaged. The smaller nature of these portable centers necessitates that they function on their own as much as possible. Advancements in robotic and AI technology have the ability to fuel the widespread use of small, self-reliant data centers.
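
A greatly simplified example of the “find and fix” automation described in the first bullet above might look like the following watchdog loop, which probes a health-check URL and restarts a service when it stops responding. The URL, service name, and restart command are hypothetical placeholders, not references to any real deployment, and production systems layer far more diagnosis on top of this idea.

    # Minimal watchdog sketch: probe a service and restart it if it stops responding.
    # The URL and systemd unit name below are hypothetical placeholders.
    import subprocess
    import time
    import urllib.request

    HEALTH_URL = "http://localhost:8080/health"
    SERVICE = "example-app.service"

    def healthy():
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    while True:
        if not healthy():
            print("Health check failed; attempting automatic restart.")
            subprocess.run(["systemctl", "restart", SERVICE], check=False)
        time.sleep(60)   # poll once a minute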

The Dawn of the “Lights-Out” Data Center

AOL’s vision of a “lights-out” data center has attracted the attention of other major web players. A “lights-out” data center is one designed to be completely human-free in day-to-day operation, save for significant breakdowns in equipment. Data center robotics are currently focused on a rail system that allows robotic equipment to move throughout a data center to relocate servers, perform minor repairs, clean, and use integrated software to handle processing problems. Current data centers require administration and engineering staff to be responsible for a high number of servers; the use of robotics has the potential to cut down on operating costs and reduce employee workloads.

The development and use of robotics is currently too expensive to make it feasible for widespread use, as the upfront costs associated with building a lights-out center make it an uneasy investment to make. However, in the near future the upfront costs will likely be more than made up for due to higher efficiency, decreased labor costs, and the ability to remotely control robotics from anywhere in the world.

Given the staggering growth in data needs, server operators have no choice but to adapt in order to meet demands and provide the performance that users expect. Robotic technology, innovative software, and artificial intelligence will likely be the foundation of data center improvements, potentially revolutionizing one of the most important resources in the world.


The Damage Caused by Downtime

Data center reliability has helped to increase the operational effectiveness of thousands of companies around the globe. Improvements in technology and the design of server farms, as well as in the facilities that house them and the training of the people who oversee them, have allowed colocation centers to reasonably offer the holy grail of network reliability: the seldom-achieved 99.999% uptime.

Yet it’s often that exceptional reliability that leads organizations into trouble. 99.999% still isn’t 100%, yet too many companies take for granted how difficult dealing with that 0.001% can be. This is only compounded by the fact that corporations allow the security they feel in their data center’s reliability to convince them to allocate resources normally dedicated to network maintenance to other areas. While this line of thinking may produce the desired results in the immediate term, it takes only one instance of peak-hour downtime to reveal just how flawed this idea truly is.

Given the vast resources some organizations have in terms of staff and production capability, it’s remarkable to see how easily their operations can come to a grinding halt during network downtime. This is most often due to poor training of employees on how to handle manual processes and what to do in the event of a server failure. While this typically isn’t an issue for organizations that have chosen a third-party colocation provider whose only role is server maintenance, it can spell disaster for those who operate an in-house data center, as limited resources often contribute to extended downtime.

The Difference 0.001% Can Make

This is an issue facing both large and small businesses alike. To put a monetary value on exactly how much that cumulative 0.001% represents, downtime cost companies $26.5 billion in revenue in 2012, an increase of 38% over 2010 numbers. Considering the numbers from companies that reported server outages, that breaks down to roughly $138,000 per hour lost during downtime. And considering that those numbers only reflect the outages that were reported, one has to wonder if the actual loss isn’t much higher. Experts estimate that were every data center in the world to go dark, $69 trillion would be lost every hour.
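
To put the 0.001% in concrete terms, the sketch below converts several availability levels into expected annual downtime and multiplies by the roughly $138,000-per-hour figure cited above. That per-hour cost is an industry-wide average from the numbers above, so any individual company's exposure will differ.

    # What different availability levels mean in hours and dollars per year,
    # using the ~$138,000/hour average cost cited above (illustrative).
    HOURS_PER_YEAR = 8766
    COST_PER_HOUR = 138_000

    for availability in (0.999, 0.9999, 0.99999):
        downtime_hours = (1 - availability) * HOURS_PER_YEAR
        print(f"{availability:.3%} uptime -> {downtime_hours:6.2f} hours down per year "
              f"(~${downtime_hours * COST_PER_HOUR:,.0f}/yr at the average rate)")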

That number represents the oddity in these figures. If data center reliability has actually increased, why the increase in lost revenue? One might expect the numbers to go the other way. Yet despite the potential for network outages, demand for services provided by data colocation centers is growing at an astronomical rate. Currently, data centers worldwide use as much cumulative energy annually as the country of Sweden. That usage will only increase with time, as it’s believed IP traffic will hit 1.4 zettabytes by 2017.

Another contributing factor is the virtual devaluing of data storage. In 1998, the average cost of 1 GB of hard drive space was $228. The same amount of space was valued at $0.88 just 9 years later. As data storage and transmission needs continue to grow exponentially, enormous data warehouses are becoming more common. Eventually, this leads to an increased need for data colocation, as more companies need to dedicate increased attention to their server maintenance.

The Myth of 100% Network Reliability

In anticipation of such an increased need for colocation services, many may think that the next logical step in data center evolution is to reach the 100% reliability plateau. Unfortunately, such a target is beyond reach (even five 9s is only rarely achieved). The reason is that there are almost as many causes of server failure as there are bits of data being carried. And chief among these is a problem for which there is no solution.

The most common cause of downtime is, by far, human error, accounting for almost 73% of all instances of downtime. Whether it be from lack of operational oversight, poor server maintenance techniques, or just poor overall training, people can’t seem to get out of their own way when it comes to guaranteeing the performance of their servers. Companies can and should invest resources in mitigating this, but eliminating it altogether remains an illusion.

Aside from human error, downtime can be caused by literally anything. Anyone doubting this need only do a quick internet search for “squirrels” and “downtime” to see just how often these furry little rodents can bring elements of the human business world to a standstill by chewing through cables and thus limiting a data center’s operational capacity.

Focus on Preparation, not Prevention

Given that downtime is an inevitability, an organization is much better served in turning its efforts to dealing with downtime rather than trying to eliminate it. Most, if not all, companies aren’t doing all they can to adequately deal with network downtime. 87% of companies admitted that a data center crash resulting in extended downtime or data loss could be potentially catastrophic to their businesses. Yet more than half of American companies admit that they don’t have sufficient measures in place to deal with such an outage.

Given this information, organizations looking to either establish an on-site data center or rent space through a third party data collocation company should consider these two aspects:

  • What degree of network reliability can the data center deliver? Taking into account what’s actually feasible, can it guarantee operational effectiveness to the highest of current standards?
  • What needs to be done to improve in-house performance in the inevitable event of downtime? Are sufficient efforts being placed on maintaining operational capacity as well as on running data recovery once the network is back up?

As corporations and organizations continue to rely more and more on their data colocation services for optimal performance, understanding the limits of data center reliability and being prepared to deal with network outages is the key to avoiding huge revenue losses from downtime.


How to Select the Hard Drive That is Right for You

Selecting and purchasing a hard drive used to be a simple task. Generally people would find a few of the highest-capacity drives they could afford, pick the fastest of the group, and be done. Today, however, there are many different types of storage devices on the market, filled with different and more complex technology. The sheer variety of devices available can make it difficult to select one that will work best for your needs. The choice between drive types, such as a solid state drive (SSD) versus a conventional Serial ATA (SATA) hard drive, only adds to the selection problem. There are also other factors to consider when selecting a storage device, like random access performance, cost, sequential performance, reliability, and density.

There are many factors that make selecting the right drive a challenge. Understanding the intricacies of each storage device feature will help tease out which device will better suit your needs. The fundamental question that needs to be considered is which is better: a conventional hard disk drive (HDD) or a solid state drive (SSD)?

Technology: HDD

The standard HDD contains multiple disks, otherwise known as platters. These platters are covered in a magnetic coating and rotated at high speeds. The platters are read and written by drive heads, which sense or change the magnetization of the material passing beneath them.

Reading and writing data requires a lot of work. Even though the ideas behind an HDD are simple, creating high-capacity drives at reasonable prices poses several problems for manufacturers.

During normal operation, the platter must rotate and the heads must move to an exact point before the drive can actually do anything. This whole process takes time and is one of the main reasons for performance bottlenecks in PCs. It is also a problem in netbooks and laptops, because moving all of those components around creates a constant power drain.

Drive heads are positioned very close to the platter; only the thickness of a human hair separates them. If there is a shock or electrical bump at the wrong time, they may collide, damaging the drive and losing data. Although manufacturers do a lot to prevent these types of collisions from happening, there is always still a risk.

None of these issues affect SSDs, although the price difference between the two is very significant.

Technology: SSD

Solid state drives are technologically very different from HDDs. They forgo the platter and head components completely and replace them with a simple, non-moving memory chip. The chips vary from drive to drive, but the majority use flash memory, the same technology found in MP3 players, cameras, and memory cards. The advantage of this flash memory is its ability to store data without any power.

The more efficient technology found in SSD is much more expensive than the technology found in HDD. SSD drives on the market typically have low capacities with a high price tag.

SSDs do make up for the high price with excellent performance. A typical HDD takes seconds for its platters to reach full speed, whereas an SSD is ready to go immediately. An SSD also doesn’t have to move heads around or wait for a platter to rotate to the right position before it can reach data. An SSD can be up to fifty times faster than a regular HDD. Equipping a PC with an SSD can more than halve the boot time, which can be very beneficial for some users.


When it comes to the technologies of HDD and SSD, it is very obvious that SSDs are technologically superior, but that superiority comes with a problem: higher cost and lower capacities.

An SSD can read large chunks of data at over 500 MB per second, write data at over 300 MB per second, and access random data in 0.1 milliseconds. This is a significant difference from the fastest HDDs, which manage only about 150 MB/s for sequential reads and writes, with random data access at around 17 milliseconds. SSDs allow Windows to boot and run faster, load programs and games in seconds, consume less power, and put out less heat, noise, and vibration. An SSD is superior when used in the right context.
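
Using the throughput figures quoted above, the short sketch below estimates how long a sequential read of a given dataset would take on each drive type. The 50 GB dataset size is an arbitrary example, and real-world results vary with the specific drives and workload.

    # Sequential-read time comparison using the throughput figures quoted above.
    DATASET_MB = 50 * 1024          # arbitrary 50 GB example workload

    for drive, mb_per_s in (("SSD (~500 MB/s)", 500), ("HDD (~150 MB/s)", 150)):
        minutes = DATASET_MB / mb_per_s / 60
        print(f"{drive}: ~{minutes:.1f} minutes to read 50 GB sequentially")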

If choosing between equally priced drives, an SSD will provide only about 3% of the capacity of the same-priced HDD. There are higher-capacity SSDs available, but they are prohibitively expensive. Unless money is not an issue, the first desktop drive should always be a standard HDD. Although the performance won’t be up to SSD standards, it is adequate for most tasks, and the money saved can buy a more significant boost of speed in a different part of the computer. If optimizing an existing PC, an SSD would be the way to go. The faster boot times and overall speed boost will help Windows load more quickly, even with a small-capacity SSD, because data and other applications can stay on the regular hard drive.

SSDs are great for laptops, depending on what their intended use is. If a laptop is being used as a replacement desktop system (running lots of applications), SSD won’t really have the capacity to help. If the laptop’s intended use is basic (things like email, word processing, browsing), SSD, even a low capacity would be worth it. The SSD would improve the overall system performance, reduce weight and noise, and increase the battery life.

There are many things to consider when selecting the right drive for your PC environment. Don’t get sucked in by new fads or hype. Know your personal requirements and PC environment and price out drives from there.




The Future of Data Center Design and the Open Compute Project

Recently, Intel and Facebook banded together to create the next generation of data center rack technology. The prototype includes an innovative type of architecture called the photonic rack architecture. This new prototype improves every aspect of data center racks, including design, cost, and reliability, by implementing a disaggregated rack environment, an Intel switch, and silicon photonics technology.


What are Silicon Photonics


Silicon photonics take advantage of light photons to move gigantic amounts of data at extremely high speeds over a small, thin optical fiber. Traditionally, data has been moved as electrical signals sent over copper cable. The prototype that Intel came up with can move data at up to 100 gigabits per second. This speed allows the components to work together, even when they are not in close proximity to one another.
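
As a quick sense of scale for that 100 gigabit-per-second figure, the sketch below compares transfer times for a hypothetical 1 TB dataset over a 100 Gb/s photonic link and a 10 Gb/s copper link. The dataset size and the 10 Gb/s comparison point are illustrative assumptions, and real transfers also carry protocol overhead.

    # Transfer-time comparison at the 100 Gb/s rate quoted above vs. a 10 Gb/s link.
    DATASET_BITS = 1e12 * 8          # hypothetical 1 TB dataset

    for link, gbps in (("100 Gb/s photonic link", 100), ("10 Gb/s copper link", 10)):
        seconds = DATASET_BITS / (gbps * 1e9)
        print(f"{link}: ~{seconds:.0f} s to move 1 TB (ignoring protocol overhead)")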


New Options in Design


Intel has created a rack that separates components onto their own server trays: one tray for Atom CPUs, one tray for Xeon CPUs, and another for storage. This design is great because when a new generation of CPUs comes to market, the user can swap out the CPU tray instead of having to wait for an entirely new server and motherboard design.


This design approach enables the independent upgrading of compute, network and storage subsystems, an ability that will define the future of data center designs for the next ten years. Intel’s photonic rack allows for fewer cables with increased bandwidth, better power efficiency, and longer reach than today’s copper-based connections. These new technologies make hardware much more flexible and, when coupled with silicon photonics, enable interconnection without much concern over physical placement.


Rack Disaggregation

The term ‘rack disaggregation’ simply refers to the separation of the resources that exist in a rack: storage, networking, compute and power distribution. The separation is in the form of discrete modules. In a traditional data center rack, each server has its own group of resources. When data center racks are disaggregated, resources are distributed, grouped by type, and upgraded at their own pace without affecting the others.


Disaggregation not only increases the lifespan of each resource, it enables IT managers to replace individual resources rather than the entire system. This modularity makes data centers much more flexible and serviceable, which in turn improves the total cost of an infrastructure overhaul investment. The arrangement also improves thermal efficiency because it allows for more optimal placement within a rack.


Connector Design


Optical interconnects today generally use a connector called MTP. The MTP connector was not optimized for data communication applications; it was designed in the 1980s for telecommunications. Even though MTP used state-of-the-art technology at the time it was created, it has not kept up to date. The MTP connector uses parts that are individually expensive and can be easily contaminated by dust.

New Connector Design


In the last 25 years, there have been significant changes in materials and manufacturing technology. Taking advantage of these changes, Intel, with the help of optical fiber and cable specialists, designed a brand new type of connector that uses modern technology and manufacturing techniques. They have included a telescoping lens to help prevent dust contamination, and used fewer parts, with up to 64 fibers in a smaller form factor, all at a lower cost than MTP.


New Innovations


The new Intel prototype uses silicon photonics technology as well as distributed input/output based on Intel’s Ethernet switch silicon. The prototype also supports Xeon processors as well as next-generation system-on-chip Atom processors. These innovations dovetail nicely with many other ongoing Open Compute projects. The SoC/memory module was created following the CPU/memory “group hug” module specification proposed by Facebook. The existing OCP Windmill board specification, which supports 2S Xeon processors, is planned to be modified so that signal and power delivery can interface with the OCP Open Rack v1.0 specification for power delivery through 12V bus bars and, for networking, with a tray-level mid-plane board that contains the switch mezzanine module.


Intel is also planning to contribute a design for a photonic receptacle to the Open Compute Project, and plans to work with Corning and Facebook to standardize the design.


Other New Features


Intel has been highly involved in the Open Compute Project and has contributed several innovations, including new storage technologies, racks, and systems. Specifically, Intel has been working on finalizing the Decathlete board specification for a general-purpose, dual-CPU, large-memory-footprint motherboard for enterprise adoption.



What is the Open Compute Project?

The Open Compute Project is an initiative announced in 2011 by Facebook to openly share data center product designs. The initiative, led by Frank Frankovsky, began after Facebook redesigned its data center in Prineville, Oregon. The photonic rack design is still a long way from being used in data centers, but some published aspects have already been used successfully in the Prineville center to help increase energy efficiency.

The future of data center design is being created right before our eyes. If successful, the Open Compute Project will enable a rapid technological advance that will make data safer and more accessible than it has ever been before. With the new rack, connector and photonics, data will be much easier to store, use, and share. Collaboration is the key, which is why the Open Compute Project is so widely accepted by many different companies across the globe. The future is coming; are you ready?


Data Protection Solutions and Virtualization

Protecting data from hackers and viruses has never been more important. There are many different types of data protection available on today’s market, each with unique features, which can make it difficult to select the one that is right for you.

User Friendly?

When it comes to data protection products, the bottom line is ease of use. This is important because the person responsible for data protection may not be very knowledgeable or available to manage the product every day.

When selecting a data protection product, look for a system that can be managed through one user interface. The interface should be simple, so that it only takes a couple of minutes to complete daily tasks. Another must is that the product should be able to send alerts to your mobile devices and inbox with a simple, straightforward report of recent activities and their status.

The ease-of-use requirement coincides well with historical reporting. If the system user understands what the solution is doing daily, then he or she will be able to understand what the system has been doing for the previous days, weeks, months and more. This feature allows for future planning with very little hands-on work. For example, given a report of growth over the last four months, the user can predict when the system will run out of space and know when to purchase more tapes. Although simple, this example shows that data protection products should help plan for future usage and future budgets, in addition to operating well in the present.

One of the other features that can dictate whether the solution will be easier or harder to manage is a single-server footprint versus a master/media footprint. The same goes for automatic client software updates, because manually updating systems across an entire infrastructure takes time away from IT administrators.

A successful data protection product should have administrative time-savings built in to help reduce the cost of operations.

The Life-Cycle of Entering Data

All data is not created equal, and it shouldn’t all be stored the same way. Sometimes IT organizations drive up the cost of storage because they treat all data the same and store it on the same media. Capabilities like hierarchical storage management (HSM) or long-term archive management allow for flexible data storage on different tiers and with specific policies. These types of storage systems allow administrators to migrate and store data on the tier that is most appropriate for the data they are storing. With these types of storage, older and less frequently accessed data can be moved to a storage platform that is slower and much less expensive, like tape, while newer and more frequently accessed data can be kept on faster, more expensive storage. Automated data archiving can also help in the life cycle of data by helping organizations comply with data retention policies, as well as reducing the cost incurred by that compliance.

It is important to look for data storage systems that reduce the overall cost because of automated data life-cycle management based on policy. Also, moving data to the most appropriate tier helps an organization become cost effective while still maintaining service level requirements.

Hierarchical Storage Management (HSM)

Hierarchical storage management (HSM) is a technique for storing data that automatically moves it between high-cost and low-cost storage media based on how the data is used. High-speed storage devices (hard disk drive arrays) are much more expensive than slower devices (optical discs, magnetic tape drives). In an ideal world, all data would be available on high-speed devices all of the time, but in the real world this is extremely expensive. HSM offers a unique solution by storing the bulk of the organization’s data on slower devices and copying it to faster drives when needed. HSM monitors how, and how frequently, data is used and ‘decides’ which data should be moved to slower drives and which should stay on the faster drives. Data files that are used frequently are stored on fast drives, but will eventually be migrated to a slower tier (like tape) if they aren’t used for a certain period of time. The biggest advantage of HSM is that the total amount of data stored can be much larger than the capacity of the high-speed disk storage available.
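
A toy version of the HSM decision described above might look like the following, which scans a directory tree and lists files that have not been accessed within a cutoff period as candidates for migration to a slower tier. The path and the 90-day cutoff are illustrative assumptions, and real HSM products apply far richer policies and perform the migration transparently.

    # Toy HSM-style policy: flag files not accessed in 90 days for migration.
    # The path and cutoff are illustrative; real HSM policies are far richer.
    import os
    import time

    ARCHIVE_AFTER_DAYS = 90
    ROOT = "/data/projects"          # hypothetical fast-tier directory

    cutoff = time.time() - ARCHIVE_AFTER_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getatime(path) < cutoff:
                    print(f"candidate for slower tier: {path}")
            except OSError:
                pass                 # file vanished or is unreadable; skip it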


Virtualization

The technology surrounding virtualization has drastically helped IT organizations of every size reduce their costs by reducing application provisioning times and improving server utilization. These cost reductions, however, can disappear quickly in the face of virtual machine sprawl. The connection between physical and logical devices also becomes very hard to track and map, creating a virtual environment that is more complex than has ever been seen before. In these complex virtual environments, backing up and restoring data can become very difficult. For example, backing up or restoring data for a group of virtual machines that reside on one physical server can bring all other operations running off of that server to a complete stop, including data protection services.

Data Reduction

Data reduction technology is the first line of defense against rapidly expanding data volumes and costs. Solutions to this problem include progressive-incremental backup, data compression, and data deduplication. These techniques can help organizations reduce their backup storage capacity by as much as 95 percent. Efficient tape management and utilization can also help reduce data storage capacity requirements.
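
To show the idea behind data deduplication in miniature, the sketch below splits a byte stream into fixed-size chunks, hashes each one, and stores only the unique chunks. Production deduplication typically uses variable-size, content-defined chunking and persistent chunk stores, so treat this purely as an illustration of the principle.

    # Miniature fixed-size deduplication: store each unique chunk only once.
    import hashlib

    CHUNK_SIZE = 4096

    def dedupe(data: bytes):
        store = {}      # chunk hash -> chunk bytes (kept once)
        recipe = []     # ordered list of hashes needed to rebuild the stream
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return store, recipe

    # A highly repetitive example stream dedupes very well.
    data = b"A" * 40_000 + b"B" * 40_000
    store, recipe = dedupe(data)
    stored = sum(len(c) for c in store.values())
    print(f"original {len(data)} bytes -> {stored} bytes of unique chunks "
          f"({100 * (1 - stored / len(data)):.0f}% reduction)")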


Selecting a data protection service can be arduous, but if you understand the services offered and which are most useful to you, the selection will be much easier.
