Archive for the ‘Data Center’ Category

The Six Sigma Route to Lean and Green Data Centres

Six Sigma has evolved from a quality standard into an all-encompassing management philosophy. At its core, Six Sigma measures variation from a defined standard. The concept entails defining, measuring, analyzing, then improving or designing, and finally controlling or verifying processes (the DMAIC and DMADV cycles) to bring them to acceptable standards.

Of late, lean Six Sigma has found widespread application in data centers as a means of improving efficiency. Specific applications of Six Sigma methods in data centers include:

  • Conducting usage studies to optimize server and energy loads. This gives the data center a firm grasp of what each available server does, which improves server utilization and identifies unneeded machines for removal. In many cases, systems run out of usable memory long before CPU becomes an issue, meaning that a simple memory upgrade can negate the need for new servers. A Six Sigma analysis can help the data center identify these memory issues as a precursor to leveraging new rack-level architectures that allow memory and compute to scale independently.
  • Tuning and optimizing the IT systems in use through the deployment of intelligent power management. For instance, Six Sigma analysis helps to identify peak hours, allowing the data center to turn some servers off during non-peak or non-office hours (a rough sketch of this kind of analysis follows this list).
  • Quarantining hardware not in use so it can be reallocated to new projects or discarded if it is no longer required, thereby saving space and maintenance costs. In practice, this means conducting weekly audits and daily inspections to reduce inventory levels and devising a system to properly label, store and track all components.
  • Eliminating wasteful practices and procedures. This manifests as employing intuitive designs, such as a “herringbone” stacking pattern that increases utilization of storage space by 50%, standardizing components and parts so that management and replacement become easier, and opting for uniform signage that makes retrieval and allocation fast and easy.
  • Creating a culture of discipline to improve work practices and increase productivity. This may take forms such as target setting, empowerment, developing cross-functional teams adept at multitasking and other positive interventions for data center staff. It helps eliminate lag, errors and issues are nipped in the bud before they escalate, and uptime standards are more consistently met.
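
As a rough illustration of the first two points, here is a minimal Python sketch that flags servers which look memory-bound rather than CPU-bound and identifies low-traffic hours when machines could be powered down. The server names, utilization figures and thresholds are entirely hypothetical; a real Six Sigma effort would pull these numbers from monitoring data under a formal measurement plan.

```python
from statistics import mean

# Hypothetical hourly samples per server: (cpu_pct, mem_pct). Stand-ins for real monitoring data.
samples = {
    "web-01": [(22, 91), (25, 93), (18, 90)],
    "web-02": [(70, 55), (65, 60), (72, 58)],
    "batch-01": [(10, 30), (12, 28), (9, 25)],
}

# Hypothetical request counts per hour of day, used to find candidate power-down windows.
hourly_requests = {hour: (5000 if 8 <= hour <= 18 else 300) for hour in range(24)}

MEM_HIGH, CPU_LOW, OFF_PEAK = 85, 40, 1000  # illustrative thresholds

for server, readings in samples.items():
    cpu = mean(r[0] for r in readings)
    mem = mean(r[1] for r in readings)
    if mem >= MEM_HIGH and cpu <= CPU_LOW:
        print(f"{server}: memory-bound (mem {mem:.0f}%, cpu {cpu:.0f}%) - consider a memory upgrade")
    elif cpu <= CPU_LOW:
        print(f"{server}: under-utilized (cpu {cpu:.0f}%) - candidate for consolidation")

off_peak_hours = [hour for hour, reqs in hourly_requests.items() if reqs < OFF_PEAK]
print("Candidate power-down hours:", off_peak_hours)
```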

Practitioners of Six Sigma claim direct benefits of up to a 50% reduction in operational process costs, along with indirect benefits from shorter cycle times, less waste and increased customer satisfaction.

Lifeline Data Centers offers fully compliant and highly efficient colocation services with flexible plans that suit all your data center and hosting requirements. All plans come with 99.995% uptime in N + N redundant facilities, and we have the most competitive pricing because we know how to run our data center efficiently.

CONTACT LIFELINE DATA CENTERS

Physical Features of a Secure Data Center

Organizations trust data centers with the lifeblood of their business: data. Therefore, one of the most important aspects of a data center is security. How do data centers keep their premises secure? Are the current methods keeping up with the increasing threats to business data? How many layers of security do you need to protect a data center?

Let’s take a look at some of the physical features that make up a secure data center.

Key security features of a data center include:

  • An enclosed facility is the primary security feature for a data center. An “everybody’s welcome” impression isn’t exactly what you want to convey. Secure data centers are staffed by security personnel 24 hours a day, 7 days a week, 365 days a year. Perimeter security is present in the form of both physical and electronic surveillance.
  • Entry into even the outer premises of a building housing a data center depends on the identification and verification process conducted by the security staff; this step comes before the gate is ever opened. Physical security is as important as the virtual security a data center provides, and the members of the security team play an important role in enforcing the rules.
  • Biometric systems assist in the entry protocol of a data center building and are a reliable way of deciding which personnel can gain entrance. The data center’s top management decides on the access list, and the necessary steps are taken so that no one can bypass these systems without valid permission.
  • Mantraps are an essential addition and assist in detaining anyone who has gained unauthorized entry. This allows security personnel to engage the intruder and determine the motive for the unlawful entry.

While these measures are only the tip of the iceberg when it comes to data center security, they are important ones. At Lifeline Data Centers, security is an integral aspect of our business structure and we have incorporated every single one of these elements into our physical data center security. Get in touch to learn more about our security measures that constantly evolve to deal with looming threats, or schedule a tour with us to see it for yourself.

posted by | No comments

Data centers and compliance go hand in hand, and whenever there is a change on the compliance front, data centers invariably have to adapt.

Companies that provide cloud services to the U.S. government now have to comply with the Federal Risk and Authorization Management Program (FedRAMP), a uniform set of security requirements. FedRAMP makes it easy for federal agencies to deploy their applications in the cloud by letting them pick from a list of pre-screened cloud providers. The onus is on these pre-screened providers, and, by extension, on the data centers that serve them, to meet the mandated security requirements.

FedRAMP compliance mandates:

  • Ensuring that the system inventory, boundaries and controls satisfy the 298 control requirements derived from the National Institute of Standards and Technology’s (NIST) Special Publication 800-53 Revision 3 (a simple gap-analysis sketch follows this list).
  • Laying down policies and procedures for the employees who perform IT security responsibilities, and putting processes in place for performing risk and security assessments.
  • Mapping the system inventory and boundaries, describing the network, hardware and software in use and where the system’s boundaries lie.
  • Documenting System Security Plans (SSPs) per the guidelines and templates published at FedRAMP.gov. The FIPS 199 categorization template allows service providers to categorize their systems as low or moderate impact, which determines the set of applicable FedRAMP security controls.
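
As a simple illustration of what mapping a system against control requirements can look like, here is a minimal Python sketch of a control gap analysis. The control IDs and status values are hypothetical placeholders, not an actual FedRAMP baseline; in practice the full set of controls is tracked in the official FedRAMP templates.

```python
# Hypothetical subset of required control IDs (the real baseline comes from the FedRAMP templates).
required_controls = {"AC-2", "AU-2", "CM-8", "IA-2", "RA-5", "SC-7"}

# Hypothetical record of what the provider has implemented and documented so far.
implemented = {
    "AC-2": "implemented",
    "AU-2": "implemented",
    "CM-8": "partially implemented",
    "IA-2": "planned",
}

missing = sorted(required_controls - implemented.keys())
incomplete = sorted(c for c, status in implemented.items() if status != "implemented")

print("Controls with no documented implementation:", missing)        # ['RA-5', 'SC-7']
print("Controls documented but not fully implemented:", incomplete)  # ['CM-8', 'IA-2']
```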

Before FedRAMP became mandatory, each federal agency conducted its own risk assessment for the cloud services it procured. This resulted in multiple, redundant security assessments for identical services and a lack of clarity on what constituted acceptable standards. FedRAMP standardizes the risk assessment process for every federal agency and will make things easier in the long run, even though data centers and providers have to spend considerable time upfront mapping their systems to the new security requirements.

FedRAMP standards are the result of close collaboration with all major cybersecurity stakeholders, including NIST, the General Services Administration, the Department of Defense, the Department of Homeland Security, the National Security Agency, the Office of Management and Budget, the Federal CIO Council and its working groups, and private industry.

Let the compliance experts at Lifeline Data Centers help you solve your SSAE 16, TIA-942, NFPA, HIPAA, FISMA, FDA, PCI DSS, FedRAMP and Sarbanes-Oxley audit problems. Lifeline delivers multi-level compliance solutions in audit-ready data centers with in-house expertise. We are dedicated to staying on top of compliance so you don’t have to worry about it. Learn more about our data center compliance today.

Data Centers: What’s in Store for the Future?

As more options become available for data protection, it’s important to take a look at your current data protection strategy and what’s in store for the future. And we’re here to tell you that data centers are here to stay.

The global outlook for data centers is positive and will remain so for a long time, especially considering that current data centers will have completed only half their life cycle by the year 2025. There will, of course, be regional disparities depending on factors such as costs, security and government norms, but the general growth of the segment is undeniable.

Data centers are lifelines for sectors across the grid: banking, marketing, education, telecommunications, power, technology. Name an industry and you will surely find at least a handful of companies from it hosting their data with a data center. The reliance on data and big data analytics has helped the industry grow as well, ensuring that data centers have a strong foothold in the data protection market.

Consider the North American data center market as an example. A dense market for the data center industry, the region boasts around 10 million square meters of data center white space, with spending up 3.5% in 2013. A look at the other side of the planet shows data center spending in Asia-Pacific growing even faster, at a 7% annual rate, while growth in Europe is expected to be around 2%.

Here are some key changes and trends that will support data center growth and future development.

  • The scale at which today’s Internet companies, cloud technology players and software service giants operate will define the data center industry of tomorrow.
  • The key decision factors have evolved over the years: monitoring and infrastructure now occupy the top slots as the primary concerns, followed by energy and efficiency.
  • Data centers are no longer purely storage-driven entities, as businesses also look for services related to accessing and processing that data.
  • In enterprise software, revenues are projected to grow by 4.8% in 2014, followed by an even better figure of 5.9% in 2015.

The changes in the global economy as well as the information technology sector do pose concerns, of course, as they do to every other segment.

At Lifeline Data Centers, industry growth and innovation drive us to excel at what we do: offering premium data center services with 99.995% uptime guaranteed. We want to be a key player in promoting data center growth and development in the future.

To learn more about our data center colocation services, contact us today.

The DevOps Movement and What it Means for Data Centers

DevOps is the newest buzzword in IT; it promises a lot of advancement and comes with warnings if misused or misinterpreted. Some people call it a movement or a revolution, while others like to think of it as a role, a team, a department or a function. The most accurate definition, however, treats DevOps as a culture that binds together technology, people and process. From a more technical viewpoint, it is an environment where application development and systems operations are combined into one, with the promise of highly productive, smoother and more cost-effective IT delivery.

Some of the primary principles of the DevOps philosophy are as follows:

  • Automate Everything: In an ideal world, 100% automation would be the standard for any DevOps-based organization. In reality, most organizations today can only automate a subset of tasks, ranging from upgrades and deployments to testing, monitoring, patching and security policy management.
  • Team Formations: Flexibility is the key for employees to be successful in a DevOps operation. Rewards are given to the team as a whole and the entire team is responsible for failure. When hiring for a DevOps team, look for team players who can help create this environment.
  • Use Tools: There needs to be a tool to aid every task in a DevOps data center environment (a minimal sketch of the desired-state idea behind these tools follows the list). Some of the more popular DevOps tools are:
    • Chef: An automated configuration tool, which can also multitask in areas related to disaster recovery and continuous application delivery.
    • Puppet: An IT automation tool that holds the organization’s system policies and then automates the audit and compliance procedures against the predefined policies.
    • Git and GitHub: A revision control system, and its shared and hosted counterpart.
    • Jenkins: A continuous integration server.
    • LogStash: A log parsing utility.
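
To make the desired-state idea behind tools like Chef and Puppet concrete, here is a minimal Python sketch (not any particular tool’s API): it compares a desired configuration against the observed state of a host and applies only the differences, which is the idempotent pattern these tools implement.

```python
# Hypothetical settings a configuration tool would enforce on a host.
desired = {"ntp": "installed", "firewall": "enabled", "ssh_root_login": "disabled"}

# Observed state; a real tool would gather this by inspecting the host.
observed = {"ntp": "installed", "firewall": "disabled", "ssh_root_login": "enabled"}

def converge(desired, observed):
    """Return only the changes needed to bring the observed state in line with the desired state."""
    return {key: value for key, value in desired.items() if observed.get(key) != value}

for key, value in converge(desired, observed).items():
    # A real tool would call the appropriate provider here (package manager, service manager, etc.).
    print(f"apply: set {key} -> {value}")

# Running converge again after the changes are applied returns an empty dict: the run is idempotent.
```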

Even though organizations and data centers are taking their time to jump on the DevOps bandwagon, and rightly so, there are a few select best practices and tools that can be adopted to make the move a gradual one. For the best guidance on how to bridge the gap between the traditional and the modern in data center environments, get in touch with us at Lifeline Data Centers today.

posted by | No comments

As data center technologies evolve with the ever-growing needs of data protection, the promise of new career paths and roles for data center employees comes with them. For those wanting to explore the entire range of data center operations, here are some exciting career profiles that have emerged thanks to recent trends in data center modernization.

The Best of Data Center Career Profiles

Alex Carroll and Rich Banta, co-owners of Lifeline Data Centers

Compliance Officer:
Government regulations have become increasingly complex and more demanding than before. In addition, security vulnerabilities can be a nightmare for data center managers. Having a compliance officer who owns compliance and security regulation, communication, monitoring and correction programs gives the data center a single point of contact for all compliance issues.

Cloud Evangelist:
As a new role in the data center ecosystem, a cloud evangelist is responsible for growing and promoting cloud adoption in an organization. Along with interacting with the influential decision makers, this is also a hands-on role that involves prototyping and demonstrating various features of the cloud platform and its benefits to the organization.

DCIM Specialist:
Data center infrastructure monitoring and management systems have become more intelligent and technology savvy. The DCIM specialist is responsible for creating a dynamic, automated infrastructure that self-adjusts to variations in data center workload demands.

Business Liaison:
Even though an IT-to-business translator role is common in many organizations, it is now being revamped into a more strategic position. This person needs a deep understanding of the business as well as of the capabilities of IT systems. Analytics, along with its presentation and interpretation, is an important part of this role, as are social media interactions and their measurement.

Dedicated Programmer:
Most data centers today are opting to have an in-house dedicated programmer they can rely on for all their software needs. Even though this role may not involve developing software from scratch, it is important for the person to have an in-depth understanding of IT systems to ensure smooth interfacing between different applications and components. This person also needs to understand the broad vision of the organization and recommend the tools and infrastructure best suited to realizing that vision.

We are continually growing at Lifeline Data Centers, with a variety of people filling the positions listed above. We’re always looking for great talent, so contact us if you’re interested in being a part of our data center operations.

Protecting Your Network From the Dreaded Bot Threat

The Internet robot, more famously known as the “bot,” is one of the most destructive threats to a data center’s network infrastructure. A bot is a malicious program that can silently enter your system when you visit an infected website or download virus-infected software. What makes bots really harmful is their self-propagating nature, which leads to exponential attacks on systems and creates an entire infected network called a “botnet.”

A botnet is controlled by a botnet master by means of remote access. Once access has been established, the controller can either use the network for malicious purposes or sell it to other parties that specialize in damaging attacks on networks. Examples of attacks that commonly originate from botnets are distributed denial of service (DDoS), data theft, spam and spyware.

Prevention Strategies
To protect networks from the harmful effects of bots, data center network experts design layered defense systems so that attacks on critical systems can be prevented, or at least delayed. In the case of a security breach, additional measures minimize the impact and damage.

Some of the most popular bot identification and prevention strategies include:

Using Metadata: Metadata gives administrators a wide variety of information and allows publishers to discover suspicious trends and behaviors. This provides input on the common factors in bot-generated URLs.

Traffic Classification: Building on the metadata analysis, all incoming traffic can be classified into patterns. Two main categories emerge: high-intent traffic from genuine customers who are looking to convert, and low-intent traffic from automated robots.
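
As a rough sketch of this kind of classification, the following Python snippet scores requests using simple, hypothetical heuristics on request metadata (user agent and request rate). Real bot-detection systems combine many more signals and are considerably more sophisticated; the thresholds and records below are illustrative only.

```python
# Hypothetical request metadata: (client_ip, user_agent, requests_per_minute).
requests_seen = [
    ("203.0.113.5", "Mozilla/5.0 (Windows NT 10.0)", 4),
    ("198.51.100.9", "python-requests/2.31", 300),
    ("192.0.2.44", "", 900),
]

BOT_HINTS = ("bot", "crawler", "spider", "python-requests", "curl")
RATE_LIMIT = 120  # requests per minute; illustrative threshold

def classify(user_agent, rate):
    """Label traffic as low-intent (bot-like) or high-intent based on simple metadata heuristics."""
    ua = user_agent.lower()
    if not ua or any(hint in ua for hint in BOT_HINTS) or rate > RATE_LIMIT:
        return "low-intent (likely bot)"
    return "high-intent (likely human)"

for client_ip, user_agent, rate in requests_seen:
    print(client_ip, "->", classify(user_agent, rate))
```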

Worried about all things “bots”? For a data center that can assure you of a well-protected network, get in touch with Lifeline Data Centers today.

The Most Bizarre Data Center Downtime Stories

Every data center manager has disaster recovery and downtime prevention at the top of their list. While most logical scenarios and events are planned for and even insured against by due diligence teams, some bizarre occurrences can leave even the most diligent planners in a daze. Call it the hand of God or the hand of Evil, but you will be surprised to hear some of the strangest causes of data center downtime in recent times.

Blame it on the squirrels: Did you know that squirrels are infamous for their affinity for chewing on telecommunication wires? Yahoo was a victim of this nuisance in 2010, when a squirrel frying itself on electrical equipment took down half of its Santa Clara data center. If you are authoring a data center disaster recovery plan, do factor in the possibility of squirrel attacks if they are found in your vicinity.

False alarms caused by cigarette butts: This actually happened in Perth, Australia, where a harmless cigarette butt set off an extremely sensitive very early smoke detection apparatus (VESDA) system. The data center in Perth was forced to shut down for almost an hour until the cause of the alarm was traced to a smoldering garden bed outside the facility.

The leap second bug: The value of a single second was revealed in this incident, which took place in 2012. The world’s atomic time standard is occasionally adjusted by adding one second to account for variations in the rotation speed of the Earth. That one second caused so much flutter in global IT systems that it brought down popular websites, including LinkedIn. The same second also delayed 400 flights in Australia, as manual check-in had to be used in place of automated check-in.

It’s a little scary to think that unheard-of causes like these can bring down your data center in the blink of an eye. While some of them make great stories to laugh about in a business meeting, you will want to ensure your business is never featured in a list like this.

Lifeline Data Centers is a Tier 4 data center with maximum uptime. We are dedicated to making sure things like this don’t happen, and we want to protect your data. Contact us for a tour today.

How Data Centers are Going Lean

The latest trend in the data center industry is creating lean and green data centers. This has to do with much more than concern for the environment and is a viable business strategy to remain competitive. Going lean allows data centers to improve their efficiency or produce more output for the same resources, and going green allows data centers to reduce costs substantially. Adopting these two concepts therefore provides a great competitive advantage for data centers.

The concept of a lean data center entails creating more value with fewer resources. The basic approach to lean data centers is virtualization and consolidation. Virtualization, for instance, makes it possible to replace six racks of heat-belching servers with just four highly consolidated physical machines, with a hybrid storage system combining solid-state and serial ATA drives thrown into the mix. This initiative automatically furthers the cause of the “green” data center by drastically reducing energy and cooling requirements.

However, methods to go lean extend beyond simple virtualization. Here are some other ways:

  • Opting for higher-power-density server racks helps not just with operational efficiency but also with meeting the specific needs of the customer.
  • Co-opting or benchmarking the latest advancements in software development, such as Facebook’s HipHop for PHP engine that speeds up code execution, increases the effective processing capacity of servers.
  • Establishing more granular control over servers and clusters allows provisioning that factors in peak workloads and growth in users while keeping utilization levels consistently high for better returns (a back-of-the-envelope provisioning sketch follows this list).
  • Designing server rooms with pillars and air-conditioning units placed outside the room puts the entire space to optimal use and removes constraints on adopting the most efficient layout. This also maximizes the gross floor area (GFA), allowing data centers to allocate more space to existing or new customers.
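
As a back-of-the-envelope illustration of the provisioning point above, here is a small Python sketch that estimates how many servers are needed for a peak workload at a target utilization level, including headroom for user growth. All of the figures are hypothetical stand-ins for real capacity-planning data.

```python
import math

peak_requests_per_sec = 12_000   # expected peak workload (hypothetical)
per_server_capacity = 800        # requests per second one server can handle (hypothetical)
target_utilization = 0.75        # keep servers busy but not saturated
annual_user_growth = 0.20        # expected growth to provision for

servers_now = math.ceil(peak_requests_per_sec / (per_server_capacity * target_utilization))
servers_next_year = math.ceil(
    peak_requests_per_sec * (1 + annual_user_growth) / (per_server_capacity * target_utilization)
)

print(f"Servers needed today: {servers_now}")                         # 20
print(f"Servers needed after a year of growth: {servers_next_year}")  # 24
```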

The savings from these initiatives allow data centers to add capacity without investing in more resources. Combined with green initiatives that reduce operating costs, especially energy costs, this offers a significant competitive advantage, enabling data centers to deliver more for less.

Lifeline Data Centers offers a state-of-the-art outsourced colocation facility with guaranteed uptime and connectivity, along with flexible plans offering scalability, customization and guaranteed compliance. Contact us today for more information.

How Data Centers are Going Green

The latest trend in data centers is to go green, and for good reasons, too. The main focus of “going green” is to reduce energy costs by reducing the requirement for cooling. This is not just environmentally friendly; since energy costs are a significant chunk of operating costs, data centers can reduce their overheads substantially.

The prerequisite for going green in a data center is regular, proactive monitoring of energy usage. While simply logging the electricity meters helps to some extent, a metric that captures the energy efficiency of the data center makes it possible to quantify savings and efficiency improvements. A good option is to benchmark eBay’s “miles per gallon” measurement, which relates the total number of kilowatt-hours consumed to the total number of transactions processed, making it possible to link energy cost and revenue directly.
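
As a simple illustration of such a metric, the following Python sketch computes energy use per transaction from a month of meter readings and transaction counts. The figures are hypothetical; a real implementation would pull them from facility metering and application logs.

```python
# Hypothetical monthly figures; replace with real meter readings and transaction logs.
kwh_consumed = 450_000              # total facility energy use for the month, in kWh
transactions_processed = 90_000_000

kwh_per_transaction = kwh_consumed / transactions_processed
print(f"Energy per transaction: {kwh_per_transaction * 1000:.3f} Wh")  # 5.000 Wh; lower is better

# Tracking this figure month over month shows whether efficiency initiatives are paying off.
```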

Monitoring energy usage facilitates the following, among other things:

  • Identifying the least efficient equipment, allowing the data center to replace it with more energy-efficient units.
  • Manipulating air flow to reduce cooling costs. Data centers in cold locations can funnel ambient air vented in from surrounding offices to sealed server cabinets and shunt the hot air coming off the servers through chimneys back into the building’s heating and air-conditioning system.
  • Shifting to more energy-efficient equipment, such as virtualized servers, which very often reduces heat and energy usage.

“Green” can be ingrained into the data center architecture itself. Here are some ways to do that:

  • In hot tropical climates, reducing the building’s solar gain lowers the energy required for cooling. The way to do this is to minimize direct sun exposure on surfaces: a rectangular design in which the shorter sides face the sun’s path minimizes the surface area exposed to direct sunlight.
  • Placing mechanical and electrical equipment, such as power generators, cooling towers and solar panels, on an elevated mesh on the rooftop shields the data center from direct sunlight and reduces heat gain.
  • Opting for modular or containerized data centers, where only the required units are cooled, helps to separate hot and cold air and reduce energy demands.

Energy costs are one of the biggest overheads of a data center, and green initiatives help data centers reduce these costs by 10 to 15%. In a highly competitive industry, data centers can pass on these cost savings to their clients, who in turn can reduce the costs for end-customers, which creates a win-win situation across the value chain.

Lifeline Data Centers offers a highly efficient wholesale colocation facility with guaranteed uptime, connectivity and flexible plans. All plans are fully compliant with regulatory requirements and offer great value for the cost. Contact us today to learn more.
