What Makes A Data Center “Hyperscale”?

A Hyperscale Data Center can be defined as:

“…large-scale data centers often architected for a homogeneous scale-out greenfield application portfolio using increasingly disaggregated, high-density, and power-optimized infrastructures. They have a minimum of 5,000 servers and are at least 10,000 sq ft in size but generally much larger.” (IDC, 2016)

Hyperscale is used loosely in three contexts:

  1. Infrastructure and distributed systems to support data centers (Accenture, 2016)
  2. The ability to scale computing tasks to achieve performance orders of magnitude greater than the status quo (Accenture, 2014)
  3. The financial power and source of revenue of data center operators (Cisco, 2016)

Hyperscale Cloud Architecture

The first two contexts share a common architectural perspective: hyperscale refers to a single compute architecture that is massively scalable. Such architectures are built on infrastructure and systems made up of hundreds of thousands of individual servers (nodes) that offer compute and storage resources connected by high-speed networks. The form factors of hyperscale data centers differentiate them from other data center architectures in that they are primarily, if not exclusively, designed to optimize performance and cost (IDC, 2011). This form factor assumes small, densely packaged servers (Weerasinghe et al., 2015).

The ability to scale can be viewed from two perspectives: horizontal scaling (scaling out) increases the number of machines, whereas vertical scaling (scaling up) adds more power to existing machines. The magnitude and complexity of the scalability inherent in modern hyperscale data centers also serve to distinguish them from other data centers.
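To make the distinction concrete, the short Python sketch below compares the two tactics as simple capacity arithmetic. The node counts and per-node throughput figures are purely illustrative assumptions, not measurements from any real data center.

```python
def cluster_capacity(nodes: int, requests_per_node: int) -> int:
    """Total requests per second a homogeneous cluster can serve."""
    return nodes * requests_per_node

# Hypothetical baseline: 1,000 nodes, each serving 500 req/s.
baseline = cluster_capacity(nodes=1_000, requests_per_node=500)

# Horizontal scaling (scale out): add more machines of the same size.
scaled_out = cluster_capacity(nodes=4_000, requests_per_node=500)

# Vertical scaling (scale up): same node count, more powerful machines.
scaled_up = cluster_capacity(nodes=1_000, requests_per_node=2_000)

print(f"baseline:   {baseline:>9,} req/s")    # 500,000 req/s
print(f"scaled out: {scaled_out:>9,} req/s")  # 2,000,000 req/s
print(f"scaled up:  {scaled_up:>9,} req/s")   # 2,000,000 req/s
```

Both tactics quadruple capacity here, but they fail differently in practice: scale-up runs into per-machine hardware limits, while scale-out is bounded mainly by coordination and networking, which is why hyperscale architectures have historically favored it.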

Hyperscale Workloads

Hyperscale data centers have diverse workload requirements and face high service-level expectations from customers and end users, covering both uptime and load times. These workloads include not only relatively simple websites that attract high traffic volumes but also workloads such as 3D rendering, genome processing, cryptography, and other technical or scientific computing tasks that are more effectively and efficiently processed by specialized processors or system configurations.

Scalability

Historically, scalability was addressed by building platforms that could scale massively using relatively high-powered, general-purpose commodity processors and upgrading those processors in line with Moore’s law; horizontal scaling was thus the primary scaling tactic. While effective for simpler workloads, this approach is neither energy efficient nor effective for more complex workloads. Newer specialized processor architectures, including graphics processing units (GPUs), many integrated cores (MICs), and field-programmable gate arrays/data flow engines (FPGAs/DFEs), are both more effective and more energy efficient for such workloads, and as a result more cost efficient. By leveraging these new processor types, hyperscale data centers have a greater selection of hyperscaling tactics to meet market needs.
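As a rough illustration of how heterogeneous hardware widens the tactic space, the minimal Python sketch below routes workload types to a preferred processor class. The workload names and device mappings are assumptions chosen to echo the examples above, not the behavior of any real scheduler.

```python
# Hypothetical mapping of workload types to processor classes.
PREFERRED_DEVICE = {
    "web_serving":       "cpu",   # simple, scale-out friendly
    "3d_rendering":      "gpu",   # massively parallel arithmetic
    "genome_processing": "mic",   # many-core throughput workload
    "cryptography":      "fpga",  # fixed-function pipelines pay off
}

def place(workload: str) -> str:
    """Pick a processor class for a workload, defaulting to commodity CPUs."""
    return PREFERRED_DEVICE.get(workload, "cpu")

for job in ("web_serving", "3d_rendering", "cryptography", "archival"):
    print(f"{job:<18} -> {place(job)}")
```

The point of the sketch is the default: a homogeneous commodity fleet treats every job like "web_serving", whereas a heterogeneous fleet can match complex workloads to the hardware that runs them most efficiently.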

One of the difficulties with defining hyperscale is our ability to perceive the difference in order of magnitude; it is simply a measure too big for most of us to grasp. Unsurprisingly, some commentators focus instead on the characteristics of the firms operating hyperscale platforms or data centers. Accenture (2014) refers to the number of servers operated by a firm (in the hundreds of thousands), while Cisco (2016) uses annual revenue generated from cloud (US$1-2bn), internet/search/social media (US$4bn), or e-commerce/payment processing activities (US$8bn). The latter identifies only 24 hyperscale operators representing c. 259 data centers. Clearly these criteria are related, as only a handful of companies can afford the infrastructure necessary for hyperscale computing. They include Microsoft, Apple, Google, Amazon.com (and AWS), IBM, Twitter, Facebook, Yahoo!, Baidu, eBay, and PayPal, amongst others.
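For illustration, Cisco’s revenue criteria as summarized above can be expressed as a simple check. The threshold values are taken from the figures quoted in this paragraph; the structure of the test (meeting any one category qualifies) and the use of the lower bound of the cloud range are assumptions, not Cisco’s exact methodology.

```python
# Revenue thresholds in US$ billions, paraphrased from Cisco (2016)
# as quoted above; the cloud figure uses the lower bound of "US$1-2bn".
THRESHOLDS_BN = {
    "cloud": 1.0,
    "internet_search_social": 4.0,
    "ecommerce_payments": 8.0,
}

def is_hyperscale(revenue_bn: dict) -> bool:
    """True if annual revenue meets any one category threshold (assumed rule)."""
    return any(revenue_bn.get(k, 0.0) >= v for k, v in THRESHOLDS_BN.items())

print(is_hyperscale({"cloud": 1.5}))               # True
print(is_hyperscale({"ecommerce_payments": 2.0}))  # False
```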

Interested in learning more? Download our free 2017 Data Center Market Briefing Paper.

This market briefing provides an overview of the global data center market, segmenting it by both data center type and the individual hardware components that make up a data center. It outlines the factors influencing adoption of data centers worldwide, investigates the forces changing the dynamics of this market, and draws on publicly available desk research from industry analysts to predict where the market will be in 2020.
