Scale Out or Sputter Out? Why Every All-Flash NAS Platform Isn’t Created Equal


Scale-Out NAS is known for incredible scale, and the added horsepower of all-flash provides incredible speed and efficiency—at least, that’s the theory. Unfortunately, not every all-flash NAS platform is created equal.

Isilon is the market and technology leader in Scale-Out NAS and achieved this recognition through years of innovation. Isilon introduced the concept of a node-based, distributed scale-out file system. Here’s a quick snapshot of a few of the recent advances we’ve brought to scale-out NAS with Isilon:

The Multi-Protocol Data Lake: Enabling in-place analytics and further consolidation of customers’ workflows.

Edge, Core, Cloud: Expanding the Isilon Data Lake to encompass seamless connectivity from the Isilon core to the software-defined edge, with policy-based tiering to private and public clouds – giving our customers a single repository with global reach.

All-Flash Isilon: Our most recent advancement, delivered with the availability of the revolutionary new-generation Isilon platform.

Dell EMC All-Flash Isilon is meeting our customers’ demands to deliver faster outcomes for their unstructured data workflows. But it’s more than just flash performance that defines all-flash scale-out NAS. All-Flash Isilon combines our new, ultra-dense, purpose-built-for-flash node architecture with our established OneFS scale-out file system technology. We deliver flash as an extension to Isilon – all flash with the full capabilities and functionality of OneFS. What’s not to love?

  • Multi-Protocol, In-Place Analytics
  • Seamless Automation Across Edge, Core, and Cloud
  • Enterprise Data Protection, Management and Security
  • Policy-Based, Automated Tiering
  • Unconstrained Scale and Performance

Scale-out NAS can seem simple to its users, but it’s a very complex product to design, build and test properly. With all the possible, and often expected, features, the bar for success is extremely high – the competitive race is a marathon, not a sprint. To illustrate this, I’d like to introduce a new blog series closely examining the scale-out NAS features required for customer success and how we and other vendors choose to tackle them.

Competition in All-Flash Scale-Out NAS

One of the defining characteristics of the technology industry is the intense competition between vendors. This is a great thing for our customers, as the competition makes each vendor continuously raise the bar on its products and services. There are some vendors, however, that seem to have misunderstood the notion of competing. They seem to focus instead on “raising the bar” on their marketing claims faster than their products can keep up.

To explore the truth behind such pure nonsense, we’re going to be looking at one particularly outrageous (and occasionally laughable) claim over the next few weeks. For this first blog, I’d like to discuss scalability as, of course, it’s the defining characteristic of scale-out NAS.

This week we look at Pure Storage’s claim that its FlashBlade NAS product is BIG. Up to 10s of Petabytes.

A FlashBlade chassis fully populated with the highest capacity blades holds only 1607TB in “usable” capacity, assuming one chooses to believe in the general applicability of Pure’s claimed 3:1 compression (scroll down to the fine print) on file data sets.


A single FlashBlade cluster can “scale” up to two chassis, or ~3.2PB of “usable” capacity – lower than “10s of Petabytes” by at least 3x. Can a “scale-out” architecture that cannot scale beyond two chassis really be considered “scale-out?” Maybe Pure is just counting multiple independent systems as scaling, like a warehouse full of USB memory sticks?
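Taking the published figures at face value, the gap between the claim and the deliverable capacity is easy to quantify. Here’s a back-of-the-envelope sketch; the per-chassis usable figure and the 3:1 compression assumption both come from Pure’s own fine print:

```python
# Back-of-the-envelope check of the "10s of Petabytes" claim,
# using the vendor's own published per-chassis figure.
CHASSIS_USABLE_TB = 1607   # max usable TB per chassis, assuming 3:1 compression
MAX_CHASSIS = 2            # cluster scaling ceiling

cluster_usable_pb = CHASSIS_USABLE_TB * MAX_CHASSIS / 1000
claimed_floor_pb = 10      # the low end of "10s of Petabytes"
shortfall = claimed_floor_pb / cluster_usable_pb

print(f"Max cluster capacity: {cluster_usable_pb:.1f} PB")    # ~3.2 PB
print(f"Claim exceeds reality by at least {shortfall:.1f}x")  # ~3.1x
```

Even granting the most generous compression assumptions, the fully scaled cluster sits a factor of three below the bottom of the claimed range.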

FlashBlade’s lack of scalability creates independent islands of flash, the very problem that scale-out NAS is supposed to solve.  As a customer, would you rather have a single file system spanning 100+ nodes – simplifying management and eliminating mount-point and data migration drama, or would you rather manage many isolated silos?

Think of the question this way: what kind of “scaling” will be enough for organizations struggling to deal with the wave of unstructured data growth? Let’s consider two examples:

With the dramatic reduction in the cost of sequencing the human genome, the technology is becoming more accessible around the world. Modern gene sequencers can produce a human genome sequence every hour. In just 30 days, a single sequencer will create ~100TB of pre-compression data. With just four sequencers, a FlashBlade cluster will be out of capacity in just over 90 days. Now imagine the cumulative impact in year two or three. Without any means to automatically push archive datasets to another tier, you’re forced into creating silos and manual data migrations.
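A quick runway sketch makes the point concrete. All figures here are assumptions taken from the paragraph above and the earlier capacity discussion; how well genomic data actually compresses varies by pipeline, which is why two scenarios are shown:

```python
# Rough runway for four gene sequencers filling a two-chassis FlashBlade
# cluster. Figures are assumptions drawn from the text, not measurements.
SEQUENCERS = 4
TB_PER_SEQ_PER_30_DAYS = 100              # pre-compression output per sequencer
cluster_usable_tb = 1607 * 2              # vendor figure, assumes 3:1 compression
cluster_raw_tb = cluster_usable_tb / 3    # capacity if the data barely compresses

daily_ingest_tb = SEQUENCERS * TB_PER_SEQ_PER_30_DAYS / 30

print(f"Ingest rate: {daily_ingest_tb:.1f} TB/day")
print(f"Runway if data compresses 3:1: {cluster_usable_tb / daily_ingest_tb:.0f} days")
print(f"Runway if data barely compresses: {cluster_raw_tb / daily_ingest_tb:.0f} days")
```

Under the pessimistic compression scenario the cluster fills in roughly three months, in line with the “just over 90 days” figure above; and that’s before the cumulative growth of years two and three.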

In chip design (EDA), innovation and time to market are the keys to success. EDA storage requirements are measured in 10s of petabytes, with only 20%-30% of the data active at any point. And as chips get smaller, datasets grow by 4x, doubling the chip-maker’s capacity needs every 12 months. Assuming an EDA company has 40PB of data today, with perhaps 10PB of it active, an organization looking to utilize FlashBlade would likely be forced to create multiple data silos to work around its namespace capacity limitations.
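The doubling compounds quickly against a fixed per-cluster ceiling. A minimal sketch, assuming the 40PB starting point and 12-month doubling stated above, and the ~3.2PB cluster limit discussed earlier:

```python
import math

# Capacity-doubling sketch for the EDA example: the dataset doubles every
# 12 months, versus a ~3.2 PB per-cluster ceiling. The starting size and
# growth rate are the assumptions stated in the text.
start_pb = 40
cluster_limit_pb = 3.2

for year in range(4):
    total_pb = start_pb * 2 ** year
    silos = math.ceil(total_pb / cluster_limit_pb)
    print(f"Year {year}: {total_pb} PB -> at least {silos} separate clusters")
```

Even on day one the dataset would need more than a dozen independent namespaces, and the silo count doubles with the data.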

These are not exceptions. As the digital economy takes hold, the need to support the unconstrained scale and massive performance requirements of powerful applications is putting IT organizations everywhere under pressure. For these reasons, Pure Storage’s claim of scaling “Big. Up to 10s of Petabytes” simply doesn’t cut it.

By contrast, Dell EMC Isilon All-Flash enables truly large datasets, scaling up to 144 nodes and 33PB of flash in a single file system. Don’t need 33PB of Isilon flash today? No problem. You can start small with a single 72TB four-node chassis. Now imagine changing the tires on a Formula 1 race car, without a pit stop, while it’s speeding around the track at over 200 MPH. That’s how you scale out your Isilon cluster: non-disruptive, push-button simple and fast. In less than 60 seconds, you’ve increased your Isilon cluster’s performance and capacity.

With Isilon’s automated, policy-based tiering, you won’t create islands of flash in your data center. With multiple node types available – all-flash, hybrid and archive – you can choose the right mix of Isilon nodes to consolidate your unstructured data workloads and drastically reduce your TCO.

Again, you don’t need to take just my word for it. Organizations like the Translational Genomics Research Institute (TGen) and Lightstorm Entertainment are turning to Isilon All-Flash for its enormous power and true scale-out capabilities.
