Deep Learning Gets a Boost with New Intel Processor

Dell EMC Ready Solutions for HPC incorporate the new 2nd Generation Intel® Xeon® Scalable Processor and Intel® Optane™ technology to accelerate deep learning and other parallel-processing workloads.

This week has brought some great news for organizations that want to deploy artificial intelligence solutions on Intel-based Dell EMC systems. Specifically, Dell EMC launched new Ready Solutions for HPC that incorporate the latest 2nd Generation Intel® Xeon® Scalable processor, code-named Cascade Lake. This processor brings a range of optimizations for parallel workloads.

Among other advances, the 2nd Generation Intel® Xeon® Scalable processor delivers the groundbreaking Intel® Deep Learning Boost (Intel® DL Boost), known informally as Vector Neural Network Instructions, or VNNI. With this new feature, Intel extends the instruction set so that the low-precision multiply-accumulate work at the heart of deep learning can be done with a single fused instruction instead of three, speeding the parallel workloads used with many HPC and AI applications, including inferencing. At its announcement, Intel reported that this new technology can increase AI/deep learning inference performance in some applications by up to 17 times compared with Intel® Xeon® Scalable Platinum processors at launch.[1]
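To make that concrete, the core VNNI operation multiplies 8-bit activations by 8-bit weights and accumulates the results into 32-bit integers. The snippet below is a conceptual NumPy emulation of what one fused instruction does across a 512-bit register, offered only as an illustration of the arithmetic, not of the hardware path.

```python
import numpy as np

# Conceptual sketch of the fused int8 multiply-accumulate behind Intel DL Boost
# (VNNI). One VNNI instruction multiplies unsigned 8-bit activations by signed
# 8-bit weights and accumulates groups of four products into 32-bit lanes --
# work that previously took three separate AVX-512 instructions.
def vnni_style_dot_accumulate(acc, activations_u8, weights_s8):
    products = activations_u8.astype(np.int32) * weights_s8.astype(np.int32)
    # Each 32-bit accumulator lane sums four adjacent 8-bit products.
    return acc + products.reshape(-1, 4).sum(axis=1)

acc = np.zeros(16, dtype=np.int32)  # one 512-bit register holds 16 x 32-bit lanes
activations = np.random.randint(0, 256, size=64).astype(np.uint8)   # 64 x u8
weights = np.random.randint(-128, 128, size=64).astype(np.int8)     # 64 x s8
print(vnni_style_dot_accumulate(acc, activations, weights))
```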

Even when compared to an Intel® Xeon® Scalable processor (code-named Skylake), the 2nd Generation Intel® Xeon® Scalable processor shines, which is something that we at Dell EMC have confirmed in the HPC and AI Innovation Lab. In benchmark testing, our engineers have realized more than 3x faster inferencing for image recognition using INT8 precision with the ResNet-50 model. These tests compare the performance of an Intel® Xeon® Gold 6248 processor (a 2nd Generation Intel Xeon Scalable processor) and an Intel® Xeon® Gold 6148 processor (Skylake) on an inference benchmark for image classification, as summarized in the accompanying slide.

What does all this mean other than faster speeds and feeds? Well, it means that AI is the future, and Dell EMC and Intel are helping you get there faster with your existing applications. To take advantage of the Intel DL Boost feature, you don’t need to reprogram your applications.
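One practical note along those lines: Intel-optimized frameworks pick up the new instructions automatically when the hardware supports them, so the only thing worth checking is whether a system exposes the capability at all. Here is a minimal, Linux-only sketch; it assumes the kernel reports the feature as the avx512_vnni flag in /proc/cpuinfo.

```python
# Minimal sketch: confirm the CPU advertises VNNI before relying on an
# int8-optimized inference path. Linux only; assumes the kernel reports the
# feature as the "avx512_vnni" flag in /proc/cpuinfo.
def cpu_supports_vnni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        return any("avx512_vnni" in line for line in f if line.startswith("flags"))

print("VNNI available:", cpu_supports_vnni())
```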

With the launch of the 2nd Generation Intel Xeon Scalable processor, Intel also announced two solution architectures for HPC & AI Converged Clusters, both of which focus on augmenting resource managers to support broader workloads. The first is based on the community project Magpie, which automates the process of generating interfaces between analytics frameworks like Spark and AI frameworks like TensorFlow, so that they can run seamlessly without any modifications to a traditional HPC resource manager such as Slurm.
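To make the Magpie approach concrete, the sketch below shows the kind of step it automates: packaging a TensorFlow job as an ordinary Slurm batch submission. The partition, module, and script names are hypothetical placeholders; a real Magpie deployment generates scripts like this from its own templates.

```python
import subprocess
import textwrap

# Sketch of the kind of Slurm batch job Magpie generates: an AI framework step
# scheduled like any other HPC job. Partition, module, and script names below
# are hypothetical placeholders.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=tf-inference
    #SBATCH --partition=normal
    #SBATCH --nodes=1
    #SBATCH --time=00:30:00
    module load tensorflow          # site-specific module name (assumed)
    python run_resnet50_int8.py     # hypothetical benchmark script
""")

with open("submit_tf.sbatch", "w") as f:
    f.write(batch_script)

# Hand the job to Slurm; Magpie wraps this step plus the framework setup.
subprocess.run(["sbatch", "submit_tf.sbatch"], check=True)
```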

The second is a more integrated solution that builds on the work of Univa Grid Engine and its Universal Resource Broker, an engine that sits alongside a traditional HPC batch scheduler and can interface with resource manager plugins created with an Apache Mesos* framework. Both solutions allow workload co-existence and workflow convergence across simulation & modeling, analytics, and AI.

Tests at Centers of Excellence

It’s not just Dell EMC that is putting the new 2nd Generation Intel Xeon Scalable Processor to the test. At least two of our Dell EMC HPC and AI Centers of Excellence, Texas Advanced Computing Center (TACC) and the University of Pisa, have been testing Dell EMC PowerEdge servers with the new processor.

The new TACC Frontera system will incorporate the 2nd Generation Intel Xeon Scalable processor, along with new Intel® Optane™ DC persistent memory, for extreme-scale science workloads in fields ranging from medicine and materials design to natural disasters and climate change. When it goes into production this year, Frontera, based on Dell EMC PowerEdge servers, will overtake the TACC Stampede2 cluster to claim the title of the fastest university supercomputer in the United States and one of the most powerful HPC systems in the world.[2]

The Frontera supercomputer will also incorporate Intel® Optane™ DC persistent memory. Intel says this innovative memory technology increases server memory capacity with DIMM sizes of up to 512 GB, accelerates application performance and, unlike DRAM, offers the benefits of data persistence.

A side note: Intel Optane DC Persistent Memory is different from Intel Optane Storage.

  • Intel Optane DC persistent memory moves and maintains larger amounts of data closer to the processor, so workloads and services can be optimized to reduce latencies and enhance overall performance. It also supports App Direct and Memory modes, which enable a host of new data center use cases (see the sketch below).
  • Intel® Optane™ SSDs enable data centers to deploy bigger, more affordable data sets, accelerate applications, and gain critical, enterprise-level insights that result from working with larger memory pools.

You can explore these differences at Intel Optane Technology.
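For a sense of what App Direct mode looks like from an application’s point of view, here is a minimal sketch that memory-maps a file on a DAX-mounted persistent-memory filesystem and updates it with ordinary loads and stores. The /mnt/pmem0 mount point is a hypothetical example, and production code would more likely use Intel’s PMDK libraries than raw mmap.

```python
import mmap
import os

# Minimal App Direct sketch: map a file that lives on a DAX-mounted
# persistent-memory filesystem and write to it with ordinary stores.
# The mount point below is a hypothetical example.
path = "/mnt/pmem0/example.dat"
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, size)
with mmap.mmap(fd, size) as pmem:
    pmem[0:13] = b"persist me..."
    pmem.flush()  # push the stores toward the persistent media
os.close(fd)
```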

The University of Pisa, meanwhile, has tested Intel Optane SSDs, as well as the 2nd Generation Intel Xeon Scalable processor. The university is collaborating with the IMAGO7 Foundation to use Intel Optane SSD technology to accelerate the magnetic resonance imaging (MRI) examination process. This collaborative effort determined that Intel Optane SSD technology can significantly reduce MRI scanning time while improving the accuracy of scans.[3]

Collectively, the recent advances in the Intel portfolio will make it easier for organizations to dive into parallel-processing use cases, such as inferencing in deep neural networks. And Dell EMC is making these technologies easier to adopt by embedding them in select PowerEdge servers and Dell EMC Ready Solutions for HPC.

To learn more

For a closer look at the Intel technologies in play at TACC and the University of Pisa, visit our HPC and AI Centers of Excellence site. Also, check out dellemc.com/hpc and the performance benchmarking at www.hpcatdell.com.

[1] Intel news release, “Intel Innovations Define the Future of Supercomputing,” November 11, 2018.

[2] HPCwire, “TACC’s ‘Frontera’ Supercomputer Expands Horizon for Extreme‑Scale Science,” August 2018.

[3] Intel solution brief, “University of Pisa Uses Intel® Optane™ SSDs to Significantly Reduce MRI Scanning Times,” November 2017.

About the Author: Janet Morss

Janet Morss previously worked at Dell Technologies, specializing in machine learning (ML) and high performance computing (HPC) product marketing.