Gen-Z – An Open Fabric Technology Standard on the Journey to Composability

Over the last few blogs, I've touched on Composable Infrastructure – both its value and its challenges.

2 Jun 2016 Reality Check: Is Composable Infrastructure Ready for Prime Time?

30 Nov 2015 A Practical View of Composable Infrastructure

We highlighted the three major challenges that must be resolved before the industry can truly achieve composable infrastructure:

1) Lack of software-defined intelligence

2) Lack of a new industry standard for a modern IO/Memory fabric at rack scale, which is needed for full composability

3) Lack of an industry standard around openness, allowing customers to allocate resources across multiple vendors' technology

We’ve made positive progress on these fronts.  We continue to address software standards and APIs through:

DMTF Redfish™ API

Scalability in today’s data center is increasingly achieved with horizontal, scale-out solutions, which often include large quantities of simple servers. The usage model of scale-out hardware is drastically different than that of traditional enterprise platforms, and requires a new approach to management.

SNIA Swordfish™

The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. SNIA Swordfish is an extension of the DMTF Redfish specification, so the same easy-to-use RESTful interface is used, along with JavaScript Object Notation (JSON) and Open Data Protocol (OData), to seamlessly manage storage equipment and storage services in addition to servers.
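Because Swordfish builds on Redfish's RESTful interface, clients work with ordinary JSON documents annotated with OData metadata. Below is a minimal sketch of what parsing such a resource looks like; the payload is a hypothetical, abbreviated example, not an actual response from a Redfish service.

```python
import json

# A hypothetical, abbreviated Redfish-style resource payload. Real
# services expose documents like this at URIs such as
# /redfish/v1/Systems/<id>, retrieved with a plain HTTP GET.
sample_response = """
{
    "@odata.id": "/redfish/v1/Systems/1",
    "@odata.type": "#ComputerSystem.v1_3_0.ComputerSystem",
    "Id": "1",
    "Name": "Example Node",
    "PowerState": "On",
    "MemorySummary": {"TotalSystemMemoryGiB": 128}
}
"""

resource = json.loads(sample_response)

# OData annotations (keys beginning with "@odata.") carry identity and
# type metadata alongside the ordinary JSON properties.
print(resource["@odata.id"])                              # resource URI
print(resource["PowerState"])                             # power state
print(resource["MemorySummary"]["TotalSystemMemoryGiB"])  # memory size
```

The same pattern applies whether the resource describes a server, a storage pool, or a Swordfish storage service – which is exactly the "one consistent API" point made above.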

Chinook spawns and expands Redfish

Dell, through work in the industry group codenamed Chinook (Dell, Hewlett Packard Enterprise, Microsoft, VMware, and Intel), submitted PCIe Switch, BIOS, and local storage models to the DMTF SPMF (Redfish). These Chinook submissions expand Redfish coverage with the goal of addressing all the components in the data center through a consistent API, enabling customers and operators to better deal with the velocity of deployment and management of constantly evolving technologies.

DMTF Shares Redfish In-Band Host Interface Plans

DMTF’s innovative Redfish standard continues its fast progression, thanks to the dedicated efforts of the Scalable Platforms Management Forum (SPMF). Now on version 2016.2 (recently released in final form), Redfish work to date has focused on defining a TCP/IP-based out-of-band interface between a client and a management controller.  As SPMF continues its open approach, responding to feedback and addressing the needs of the industry, the group’s Host Interface Task Force is working to add an in-band host interface to Redfish. This will allow applications and tools running on an Operating System (both deployment and production) to communicate with the Redfish service, which is managing the system using the Redfish API.

Today I'm happy to report progress on #2: the lack of a new industry standard for a modern IO/Memory fabric at rack scale, which is needed for full composability.

Introducing Gen-Z – a new consortium formed to develop scalable, high-performance fabric technology aimed at simplifying data access at rack scale.

On October 11, 2016, a group of leading technology companies announced the Gen-Z consortium, an industry alliance working to create and commercialize a new scalable computing interconnect and protocol. This flexible, high-performance memory semantic fabric provides a peer-to-peer interconnect that gives easy access to large volumes of data while lowering costs and avoiding today's bottlenecks. The alliance members include AMD, ARM, Broadcom, Cavium, Cray, Dell EMC, IBM, Hewlett Packard Enterprise (HPE), Huawei, IDT, Lenovo, Mellanox Technologies, Ltd., Microsemi, Micron, Red Hat, Samsung, Seagate, SK Hynix, Western Digital and Xilinx.

Modern computer systems have been built around the assumption that storage is slow, persistent, and reliable while data in memory is fast but volatile. As new Storage Class Memory technologies emerge that drive the convergence of storage and memory attributes, the programmatic and architectural assumptions that have worked in the past are no longer optimal. The challenges associated with explosive data growth, real-time application demands, the emergence of low latency storage class memory, and demand for rack scale resource pools require a new approach to data access.

Gen-Z provides the following benefits:

  • High Bandwidth, Low Latency: Simplified interface based on memory semantics, scalable to 112 GT/s and beyond with DRAM-class latencies
  • Advanced Workloads and Technologies: Enables data-centric computing with scalable memory pools and resources for real-time analytics and in-memory applications. Accelerates new memory and storage innovation.
  • Compatible and Economical: Highly software compatible, with no required changes to the Operating System. Scales from simple, low-cost connectivity to a highly capable, rack-scale interconnect.

Why do we need Gen-Z?

  • System memory is flat or shrinking
  • Memory bandwidth per core continues to decrease
  • Memory capacity per core is generally flat
  • Memory is changing on a different cadence compared to the CPU
  • Data is growing
  • Data that requires real-time analysis is growing exponentially
  • The value of the analysis decreases if it takes too long to provide insights
  • The industry needs an open architecture to solve these problems
  • Memory tiers will become increasingly important
  • Rack-scale composability requires a high bandwidth, low latency fabric
  • Must seamlessly plug into existing ecosystems without requiring OS changes

What is a Memory Semantic Fabric?

  • Handles all communication as memory operations such as load/store, put/get and atomic operations typically used by a processor
  • Memory semantics are optimal at sub-microsecond latencies from CPU load command to register store
  • Unlike storage accesses, which are block-based and managed by complex, code-intensive software stacks
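The contrast in the bullets above can be sketched in a few lines. In the sketch below, an anonymous memory mapping stands in for fabric-attached memory (an illustrative stand-in, not actual Gen-Z hardware), while a temporary file stands in for a block device.

```python
import mmap
import os
import tempfile

# --- Memory semantics: byte-addressable load/store ---
# An anonymous mmap region stands in for fabric-attached memory.
# Software touches it with ordinary loads and stores; no I/O stack
# is involved on each access.
region = mmap.mmap(-1, 4096)
region[0:8] = b"GenZdata"    # a "store": plain byte assignment
loaded = bytes(region[0:8])  # a "load": plain byte read
region.close()

# --- Block semantics: whole blocks through a software I/O stack ---
# Storage is addressed in fixed-size blocks via seek/read/write system
# calls, each traversing file-system and driver layers.
BLOCK = 512
fd, path = tempfile.mkstemp()
os.write(fd, b"GenZdata".ljust(BLOCK, b"\0"))  # write a whole block
os.lseek(fd, 0, os.SEEK_SET)
block = os.read(fd, BLOCK)                     # read a whole block back
os.close(fd)
os.unlink(path)

print(loaded)     # the 8 bytes, accessed directly
print(block[:8])  # the same bytes, but a full 512-byte block moved
```

The memory-semantic path moves exactly the bytes the processor asked for; the block path moves an entire block through layers of software even when only a few bytes were wanted – which is why memory semantics are optimal at sub-microsecond latencies.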

Why Now?

  • The emergence of low-latency Storage Class Memory (SCM), and the demand for large-capacity, rack-scale resource pools and multi-node architectures

Gen-Z Consortium Mission

  • Create a next generation interconnect that will bridge existing solutions while enabling new unbounded innovation
  • Develop in an open, non-proprietary standards body where adoption, differentiation and innovation are promoted as part of an industry standard

About the Gen-Z Consortium

  • A Transparent Organization: Gen-Z has been formed as a not-for-profit organization, and its ongoing development occurs through an open decision-making process available to all interested parties.
  • Wide availability: The Gen-Z standard will be published and available free of charge.
  • End-User Choice: There are no constraints on the re-use of the standard. Gen-Z creates a fair, competitive market for implementations of the standard.
  • Equality: Gen-Z does not favor one implementer over another.

As I have said in previous blogs, the path to composability will be a journey that requires new thinking and new technologies. With Gen-Z in place, along with continued advancement on open, standard software APIs, the industry is now one step closer to enabling a future composable data center. Now it's time to get busy and build the future based on these new technologies.

In conclusion, truly composable infrastructure is steadily moving closer to reality. At Dell EMC, we're actively working to make true composable infrastructure a reality for the long run – expanding Redfish through Chinook, and now Gen-Z, are great examples of Dell EMC working with industry leaders on this journey. We've already delivered innovative and transformative solutions like Active System Manager, the PowerEdge FX architecture, Open Networking, and the G5/DSS 9000 that make progress toward the desired end state, and we are working hard to address the key issues and deliver a true "fully composable infrastructure".
