Flexing AI Workloads Using Kubeflow and OpenShift Container Platform

Want to create a flexible environment for machine learning and deep learning workloads? Deploy Kubeflow on OpenShift Container Platform with Dell EMC PowerEdge servers.

Many enterprises invest in custom infrastructure to support artificial intelligence (AI) and their data science teams. While the goal is sound, this approach can be a problem: these ad hoc hardware implementations often live outside the mainstream data center, which can limit adoption.

To facilitate wider adoption of AI-driven applications in the enterprise, organizations can integrate experimental AI technologies into well-defined, production-grade platforms. That’s the idea behind Kubeflow, the Machine Learning Toolkit for Kubernetes.

Kubeflow is an open-source Kubernetes-native platform to accelerate machine learning (ML) projects. It’s a composable, scalable, portable stack that includes the components and automation features you need to integrate ML tools. These tools work together to create a cohesive machine learning pipeline to deploy and operationalize ML applications at scale.
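To make that concrete, here is a minimal sketch of a pipeline definition using the Kubeflow Pipelines (kfp) Python SDK (assuming a v1-era SDK; the step functions and pipeline name are placeholders, not from any reference design):

    # Minimal Kubeflow Pipelines sketch; the step logic is illustrative only.
    import kfp
    from kfp import dsl
    from kfp.components import create_component_from_func

    def preprocess(message: str) -> str:
        """Stand-in for a real data-preparation step."""
        return message.upper()

    def train(data: str):
        """Stand-in for a real model-training step."""
        print(f"training on: {data}")

    # Wrap the plain Python functions as containerized pipeline components.
    preprocess_op = create_component_from_func(preprocess)
    train_op = create_component_from_func(train)

    @dsl.pipeline(name="demo-pipeline", description="Toy two-step ML pipeline.")
    def demo_pipeline(message: str = "raw data"):
        prep_task = preprocess_op(message)
        train_op(prep_task.output)  # train consumes the preprocess output

    if __name__ == "__main__":
        # Compile to a workflow spec the Kubeflow Pipelines UI can run.
        kfp.compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")

Compiling produces an artifact you can upload through the Kubeflow Pipelines UI or submit programmatically, so each step runs as its own container on the cluster.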

A proven platform for Kubeflow

Kubeflow requires a Kubernetes environment, such as Google Kubernetes Engine or Red Hat OpenShift. To help your organization meet this need, Dell EMC and Red Hat offer a proven platform design that provides accelerated delivery of stateless and stateful cloud-native applications using enterprise-grade container orchestration.
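As a quick sanity check before installing Kubeflow, you can confirm that the cluster is reachable and see what resources its nodes report. Here is a minimal sketch using the official Kubernetes Python client, assuming a kubeconfig is already in place (for example, after an `oc login` against OpenShift):

    # Sanity-check cluster access before deploying Kubeflow.
    # Assumes a valid kubeconfig (e.g., created by `oc login` on OpenShift).
    from kubernetes import client, config

    config.load_kube_config()  # read credentials from the local kubeconfig
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        # Nodes advertise GPU capacity under the nvidia.com/gpu resource name.
        gpus = node.status.capacity.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: {gpus} GPU(s)")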

This enterprise-ready architecture serves as the foundation for building a robust, high-performance environment that supports the lifecycle stages of an AI project: model development in Jupyter notebooks, rapid iteration and testing with TensorFlow, training deep learning (DL) models on graphics processing units (GPUs), and serving predictions from the trained models.
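As a rough illustration of that develop/train/predict flow, a notebook cell along these lines (a toy model on toy data, not part of the reference design) checks GPU visibility, trains with TensorFlow, and serves a prediction:

    # Toy sketch of the notebook-to-prediction flow described above.
    import numpy as np
    import tensorflow as tf

    # TensorFlow picks up any GPUs the cluster has exposed to this pod.
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

    # Toy regression data: learn y = 2x + 1.
    x = np.linspace(-1.0, 1.0, 256, dtype="float32").reshape(-1, 1)
    y = 2.0 * x + 1.0

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(x, y, epochs=50, batch_size=32, verbose=0)

    # Prediction from the trained model (should be close to 2.0).
    print(model.predict(np.array([[0.5]], dtype="float32")))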

Among other advantages:

  • Running ML workloads in the same environment as the rest of your company’s applications reduces IT complexity.
  • Using Kubernetes as the underlying platform makes it easier for your ML engineers to develop a model locally using readily accessible development systems, such as laptop computers, before deploying the application to a production cloud-native environment.

To learn more

Ready for a deeper dive? Check out these resources.

About the Author: Ramesh Radhakrishnan

Ramesh is an Engineering Technologist in Dell's Server CTO Office. He has led technology strategy and architecture for Dell EMC in the areas of energy-efficient microserver architecture (ARM/Xeon-D) and Microsoft hybrid cloud, and he currently drives technology strategy and architecture for Dell EMC around advanced analytics and machine learning/deep learning. He is a member of the Dell Patent Committee and has 15 published patents. He received his Ph.D. in Computer Science and Engineering from the University of Texas at Austin.