IT Architecture Building Blocks for Artificial Intelligence Prototypes

About a year ago I wrote a blog post on the emergence of Artificial Intelligence in the enterprise. Since then we’ve seen a steady increase in the number and types of organizations starting to use AI to unlock the value in their data. Indeed, we are in the midst of a continuing deluge of data, so it’s no surprise that AI/DL initiatives are kicking off across all sectors of our economy.

This is great news, as the technology has matured immensely and the cost of entry has come down. However, there are a number of steps to consider as you look to start your AI journey in a sandbox environment. And because it can be a complex undertaking, there are lots of questions too, the most important being “where do we start?”

The first step is the planning phase. A couple of months back I was talking with a customer from the manufacturing sector who wanted to bring a more data-driven approach to their business. The business leaders were concerned that they were falling behind industry competitors in the digital transformation race. The business needed to quickly build and accelerate a data-first strategy, and the customer was eager to get going. So, they set up an AI focus group to consider ways to incorporate Machine Learning and Deep Learning into their business lines. The group had executive sponsorship and met monthly for over a year; however, the effort stalled before moving to production. Although the project began with a plan and clear intent, the teams were stymied by the sheer number of available use cases and technology options. At that point they reached out to Dell Technologies as an experienced partner, one with a broad and deep solutions portfolio, to move the Proof of Concept (POC) along.

Artificial Intelligence Center of Excellence

The customer had the right idea with an internal cross-functional team focused on AI, aka an AI Center of Excellence. To be successful, the team should be made up of line-of-business leaders, developers, engineers, data architects, data scientists and IT staff. In my experience, having a diversity of skills and perspectives from different areas of the business, along with executive sponsorship, is key to success. The executive sponsorship is extremely important because you need someone with budget and the authority to make decisions along the way. Once the foundation is laid, the team must consider the use cases. The use cases should be both quantifiable and time-bound, perhaps reachable in three to six months. As an example, for manufacturing the team might identify a use case to reduce unscheduled machine maintenance by 15 percent. Reducing unscheduled maintenance would save millions by lowering machine downtime and reducing energy costs. Another use case might be to reduce defects on the assembly line by 3 percent, which would also have a positive revenue impact.
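For the unscheduled-maintenance use case, the first experiment is often as simple as a classifier trained on historical sensor readings and maintenance records. The sketch below shows the general shape of such a prototype; the CSV file, feature columns and failure label are hypothetical stand-ins, not part of the reference architecture described later.

```python
# A minimal sketch of the unscheduled-maintenance use case: train a simple
# classifier that flags machines likely to fail soon, so maintenance can be
# scheduled ahead of time. The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical sensor log: one row per machine per hour.
df = pd.read_csv("sensor_readings.csv")
features = ["vibration_rms", "temperature_c", "spindle_load_pct", "hours_since_service"]
X = df[features]
y = df["failed_within_7_days"]  # label engineered from the maintenance history

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# How well does the prototype flag machines that actually failed?
print(classification_report(y_test, model.predict(X_test)))
```

Even a rough model like this gives the AI Center of Excellence something quantifiable to measure against the 15 percent target before any deeper investment is made.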

Focused Deep Learning Proof of Concept

Now that our AI Center of Excellence team has their use case, it’s time to scope out a proof of concept. To level-set expectations, moving from AI idea to production can be intimidating, but that’s why experiments are conducted. Just as scientists build experiments to test their hypotheses, the business uses prototypes or POCs to test its AI ideas and algorithms. However, just because the prototype phase is seen as an experiment doesn’t mean that IT Architecture best practices should not be followed. In fact, it’s just the opposite. By focusing the prototype build-out with production in mind, organizations can more quickly transition from sandbox to monetization. Many successful organizations use their POCs to deploy critical foundational elements that will scale. It makes no sense to run a POC that takes years to implement successfully. Moreover, if you’ve deployed a solid IT foundation, it’s easier to replicate it when new use cases and models arise.

In the prototype phase, architecture plays a critical role. By building a prototype environment with the same building blocks as the scaled-out production environment, you will accelerate the time to monetization for each model. However, AI workloads can be tricky. Here are a couple of rules they tend to follow. First, our digital world produces a nearly unlimited stream of data points, so AI applications must scale with the demand for unstructured data (petabyte scale). Second, time to answer your AI question is essential, so accelerated compute is a must. Therefore, while your solution may start small, you need an architecture that can process unstructured data quickly and scale to keep up with growing unstructured datasets.
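To make those two rules concrete, here is a minimal training-loop sketch, assuming a hypothetical image dataset for the defect-detection use case sitting on a shared scale-out NAS mount. It reads data straight from shared storage rather than copying it locally, and moves compute onto a GPU when one is present; the mount point and dataset layout are assumptions for illustration.

```python
# A minimal sketch of the two rules above: stream training data from a
# shared, scale-out file system, and use accelerated compute (a GPU)
# whenever one is available. Paths and dataset layout are hypothetical.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The dataset lives on the shared NAS mount, so the same data can serve
# the prototype today and a scaled-out cluster later.
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("/mnt/isilon/datasets/defects/train", transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=8, pin_memory=True)

model = models.resnet50(num_classes=len(train_set.classes)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Because the data never leaves the shared file system, moving from a single workstation to a larger training cluster is a matter of pointing more compute at the same mount, not migrating the dataset.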

Entry Level AI Solution

Beginning an AI journey is challenging from a business perspective, but the technology doesn’t have to be. As a proof point, we just published a reference architecture for an entry-level AI solution. It includes the world’s most powerful workstation, the Dell Precision 7920 Data Science Workstation, which provides ultimate performance and scalability to grow alongside your AI initiatives and data. It’s coupled with Dell EMC Isilon scale-out NAS to give data science teams the ability to share massive amounts of data while providing high performance, reliability and seamless access from multiple operating systems, without the need for costly and time-consuming data migration as you move to production. Dell EMC Isilon hybrid storage platforms, powered by the OneFS operating system, use a highly versatile yet simple scale-out storage architecture to speed access to massive amounts of data while dramatically reducing cost and complexity. The hybrid storage platforms, such as the H400 used here, are highly flexible and strike a balance between large capacity and high-performance storage to support a broad range of enterprise file workloads. The H400 delivers up to 3 GB/s of bandwidth per chassis and provides capacity options ranging from 120 TB to 720 TB per chassis.
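Before the first experiment runs on a setup like this, it helps to confirm the workstation sees its GPUs and the shared storage export. The short sanity check below is a sketch only; the mount point and dataset directory are hypothetical and will differ per environment.

```python
# A small sanity-check sketch for an entry-level workstation-plus-NAS setup:
# confirm the GPUs are visible and the shared export is mounted before
# kicking off the first experiment. Paths are hypothetical.
import os
import torch

shared_mount = "/mnt/isilon"
shared_data = os.path.join(shared_mount, "datasets")

print(f"CUDA available: {torch.cuda.is_available()}")
for i in range(torch.cuda.device_count()):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")

if os.path.ismount(shared_mount) and os.path.isdir(shared_data):
    print(f"Shared dataset directory found: {shared_data}")
else:
    print("Shared NAS mount not found; check the NFS export and mount point.")
```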

Your journey to AI can take many paths. Choosing the one that begins with a solid plan, use cases and an affordable yet scalable AI prototype is the best route to success. Let Dell Technologies help you navigate: the Entry Level AI Solution referenced here is a great way to get started and to grow.

If you want to learn more, please see the Dell Precision Data Science Workstation with Isilon H400 whitepaper. There you’ll find the complete reference architecture with a reproducible benchmark methodology, hardware and software configuration, sizing guidance and performance measurement tools.

About the Author: Thomas Henson

Thomas Henson is an Unstructured Data Solutions Systems Engineer at Dell Technologies with a passion for Streaming Analytics, the Internet of Things, and Machine Learning. He brings experience in Machine Learning Anomaly Detection, Open Source Data Analytics Frameworks, and Simulation Analysis. Thomas is also heavily involved in the Data Analytics community.