Electronic Design Automation: Uncovering the Secrets Behind Performance – or Lack Thereof

Having spent years as a design engineer and, more recently, as an EDA hardware/software product marketing manager, I’ve seen EDA from both sides of the performance battle. A particularly fond memory is of one Saturday when, as a young design engineer, I was on the phone chewing out a timing analysis vendor over poor run time – and later learned the person I’d been yelling at was their VP of R&D. Oops.

Years later, I was on the other side of the fence, working for one of the big-3 EDA vendors. Once again, I was in a performance battle – arguing that a requirement to double EDA tool performance after 9 months of work could not be met by simply installing the tool on a server that now ran 2x faster. Yes – that really was R&D’s defense. I was really confident I’d win this argument… I didn’t.

Some things never change. Looking across the past 25 years, design complexity continues to grow out of control while schedules keep shrinking. And while the EDA vendors have done a great job boosting performance even as they add new features, the truth is that some tasks simply demand more performance than the tools can deliver. Unfortunately, waiting for a next-generation CPU that doubles performance is not realistic, and even throwing more cores at the problem has started to run out of steam. Sometimes taking a step back and looking at the big picture is a good thing, and that’s what I learned shortly after arriving at EMC.

To my surprise, I learned that even at many of the top semiconductor companies, there were untapped opportunities to boost throughput performance. And I’m not talking about the 20% we all dream of today – I’m talking about 100% or more. For example, the next time you launch a batch of a couple hundred simulation jobs onto your compute farm, take a look at CPU utilization. I’ll bet it’s nowhere near 100%. In fact, it’s highly likely to be far lower than you ever imagined. If that’s the case, you could be in luck: the performance problem probably isn’t the CPU, nor is it the EDA tool. It’s the infrastructure behind the server farm, with storage being the most likely bottleneck. And while this was a surprise at first, it really makes sense. As designs grow larger and EDA tool flows grow more sophisticated, the burden on storage – the sheer number of files and directories, and the volume of transient data created – has also grown out of control. Not surprisingly, a storage platform that used to work great may no longer be sufficient for today’s EDA flow.
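If you want a quick sanity check, here’s a minimal sketch of the kind of measurement I’m describing – a small Python script (my assumption: Linux farm nodes, so /proc/stat is available) that samples overall CPU utilization and I/O wait. This isn’t a prescribed tool, just one illustrative way to take the reading; any monitoring dashboard that shows per-node iowait tells the same story.

```python
#!/usr/bin/env python3
# Sketch: sample CPU busy time and I/O wait on a Linux farm node by
# reading /proc/stat twice. Low busy time plus high iowait while your
# simulation jobs are "running" points at storage, not the CPU or tool.
import time

def read_cpu_times():
    # First /proc/stat line: "cpu user nice system idle iowait irq softirq steal ..."
    # Take the first 8 counters; guest time is already folded into user time.
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:9]]

def sample(interval=5.0):
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    delta = [a - b for a, b in zip(after, before)]
    total = sum(delta)
    idle, iowait = delta[3], delta[4]  # field order is fixed in /proc/stat
    busy_pct = 100.0 * (total - idle - iowait) / total
    return busy_pct, 100.0 * iowait / total

if __name__ == "__main__":
    busy, iowait = sample()
    print(f"CPU busy: {busy:5.1f}%   I/O wait: {iowait:5.1f}%")
```

Run it across the farm – via ssh, or submitted through the batch queue itself – while a regression is in flight. If the busy number sits well below 100% and iowait is high, your servers are starving, not working.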

Today we can no longer rely solely on processor speed advancements to keep up with the growing demands on EDA tools, and neither can the EDA tool vendors. We need to look at the big picture and hunt for the hidden infrastructure choke points. Networking, storage, policies, geography – even security – all play a role. Want to see your EDA tool performance double overnight? That’s impossible – or maybe not. You’d be surprised what magic can be done by someone who understands the subtleties of EDA tool flows and their interaction with storage infrastructure. No surprise – that’s EMC.
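To make the storage choke point concrete, here’s one hypothetical micro-benchmark you can run without touching an EDA license: time metadata-heavy access – the stat-every-small-file pattern EDA flows hammer storage with – on your project filesystem, then on local scratch, and compare. The paths are placeholders of my own choosing; the point is the ops-per-second gap, which is often dramatic between platforms.

```python
#!/usr/bin/env python3
# Hypothetical micro-benchmark: how fast can this filesystem serve metadata?
# Walks a directory tree and stat()s every file - the small-file, metadata-heavy
# access pattern that modern EDA flows impose on storage.
import os
import sys
import time

def stat_storm(root):
    count = 0
    start = time.perf_counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                os.stat(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # file vanished mid-walk; common with transient EDA data
    return count, time.perf_counter() - start

if __name__ == "__main__":
    # Placeholder path: point this at project storage, then at local scratch.
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    n, secs = stat_storm(root)
    print(f"{n} files stat'd in {secs:.2f}s ({n / max(secs, 1e-9):.0f} ops/s)")
```

If project storage delivers a fraction of the metadata throughput of local disk, that gap – not the CPU, and not the EDA tool – is where your overnight regression is going.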

About the Author: Lawrence Vivolo