What's Changed in HPEC? Part One

22 July 2015
HPEC Center of Excellence

Sometimes, it seems like our HPEC Center of Excellence opened yesterday—when in fact it opened its doors in May 2012. That’s over three years that we’ve been helping customers solve some of the toughest problems they face in developing sophisticated applications.

Needless to say, we’ve seen a lot of changes in the HPEC landscape during that time. Of course we have many more cores to play with than we did back then. We’re seeing clusters of dozens of Intel processors, each with anything from four to 12 cores, with 40G fabrics tying them all together. It won’t be long till 100G fabrics are commonplace. What all that means is many more cores per slot than we used to be able to achieve. For most applications, Linux and standard middleware are the go-to software architecture. For safety-critical applications, customers are favoring Freescale’s Power Architecture with ARINC 653-compliant, DO-178-certifiable variants of real-time operating systems.

Focus on SWaP, latency

Those sophisticated applications I mentioned earlier? Radar and EW continue to challenge our customers, especially with the continuing emphasis on SWaP and the need to minimize latency. In the past three years, we’ve seen a big rise in our customers looking to develop situational awareness and autonomous systems. In the former case, they want to ingest massive amounts of sensor-derived data, such as video, crunch that incoming data and output it as a range of views. Oh, and just to make life much more interesting, they’re looking to do all that with glass-to-glass latency as close to zero as possible; they’re trying to shoehorn it into the smallest space possible; and they want to deploy it on a platform that was never designed to accommodate it.
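To make the glass-to-glass requirement a little more concrete, here is a minimal, hypothetical sketch of how such a pipeline might track its end-to-end latency on every frame. It is not any customer’s system, and captureFrame, processFrame and displayFrame are placeholder names standing in for the real capture, processing and display stages:

```cpp
#include <chrono>
#include <cstdio>

// Placeholder pipeline stages: in a real system these would wrap the sensor
// capture, the CPU/GPU processing chain and the display output respectively.
struct Frame { /* pixel data would live here */ };

Frame captureFrame()               { return Frame{}; }
Frame processFrame(const Frame& f) { return f; }
void  displayFrame(const Frame&)   {}

int main()
{
    using clock = std::chrono::steady_clock;

    for (int i = 0; i < 100; ++i) {
        // Timestamp as the frame enters the system ("first glass")...
        auto tCapture = clock::now();

        Frame raw       = captureFrame();
        Frame processed = processFrame(raw);
        displayFrame(processed);

        // ...and again once it has been handed to the display ("second glass").
        auto tDisplay = clock::now();

        auto latencyUs = std::chrono::duration_cast<std::chrono::microseconds>(
                             tDisplay - tCapture).count();
        std::printf("frame %3d: glass-to-glass latency %lld us\n",
                    i, static_cast<long long>(latencyUs));
    }
    return 0;
}
```

Instrumenting every frame this way, rather than quoting an average, matters because it is the worst-case latency that determines whether an operator sees events in time to act on them.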

And if that wasn’t challenging enough: for autonomous systems, they also need to ensure that the safety-critical element is designed in.

I’ve mentioned how our customers are focusing on minimizing size, weight and power—and, for that, 3U VPX is the obvious system architecture. Given that focus, you might think that 6U is going away—but that’s not what we’re seeing. There’s still plenty of demand for 6U. In terms of interconnect, we’ve historically relied on PCI Express for 3U systems, but 10Gb Ethernet is making its presence felt in that space now, and that can be enormously helpful when you’re trying to achieve scalability and a common software environment across 3U and 6U systems.

GPGPU

A really big development over the past three years has been the rise of GPGPU technology, especially from NVIDIA—who have named GE as their preferred provider of GPGPU solutions in the rugged space. GE was a pioneer in leveraging the massively parallel, multi-core architecture that you get with graphics processing units. That whole technology area is evolving rapidly too, with larger and faster memories and new interconnects on the way.
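To give a flavour of that massively parallel model, here is a generic, minimal CUDA sketch (it is not GE/Abaco code and has nothing to do with AXIS; scaleOffset is just an illustrative per-sample operation). Instead of looping over a buffer on one core, every sample gets its own lightweight thread, and the GPU schedules tens of thousands of them at once:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One lightweight GPU thread per sample: each thread scales and offsets a
// single element of a large sensor-data buffer (a stand-in for a real
// per-sample operation such as windowing ahead of an FFT).
__global__ void scaleOffset(const float* in, float* out,
                            float gain, float offset, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = gain * in[i] + offset;
    }
}

int main()
{
    const int n = 1 << 20;                       // ~1M samples
    std::vector<float> host(n, 1.0f);

    float *dIn = nullptr, *dOut = nullptr;
    cudaMalloc(&dIn,  n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemcpy(dIn, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every sample.
    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    scaleOffset<<<blocks, threads>>>(dIn, dOut, 2.0f, 0.5f, n);
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), dOut, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("first sample: %f\n", host[0]);  // expect 2.5

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```

The hard part in a real application is rarely the kernel itself: it is keeping thousands of cores fed—overlapping transfers with compute, sizing the work sensibly and spreading it across CPUs and GPUs—which is exactly where development tools earn their keep.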

Getting the most out of multi-core and manycore processors is particularly challenging. One of the things our customers like about working with GE on HPEC applications is the availability of tools like our AXIS software development environment, which is designed to ease and dramatically speed that process.

In my next post on where we’re up to with HPEC, I’ll be looking at some other developments, notably how we keep all this powerful computing cool.

Peter Thompson

Peter Thompson is Vice President, Product Management at Abaco Systems. He first started working on High Performance Embedded Computing systems when a 1 MFLOP machine was enough to give him a hernia while carrying it from the parking lot to a customer’s lab. He is now very happy to have 27,000 times more compute power in his phone, which weighs considerably less.