In the business world, data scientists and others seeking insights from the big data deluge are looking for ways to maximize the potential of all their contemporary high-performance computing (HPC) analytics systems. Many are trying to force-fit what they have available — in most cases CPU clusters, some augmented with smaller GPU clusters — to answer their business questions. In reality, what’s happening in many cases is that technology is dictating what businesses can do with — and ask from — their valuable data. That is an unacceptable compromise.

It’s time to upend that mentality. Instead of asking what type of computing architectures and HPC systems are available, it’s time to start enabling business-centric APIs to drive new breeds of hybrid computing systems that derive the most value possible from all data. In other words, the type of question that needs answering and its urgency should drive the HPC architecture and not the other way around. Some processors and architectures are better at some things than others, and they all can be used in various ways to analyze the crushing volume of data stemming from mobile, social media, cloud computing and the Internet of Things.

The heterogeneous computing model (also sometimes referred to as hybrid computing) is about using the right tool for the right job at the right time. Using a toolbox analogy, perhaps CPUs are a hammer and GPUs are a screwdriver. So, what do you do when you need a set of pliers? Perhaps turn to FPGAs. Why FPGAs? FPGAs are interesting because their computing model offers something that CPUs and GPUs cannot: a chance at true hardware parallelism, with no reliance on any sequential instruction set.

In fact, FPGAs have been used in the financial industry for years, given the unique ability of FPGA technology to arrive at split-second answers derived from quickly changing data. FPGAs are gaining mainstream traction, as seen with Intel buying Altera and Microsoft using FPGAs for Bing Search. The next step is to integrate FPGAs, GPUs and CPUs together to seamlessly create a high-performance data analytics engine that allows you to get the answers you need — not just the answers you can get — from your data.

One of the complexities associated with heterogeneous computing is how to manage disparate computing resources (CPU, GPU, FPGA, etcetera). This is an issue because standard programming languages such as C, Java, and so on, work well on CPU fabrics and, to some extent, on GPU fabrics. Similarly, GPU-intended languages like CUDA and OpenCL — which can also be used on CPU fabrics to a limited extent with limited performance — require different programming skills. Traditionally, FPGAs have also required specialized, skilled hardware designers to achieve the high performance needed.

There must be a different, simpler, and more scalable way. Relying solely on a single processor type provides neither the tools nor the performance required given the increasing amount of high-velocity data. And creating a piecemeal solution of all three processors is costly and complex, and it won’t provide optimum performance either. The best way to address the problem of differing computational models is to solve the business problem (“I need to find every time this SKU is mentioned in our sales history”), not the rudimentary low-level computer science problems behind it (“Fetch data from storage and/or RAM, clear the capitalization bit, pipe it through an algorithm that compares it to the SKU, count the differences, and when a difference is encountered at a specific byte position, create a tree to analyze all possible different spellings, etcetera”). It becomes readily clear that the ideal solution is to focus on the business problem and let your data analytics technology work for you and for that problem, with no requirement to understand the underlying technology to achieve the needed business results.
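To make the contrast concrete, here is a minimal sketch in C of the kind of low-level work a business-centric API would hide: a case-insensitive sliding comparison that counts character differences against a SKU. This is a deliberately simplified illustration of the “rudimentary” layer described above, not anyone’s production search engine.

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Count character mismatches between two equal-length spans,
 * ignoring case ("clear the capitalization bit"). */
static int hamming_ci(const char *a, const char *b, size_t n) {
    int diff = 0;
    for (size_t i = 0; i < n; i++)
        if (tolower((unsigned char)a[i]) != tolower((unsigned char)b[i]))
            diff++;
    return diff;
}

/* Slide the SKU across the text and return the offset of the first
 * window within the allowed fuzzy distance, or -1 if none qualifies. */
static long find_fuzzy(const char *text, const char *sku, int max_dist) {
    size_t n = strlen(sku), len = strlen(text);
    if (n == 0 || len < n)
        return -1;
    for (size_t i = 0; i + n <= len; i++)
        if (hamming_ci(text + i, sku, n) <= max_dist)
            return (long)i;
    return -1;
}
```

Even this toy version forces the programmer to think about byte offsets, capitalization, and mismatch counting rather than the sales-history question actually being asked.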

But how can we achieve this? How can data scientists focus on the business problem when today’s hybrid architectures are so complex to design, implement and manage? The answer lies in open APIs that are compute-agnostic and solve a problem — the business problem. Using the previous SKU example, perhaps a single function call to perform a fuzzy search, parameterized solely with business requirements, would be a reasonable solution to ensure that the function call is not bogged down with any low-level algorithmic details. This allows the implementation to answer the business problem in a vendor-independent, and compute-methodology-independent fashion.

Figure 1: Business-centric API example for fuzzy search

Let’s look at an actual example, written in the rather ubiquitous C programming language. Figure 1 shows a simple fuzzy search function call, using Ryft’s open API. Note that nowhere in the fuzzy search function call example is any information required about low-level compute-engine implementation details. Translating the function call to the business question is simple, even for those not well versed in programming languages: “Do a fuzzy search against my local copy of Wikipedia dated May 18, 2015, looking for a case-insensitive ‘Babe Ruth’ string with a fuzzy distance of 2 (in case someone didn’t spell the phrase quite the way I expect), returning 32 bytes on each side of any results found so my human analysts have some context. Append a carriage return and line feed to each of those results so I can export them to my favorite visualization tool, and while you’re at it, generate an index file of where each match was actually found in my input Wikipedia dataset, so my custom programming can use that output programmatically later.” All business!
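A call of that shape can be approximated in plain C as follows. The structure and field names below are illustrative assumptions made for this article, not Ryft’s actual API identifiers; the point is only that every parameter expresses a business requirement rather than a hardware detail.

```c
#include <string.h>

/* Hypothetical request descriptor: each field maps one-to-one onto a
 * clause of the business question above; none describes hardware. */
typedef struct {
    const char *input;            /* dataset to search                   */
    const char *match_string;     /* phrase being looked for             */
    int         fuzzy_distance;   /* tolerated spelling differences      */
    int         case_insensitive; /* nonzero: ignore capitalization      */
    int         surrounding;      /* context bytes on each side of a hit */
    const char *delimiter;        /* appended to each result for export  */
    const char *results_file;     /* matches plus context                */
    const char *index_file;       /* offset of each match in the input   */
} fuzzy_search_request;

/* The Babe Ruth query from the text, expressed as pure business intent. */
static const fuzzy_search_request babe_ruth_query = {
    .input            = "wikipedia_20150518.txt",
    .match_string     = "Babe Ruth",
    .fuzzy_distance   = 2,
    .case_insensitive = 1,
    .surrounding      = 32,
    .delimiter        = "\r\n",
    .results_file     = "babe_ruth_results.txt",
    .index_file       = "babe_ruth_index.txt",
};
```

Handing such a descriptor to a single library call leaves the choice of compute engine entirely to the implementation behind the API.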

The beauty of this and other open APIs centered on business needs is that they are exceptionally easy to port across multiple compute engines and, just as importantly, they are easy to port to other languages, such as Java, Python, R, Scala and others. Another benefit of such an API is that, since the API is wrapped to a variety of compute engines (such as perhaps an FPGA engine), no measurable extra overhead is typically introduced by higher-level languages, such as Java and Python. That’s very powerful. It allows data scientists to use whatever language they want, but still get exceptional performance for hard problems (such as fuzzy searching), since those API calls wrap to an underlying appropriately tailored compute engine — that is, using the right tool, at the right time, for the right job.
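The “right tool at the right time” routing can be pictured as a thin dispatch layer inside such an API wrapper. The policy below is a deliberately simplified assumption (a real wrapper would also weigh installed hardware, data size, and load), not any vendor’s actual logic.

```c
#include <string.h>

typedef enum { ENGINE_CPU, ENGINE_GPU, ENGINE_FPGA } engine_t;

/* Hypothetical policy: the wrapper, not the data scientist, decides
 * which compute fabric executes a given analytics primitive. */
static engine_t pick_engine(const char *primitive) {
    /* Bit-level pattern matching maps naturally onto FPGA fabric. */
    if (strcmp(primitive, "fuzzy_search") == 0) return ENGINE_FPGA;
    if (strcmp(primitive, "exact_search") == 0) return ENGINE_FPGA;
    /* Dense, regular numeric work tends to suit GPUs. */
    if (strcmp(primitive, "vector_math") == 0)  return ENGINE_GPU;
    /* Everything else falls back to the general-purpose CPU. */
    return ENGINE_CPU;
}
```

A caller of the business-level API never sees this choice; porting to a new engine means extending the dispatch table, not rewriting analytics code in Java, Python, R or Scala.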

That’s game-changing. As data scientists, C-level executives and data center managers, it is time for us to recognize the need for a varied set of tools (CPU, GPU, FPGA, and whatever the future may bring), and a simplified mechanism (open APIs focused on the business problems, not the low-level minutiae) to use each tool at the right time to solve the right business problem. This isn’t something we should have to wait for — the technology is here today; it is ready now. Just like the answers to your business questions should be.

Pat McGarry is Vice President of Engineering at Ryft.