Last week we took a look at the demise of Moore’s Law. It’s no secret that workloads are becoming larger and more complex than ever, and traditional x86 hardware and scaled-out infrastructure cannot meet today’s low-latency, high-compute and massive-storage demands. In fact, as you can see in AMD’s graph below, data-driven computing has already outpaced Moore’s Law, creating a technology gap that keeps users from extracting the insights they need from their data.

[Image: AMD graph showing data-driven computing outpacing Moore’s Law]

And there are industry initiatives out there trying to dissect and solve this problem. The Bloor Group, for example, has taken a keen interest in the topic and has examined data flow patterns in relation to IT architectures. Its discussions point to a data-flow-based architecture, one that accommodates on-the-fly changes as well as swift, considered responses, as the answer to data analytics challenges. That’s all well and good until you consider what else the group says: data analytics solutions as they stand right now aren’t actually solutions; they’re just components of what needs to be the overall solution.

So what is that solution? Vendors are trying to become more than just a component of the overall analytics process, but are they going about it the right way?

IBM, for instance, announced last year that it is working to develop chips that more closely mimic how the brain works, in the hope of speeding up data analytics implementations. But the question remains whether development will be timely and whether the cost will make these chips feasible for organizations to use, not to mention whether the problem will have evolved past the solution by the time it comes to market.

Hadoop and Spark were created (and revised, and revised, and revised again) to help tackle the problem. But in most cases they simply don’t; in fact, these “solutions” often complicate it. To make these environments workable for large, rapidly changing data sets, IT has been forced to scale out to ever-larger commodity hardware clusters. The result is sprawling systems that require complicated layers of software tools to manage and months of work to design, deploy and load, all before any real analysis can begin. Most complex analysis then requires a further investment of time and resources to index the data appropriately for a given problem set, which significantly inflates the size of the data and adds more delay before any value comes out of it. Making matters worse, the data often needs to be re-indexed, which kills the possibility of obtaining results in real time or even near real time.
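To make that indexing overhead concrete, here is a minimal sketch in PySpark (an assumption; the post doesn’t name a specific stack) of the pattern described above: before ad-hoc queries are fast, the raw data has to be rewritten into a second copy laid out for one particular access pattern, and a question keyed on a different field means repeating the rewrite. The paths and column names are hypothetical.

```python
# Sketch of the "index before you can analyze" cycle described above.
# Paths and column names are hypothetical illustrations.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("index-overhead-sketch").getOrCreate()

# Raw events land as unordered files; any search over them is a full scan.
events = spark.read.json("hdfs:///raw/events")

# "Indexing" for one problem set: re-sort and re-partition by the lookup key,
# then persist a second copy of the data laid out for that access pattern.
# This duplicates storage and adds a batch step before any query can run.
(events
    .repartition("customer_id")
    .sortWithinPartitions("event_time")
    .write.mode("overwrite")
    .partitionBy("customer_id")
    .parquet("hdfs:///indexed/events_by_customer"))

# Only now does the "fast" query exist; a new question keyed on a different
# column means repeating the whole rewrite (the re-indexing delay).
indexed = spark.read.parquet("hdfs:///indexed/events_by_customer")
indexed.where(indexed.customer_id == "42").count()
```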

When we looked at this problem, instead of trying to extract incrementally more processing power from von Neumann architectures, we built the Ryft ONE from the ground up to deliver a 100X gain over the performance of current x86 systems. Drawing on our background working with organizations that handle some of the world’s most complex data analytics needs, we created our proprietary FPGA architecture to leave Moore’s Law in the dust, just as today’s data needs already have.
