Earlier this year we announced the Ryft ONE, and as with any new product that is architected differently from the norm, questions arise about what's new. Datanami posted an article on the Ryft ONE, and a comment raised some good questions touching on common misconceptions about FPGAs, especially their complexity. At Ryft, we've been working with FPGAs for data analytics for years, and we've learned how to make analytics not only simple to do but also astonishingly fast. That's why we've been able to take the complexity away from the user without taking away any of the performance benefits.

Are FPGAs the Architecture of the Future?

Maybe not for all applications, but certainly for data-intensive ones. FPGAs are used all over the place, no doubt about it; they just haven't been used intelligently until recently, and that is certainly true in the data analytics space. Making matters more interesting, most people don't really understand how to use or program them. That's where the genius of Ryft products comes into play: the Ryft ONE open API abstracts all of the FPGA complexity away from the user and makes it simple to run macro data analytics. You just call simple functions in a language of your choice, no different from a C programmer today calling qsort() to run a quick sort or bsearch() to run a binary search. Instead, they call Ryft's open API functions to perform exact search and fuzzy search, generate term frequency documents, and so on.

I, personally, have always found FPGAs to be very extensible. That's their purpose; note the first two words of the name, "field programmable." The idea is that you can change the way the silicon is connected together, as much as you want. So if you need an algorithm that does an exact search, you load that in.
If you need one for fuzzy search, you load that algorithm onto the appliance. If you need one to process images, you add it to the appliance. The beauty of the Ryft ONE design is that this complexity is completely abstracted away: the user never even knows (unless they read the manual) that there are FPGAs inside doing all the hard work. Our engineers are constantly developing new analytics primitives (and updating our published, open C-language API to reflect them), and these primitives are available at no extra cost to all customers with an active annual subscription. No hardware changes are required for any of them. New primitives run seamlessly on the Ryft ONE once they are loaded, and loading is done via simple Debian packages, since the end user sees only a front-end Ubuntu 14.04.2 LTS Linux system.

All New Hardware All the Time?

Not necessarily, and usually not. What's interesting about FPGAs is that they actually reduce the need for constant hardware upgrades, because they offer reconfigurable silicon. Standard ICs, like Intel's x86 architecture, don't allow that: all you can change on a sequential processor is the sequence of instructions you give it. You're stuck with an architecture, and specifically with a von Neumann architecture. What architecture is the FPGA? Whatever you need it to be, which turns out to help enormously when designing analytics primitives, and that's another great element of the Ryft ONE.

That's not to say the hardware will never be upgraded. Every good business continues to innovate, and Ryft is no different; we will keep working to develop new, better, faster hardware. But it's important to keep in mind that even as the Ryft FPGA primitives are constantly expanded and updated, they run on the existing hardware (and, as a reminder, at no extra cost to customers with active subscriptions).
The best analogy here for software-focused readers: just as you can run your old x86 programs on either old or new hardware, you'll be able to run your Ryft algorithms on either old or new hardware. Whether you run them on legacy hardware or newer hardware is entirely up to you. Could there be primitives in the future that require newer hardware to get better performance? Certainly.

It's a fact of life that the software industry is rife with more complex debugging than any other engineering discipline in the history of, well, engineering. Can debugging FPGA problems be complex? Sure, but so can debugging software problems, and we're working through those and will continue to do so. What is interesting is that FPGA logic, because it is deterministic and can be logically simulated, is typically quite robust once it has been designed, integrated and tested.

Making Decisions Before Decisions Are Made for You

The commenter raises a good question about our benchmark results and why we tested against Spark. First, the benchmark numbers Ryft provides for exact search, fuzzy search and term frequency (the same thing as traditional word count for Spark/Hadoop) are something we're really proud of: 10 GB/second of analytics throughput for a fuzzy search, and 2.5 GB/second for generating term frequency documents from arbitrary input. We couldn't be more thrilled with those results. The Ryft ONE has been designed for both stream processing and historical data analysis. The 10 GB/second figure comes from reading data simultaneously from up to 48 TB of SSD storage (which answers another question: it does store data if you want and/or need it to), plus 10 gigabit Ethernet ingress.
10 Gigabit Ethernet uses 64b/66b encoding, so a saturated 10 gigabit link delivers a user data rate of roughly 1.25 gigabytes per second (and based on our market research, few organizations actually saturate their 10 gigabit links). The Ryft ONE can easily handle 10 gigabit streaming data while also handling historical datasets sitting on the 48 TB of storage, allowing you to process your streaming data while simultaneously processing your legacy data (perhaps correlating the two?) at a 10 GB/second clip. That's really something: processing new data arriving across the network link while simultaneously processing data resident on the 48 TB of storage.

Also, we actually compared against not just Spark and Hadoop but also Solr and Elasticsearch. One major point to keep in mind is that those technologies require a lot of complex, time-consuming front-end extraction and indexing before they can work, and that time must be factored in when determining how long it really takes to analyze new data. Ryft talks about "mean time to decisions" (MTTD), where the full pipeline is factored in, and this makes perfect sense: what matters is the total time it takes to analyze unknown data. For anyone wanting real-time answers, if it takes a week or more to get data into a format Solr or Elasticsearch can actually use, it doesn't matter how fast the final queries are; by then, the data is stale. It's not real-time data analytics if your data has changed (often multiple times) before the analysis completes.

One other note on indexing: we don't increase data size by adding indexes, and we can provide fuzzy search with deterministic performance regardless of the data. Also, some data doesn't index (binaries, for example); we can search that, too.

Redundancy is also simple with the Ryft ONE, since it really does look just like a Linux box.
As such, it wouldn't be difficult for anyone with a basic understanding of rsync to set up a redundant system on a second Ryft ONE box, or even to set them up in a sharding arrangement if such a need existed. So far, though, our initial engagements suggest that 48 TB is an awful lot of data, and most organizations will need only a single 1U Ryft ONE.

Needing All the Tools in the Toolbox

One thing is certainly true: the Ryft ONE isn't meant to be a wholesale replacement for Spark or even for Hadoop, and we caution anyone who thinks it is. The Ryft ONE is a tool, just as Spark is a tool and Hadoop is a tool. But I can tell you this: if I can do 10 GB/second of fuzzy search in a 1U appliance instead of requiring 3 to 4 fully loaded racks of servers running Spark, then I would absolutely use that 1U box for fuzzy search instead of that 100+ node Spark cluster. That not only makes good fiscal sense but frees the massive Spark cluster for other work.

We don't mind telling the world what we're good at and what we're not so good at, and we aren't claiming to be one-size-fits-all. What we are is the world's smallest, fastest solution for key algorithms such as exact search, fuzzy search, term frequency and, eventually, others. If this sounds interesting, or could be the right approach for your data analytics, please contact us about getting started with a free trial by filling out the contact form here: https://www.ryft.com/free-trial.

One response to "Truths and Myths About FPGA Acceleration"

One minor point. 10GE is really 10 Gb user data rate. It usually uses 64b/66b encoding.