Xilinx SDAccel Development Environment: FPGA Acceleration Performance and Ease of Use Aren’t Mutually Exclusive

With the major FPGA vendors all-in on Open Computing Language (OpenCL) acceleration and the widespread adoption and deployment of OpenCL-capable FPGA cloud services, Xilinx's equivalent SDAccel functionality is an emerging technology that can't be ignored. FPGAs are making their debut in the data center, driven by the OpenCL programming model. In support of […]

Modern Data Analytics: Ryft Goes Petabyte-Scale, Breaks 200 GB/second Performance

Ryft made news this week by accelerating petabyte-scale data analytics to 200+ gigabytes/second, powered by a cluster of high efficiency Ryft appliances built on the company's hybrid FPGA/x86 architecture. As I scanned Twitter for the news, I stumbled upon @GigaStacey's timely repost of an article she penned way back in 2011 calling for new, more […]

Supercharge Big Data Success with New Analytics Architectures

Real-time analysis of a wide array of both person- and machine-made data streams is becoming integral to getting value from data. However, current infrastructures just cannot process the velocity and volume of data these streams produce. Recently, we gave the presentation below at the Enterprise HPC forum to showcase how the new Ryft ONE can simultaneously analyze […]

Enterprise HPC Voters Pick Ryft to Propel Big Data Industry Forward

Thanks for the vote of confidence, Enterprise HPC attendees! At last week's Enterprise HPC forum in Carlsbad, CA, the Ryft ONE was honored with the Most Innovative Technology Award! Conference attendees, technology leaders driving high performance computing projects within their organizations, voted on which technology solution at the conference would […]

Sorry, But 1940s Compute Architectures Can’t Overcome Big Data Performance Bottlenecks

The von Neumann computer, developed in the 1940s, is built on an architecture in which instructions and data are stored together in a common memory. Computers using this design revolutionized “life as we know it” by providing an efficient appliance for compute-centric tasks. That’s why von Neumann computers—most commonly seen in x86 systems—still […]

For High Performance in Big Data, Look Under the Hood

“Insanity is doing the same thing over and over again and expecting different results.” – Albert Einstein. For many, this quote resonates with how most companies are solving data performance problems today. Got performance problems? Option 1: Throw bigger servers at the problem. This is called scale-up; it works until you run out of conventional processors, memory, or disk you can fit on […]