It’s no surprise: businesses are moving faster than ever before, and at the same time, data is being amassed at ever-greater speeds. It’s the story everyone is talking about right now. The other question being discussed is how exactly you reconcile the two. The answer: agile data.

The idea of agile data has been around for years: companies using real-time information to quickly adjust business strategies and better meet the current needs of customers. And while the concept is ideal, the current execution is not. What could be the answer to actionable intelligence and smart competitive advantage is stymied by the lack of cost-effective technology ecosystems that allow for real-time analysis. Sluggish systems and disparate data lead to weak data performance. Intelligence goes from real-time to past-time, and by then businesses have lost the chance to act quickly and efficiently.

This hasn’t stopped today’s companies from trying to make their data more agile. Many IT organizations have adopted open-source technologies, like Spark and Hadoop, to ease and speed data analysis. However, with days spent on ETL processes and weeks more added for indexing, data is inherently weeks to months old before the business can analyze it. Even with technology that cuts data preparation time significantly, quickly and accurately analyzing unstructured data poses yet another challenge. It would be nice if all data arrived in neat, labeled forms, but that is simply not today’s reality. Technology for analyzing large amounts of unstructured data is currently unreliable, and expensive enough that organizations hesitate to implement it. And when structured and unstructured data must be combined, the complexity and the cost only increase.

Of course, all of this assumes that data doesn’t live in silos, which is faulty thinking. The typical enterprise IT infrastructure includes multiple data warehouses, cloud systems, business units and more, which means that achieving true data agility becomes even more difficult. Time and resources go into connecting these disparate systems, but that is no guarantee that the right data is available for analysis at the right time.

There’s no denying that there are several hurdles to clear before data can become more agile, but 2015 may be the year that changes. Big data and the technologies supporting it continue to mature, and with new innovations coming to market, ideas like real-time data stream analysis and consolidated data storage will become increasingly feasible. Is 2015 the year of true data agility? That remains to be seen, but what we do know is that strides forward will be made, and more companies will gain access to the tools designed to analyze data and deliver actionable intelligence.

Keep an eye out here for what Ryft is doing to help move agile data forward.
