Moore’s Law is quite the topic of conversation lately, especially since it turns 50 this week. NPR, Forbes, The Washington Post and PC World are just a few of the publications talking about it. For those who aren’t familiar, Moore’s Law dates back to 1965, when Gordon E. Moore, co-founder of Intel and earlier co-founder and director of R&D at Fairchild Semiconductor, observed that the number of transistors in a dense integrated circuit doubled every year. In 1975, he revised that to a doubling every two years, and the observation has been used heavily by the semiconductor industry ever since to forecast product needs and R&D spending.

According to recent research from The Bloor Group, data is now growing at the cube of Moore’s Law, so a significant shift in today’s computing architectures is needed to keep up. As big data volume and velocity ramp up dramatically and companies look to merge high performance computing with data analytics (IDC calls this the high performance data analysis market), it’s a natural topic of conversation. In fact, it’s a topic the industry should have been addressing long before now.
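To make that claim concrete: if transistor counts double every two years, growth "at the cube of Moore’s Law" would mean data volumes multiplying eightfold (2 cubed) over the same period. Here is a minimal back-of-the-envelope sketch in Python, assuming the post-1975 two-year doubling period; the figures are illustrative, not Bloor Group data:

    # Compounding gap between Moore's Law and data growing at its cube,
    # assuming transistor counts double every two years (post-1975 form).
    years = 10
    periods = years / 2                    # one doubling period = 2 years

    transistor_growth = 2 ** periods       # Moore's Law: 2x per period
    data_growth = (2 ** periods) ** 3      # cubed: 2^3 = 8x per period

    print(f"Over {years} years, transistor counts grow ~{transistor_growth:,.0f}x")
    print(f"Over {years} years, data volume grows ~{data_growth:,.0f}x")
    # -> transistors ~32x, data ~32,768x: roughly a 1,000x gap per decade.

Even under these simplified assumptions, hardware that merely tracks Moore’s Law falls about three orders of magnitude behind data growth every ten years, which is the architectural gap discussed below.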

Recently, the Wall Street Journal published an article on the topic, aptly titled “Moore’s Law Shows Its Age.” The article points out where many of the inherent issues with continued reliance on Moore’s Law, and the current technologies associated with it, lie. First and foremost is its discussion of the cost of developing chips:

“The design and testing of a chip with the latest technology now costs $132 million, up 9% from the previous top-of-the-line chip, estimates International Business Strategies Inc., a consulting firm in Los Gatos, Calif. A decade ago, designing such an advanced chip cost just $16 million. Meanwhile, some companies for the first time are unable to reduce the cost of each tiny transistor.”

This is not insignificant. The design cost cited above grew more than eightfold in a decade, and at a time when budgets are being monitored closely and data is growing so rapidly, the rising cost of chips (and therefore of servers) cannot continue at this pace.

“But the industry’s costs keep rising, with new chip-fabrication plants costing as much as $10 billion. Cost pressures led International Business Machines Corp. last year to pay $1.5 billion to another company to take over its semiconductor operations.”

In fact, the article discusses how chip manufacturers are already seeing diminishing financial returns.
[WSJ graphic: diminishing returns for chip manufacturers]

Just as importantly, performance has been throttled by the limitations of today’s conventional hardware, and organizations are now forced to scale out to ever larger commodity hardware clusters simply to maintain the status quo. That drags down not only performance but also the success of IT initiatives, especially big data initiatives. Decreasing value coupled with increasing cost is not an equation for success.

The need to reach past the edge of Moore’s Law (and to eliminate the bottlenecks caused by the associated von Neumann architecture) informed and guided our team as we developed the Ryft ONE. As such, we feel strongly about what the industry needs to do to make high performance computing valuable in terms of both results and costs. Keep an eye on this blog as we continue discussing Moore’s Law, including our next post on the industry research initiatives underway to address the issue.
