My recent travels have taken me to customer meetings where I have had the chance to talk with some very forward-thinking CIOs and technologists. In those conversations it has become clear to me that new models of computing are changing the intrinsic value of our data. What I mean is that the value of data today is at best mislabeled, and at worst misrepresented. Why is that important, you might ask? Because data is increasingly being created and analyzed in an automated way, where human intervention is either unnecessary or not even possible. There is simply too much information, and it is arriving too fast. Let me explain.

Here is the tried-and-true, traditional graph of data’s value, something any storage professional will recognize:

It’s pretty simple on the surface. Let’s call this an email. I wrote it, and it was read, forwarded, copied, printed (gasp), and generally used a lot in the first day of its life. Valuable, yes. But days or weeks later it’s old, moved to a folder, and stored forever, likely never to be looked at again. Unless, that is, something happens, such as a dispute or legal issue, and it gets dredged up; then its value curve will look something like this:

All of a sudden this piece of data has value again, at least for a short time. Make sense? So now let’s say this is not an email but 100 million records of transactions by your customers. What are those records worth? If each record is an order, then they are pretty valuable (that is money, congratulations), but does the curve look the same? Sort of. This is the point where companies are seeking to do more with their data. What we see is a new line forming, the new data value reality, and it looks something like this:

Individual data elements still retain value close to the point of creation, but a new value line has formed with the advent of applications that can “crunch” this data into meaningful business answers (read: value). It is in this aggregate value that the most forward-thinking companies are seeking to differentiate themselves, through greater insight and faster delivery.
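To make the “crunching” idea concrete, here is a minimal sketch in Python of how individually low-value transaction records roll up into answers a business can act on. The field names and the handful of hard-coded records are hypothetical stand-ins for the 100 million real ones:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transaction records; in practice these would be streamed
# from an order system, not hard-coded.
transactions = [
    {"customer": "acme",    "sku": "A-100", "qty": 3, "unit_price": 19.99,  "day": date(2014, 5, 1)},
    {"customer": "acme",    "sku": "B-200", "qty": 1, "unit_price": 249.00, "day": date(2014, 5, 2)},
    {"customer": "globex",  "sku": "A-100", "qty": 7, "unit_price": 19.99,  "day": date(2014, 5, 2)},
    {"customer": "initech", "sku": "C-300", "qty": 2, "unit_price": 89.50,  "day": date(2014, 5, 3)},
]

# Each record on its own is worth little once the order ships, but in
# aggregate the same records answer business questions.
revenue_by_customer = defaultdict(float)
units_by_sku = defaultdict(int)

for t in transactions:
    revenue_by_customer[t["customer"]] += t["qty"] * t["unit_price"]
    units_by_sku[t["sku"]] += t["qty"]

top_customer = max(revenue_by_customer, key=revenue_by_customer.get)
top_sku = max(units_by_sku, key=units_by_sku.get)

print("Revenue by customer:", dict(revenue_by_customer))
print("Top customer:", top_customer)
print("Best-selling SKU:", top_sku)
```

At this toy scale the aggregation is trivial; the point of the curves above is that the payoff only arrives when the same kind of rollup can run continuously over every record you collect, and that is where infrastructure starts to matter.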

Now I’ll explain what this has to do with Micron. It’s all about scale.

Today these value curves can and do exist, and they can be handled. In the (near?) future, however, the raw scale of transactions will overwhelm today’s architectures (you can read more about this in my last blog post), and the velocity will be such that handling all of these transactions will only be possible with super-fast non-volatile memory fed by large pools of flash, fast enough to keep up with the data and large enough to contain it. Micron is developing an infrastructure that can cope with this data-intensive world, and we are partnering with software companies to make all of this possible. The good news is that you can start preparing now with today’s technologies and set yourself up to use your data, in aggregate, to better differentiate and grow your business in the new economy.
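For readers who like to see the shape of an idea in code, here is a purely illustrative sketch of that tiering concept: a small, fast tier that absorbs the high-velocity working set, backed by a much larger capacity tier that holds everything else. The class and method names are my own invention, not Micron’s architecture or any particular product:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small, fast tier in front of a large,
    slower capacity tier. Illustrative only."""

    def __init__(self, hot_capacity=1000):
        self.hot = OrderedDict()   # small, fast tier (most recently used items)
        self.cold = {}             # large capacity tier (everything else)
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        # New and updated data lands in the fast tier first.
        self.hot[key] = value
        self.hot.move_to_end(key)
        self._evict_if_needed()

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)   # keep hot data hot
            return self.hot[key]
        if key in self.cold:
            value = self.cold.pop(key)  # promote back on access
            self.put(key, value)
            return value
        raise KeyError(key)

    def _evict_if_needed(self):
        # When the fast tier overflows, demote the least recently used
        # items to the capacity tier instead of discarding them.
        while len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value

# Example: with a fast tier of only two slots, older orders spill to the
# capacity tier and are promoted back when they become interesting again,
# much like the revived value curve above.
store = TieredStore(hot_capacity=2)
store.put("order-1", {"total": 59.97})
store.put("order-2", {"total": 249.00})
store.put("order-3", {"total": 139.93})
print(store.get("order-1"))   # fetched from the capacity tier and promoted
```

None of this is what a real memory-and-flash hierarchy looks like internally, but it captures the design choice the paragraph describes: keep the working set where it can be reached fastest, and let a much larger, cheaper pool hold the rest.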