The selfish economics of Big Data

This is a pretty rough set of ideas. I’m putting them down here in an unpolished manner in the hope of getting feedback, and pushback, so I can finesse them a bit.

I think a quantified, connected society will have some interesting consequences for businesses based on managing uncertainty. As we get better at prediction, the economics of amortization get worse.

Big data is fundamentally about prediction. It reduces uncertainty by finding patterns, fitting curves, and correlating things. If I can use data to predict your chance of an accident, or the flu, or your likelihood of committing a terrorist act, I can mitigate it. So big data increases certainty.

But consider, for a moment, that there are entire industries predicated on amortizing risk across populations. Insurance is a good one; socialized medicine is another. There are plenty more: travel ticket pricing, credit cards, crop futures, and so on.

A mortgage is—literally—a bet on death. The word comes from the French “mort” and “gage.” It’s a bet that you’ll die before you pay something off.

Update: Chris Bidmead corrects me on this.

A gage isn’t a bet, it’s an undertaking, an “engagement”. And mort refers to the death of the gage: it will expire when a particular condition defined in the mortgage is fulfilled.

Prediction and amortization are fundamental opposites. If you know nothing about a population’s risk factors, you use a socialized medicine model where everyone is taxed equally. Risk is shared. If you know more, you have differentiated pricing based on pre-existing conditions, smoking, etc.

In any amortized model, even with some prediction and correlation, there is an element of uncertainty. Among a population of, say, fast drivers or smokers, you don’t know which individuals will have an accident or die of throat cancer.

So the “perfect” prediction would be an insurance policy tailored to one person. If I knew with absolute certainty the chance that you’d have an accident and the economic impact of that accident, then your insurance payments would simply be deposits into a savings account whose balance would equal the cost of the accident on the day it happened. Plus, of course, the insurer’s administration and profit margins.
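The arithmetic behind this shift can be sketched in a few lines. This is a toy model with made-up risk numbers and driver labels, not real actuarial math: with no information about individuals, everyone pays the population’s average expected loss; with perfect prediction, each person pays exactly their own expected loss.

```python
# Toy illustration (invented numbers): how better prediction moves
# premiums from one pooled price toward individual expected losses.

drivers = {
    # label: (true annual accident probability, cost of an accident in $)
    "cautious": (0.01, 20_000),
    "average":  (0.05, 20_000),
    "reckless": (0.20, 20_000),
}

# Amortized model: the insurer knows nothing about individuals,
# so everyone pays the population's average expected loss.
pooled_premium = sum(p * cost for p, cost in drivers.values()) / len(drivers)

# "Perfect" prediction: each person pays their own expected loss --
# effectively pre-funding their own accident, a tax on their life.
individual_premiums = {name: p * cost for name, (p, cost) in drivers.items()}

print(f"pooled premium: ${pooled_premium:,.0f}")
for name, premium in individual_premiums.items():
    print(f"{name}: ${premium:,.0f}")
```

The pooled price overcharges the cautious driver and undercharges the reckless one; as prediction improves, that cross-subsidy (the whole point of risk pooling) disappears.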

In other words, a tax on your life.

Obviously, no prediction system is perfect. But as we use data to make more and more accurate decisions based on the latest information (thanks to a connected, sensor-equipped world and Bayesian probability calculations) things get absurd. In the split-second before a collision, when someone slams on the brakes, we have a very good idea about their chances of being in an accident. Can we revoke their coverage or up their premiums?
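The split-second example above is just Bayes’ rule applied at the last possible moment. A minimal sketch, with invented probabilities standing in for whatever a sensor-equipped insurer would actually measure:

```python
# Toy Bayesian update (invented numbers): how one observation --
# "the driver just slammed the brakes" -- shifts the accident estimate.

prior = 0.001                  # P(accident in the next minute)
p_brake_given_accident = 0.9   # hard braking usually precedes a collision
p_brake_given_safe = 0.01      # hard braking is rare otherwise

# Bayes' rule: P(accident | braking)
evidence = (p_brake_given_accident * prior
            + p_brake_given_safe * (1 - prior))
posterior = p_brake_given_accident * prior / evidence

print(f"P(accident): {prior:.3%} -> {posterior:.1%}")
```

One observation moves the estimate by nearly two orders of magnitude, which is exactly why “can we up their premiums in that instant?” stops being a rhetorical question.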

Ultimately, industries that deal with mitigating uncertainty run head-on into the near-certainty of a more quantified, more instrumented, more accurately predicted world.

For decades, economists have talked about the “perfect market” for a commodity good, where supply and demand drive pricing, but we know it’s not a very useful model in the real world. Branding, human whim, cognitive bias and more play a far greater role in price elasticity and market share than we had thought.

At some point, arbitrage markets get deflated by data. The billions of dollars of taxi cab medallion speculation in New York—in some cases, a family inheritance handed from parent to child—are being rapidly devalued by a service like Uber that removes the “tax” a taxi dispatcher could extract.

I suspect that we’ll soon up-end many economic theories that have been accepted as true once we are able to collapse the inherent risk in an industry using data. Initiatives like Lean Startup are attempts to collapse that risk up front, leaving less reward for subsequent investors because the certainty of market demand already exists.

Consider one more example of risk removal: Kickstarter. The company has already funneled $200M to new projects, and should have funneled half a billion dollars to projects by next year. But none of these projects get funding until they have proven both consumer demand and a compelling message. So the old build-it-and-see-if-they-come model—and the investment returns and profit margins needed to justify the resulting risk—is somewhat outdated.

Does this mean that as we get better at predicting outcomes, we’re less likely to pool our resources, because the inherent uncertainty that justified pooling is gone? Is a predicted world an individualistic, libertarian world?