A few weeks ago, I was in the Bay Area buying supplies for a week of camping in the desert. I’d booked the flight into San Francisco several months beforehand, as well as the rental car, with my credit card. Knowing I’d be in the country for a while, I’d notified my bank, and Visa, that I’d be making purchases from California and Nevada.
As I made my way down towards the South Bay, I stopped at Trader Joe’s for provisions, and used my credit card. I also bought materials at Home Depot, and camping equipment at REI, without incident.
Then I went to Target in the East Bay, and bought food and some bedding. With $400 worth of goods in my shopping cart, I tried to pay with my card, and was refused.
Fortunately, there was a bank machine nearby, and I was able to withdraw cash and pay for the goods without further incident. As I pushed the cart towards my outsized SUV for loading, I called the 800 number on the back of my Visa card.
Why cards get refused
Anyone who’s had their card declined knows what happened next. The service agent verified my identity, asked me to review recent purchases, and then apologized profusely and removed the block on the card. But since I’m curious about the machinery behind such decisions, I dug a bit further.
I asked him whether I had correctly informed Visa of my trip—I had. I asked if he had access to past travel purchases showing I was in California—he did. And I asked him why the other purchases had been approved but this one hadn’t—apparently, Target stores in the East Bay are subject to credit card fraud, particularly in the purchase of personal electronics.
We’re wooed by the promise of Big Data to make our lives better. So beyond just venting about my first-world camping problems, I want to use the incident as an example of why, even armed with all the right data and tools, we don’t act.
Credit card fraud is a huge problem for the finance industry, costing $190B in 2011 (though much of this is online, not physical retail). So there’s certainly motivation to tackle the issue. You’d think it would be easy to predict well: there are few industries with this much authenticated data available. In my example above, there are plenty of details that could have been used to improve the prediction.
Based solely on the data that the credit card company, and Target, had:
- The system could have looked at the geography of purchases to construct a travel profile, seeing that I was headed towards Target and giving it confidence that I was a legitimate buyer.
- Moreover, it could have looked at the things I’d bought—mostly junk food and boxed wine—to see that I wasn’t buying the kinds of things fraudsters often acquire.
- Finally, it could have looked at the places I’d shopped before, seen that I’ve shopped at this Target in the past without filing a stolen card report, and approved the transaction.
These are all just software and data solutions. Granted, some of the approaches require co-operation between merchants—Target may not want to share a list of purchases with Visa, and might only deliver a “risk score” based on what was in my cart. But they’re feasible, and don’t require me, as the consumer, to do anything differently.
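To make the idea concrete, here’s a minimal sketch of how those three signals might combine into a single fraud-risk score. Every weight, threshold, and field name here is invented for illustration; a real scoring model would be learned from data, not hand-tuned like this.

```python
# Hypothetical fraud-risk scorer combining the three signals above.
# All weights and thresholds are illustrative, not from any real system.

def fraud_risk(purchase, history):
    """Return a risk score in [0, 1]; higher means more likely fraud."""
    score = 0.5  # neutral prior

    # Signal 1: does the purchase fit the cardholder's travel profile?
    if purchase["region"] in history["recent_regions"]:
        score -= 0.2

    # Signal 2: is the basket typical of fraud (e.g. personal electronics)?
    risky_items = {"electronics", "gift cards"}
    if risky_items & set(purchase["categories"]):
        score += 0.2
    else:
        score -= 0.1  # junk food and boxed wine look benign

    # Signal 3: has the cardholder shopped here before without incident?
    if purchase["merchant"] in history["known_merchants"]:
        score -= 0.2

    return max(0.0, min(1.0, score))

# The Target example: familiar region, benign basket, familiar merchant.
purchase = {"region": "East Bay",
            "categories": ["food", "bedding"],
            "merchant": "Target #1234"}
history = {"recent_regions": {"South Bay", "East Bay"},
           "known_merchants": {"Target #1234", "REI", "Trader Joe's"}}
print(fraud_risk(purchase, history))  # 0.0 -- well below the neutral 0.5
```

Note that the merchant-level risk the service agent described (East Bay Targets attract electronics fraud) could be layered on as a fourth signal, delivered by Target as a score rather than raw purchase data.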
There’s much more that could be done with innovation, if I’m willing to do new things:
- Using secondary measures, such as credit card PINs and chips, to increase the card company’s confidence that I am who I claim to be. Apparently these are coming to the US in 2015 or so, and are already common elsewhere.
- My phone, which is usually with me, might provide a second authentication factor, since it’s unlikely that a credit card thief has also stolen my locked smartphone.
- The Visa system could automatically contact me, rather than me calling Visa, and use IVR to verify the purchase or have me flag it as fraudulent. If they call me, it also means they don’t have to verify my identity.
Even where such systems exist, however, they break down. Visa’s online Verified by Visa program asks buyers to type in a secret password to confirm big-ticket purchases such as air travel, but even that doesn’t always lead to an authorization.
These solutions won’t be implemented, even if they could fix things, because they depend on the complex interactions between credit card companies and their customers. We’re all in a dance with our banks and service providers, and that dance seldom changes, because it follows the tune of a branch of economics called game theory.
Blame the game
Game theory tries to understand how competing parties find the solution that maximizes their outcomes (it’s sometimes simplified as “interactive decision theory”). It’s the stuff of A Beautiful Mind, has been used in everything from nuclear disarmament to trade talks, and it gets complex fast.
Most people are familiar with the classic example of the Prisoner’s Dilemma. In that thought experiment, two thieves are arrested and each is separately asked to turn the other in. The experiment has specific outcomes:
- If neither prisoner rats on his friend, they will each get one year in jail on a lesser charge.
- If one prisoner rats on the other, the rat will go free and the other prisoner will go to jail for five years.
- If both prisoners rat out the other, both go to jail for three years.
Here’s how a game theorist might show this predicament:
| | Prisoner A stays quiet | Prisoner A rats out B |
| --- | --- | --- |
| Prisoner B stays quiet | A gets 1 year<br>B gets 1 year<br>(total of 2 years’ jail time) | A gets 0 years<br>B gets 5 years<br>(total of 5 years’ jail time) |
| Prisoner B rats out A | A gets 5 years<br>B gets 0 years<br>(total of 5 years’ jail time) | A gets 3 years<br>B gets 3 years<br>(total of 6 years’ jail time) |
A rational outsider, looking at this, might say, “clearly they should both stay quiet, because then the total incarceration is 2 years, which is the best overall outcome.” But that’s not how selfish prisoners behave, even in this very simple example. Each prisoner weighs the options and concludes that ratting out the other serves his own interests no matter what his partner does: if the partner stays quiet, ratting cuts his sentence from one year to zero; if the partner rats, it cuts it from five years to three. Of course, because both reason this way, both go to jail for three years, the worst combined outcome.
This is true even if the thieves anticipate this state of affairs, and agree not to rat each other out in jail. In fact, that just gives each thief more confidence that turning in his partner is the right course of action.
Games like these have “equilibrium states.” They’re called Nash Equilibria, after John Nash, A Beautiful Mind’s troubled hero, who formalized them. While the game presented above is played once, such games can also be played repeatedly, and the parties will tend towards certain positions as they learn about their opponents over successive rounds.
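The table above can be checked mechanically. This short sketch enumerates every strategy pair in the Prisoner’s Dilemma and tests which ones are Nash equilibria, that is, states where neither prisoner can shorten his own sentence by changing strategy alone:

```python
# Prisoner's Dilemma payoffs as years in jail (lower is better).
# Keys are (A's move, B's move); values are (A's years, B's years).
QUIET, RAT = "quiet", "rat"
YEARS = {
    (QUIET, QUIET): (1, 1),
    (QUIET, RAT):   (5, 0),
    (RAT,   QUIET): (0, 5),
    (RAT,   RAT):   (3, 3),
}

def is_nash(a, b):
    """True if neither prisoner can cut his own sentence by deviating alone."""
    a_years, b_years = YEARS[(a, b)]
    best_a = min(YEARS[(alt, b)][0] for alt in (QUIET, RAT))
    best_b = min(YEARS[(a, alt)][1] for alt in (QUIET, RAT))
    return a_years == best_a and b_years == best_b

equilibria = [(a, b) for a in (QUIET, RAT) for b in (QUIET, RAT)
              if is_nash(a, b)]
print(equilibria)  # [('rat', 'rat')] -- mutual betrayal is the only equilibrium
```

Mutual silence fails the test because either prisoner, holding the other fixed, does better by defecting; mutual betrayal is the only state that survives.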
Now let’s think about credit card fraud, using the same four-quadrant game description, and replacing the two prisoners with Visa and myself.
I should point out that this example isn’t a classic game (Visa and I aren’t playing the same game “against” one another), but there is an interaction. Economists will probably yell at me for having bastardized their models and contributed to the misunderstanding of true game theory.
| | Credit card company blocks a legitimate transaction (false positive) | Credit card company allows a fraudulent transaction (false negative) |
| --- | --- | --- |
| I continue to use the credit card company | Visa: $5.50 support call.[i]<br>Me: Inconvenience, embarrassment. | Me: Slightly improved loyalty. |
| I switch credit cards | Visa: Thousands.[iii]<br>Me: Inconvenience; brief sense of self-righteousness; same experience elsewhere. | Me: Inconvenience; blissful unawareness that Visa took a bullet for me. |
It doesn’t take much to realize that the equilibrium in this case is the false positive—blocking my card. There are three big reasons for this:
- Switching providers is hard. Credit cards are often tied to loyalty programs, bank accounts, and so on. They’re entered for recurring payments with dozens of providers. And I may be carrying a balance or have problems getting approval with another company.
- Switching won’t fix things. The Big Four companies—Visa, Mastercard, Amex, and Discover—do little to differentiate themselves on this subject. So I have no expectation that switching will make a difference to the annoyance of false positives. Oligopolies don’t need to collude, because game theory makes them converge on a steady state anyway.
- Visa has no reason to change. There’s very little impetus for Visa to reduce its rate of false positives, even though data analysis and ubiquitous computing have given it an arsenal of new tools. If I switched providers, it’s unlikely that Visa would link the cause (putting a hold on my card) with the effect (my changing card providers).
And this is why, for Big Data (or any other promising innovation) to be a game changer, we need some player in the market to literally change the game.
Consider what would happen if one of the credit card companies said they’d pay you $50 every time they falsely blocked your card. The equilibrium of our stand-off would change significantly. There would be strong incentives to fix the false positive problem, and the company could use this reimbursement as a campaign—telling customers, “your time is valuable to us, and we’re going to repay you for wasting it.”
Or consider what would happen if switching costs were slashed, and we all used a PayPal account with a mobile payment system. The losses due to a newly nomadic customer base would also change the economics of our stand-off.
Once the rewards and costs had changed, it would immediately trigger an improvement in data analysis that would lower the false positives. It might lead to innovation in mobile phone apps, such as a “big purchase authorization” tool that would confirm purchases with you immediately, or even in advance.
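One way to see why a $50 refund would change behavior: a card company should block a transaction only when the expected loss of allowing it exceeds the expected cost of wrongly blocking it. Raising the price of a false positive moves that break-even point. The figures below are invented purely to illustrate the mechanism; only the $5.50 support-call cost comes from the table above.

```python
# Hypothetical decision rule: block iff the expected fraud loss from
# allowing the purchase exceeds the expected cost of wrongly blocking it.

def should_block(p_fraud, fraud_loss, false_positive_cost):
    """p_fraud is the model's estimated probability of fraud."""
    expected_loss_if_allowed = p_fraud * fraud_loss
    expected_cost_if_blocked = (1 - p_fraud) * false_positive_cost
    return expected_loss_if_allowed > expected_cost_if_blocked

p, loss = 0.05, 400.0            # a $400 cart the model is 5% suspicious of

# Today: a false positive costs Visa only a $5.50 support call.
print(should_block(p, loss, 5.50))          # True  -> card gets blocked

# With a $50 refund per wrongful block, the same transaction goes through.
print(should_block(p, loss, 5.50 + 50.0))   # False -> transaction allowed
```

The model hasn’t improved at all here; only the costs have changed. But once false positives are expensive, the company suddenly has every incentive to sharpen the model itself.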
Disruption isn’t just for slide decks
When we talk about market disruption, we often mean making a significant shift in the underlying equilibrium that buyers and sellers have reached. This might be a drop in costs, or an increase in revenues, or the commoditization of a previously lucrative part of the marketing mix.
Most marketing efforts don’t qualify as disruptive, despite what their proponents would like. They don’t alter the dynamics of an industry enough to tip the scales to a new equilibrium state. The old rules remain; the old Gods prevail.
Gartner makes a lot of noise about its Hype Cycle, the apparently inevitable peaks and valleys through which any technology must navigate before it becomes mainstream. Technologies languish in Gartner’s Trough of Disillusionment because of a market’s resistance to shifts in equilibrium. It’s unwilling to change the rules of its game; why would it, when it has engineered those rules to reinforce the equilibrium state in which it finds itself?
As a result, innovations that might improve a product or service—such as smart apps, mobile phone authentication, or better predictive modeling—take a long time to reach us.
In game theory, equilibria change for several reasons:
- A player tries something new, opening up alternate responses. There may be two equilibria in a game, and the change allows the players to shift from the old to the new state. Car drivers play a game each time they choose which side of the road to drive on—and if you drive on the right, it’s best for oncoming traffic to drive on the right as well.
- Players introduce punishment. If the incarcerated thief has friends on the outside, the thief who earned his freedom by turning in his partner may face retribution. This changes the rewards and costs of the game, shifting the equilibrium. Behavioral economists know that the way to maximize the outputs of a system, and avoid the “tragedy of the commons” of shared resources, is to introduce some form of punishment for selfish behavior.
- Players consider the rules across many games. If the burglars intend to work together next time, each may stay quiet to preserve the trust their partnership depends on. This is true for any business: restaurants would behave very differently if every diner visited only once and never told anyone else about their experience.
It takes a smart incumbent looking for a way to distinguish itself from competitors, or a startup eager to disrupt the status quo, to adjust the fundamental economics of an existing game. But once that tipping point is reached, the market reacts fast: game theory shows that players will converge rapidly on the new equilibrium. Netflix, the digital camera, ATMs, and SaaS are great examples of this kind of convergence.
What’s more, as we move to digital channels for front-office communication with customers, and data-driven back-office processes, the coefficient of friction of that change drops dramatically. PayPal and Google Wallet can insert themselves between the credit card and the consumer, offering value-added services, and then simply replace the credit card with a PayPal or Google balance.
It’s not technology that’s holding back innovation. It’s not even corporate culture, or the fear of what’s new. It’s the equilibria in which many industries find themselves. An outside observer can clearly see that there are better ways to solve problems. But until businesses and consumers change the costs and rewards, technologies like Big Data will remain prisoners of the status quo.
Furthermore, until marketers truly understand the dynamics of the interaction between themselves and their customers—and make changes big enough to move the equilibrium those interactions have reached—they won’t disrupt markets.