Recently, a man took his GoPro for a walk with a Neural Network. The program captioned what it saw: bikes, faces, skateboards. Boy, did it like the skateboards.
We’re at a critical juncture in computers’ ability to understand the real world. Not just to describe it, as in the video above, but to extract and quantify emotional data and to augment it. Consider some other examples of understanding that take the qualitative and subjective and make it quantitative and objective:
- Google taught a network to identify cats in videos;
- Tesla made cars drive themselves. But it made that software get better—avoiding sharp turns, changing lanes more smoothly, and so on—by pushing out updates that learned from humans on the road;
- Text analysis determined whether research reports were genuine or fake just by the words researchers used;
- Affectiva can extract micro-expressions from human faces in real time, quantifying emotions;
- Sociometric detects stress levels from your voice—fifteen minutes before a spike in cortisol in your saliva betrays your annoyance;
- A 20-year-old Casanova, armed with Eigenfaces facial recognition software and a chat engine, automatically scanned prospective dates on Tinder to find those he’d find most attractive;
- Taser’s evidence.com provides a complete, encrypted chain of custody for taking video evidence and metadata from a police officer’s headset to the courtroom;
- 70 percent of police forces use automated license plate readers.
Put these things together—say, a police car that tells officers what it sees and the likelihood that those people are thinking of mischief—and the results are downright transformative.
Today, Big Data is a big deal. Pundits get thrilled about abundant, varied data as a vital resource—the “new oil.” We speak of “data lakes,” brimming with knowledge, ready to drink in huge gulps of predictive insight or sips of tactical advantage.
We’ve been excited about new resources before. At the dawn of the atomic age, for example, the world went wild about radiation: Radium underpants, nuclear cigarettes, radioactive suppositories.
At Strata+Hadoop World in New York this October, Maciej Ceglowski asked us to set aside our excitement for twenty minutes and consider, instead, the toxic spills that natural resources can cause. What if someone in the world, right now, is making the big data equivalent of radioactive underwear?
We may not all be working on glowing underpants. But much of what’s happening in data science, and indeed in technology as a whole, is downright explosive. The next ten years of technology will build on three things: collecting all this data; analyzing it automatically; and presenting it to us through increasingly intimate interfaces that make it feel like we’ve grown new senses.
Easy, ubiquitous data collection has launched a rash of oddly specific, often ridiculous, inventions: snack bowls that keep pets company; forks that yell at you when you eat too much; and so on. But while much has been made of data collection, big data is too big to analyze by hand (as Christopher Nguyen has remarked). That means it’s collected primarily for machines to analyze. So it’s time to think about what the machines can do.
For some foreshadowing, consider this restaurant, which was losing money, and hired consultants to help figure out what was wrong. The consultants reviewed security tapes from recent weeks, but they also looked at tapes from several years ago—and learned a lot about how dining had changed. (Seriously, go read it. I’ll wait. It’s worth it.)
Now think about what happens when this is widespread and automatic. Video today is relatively inaccessible, unless a human decides to review it as the restaurant did. But once an algorithm can watch a video, those archives are unlocked. They become hard, quantifiable data. For starters, point it at IMDB and all the movies in the world, and think how that changes Netflix.
They also become something new. Machines, if they can interpret art, will be interpreting it on a scale unfathomable to humans, and their interpretations could impact the production of future art. Already, new kinds of machine learning have algorithms inventing new flowers and new alphabetic characters.
Now give that algorithm more detail. Let it look up license plates and do facial recognition. Let it analyze emotion from voice and expression. Suddenly, it can extract far more from videos we thought were long buried than even a human could: the prospective criminal at a crosswalk; the perpetrator of an unreported assault; the groom lying during his wedding toast; the gaffes and continuity errors.
We can also extract more metadata from the other breadcrumb trails we’ve left online. If an algorithm can tell which academic reports are fraudulent, why can’t it tell when we’re boasting, or downright duplicitous?
Parents caution their children, “be careful what you put online.” We’re mainly worried about the Big Vices—sex, drugs, rock and roll. We think about what a human might see, and current cultural norms. But every bit we leave for the machine to analyze yields metadata about who we really are. When an algorithm can tell whether you’re lying, it’s not just the college binge drinking that gets you in trouble—it’s everything you’ve ever written, everywhere.
Maybe there’s a new career in teaching people to fool machines. After all, stylists are already devising makeup and hairstyles to “reclaim privacy” by fooling facial recognition.
And when the other party in a videoconference has machine-assisted emotion detection, your poker face is going to be a necessity in negotiations.
Smart agents will change what we look like to the world. They’ll unearth our past and serve it to our future, and this will dramatically alter what we consider “decent.” We’ll meet the machines half-way: when everyone’s a bit of a freak, we’ll judge freakishness less harshly.
But until then, algorithms will make us all squirm.