First impressions can be terribly important, sometimes to a fault. Anyone who's lived long enough can attest to that. So what will happen when we train machines to see the world the way we do?
Computers are now getting in on the game - starting with faces - thanks to the work of Mel McCurrie at the University of Notre Dame and five others.
The team wanted to study how deep learning could be used to predict first impressions, training an algorithm to judge someone's trustworthiness, dominance, IQ, and age from the face alone.
The results showed that the algorithm made roughly the same first-impression judgments that human annotators did. What's interesting is that it based those judgments on the same facial features that humans attend to.
“These observations indicate that our models have learned to look in the same places that humans do, replicating the way we judge high-level attributes in each other,” say McCurrie and co.
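In spirit, the approach amounts to training a model to map face-derived features to human-annotated attribute scores. Here is a toy sketch in plain Python; everything in it (the feature count, the linear model, the synthetic "annotator" rule) is invented for illustration, and the actual work used deep convolutional networks trained on face images rather than anything this simple:

```python
# Illustrative sketch only: a toy multi-output linear model trained on
# hypothetical "face feature" vectors to mimic human-style ratings of
# trustworthiness, dominance, IQ, and age. Names and data are invented.
import random

random.seed(0)
ATTRS = ["trustworthiness", "dominance", "IQ", "age"]
N_FEATURES = 8  # hypothetical face descriptors

# Synthetic dataset: each "face" is a feature vector; target scores come
# from a hidden linear "annotator" rule plus a little noise.
true_w = [[random.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in ATTRS]

def rate(x, w):
    # Score a face: weighted sum of its features.
    return sum(wi * xi for wi, xi in zip(w, x))

faces = [[random.gauss(0, 1) for _ in range(N_FEATURES)] for _ in range(200)]
labels = [[rate(x, w) + random.gauss(0, 0.05) for w in true_w] for x in faces]

# Train one linear predictor per attribute with plain stochastic
# gradient descent on squared error.
weights = [[0.0] * N_FEATURES for _ in ATTRS]
lr = 0.01
for _ in range(300):
    for x, y in zip(faces, labels):
        for a in range(len(ATTRS)):
            err = rate(x, weights[a]) - y[a]
            for j in range(N_FEATURES):
                weights[a][j] -= lr * err * x[j]

# On a new face, the trained model now tracks the annotators' judgments.
new_face = [random.gauss(0, 1) for _ in range(N_FEATURES)]
preds = {attr: rate(new_face, w) for attr, w in zip(ATTRS, weights)}
human = {attr: rate(new_face, w) for attr, w in zip(ATTRS, true_w)}
for attr in ATTRS:
    print(f"{attr}: model {preds[attr]:+.2f} vs annotator {human[attr]:+.2f}")
```

The point the sketch makes is the one in the quote above: the model has no notion of trustworthiness or IQ; it simply learns whatever mapping from facial features to scores its human annotators exhibited, biases included.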
But what are the ethics of training computers to see as we do? Will they have the same biases, the same feelings, the same misconceptions about us that we already have about each other? Should we be training them to see as we do - or better? We won't know until it happens. But you might get a sneak peek by watching Westworld.
The results make for interesting reading. Of course, the machine reproduces the behavior it has learned from humans: presented with a face, it gives more or less the same values for trustworthiness, dominance, age, and IQ as a human would.