Ubiquitous facial recognition technology can expose individuals’ political orientation, as faces of liberals and conservatives consistently differ. A facial recognition algorithm was applied to naturalistic images of 1,085,795 individuals to predict their political orientation by comparing their similarity to faces of liberal and conservative others. Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).
January 11, 2021
Facial recognition technology can expose political orientation from naturalistic facial images
[Via Stanford Scientist Can Tell If You’re A Liberal Just By Looking At Your Face
I have often thought I could tell the difference between gun people and anti-gun people just by looking at pictures of them. Self defense instructor Greg Hamilton believes, and teaches, something similar.
The research paper cited above is saying that such a thing is possible.
Now just imagine what big tech/government could do with this technology.
We live in interesting times.—Joe]
I remembered reading something, so a quick online search got me these articles from over the years:
Most scientists ‘can’t replicate studies by their peers’
Scientific Findings Often Fail To Be Replicated, Researchers Say
Why Less Than 30% of Science Articles are Reproducible
In other words – most ‘research’ is Bullshit
Other than the ability to sense fear, which may be an ability that is a ‘more or less’ thing with people, all of Kosinski’s research is, IMO, Phrenology (bullshit) reinvented.
Did you read his entire paper?
Not the entire paper, Joe.
Since I’m neither an academic nor an intellectual, I can only stand to deal with their ‘output’ for a short period of time.
As you quoted:
Predictability of political orientation from facial images does not necessarily imply that liberals and conservatives have innately different faces. While facial expression or head pose, facial hair, and eyewear were not particularly strongly linked with political orientation in this study, it is possible that a broader range of higher-quality estimates of those and other transient features could fully account for the predictability of political orientation.
That’s quite squishy of him, isn’t it?
Want me to have some respect for this?
Have it not just peer reviewed, but show its accuracy by, of course, repeating the experiment several times and getting results that match.
He was speculating on what was causing the algorithms (yes, multiple algorithms) to be successful in predicting political inclination. This is not intended to be something conclusive about the completed research. It is intended to be a suggestion for further research.
They tested three different algorithms: 1) Cosine similarity ratio; 2) Logistic regression; and 3) Neural Networks. “All three methods yielded similar classification accuracies.”
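The first of those methods can be sketched in a few lines of Python. This is only an illustration of the cosine-similarity idea, not the paper’s actual pipeline: the descriptors below are tiny made-up vectors, whereas the real ones come from a face-recognition network and have thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors (1.0 = identical direction)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(face, liberal_mean, conservative_mean):
    """Label a face by whichever group's mean descriptor it is more similar to."""
    sim_lib = cosine_similarity(face, liberal_mean)
    sim_con = cosine_similarity(face, conservative_mean)
    return "liberal" if sim_lib > sim_con else "conservative"

# Hypothetical 4-dimensional descriptors; real face descriptors are far larger.
liberal_mean = np.array([0.9, 0.1, 0.3, 0.2])
conservative_mean = np.array([0.1, 0.9, 0.2, 0.3])
face = np.array([0.8, 0.2, 0.3, 0.2])  # this face sits closer to the liberal mean

print(classify(face, liberal_mean, conservative_mean))  # → liberal
```

Logistic regression and neural networks do the same job with a learned decision boundary instead of a simple similarity comparison, which is presumably why all three landed at similar accuracies.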
They used facial samples from both “a popular dating website” (977,777 users) and Facebook (108,018 users). The results were very close.
I’m inclined to believe their conclusions.
Here’s another way to look at it… Do you believe you, or perhaps at least some other people, can distinguish someone else’s sexual orientation, with better than chance accuracy, just from their appearance? Lots of people believe this. They sometimes call this ability “Gaydar”. And computer algorithms have been shown to do the same with better than chance accuracy.
It does not seem to be a big stretch to claim political orientation can be determined as well.
This also supports r/K political theory, as it posits there is a genetic component to political leanings. Genetics affect appearance.
Or, as some folks put it, “physiognomy is real” when speaking of “pedo-face.”
Obviously not 100% reliable, but if your red-flag detector goes off when you see someone for the first time, listen to it until you can confirm one way or the other.
Now, if only we could use some low power, high frequency imaging radar to exquisitely map the dimensions of people’s craniums, down to the most minute perturbations.
Phrenology, a perfect science that regrettably came too soon. The Victorians simply didn’t have the technology to do it justice.
Of course, from the science of phrenology comes the practical application of retro-phrenology, which gives us the techniques to amend a person’s personality by carefully inducing or suppressing the relevant cranial features. For this purpose, I will procure an array of medical grade mallets of various weights, geometries, and surface firmness of the strike face. For, I vouchsafe unto thee, naught will reform a recalcitrant ne’er-do-well quite like hitting him in the head until his deviance dissipates.
But if you were somewhat serious about this, it means you didn’t read the paper:
Bright blue or green hair? Man bun? Braided beard? These are easy tells.
You say that often enough that I’m beginning to wonder if you’re part Chinese…
But yes, we do – and I remember reading both Snow Crash and some of the Vorkosigan books, and even some of the Vernor Vinge books, which speculated along these lines.
Going masked and gloved in public at all times seems to be in our future.
The Real Kurt
Interestingly, both political and sexual orientation can be predicted with similar accuracy (>70%), which is significantly more accurate than several other things that have been tested in this way.
Still wrong ~25% of the time, though. I wonder how they determine the “correct” answer–I suspect through self-reporting. I wonder how accurate that is, and I wonder if the “errors” are symmetric–i.e., is it more likely to guess wrong for a self-reported liberal or for a self-reported conservative? Could the wrong guesses be a proxy measure for social pressure to present oneself in a certain way, particularly on a dating site?
Some people’s political views change as they get older. Do the algorithms pick up on this effect?
I don’t think they mentioned this in the paper, but as people get older they tend toward “conservative”. So, if a person knew the dataset was 50-50 liberal-conservative, flipped a coin for each face, and occasionally overrode the coin flip when a particularly young or old face showed up, he could get better than 50% accuracy.
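That coin-flip-with-an-age-override strategy is easy to simulate. The sketch below invents its own ground truth (the chance of being conservative rises linearly with age) purely to show the mechanism; the thresholds and the strength of the age correlation are assumptions, not figures from the paper.

```python
import random

random.seed(1)

def simulate(n=100_000, margin=15):
    """Guess by coin flip, but override the flip for particularly
    young (guess liberal) or old (guess conservative) faces."""
    correct = 0
    for _ in range(n):
        age = random.randint(18, 80)
        # Invented ground truth: probability of "conservative" rises with age.
        is_conservative = random.random() < (age - 18) / 62
        if age < 18 + margin:
            guess = False  # young face: guess liberal
        elif age > 80 - margin:
            guess = True   # old face: guess conservative
        else:
            guess = random.random() < 0.5  # plain coin flip
        correct += guess == is_conservative
    return correct / n

print(simulate())  # comfortably above 0.5 under these assumptions
```

Under these made-up numbers the override pushes overall accuracy into the high 60s, so even a crude age cue beats chance; the question is whether the paper’s 72% is mostly cues like this or something face-specific.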
What they do say in the paper that may answer your question:
The accuracy dropped by 2% to 5%, depending upon which dataset and which country the data came from.
So, what that tells me is that age, gender, and/or ethnicity contribute to the accuracy of the algorithm but are not major contributors.
I don’t think that’s quite what I’m getting at. Though it might be close, and I may need to think harder about the connection.
My question boils down to, is it picking up on some (likely unconscious) behavioral “tic”–which might change over time, as one becomes more liberal or more conservative? Or is it picking up on something more “hardwired” that might be constant across time? In other words, is the algorithm telling me the past, the present, or the future?
If I give the algorithm 2 pictures of the same person, one taken when he’s 20 (self-proclaimed liberal) and another when he’s 50 (self-proclaimed conservative), do I get the same result or different results? Does that work (almost) every time?
Thanks for clarifying.
I’m not certain, but I don’t think there is enough information in the paper, and probably in the configuration of their experiment, to answer your question.