MIT has developed a new algorithm that can tell a fake smile from a genuine one. Beyond making artificial intelligence that can tell when we're humouring its awful knock-knock joke, it's thought the work could be used to help teach people who struggle to interpret facial expressions, such as those with autism.
To create the algorithm, subjects were asked to fill in a survey. When they hit submit, the form erased everything they'd entered, and many people then smiled, an expression that masks frustration. They were also asked to mime frustration, and were recorded laughing and smiling at a short video clip of a baby.
By tracking the facial movements of those involved, MIT could spot differences between frustrated smiles and genuine, happy ones. The frustrated smile appears quickly and dissipates just as fast, whereas the genuine one builds much more slowly. The researchers also noted that different muscle groups are used depending on the smile type: a genuine smile engages involuntary muscles that crinkle the eyes, whereas a faux smile will often just lift the corners of the mouth using voluntary movements.
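The two cues described above, onset speed and eye-crinkle involvement, can be sketched as a toy classifier. This is purely illustrative: the function name, signals, thresholds, and frame rate are assumptions for the sake of the example, not MIT's actual model, which was trained on recorded facial-movement data.

```python
# Illustrative sketch only: names and thresholds are assumptions,
# not the MIT system. It encodes the two cues from the study:
# smile onset speed, and involuntary eye-crinkle activity.

def classify_smile(mouth_intensity, eye_crinkle_intensity, fps=10):
    """Label a smile 'genuine' or 'frustrated' from two per-frame signals.

    mouth_intensity: lip-corner pull per frame, scaled 0..1
    eye_crinkle_intensity: eye-region muscle activation per frame, 0..1
    """
    peak = max(mouth_intensity)
    # Frames until the smile reaches ~90% of its peak strength.
    onset_frame = next(i for i, v in enumerate(mouth_intensity)
                       if v >= 0.9 * peak)
    onset_seconds = onset_frame / fps

    # Genuine (Duchenne) smiles build slowly and crinkle the eyes;
    # frustrated smiles flash on quickly with little eye involvement.
    eyes_active = max(eye_crinkle_intensity) > 0.3
    slow_onset = onset_seconds > 0.5

    return "genuine" if (eyes_active and slow_onset) else "frustrated"


# A smile that flashes on within a tenth of a second, eyes flat:
fast = classify_smile([0.0, 0.9, 0.8, 0.3, 0.0], [0.05] * 5)
# A smile that ramps up over nearly a second, with eye crinkling:
slow = classify_smile([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
                      [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.5, 0.5, 0.5])
print(fast, slow)  # frustrated genuine
```

A real system would extract these signals from tracked facial landmarks over video rather than hand-supplied lists, and would learn the decision boundary from data instead of fixed thresholds.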
The study concluded: "These data can then be used to develop automated systems that recognise spontaneous expressions with accuracy higher than the human counterpart. We hope that our work will motivate the field to move beyond the trend of working with 'six basic emotions', move beyond teaching people that 'smiles mean happy' and continue to develop methods to interpret challenging spontaneous data that contain complex patterns of expression."