An anonymous reader quotes O'Reilly.com's interview with the CEO of Affectiva, an emotion-measurement technology company that grew out of MIT's Media Lab.
We can mine Twitter, for example, for text sentiment, but that only gets us so far. About 35-40% of emotional meaning is conveyed in tone of voice — how you say something — and the remaining 50-60% is read through facial expressions and the gestures you make. Technology that reads your emotional state, for example by combining facial and voice expressions, represents the emotion AI space. Facial and voice expressions are the subconscious, natural way we communicate emotion, which is nonverbal and which complements our language… Facial expressions and speech actually deal more with the subconscious, and are more unbiased and unfiltered expressions of emotion…
Rather than encoding specific rules that define when a person is making a particular expression, we instead focus our attention on building intelligent algorithms that can be trained to recognize expressions. Through our partnerships across the globe, we have amassed an enormous emotion database from people driving cars, watching media content, etc. A portion of the data is then passed on to our labeling team, who are certified in the Facial Action Coding System… We have gathered 5,313,751 face videos, for a total of 38,944 hours of data, representing nearly two billion facial frames analyzed.
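The approach described above, learning to recognize expressions from labeled examples rather than hand-coding rules, can be illustrated with a minimal supervised-learning sketch in Python. Everything here is a hypothetical stand-in: the per-frame feature vectors, the label names, and the simple logistic-regression model are illustrative assumptions, not Affectiva's actual pipeline or data.

```python
# Illustrative sketch only: train a classifier on labeled facial-expression
# features instead of writing rules by hand. All data below is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical setup: each video frame has already been reduced to a
# 20-dimensional feature vector (e.g., facial-landmark measurements).
n_frames, n_features = 5000, 20
X = rng.normal(size=(n_frames, n_features))

# Hypothetical per-frame labels of the kind a FACS-certified annotator
# might assign.
labels = np.array(["neutral", "smile", "brow_furrow"])
y = labels[rng.integers(0, len(labels), size=n_frames)]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A simple multiclass classifier stands in for whatever model a real
# emotion-AI system would use (deep networks, in practice).
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

Because the features here are random noise, the printed scores are meaningless; the point is only the workflow of labeled frames going in and a trained expression classifier coming out.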
They got their start testing advertisements and are now working with a third of the Fortune 500. ("We've seen that pet care and baby ads in the U.S. elicit more enjoyment than cereal ads — which see the most enjoyment in Canada.") One company even combined Affectiva's technology with Google Glass to help autistic children learn to recognize emotional cues.
Read more of this story at Slashdot.