CBS Vendor Affectiva Wins Patent for Emotion Recognition

Waltham, Mass.-based startup Affectiva won a U.S. patent for an invention that measures facial expressions to help advertisers and programmers create better commercials, movie trailers and TV shows.

Affectiva, whose customers include CBS and Unilever, applied for the patent in February 2012. Chief Science Officer Rana el Kaliouby is named as lead inventor on the patent, titled "Video Recommendation Based on Affect."

Abstract: Analysis of mental states is provided to enable data analysis pertaining to video recommendation based on affect. Video response may be evaluated based on viewing and sampling various videos. Data is captured for viewers of a video where the data includes facial information and/or physiological data. Facial and physiological information may be gathered for a group of viewers. In some embodiments, demographics information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.
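
The abstract describes capturing viewers' facial and physiological responses, aggregating them, and using the aggregate to rank videos. As a rough illustration of that idea (not the patented method), the Python sketch below aggregates hypothetical per-viewer affect scores and ranks videos by mean response; every name and score is made up.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-viewer samples: (video_id, affect_score) pairs, where
# affect_score stands in for whatever valence/engagement measure is
# inferred from facial and physiological data.
responses = [
    ("trailer_a", 0.82), ("trailer_a", 0.64),
    ("trailer_b", 0.40), ("trailer_b", 0.55),
    ("trailer_c", 0.91),
]

def rank_videos(responses):
    """Aggregate affect scores per video and rank videos by mean response."""
    by_video = defaultdict(list)
    for video_id, score in responses:
        by_video[video_id].append(score)
    # Higher aggregated affect -> higher rank.
    return sorted(by_video, key=lambda v: mean(by_video[v]), reverse=True)

print(rank_videos(responses))  # ['trailer_c', 'trailer_a', 'trailer_b']
```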

Claims:

  1. A computer implemented method for affect based recommendations comprising: playing a first media presentation to an individual; capturing mental state data, wherein the mental state data includes facial data, for the individual, while the first media presentation is played; inferring mental states, using one or more processors, based on the mental state data which was collected and analysis of the facial data for at least brow furrows; correlating the mental state data which was captured for the individual to mental state data collected from other people who experienced the first media presentation wherein the correlating is based on identifying and using maximally dissimilar responses during part of the correlating; ranking the first media presentation relative to another media presentation based on the mental state data which was captured, wherein the ranking is for the individual based on the mental state data captured from the individual; and recommending a second media presentation to the individual based on the mental state data for the individual which was captured wherein the recommending the second media presentation to the individual is further based on the correlating between the individual and the other people.
  2. The method of claim 1 further comprising analyzing the mental state data to produce mental state information.
  3. The method according to claim 1 wherein the first media presentation includes one of a group consisting of a movie, a television show, a web series, a webisode, a video, a video clip, an electronic game, an e-book, and an e-magazine.
  4. The method according to claim 1 wherein the second media presentation includes one of a group consisting of a movie, a television show, a web series, a webisode, a video, a video clip, an electronic game, an e-book, and an e-magazine.
  5. The method according to claim 1 wherein the first media presentation is played on a web-enabled interface.
  6. The method according to claim 1 wherein the first media presentation includes one of a YouTube™ video, a Vimeo™ video, and a Netflix™ video.
  7. The method according to claim 1 wherein the second media presentation includes one of a YouTube™ video, a Vimeo™ video, and a Netflix™ video.
  8. The method of claim 1 wherein the ranking is based on anticipated preferences for the individual.
  9. The method according to claim 1 wherein the mental state data is captured from multiple people and further comprising aggregating the mental state data from the multiple people.
  10. The method of claim 9 further comprising ranking the first media presentation relative to another media presentation based on the mental state data which was aggregated from the multiple people.
  11. The method of claim 9 wherein the analysis is performed on an analysis server.
  12. The method of claim 11 wherein the analysis server provides aggregated mental state information for the multiple people.
  13. The method of claim 9 wherein the aggregating recognizes trends for the individual and determines correlation vectors for the individual and the multiple people.
  14. The method of claim 13 wherein correlation is determined using a weighted distance evaluation between two vectors of the correlation vectors.
  15. The method of claim 14 wherein the recommending is based on one of the two vectors being a sufficiently small distance from another of the two vectors.
  16. The method of claim 14 wherein the correlation is further based on a weighted Euclidean or Mahalanobis distance.
  17. The method of claim 1 wherein the mental state data further includes physiological data or actigraphy data.
  18. The method of claim 17 wherein the physiological data includes one or more of electrodermal activity, heart rate, heart rate variability, skin temperature, and respiration.
  19. The method of claim 1 wherein the facial data includes information on one or more of a group consisting of facial expressions, action units, head gestures, smiles, squints, lowered eyebrows, raised eyebrows, smirks, and attention.
  20. The method according to claim 1 wherein the mental states include one of a group consisting of sadness, happiness, frustration, confusion, disappointment, hesitation, cognitive overload, focusing, being engaged, attending, boredom, exploration, confidence, trust, delight, valence, skepticism, and satisfaction.
  21. The method according to claim 1 wherein the playing of the first media presentation is done on a mobile device and further comprising recording of facial images with the mobile device as part of the capturing of the mental state data.
  22. The method of claim 1 wherein the correlating is based on identifying similar likes.
  23. A computer program product embodied in a non-transitory computer readable medium comprising: code for playing a first media presentation to an individual; code for capturing mental state data, wherein the mental state data includes facial data, for the individual while the first media presentation is played; code for inferring mental states, executed on one or more processors, based on the mental state data which was collected and analysis of the facial data for at least brow furrows; code for correlating the mental state data which was captured for the individual to mental state data collected from other people who experienced the first media presentation wherein the correlating is based on identifying and using maximally dissimilar responses during part of the correlating; code for ranking the first media presentation relative to another media presentation based on the mental state data which was captured; and code for recommending a second media presentation to the individual based on the mental state data for the individual which was captured wherein the recommending the second media presentation to the individual is further based on the correlating between the individual and the other people.
  24. A computer system for affect based recommendations comprising: a memory for storing instructions; one or more processors attached to the memory wherein the one or more processors are configured to: play a first media presentation to an individual; capture mental state data, wherein the mental state data includes facial data, for the individual while the first media presentation is played; infer mental states, using the one or more processors, based on the mental state data which was collected and analysis of the facial data for at least brow furrows; correlate the mental state data which was captured for the individual to mental state data collected from other people who experienced the first media presentation wherein correlation is based on identifying and using maximally dissimilar responses during part of the correlation; rank the first media presentation relative to another media presentation based on the mental state data which was captured; and recommend a second media presentation to the individual based on the mental state data for the individual which was captured wherein recommendation of the second media presentation to the individual is further based on correlation between the individual and the other people.
  25. A computer implemented method for affect based ranking comprising: displaying a plurality of media presentations to a group of people; capturing mental state data, wherein the mental state data includes facial data, from the group of people while the plurality of media presentations is displayed; inferring mental states, using one or more processors, based on the mental state data which was collected and analysis of the facial data for at least brow furrows; correlating the mental state data captured from the group of people who viewed the plurality of media presentations wherein the correlating is based on identifying and using maximally dissimilar responses during part of the correlating; ranking the first media presentation relative to another media presentation based on the mental state data which was captured; and ranking the media presentations relative to one another based on the mental state data.
  26. The method according to claim 25 further comprising tagging the plurality of media presentations with mental state information based on the mental state data which was captured.
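
Claims 13 through 16 sketch the core matching step: each viewer's aggregated responses form a correlation vector, similarity is measured as a weighted distance between vectors (weighted Euclidean or Mahalanobis), and a recommendation follows when two vectors are sufficiently close. The Python sketch below illustrates that general technique; it is not Affectiva's implementation, and every name, threshold, and data value is hypothetical.

```python
import numpy as np

def weighted_euclidean(u, v, w):
    """Weighted Euclidean distance between two affect-response vectors."""
    return float(np.sqrt(np.sum(w * (u - v) ** 2)))

def recommend(individual_vec, other_people, watched, threshold, w):
    """Recommend titles liked by viewers whose response vectors lie within
    a sufficiently small weighted distance of the individual's vector
    (the idea in claims 14 and 15)."""
    recs = []
    for person in other_people:
        if weighted_euclidean(individual_vec, person["responses"], w) <= threshold:
            recs.extend(t for t in person["liked"] if t not in watched)
    return list(dict.fromkeys(recs))  # de-duplicate, preserving order

# Hypothetical data: each vector holds per-moment affect readings taken
# while watching the same first media presentation.
me = np.array([0.9, 0.2, 0.7])
others = [
    {"responses": np.array([0.8, 0.3, 0.6]), "liked": ["show_x"]},
    {"responses": np.array([0.1, 0.9, 0.2]), "liked": ["show_y"]},
]
weights = np.array([1.0, 0.5, 1.0])
print(recommend(me, others, watched=set(), threshold=0.5, w=weights))  # ['show_x']
```

A Mahalanobis distance, which claim 16 names as an alternative, would replace the fixed weight vector with the inverse covariance matrix of the population's responses, so that dimensions where responses vary widely count for less.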