Using deep learning methods, Disney Research develops a new means of assessing complex audience reactions to movies via facial expressions.
The new method, called Factorized Variational Autoencoders (FVAEs), demonstrated a surprising ability to reliably predict a viewer’s facial expressions for the remainder of a movie after observing an audience member for only a few minutes.
Researchers said the FVAEs could learn concepts such as smiling and laughing on their own, and could show how these facial expressions correlated with humorous scenes.
Markus Gross, vice president at Disney Research, said the research shows that deep learning techniques, which use neural networks and have revolutionized the field of AI, are effective at reducing data while capturing its hidden patterns.
The research team applied FVAEs to 150 showings of nine mainstream movies, using four infrared cameras to monitor the faces of the audience. The result was a dataset of 3,179 audience members and 16 million facial landmarks to evaluate. FVAEs look for audience members who exhibit similar facial expressions throughout the entire movie.
FVAEs are able to learn a set of stereotypical reactions from the entire audience. They learn the gamut of general facial expressions and determine how an audience is reacting to a given movie based on strong correlations in reactions between audience members.
These two features are mutually reinforcing, helping FVAEs learn both more effectively than previous systems. It is this combination that allows FVAEs to predict a viewer’s facial expressions for an entire movie based on only a few minutes of observation.
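The intuition behind this prediction can be illustrated with a simple low-rank matrix completion, which is a much-simplified stand-in for the actual FVAE model (the real method factorizes latent codes learned by a variational autoencoder, not raw values). In this hypothetical sketch, each viewer's reactions over time form a row of a matrix; because reactions across the audience are strongly correlated, the matrix is approximately low rank, and a few observed minutes per viewer suffice to fill in the rest. All data and parameters below are synthetic and chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data: a low-rank "reaction" matrix of
# n_viewers x n_steps, standing in for facial-expression
# intensities over the course of a movie.
n_viewers, n_steps, rank = 30, 100, 3
U_true = rng.normal(size=(n_viewers, rank))
V_true = rng.normal(size=(n_steps, rank))
R = U_true @ V_true.T

# Observe only the first few "minutes" for each viewer...
observed = np.zeros_like(R, dtype=bool)
observed[:, :20] = True
# ...plus full reactions for a small subset of viewers, so the
# shared per-timestep factors can be learned everywhere.
observed[:10, :] = True

# Alternating least squares: fit per-viewer factors U and
# per-timestep factors V to the observed entries only.
U = rng.normal(size=(n_viewers, rank))
V = rng.normal(size=(n_steps, rank))
for _ in range(50):
    for i in range(n_viewers):
        mask = observed[i]
        U[i] = np.linalg.lstsq(V[mask], R[i, mask], rcond=None)[0]
    for t in range(n_steps):
        mask = observed[:, t]
        V[t] = np.linalg.lstsq(U[mask], R[mask, t], rcond=None)[0]

# Predict every viewer's reactions for the whole movie and
# measure error on the entries that were never observed.
pred = U @ V.T
err = np.abs(pred - R)[~observed].mean()
print(f"mean abs error on unobserved entries: {err:.4f}")
```

Because the shared time factors are pinned down by the audience as a whole, each new viewer only needs to reveal enough of their own reactions to locate their personal factors, mirroring the article's claim that a few minutes of observation can predict the rest of the movie.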
The pattern recognition technique is not limited to faces; it can be used for any time series data collected from a group of objects. Researchers said that once a model is learned, it can generate artificial data that looks realistic.
For instance, if FVAEs were used to analyze a forest, noting differences in how trees respond to wind based on their type and size as well as wind speed, those models could then be used to simulate a forest in animation.
While the experimental results are still preliminary, the approach shows tremendous promise for more accurately modeling group facial expressions in a wide range of applications.
More information: [Disney research]