Neurons Engagement Prediction

This article explains how our new Engagement metric has been developed.

The Neurons Engagement score captures the emotional appeal of an ad. It’s based on the intensity of positive emotional responses.

The score was built on Neurons’ fast-response tests to reflect gut responses, so marketers can measure what’s really resonating with people.

This article will cover everything you need to know about the Neurons Engagement feature, including:

  • Methodology: How the Engagement score is calculated and what makes it a reliable measure of emotional appeal.
  • Validation: The research and testing processes that ensure its accuracy and relevance.
  • AI Model: Insights into the advanced AI model that powers the score and delivers actionable AI recommendations for better impact.

Engagement score

Feature Overview

The Engagement prediction model is designed to predict the level of positive emotional response an asset would evoke in its audience after a quick glance.

The feature helps you tap into what makes your ads truly connect with people by measuring the intensity of positive emotional responses. AI insights and recommendations will tell you how your creative resonates on a gut level—because first impressions matter.

The Neurons Engagement feature includes:

  • AI recommendations
  • Image & video insights
  • Benchmarks
  • Downloadable reports
  • Heatmaps (for images)
  • AOI level metrics (for images)
  • Second-by-second view for videos

The Accuracy

Our FRT Engagement Prediction Model offers unprecedented accuracy in measuring positive emotional responses to media content. Using our model, we can predict engagement levels with an accuracy of 92% on test datasets. This robust statistical foundation enables us to confidently estimate how well an advertisement will emotionally engage its audience based on initial exposure.

Why a new Engagement score?

Our previous Beta Engagement score focused on predicting conscious experience based on survey responses. Because it relied on consciously reported feelings, it was limited in how accurately it could measure, and therefore predict, emotions.


The new Engagement score is based on implicit measurement of positive emotion towards an asset and takes response time into account. 

What is Response Time?

Response Time (RT), also known as reaction time or response latency, is the time it takes for a behavioral response to occur during a particular task (Donders, 1969; Luce, 1991). RT typically refers to the interval between the presentation of an external stimulus and the appropriate response (Posner, 1978).

Critically, RT is affected by emotional and cognitive processes, thus allowing it to be used as an index of unconscious emotion, motivation, and cognitive processing.
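As a toy illustration of the definition above, the snippet below measures an RT as the time between presenting an association word and receiving a yes/no answer in a console. It is only a sketch of the concept, not the Neurons test platform, and run_trial is a hypothetical helper.

```python
import time

def run_trial(word: str) -> dict:
    """Present an association word and record the yes/no answer plus its RT."""
    onset = time.perf_counter()                       # stimulus onset
    answer = input(f'Does the ad feel "{word}"? (yes/no): ').strip().lower()
    rt = time.perf_counter() - onset                  # RT = response time minus onset time
    return {"word": word, "response": answer == "yes", "rt_seconds": rt}

print(run_trial("Engaging"))
```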


Brain activity in the amygdala is related to faster response times and easier choices. In our own earlier work, we have demonstrated the relationship between preference and response time: easier choices are associated with faster response times, which in turn are associated with stronger engagement of the brain’s emotional systems, such as the amygdala.

The Methodology

To understand the positive emotional response to an asset, we developed a Fast Response Time (FRT) method. Participants view content for a few seconds and then rate it using association words like "Engaging," "Interesting," and "Happy" by pressing "yes" or "no." Their response time is also recorded, giving weight to how strongly they feel. By normalizing response times and responses across multiple participants, we ensure the FRT score is both comparable and reliable.


Our data collection spans a diverse set of advertising formats, from print to digital, and represents a wide demographic range to avoid bias. With over 9,000 participants in the initial study phase and continuous data updates, we maintain ongoing precision and relevance in our predictions.

How we calculate the Engagement scores

Each participant’s yes/no associations are weighted by how quickly they respond, then normalized across participants and aggregated into a single Engagement score for the asset.
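To make this concrete, here is a minimal, illustrative sketch of an FRT-style aggregation. It is not the production scoring code: the z-score normalization, the logistic speed-to-weight mapping, and the 0-100 rescaling are assumptions chosen only to show the general idea of weighting yes/no answers by response time.

```python
import numpy as np

def frt_engagement_score(responses: list[dict]) -> float:
    """Toy FRT-style score: "yes" answers count more when given faster.

    Each item in `responses` is one participant/word trial:
    {"response": bool, "rt_seconds": float}.
    The real normalization and weighting are more sophisticated; this is
    only an illustration of the general idea.
    """
    answers = np.array([1.0 if r["response"] else 0.0 for r in responses])
    rts = np.array([r["rt_seconds"] for r in responses])

    # Normalize RTs across participants so assets tested on different
    # panels stay comparable (z-scores here, for illustration).
    z = (rts - rts.mean()) / (rts.std() + 1e-9)

    # Faster answers (negative z) get a higher weight: they reflect a
    # stronger gut-level reaction than slow, deliberated answers.
    weights = 1.0 / (1.0 + np.exp(z))

    score = (answers * weights).sum() / weights.sum()
    return float(100.0 * score)          # rescale to a 0-100 range

# Example: three fast "yes" answers and one slow "no"
trials = [
    {"response": True,  "rt_seconds": 0.62},
    {"response": True,  "rt_seconds": 0.71},
    {"response": True,  "rt_seconds": 0.80},
    {"response": False, "rt_seconds": 1.90},
]
print(round(frt_engagement_score(trials), 1))
```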

The Model

Our engagement prediction model uses the EfficientNetV2-B2 architecture, known for its efficiency in computer vision tasks. With approximately 8.8 million trainable parameters, the model maps visual inputs to engagement scores derived from careful regression analyses. It processes visuals by resizing them while preserving their aspect ratio and applies Grad-CAM to identify which image features drive its predictions.
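As a rough illustration of this setup, the sketch below builds a regression model on a Keras EfficientNetV2-B2 backbone. The input size, pooling, head, optimizer, and preprocessing are assumptions for the sake of the example, not the actual training configuration.

```python
import tensorflow as tf

# Illustrative only: an EfficientNetV2-B2 backbone with a single regression
# output for an engagement score. Input size, head, and training settings
# are assumptions, not the production configuration.
backbone = tf.keras.applications.EfficientNetV2B2(
    include_top=False,            # drop the ImageNet classifier head
    input_shape=(260, 260, 3),
    pooling="avg",                # global average pooling -> feature vector
)

inputs = tf.keras.Input(shape=(260, 260, 3))
features = backbone(inputs)
score = tf.keras.layers.Dense(1, activation="linear", name="engagement")(features)

model = tf.keras.Model(inputs, score)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Aspect-ratio-preserving resize (pad rather than stretch), as noted above.
def preprocess(image: tf.Tensor) -> tf.Tensor:
    image = tf.image.resize_with_pad(image, 260, 260)
    return tf.keras.applications.efficientnet_v2.preprocess_input(image)

# Grad-CAM would then be applied to the backbone's last convolutional layer
# to visualize which image regions drive a given prediction.
```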

This approach extends to videos, where frame-by-frame analysis is aggregated into an overall engagement score, though without accounting for audio or the order of frames. These models already deliver refined predictions, and future updates aim to incorporate more complex multimedia signals for heightened accuracy.

By applying state-of-the-art machine learning techniques, the model gives advertisers confidence in how well their content connects with audiences at an emotional level, setting a new benchmark in measuring media engagement.

Predicting Engagement on Video

The video prediction service applies the image prediction model to every frame and produces a score per frame. The limitation of this approach is that neither the sequential information encoded in the flow of video frames nor the impact of audio is used for the engagement prediction.
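The sketch below illustrates this per-frame approach. It assumes a hypothetical predict_engagement callable standing in for the image model, uses OpenCV for decoding, and samples frames at an arbitrary rate; aggregating by a simple average is likewise an assumption.

```python
import cv2
import numpy as np

def score_video(path: str, predict_engagement, sample_every_n: int = 10) -> dict:
    """Score a video frame by frame with an image engagement model.

    `predict_engagement` is a hypothetical stand-in for the image model.
    Audio and frame order are ignored, matching the limitation noted above.
    """
    cap = cv2.VideoCapture(path)
    frame_scores = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every_n == 0:              # sample frames to keep it cheap
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame_scores.append(float(predict_engagement(rgb)))
        index += 1
    cap.release()
    return {
        "per_frame": frame_scores,                   # second-by-second style view
        "overall": float(np.mean(frame_scores)) if frame_scores else None,
    }
```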