The Neurons platform now has a multi-layered benchmark system for both images and videos. Benchmarks are based on a total of 17,476 unique assets (10,259 images and 7,217 videos) and 67,429 Areas of Interest (AOIs).
The AOI types are different for images and videos.
Benchmark Data Pool & Collection
Categorization
Industry categorization is applied at the asset level rather than at the brand level.
This approach allows a single brand to fall into multiple categories.
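As a minimal illustration of asset-level categorization, assuming a simple record layout (the field names and values below are hypothetical), two assets from the same brand can carry different categories:

```python
# Hypothetical asset records: the category lives on the asset, not the
# brand, so a single brand can appear in multiple categories.
assets = [
    {"asset_id": "img_001", "brand": "AcmeCo", "category": "FMCG"},
    {"asset_id": "vid_014", "brand": "AcmeCo",
     "category": "Travel & Hospitality Services"},
]

# Collect every category a brand appears in across its assets.
brand_categories: dict[str, set[str]] = {}
for asset in assets:
    brand_categories.setdefault(asset["brand"], set()).add(asset["category"])

print(brand_categories["AcmeCo"])
# {'FMCG', 'Travel & Hospitality Services'}
```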
New Benchmark Categories
Image Benchmarks
Video Benchmarks
Restaurants are grouped under the Travel & Hospitality Services subcategory, and Fashion is included in FMCG. This grouping ensures that each subcategory has a sufficiently representative sample. The dataset includes other categories as well.
Assets in Context vs. Out of Context
Some use cases, namely Out of Home and Social Media image ads, are benchmarked in context, meaning that besides the asset itself, the benchmarks also take the surrounding elements into consideration. This approach is used because the context, such as a Facebook or Instagram feed, affects how viewers focus their attention.
Videos are always presented Out of Context because sourcing them in context is not scalable.
Dataset: Countries and Regions
New Benchmark Calculation
Insights Benchmarks Methodology
The Benchmarks determine the specific interpretations and recommendations we provide in our Neurons AI Recommendations Engine.
The benchmark ranges are recommended ranges for our scores and indicate where an optimized creative should generally score.
To calculate these benchmarks, Neurons AI analyzes the full distribution of scores for all assets within each subcategory. Instead of using a median score range, we divide the total score range into five categories: Extreme Low, Low, Medium, High and Extreme High.
For example, based on its Focus score, an asset will fall into one of these five buckets. The benchmark range and color flag are then calculated to show how close the asset's score is to the recommended range for its subcategory.
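As a rough sketch of this bucketing step, one plausible reading is percentile-based boundaries over the subcategory's score distribution; the boundary definition and the sample data below are assumptions, not Neurons' production logic:

```python
import numpy as np

BUCKETS = ["Extreme Low", "Low", "Medium", "High", "Extreme High"]

def bucket_score(score: float, subcategory_scores: np.ndarray) -> str:
    """Assign a score to one of the five buckets using subcategory percentiles."""
    boundaries = np.percentile(subcategory_scores, [20, 40, 60, 80])
    return BUCKETS[int(np.searchsorted(boundaries, score, side="right"))]

# Example: a Focus score of 72 against a synthetic subcategory sample.
sample = np.random.default_rng(0).normal(loc=65, scale=10, size=1_000)
print(bucket_score(72.0, sample))  # e.g. "High"
```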
For Attention, Focus, Engagement, and Memory, the flag follows this distance from the recommended range directly.
Cognitive Demand is a non-linear score, so its benchmark range is slightly more involved.
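A minimal sketch of the two flag behaviors, assuming placeholder thresholds and assuming that for Cognitive Demand only a middle band counts as on-benchmark (both are illustrative assumptions, not the published ranges):

```python
def flag_linear(score: float, rec_low: float, margin: float = 10.0) -> str:
    """Attention/Focus/Engagement/Memory: green at or above the recommended range."""
    if score >= rec_low:
        return "green"
    return "yellow" if score >= rec_low - margin else "red"

def flag_cognitive_demand(score: float, rec_low: float, rec_high: float,
                          margin: float = 10.0) -> str:
    """Non-linear score: only values inside the middle band flag green."""
    if rec_low <= score <= rec_high:
        return "green"
    near = rec_low - margin <= score <= rec_high + margin
    return "yellow" if near else "red"

print(flag_linear(72, rec_low=65))                         # green
print(flag_cognitive_demand(92, rec_low=40, rec_high=70))  # red
```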
The Benchmarks are dynamic and vary according to the asset's classification, which includes combinations of industry, use case, and platform, leading to different score distributions.
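Conceptually, the dynamic lookup can be pictured as keying recommended ranges on the classification tuple; the field names and example ranges here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes the classification hashable
class Classification:
    industry: str
    use_case: str
    platform: str

# Each classification maps to its own recommended range per metric.
benchmark_ranges = {
    Classification("FMCG", "Social Media", "Instagram"): {"Focus": (60, 80)},
    Classification("FMCG", "Out of Home", "Billboard"):  {"Focus": (55, 75)},
}

key = Classification("FMCG", "Social Media", "Instagram")
low, high = benchmark_ranges[key]["Focus"]
print(low, high)  # 60 80
```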
Benchmark Validation
The benchmark ranges have been validated through in-market analyses and our decades-long research, which demonstrate that the recommended benchmarks align with top-performing metrics in real-world scenarios for most use cases. The analyses compared benchmark-defined ranges against actual market engagement, recall, and conversion performance, confirming that these ranges are consistent with the top 20% of assets in the studied use cases. We examined key indicators such as "market_eyes_on_dwell" and "market_click", which correlated strongly with memory, cognitive demand, and audience engagement, supporting the use of our benchmark ranges as reliable performance indicators.
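A simplified sketch of such a validation check, assuming a small synthetic dataset and a placeholder recommended range (only the two indicator names come from the analysis above):

```python
import pandas as pd
from scipy.stats import spearmanr

# Synthetic stand-in for the in-market dataset.
df = pd.DataFrame({
    "memory_score":         [62, 71, 55, 80, 68],
    "market_eyes_on_dwell": [1.9, 2.4, 1.5, 2.9, 2.2],
    "market_click":         [0.8, 1.1, 0.6, 1.4, 1.0],
})

# Rank correlation between the predicted Memory score and observed dwell.
rho, p_value = spearmanr(df["memory_score"], df["market_eyes_on_dwell"])

# Share of the top-20% market performers that land inside the range.
low, high = 60, 80  # placeholder recommended range
top = df.nlargest(max(1, len(df) // 5), "market_click")
in_range = top["memory_score"].between(low, high).mean()
print(f"Spearman rho={rho:.2f}; top-20% in range: {in_range:.0%}")
```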
The majority of tested cases confirmed that the recommended benchmark ranges reflect real-world top performance, reinforcing their value in optimizing creative assets for branding and conversion. Not all use cases can be validated through third-party performance data; for those, we have relied on our bespoke neuromarketing and behavioral research from the past 12 years.