How are the Neurons AI benchmarks calculated?


The Neurons platform now has a multi-layered benchmark system for both images and videos. Benchmarks are based on a total of 17,476 unique assets (10,259 images and 7,217 videos) and 67,429 Areas of Interest (AOIs).

The AOI types are different for images and videos.

Benchmark Data Pool & Collection 

Categorization

Industry categorization is applied at the asset level rather than the brand level. This approach allows a single brand to fall into multiple categories.

For example, restaurants are grouped under the Travel & Hospitality Services subcategory, and fashion is included in FMCG. This grouping ensures we obtain representative sample sizes. The dataset includes other categories as well.

Assets in Context vs. Out of Context

Some use cases, namely Out of Home and Social Media image ads, are evaluated in context, meaning that besides the asset itself, the benchmarks also take the surrounding elements into consideration. This approach is used because the context, such as a Facebook or Instagram feed, affects how viewers focus their attention.

Videos are always presented out of context, because sourcing videos in their original placement context is not scalable.

Dataset: Countries and Regions

New Benchmarks Calculation 

Insights Benchmarks Methodology


The Insights Benchmarks determine specific interpretations and recommendations.

To calculate these benchmarks, Neurons AI analyzes the full distribution of scores for all assets within each subcategory. Instead of using a median score range, we divide the total score range into five categories: Extreme Low, Low, Medium, High, and Extreme High.
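The banding step can be illustrated with a minimal sketch. This assumes equal-width bands over a subcategory's observed score range; the article does not specify the exact boundary rule, so the function name and cut points below are illustrative only.

```python
def insights_band(score, min_score, max_score):
    """Assign a score to one of five bands within a subcategory's range.

    Hypothetical sketch: equal-width bands are an assumption, not the
    documented Neurons AI boundary rule.
    """
    bands = ["Extreme Low", "Low", "Medium", "High", "Extreme High"]
    # Normalize the score to [0, 1] within the subcategory's range.
    position = (score - min_score) / (max_score - min_score)
    # Map to one of five bands; clamp the top edge into the last band.
    index = min(int(position * 5), 4)
    return bands[index]
```

For a subcategory whose scores span 0 to 100, a score of 50 would land in the Medium band under this scheme.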

Insights Benchmarks are dynamic: they vary according to the asset's classification, which is a combination of industry, use case, and platform, so each classification has its own score distribution.
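The per-classification grouping described above can be sketched as follows. The field names (`industry`, `use_case`, `platform`, `score`) are assumptions for illustration; the article only states that benchmarks are segmented by these three dimensions.

```python
from collections import defaultdict

def segment_scores(assets):
    """Group asset scores by (industry, use_case, platform).

    Hypothetical field names: the source only says classifications
    combine industry, use case, and platform.
    """
    segments = defaultdict(list)
    for asset in assets:
        key = (asset["industry"], asset["use_case"], asset["platform"])
        segments[key].append(asset["score"])
    return segments
```

Each resulting list is one classification's score distribution, over which the five benchmark bands would then be computed independently.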

New Benchmarks Categories

Image Benchmarks

Video Benchmarks