Working with Areas of Interest (AOIs) in the Neurons API

The Areas of Interest (AOI) feature in the Neurons API lets users define regions within images and videos for AI analysis and retrieve Neurons metrics for those regions. All Neurons AOI metrics are described in the Areas of Interest (AOIs) in Neurons AI article.

This guide covers three ways to use AOIs in the Neurons API:

  • Autodetection of AOIs for Images
  • Autodetection of Branding for Videos
  • Static AOIs Requests

Autodetection for Images

The Neurons API can automatically detect key visual areas in an image. Neurons' deep learning object detection model auto-detects five AOI types:

  • Branding (typed as logo)
  • Product (typed as product)
  • Headline Text (typed as headline_text)
  • Body Text (typed as body_text)
  • Call to Action (typed as call_to_action (cta))

The coordinates of each bounding box are provided; "name" will always be "auto-detected-aoi", while "aoi_type" will be one of the five types above, e.g. "headline_text".
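As a minimal sketch, an autodetection result entry could be validated against the guarantees above. The field names and the exact `aoi_type` strings are assumptions (the article writes "call_to_action (cta)", so whether the API returns "call_to_action" or "cta" is not certain):

```python
# Expected "aoi_type" values. The article lists "call_to_action (cta)",
# so whether the API returns "call_to_action" or "cta" is an assumption.
AOI_TYPES = {"logo", "product", "headline_text", "body_text", "call_to_action"}

def validate_aoi(entry):
    """Check an autodetected AOI entry against the guarantees above.

    The key names are assumptions about the response schema; the article
    only specifies the values of "name" and "aoi_type".
    """
    return (entry.get("name") == "auto-detected-aoi"
            and entry.get("aoi_type") in AOI_TYPES)
```

A quick filter like this is useful before feeding coordinates into the AOI calculation step, so malformed or unexpected entries are caught early.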

How it Works:

✅ Upload an image using the Upload Image API (supports JPEG, PNG, and JPG formats; max size 20 MB)【47】.
✅ Run automatic AOI detection via its dedicated endpoint; this returns the coordinates of the autodetected AOIs.
✅ Pass those coordinates to the AOI update endpoint, which draws the AOIs and then runs the calculation.
✅ Results include AOI-level metrics for each AOI type: Total Attention, Start Attention, End Attention, Time Spent, and Percentage Seen. Definitions for all AOI metrics are in the Areas of Interest (AOIs) in Neurons AI article.
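The hand-off between detection and calculation can be sketched as a conversion from each autodetected bounding box to the polygon format used for AOI updates. The field names here ("x", "y", "width", "height") are assumptions about the detection response schema, not confirmed API fields:

```python
def box_to_polygon(box):
    """Convert an autodetected bounding box to a four-point polygon.

    The input keys ("x", "y", "width", "height") are assumptions about
    the detection response; adapt them to the actual schema.
    """
    x, y = box["x"], box["y"]
    w, h = box["width"], box["height"]
    # Corners listed clockwise from the top-left.
    return [{"x": x, "y": y},
            {"x": x + w, "y": y},
            {"x": x + w, "y": y + h},
            {"x": x, "y": y + h}]
```

The resulting point list can then be submitted to the AOI update endpoint to draw the AOI and trigger the metric calculation.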

Autodetection for Videos

For videos, the API can automatically detect and track Branding through the Google Logo Detection service. Branding autodetection works in the same way as image autodetection, and results are provided at the frame level.

Because Branding tracking and detection relies on a third-party service, Neurons cannot improve the detection itself, so we have added an option for static AOI calculation.
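As an illustration, frame-level branding results might be grouped per frame before further processing. The response shape (a flat list of detections, each carrying a "frame" index and a "polygon") is an assumption; the article only states that results are returned per frame:

```python
from collections import defaultdict

def group_by_frame(detections):
    """Group branding detections by frame index.

    Each detection is assumed to carry a "frame" index and a "polygon";
    both key names are assumptions about the response schema.
    """
    frames = defaultdict(list)
    for det in detections:
        frames[det["frame"]].append(det["polygon"])
    return dict(frames)
```

Grouping this way makes it easy to see in which frames a logo was found at all, and where it sat within each frame.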

Static AOIs Requests

For users who need custom-defined AOIs, the API supports static AOI requests for both images and videos.

For Images:

  • You can manually define AOIs by specifying coordinates (x, y) on an image【45】.
  • The AOI must be defined as a polygon, using a list of points that outline the area.
  • The API calculates attention percentage based on the specified AOI.

📌 Example Use Case: Checking whether a logo or product placement is getting enough attention in an advertisement.
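A minimal sketch of building a static AOI definition for an image, following the polygon description above. The field names ("name", "points") are assumptions about the request schema, not confirmed API fields:

```python
def make_static_aoi(name, points):
    """Build a static AOI definition for an image.

    `points` is a list of (x, y) tuples outlining the area, as described
    above; the payload field names are assumptions about the API schema.
    """
    if len(points) < 3:
        raise ValueError("A polygon needs at least three points")
    return {"name": name,
            "points": [{"x": x, "y": y} for x, y in points]}
```

For the logo-placement use case above, you would outline the logo region with a few corner points and submit the resulting payload to the static AOI endpoint【45】.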


For Videos:

  • AOIs can be set using coordinates (x, y) and a frame range (e.g. frames 13-40)【46】.
  • The API tracks attention for the selected area over time.
  • AOIs must be defined as polygons, and the system will analyze movement and focus areas within that zone.

📌 Example Use Case: Evaluating if a sponsor's logo in a sports video remains visible throughout the clip.
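A video AOI definition extends the image polygon with the frame range over which attention is tracked. As with the image sketch, the field names ("name", "points", "start_frame", "end_frame") are assumptions about the request schema:

```python
def make_video_aoi(name, points, start_frame, end_frame):
    """Build a static AOI definition for a video frame range.

    `points` outlines the polygon; `start_frame`/`end_frame` bound the
    frames to track (e.g. frames 13-40). All field names are assumptions
    about the API schema.
    """
    if len(points) < 3:
        raise ValueError("A polygon needs at least three points")
    if start_frame > end_frame:
        raise ValueError("start_frame must not exceed end_frame")
    return {"name": name,
            "start_frame": start_frame,
            "end_frame": end_frame,
            "points": [{"x": x, "y": y} for x, y in points]}
```

For the sponsor-logo use case above, you would outline the logo once and set the frame range to the span of the clip where it should remain visible.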

Conclusion

The AOI feature in the Neurons API provides both automatic and manual ways to analyze focus areas in images and videos. Users can choose autodetection for quick results or static AOIs for customized insights.

For technical details, visit:
🔗 Set AOIs for Images【45】
🔗 Set static AOIs for Videos【46】
🔗 Upload Image API【47】