Glossary · Attention Science

Saliency map

The raw grayscale probability surface behind every attention heatmap — whiter pixels mean higher predicted fixation, darker pixels mean lower.

Definition

A saliency map is the raw grayscale probability surface produced by an attention model — a pixel-level image where brighter pixels represent higher predicted likelihood of being fixated by a human viewer.

Quick facts

  • Itti & Koch 2000: First widely adopted computational saliency model
  • 0.85–0.92: Correlation of modern deep models vs. lab eye tracking
  • SALICON: Dataset used to train most production saliency models

Full definition

A saliency map is the direct numerical output of a saliency model, typically a neural network trained to predict where human eyes go when viewing an image. Given an input image of any size, the model produces a same-resolution (or downscaled) grayscale image where each pixel's intensity encodes the predicted probability of fixation. The underlying values are normalized across the whole image so they sum to 1; for display, they are rescaled to 0–255 intensities.
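To make the normalization concrete, here is a minimal sketch (the tiny 2×2 array stands in for a real model's grayscale output) showing how an 8-bit saliency image is converted back into the probability surface the definition describes:

```python
import numpy as np

# Stand-in for a model's 8-bit grayscale saliency output.
saliency_u8 = np.array([[0, 64],
                        [128, 255]], dtype=np.uint8)

# Recover the probability surface: cast to float and normalize so
# the values across the whole image sum to 1.
prob = saliency_u8.astype(np.float64)
prob /= prob.sum()

# The brightest pixel carries the largest share of predicted fixation.
assert np.isclose(prob.sum(), 1.0)
```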

The concept predates deep learning. Laurent Itti and Christof Koch's seminal 2000 paper introduced a biologically inspired saliency model combining filters for color, intensity, and orientation contrast — the same features the human visual system's early layers extract. That first generation of models achieved correlations of r ≈ 0.65–0.75 with eye-tracking data. The deep learning generation — SALICON (2015), DeepGaze (2014, updated through DeepGaze IIE in 2021), TranSalNet (2022) — pushed that correlation to r ≈ 0.85–0.92.

A concrete example: drop a Facebook ad image into a saliency model. The output is a grayscale PNG the same shape as your input. If you squint at it, the brightest regions correspond to faces, high-contrast text, and the visual center of the product. The dimmest regions correspond to muted backgrounds, logos in the corners, and any CTA that's small or low-contrast. That grayscale image, colorized, is the attention heatmap you recognize.

Why it matters for ad creative

Most marketers interact with saliency maps only through their colorized form. But understanding that the map underneath is a real, measurable numeric surface changes how you use attention data. Every element-level score in a modern creative analytics tool is an integral over this surface.

Three specific reasons the raw saliency map matters:

  • Element scoring is quantitative. "CTA visibility = 73" is not a vibe — it's the percentage of saliency mass falling inside the CTA's bounding box, compared to benchmark distributions.
  • Comparisons are apples to apples. Two creatives can be ranked against each other by comparing saliency values directly, not by eyeballing two colored overlays.
  • Edits are measurable. Move the CTA 60px up, recompute saliency over its box, and you know exactly whether the edit helped or hurt — before you ship.
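The first bullet's "integral over the surface" is just a sum over a bounding box. A minimal sketch, using a uniform toy surface and a made-up CTA box (coordinates are illustrative, not from any real tool):

```python
import numpy as np

# Toy saliency surface, already normalized so the whole image sums to 1.
sal = np.ones((100, 100)) / (100 * 100)

# Hypothetical CTA bounding box: (x, y, width, height) in pixels.
x, y, w, h = 40, 70, 20, 10

# Share-of-attention: fraction of total saliency mass inside the box.
cta_share = sal[y:y + h, x:x + w].sum() / sal.sum()
print(round(cta_share * 100))  # box covers 2% of a uniform surface → 2
```

On a real map the surface is far from uniform, so the same box can capture much more (or less) than its pixel-area share, which is exactly what an element score measures.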

Without the map, attention would be another gut feel. With it, attention joins CTR and CPA as a measurable variable.

How to measure and apply it

The practical workflow for working with saliency maps:

  1. Run the model. Submit your creative to a saliency prediction service. The output is a grayscale fixation map, not yet colorized.
  2. Compute element-level integrals. For each conversion element (CTA, headline, product), sum the saliency values inside its bounding box. Divide by the element's pixel area for a density score, or by the image total for a share-of-attention score.
  3. Benchmark against known distributions. A raw saliency density means little in isolation. Compare it to the distribution of equivalent scores from top-performing creatives in the same format. 70th-percentile CTA saliency is a credible launch threshold.
  4. Run A/B edits on the surface. Edit the creative, re-run saliency, and diff the maps. A successful edit shifts saliency mass onto conversion elements and away from distractors, visible as a brighter region over the CTA and a darker region elsewhere.
  5. Track model calibration over time. No saliency model is perfect. If your live CTR data consistently diverges from pre-launch predictions, the model may be drifting on your specific creative vertical, which is a signal to update your benchmark thresholds.
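Steps 2 and 4 can be sketched in a few lines. The helper and box coordinates below are illustrative (no real service's API is being shown), and the "edit" is simulated by brightening a region rather than re-running a model:

```python
import numpy as np

def element_scores(sal, box):
    """Density and share-of-attention for one element's bounding box.

    sal: 2-D saliency surface; box: (x, y, w, h) in pixels.
    """
    x, y, w, h = box
    mass = sal[y:y + h, x:x + w].sum()
    density = mass / (w * h)   # saliency per pixel inside the box
    share = mass / sal.sum()   # fraction of the image's total mass
    return density, share

# Step 4: diff the before/after maps of an edit.
before = np.random.default_rng(0).random((50, 50))
after = before.copy()
after[5:15, 5:15] += 1.0  # simulate an edit that brightens the CTA region

cta_box = (5, 5, 10, 10)
_, share_before = element_scores(before / before.sum(), cta_box)
_, share_after = element_scores(after / after.sum(), cta_box)

# A successful edit shifts saliency mass onto the CTA.
print(share_after > share_before)
```

Normalizing each map before comparing (as above) keeps the diff meaningful even when the two model runs return different absolute intensity ranges.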

For most teams, the saliency map is an implementation detail hidden behind nicer UI — and that's fine. But teams running large creative factories (50+ variants per week) benefit from treating saliency as a first-class numeric input in their QA pipeline.

