MIT/Tuebingen Saliency Benchmark

This is the new MIT/Tuebingen Saliency Benchmark, the successor of the MIT Saliency Benchmark. The benchmark hosts the MIT300 and (in a few days) the CAT2000 datasets for saliency model evaluation. The main extension over the classic MIT Saliency Benchmark is that models can now be submitted either as classic saliency maps (as in the past) or as fixation densities. The latter comes with numerous advantages (see below). In the future, we plan to extend the benchmark in several directions to help the community.

The new preferred way of evaluating models is to submit probabilistic models that predict fixation densities. From these densities, metric-specific saliency maps can be derived, which makes model evaluation much more consistent. In nearly all cases, models score better than under the classic evaluation, since penalties caused by saliency maps not matching the metrics are removed. For more details, see Kümmerer et al., "Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics" (ECCV 2018). Of course, the new benchmark still supports evaluation of classic saliency maps as before.
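To illustrate the idea of metric-specific saliency maps: for the NSS metric, for example, the predicted fixation density itself, z-scored as the metric requires, serves as the saliency map. The sketch below is a toy illustration with a hypothetical Gaussian centre-bias density, not the benchmark's actual evaluation code.

```python
import numpy as np

def fixation_density(shape=(64, 64), center_sigma=10.0):
    """Toy fixation density: an isotropic Gaussian centre bias,
    normalised to sum to 1 over the image (a hypothetical stand-in
    for a probabilistic model's predicted density)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2) / (2 * center_sigma ** 2))
    return d / d.sum()

def nss_saliency_map(density):
    """For NSS, the density itself (z-scored, since NSS is invariant to
    affine transforms of the map) is the metric-specific saliency map."""
    return (density - density.mean()) / density.std()

def nss_score(saliency_map, fixations):
    """NSS: mean value of the z-scored saliency map at fixated pixels."""
    return np.mean([saliency_map[y, x] for y, x in fixations])

density = fixation_density()
smap = nss_saliency_map(density)
# Fixations near the centre of this density score higher than peripheral ones:
center_score = nss_score(smap, [(32, 32)])
corner_score = nss_score(smap, [(0, 0)])
```

Other metrics call for different derived maps (e.g. information gain evaluates log-densities); deriving each map from the same underlying density is what makes the comparison across metrics consistent.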

If you use any of the results or data on this page, please cite the following:
Website citation

@misc{mit-tuebingen-saliency-benchmark,
  author       = {Matthias K{\"u}mmerer and Zoya Bylinskii and Tilke Judd and Ali Borji and Laurent Itti and Fr{\'e}do Durand and Aude Oliva and Antonio Torralba},
  title        = {MIT/Tübingen Saliency Benchmark},
  howpublished = {\url{https://saliency.tuebingen.ai/}}
}
Paper citations
@inproceedings{kummererSaliencyBenchmarkingMade2018,
    title = {Saliency Benchmarking Made Easy: Separating Models, Maps and Metrics},
    series = {Lecture Notes in Computer Science},
    shorttitle = {Saliency Benchmarking Made Easy},
    pages = {798--814},
    booktitle = {Computer Vision – {ECCV} 2018},
    publisher = {Springer International Publishing},
    author = {K{\"u}mmerer, Matthias and Wallis, Thomas S. A. and Bethge, Matthias},
    editor = {Ferrari, Vittorio and Hebert, Martial and Sminchisescu, Cristian and Weiss, Yair},
    year = {2018},
}
@article{salMetrics_Bylinskii,
    title    = {What do different evaluation metrics tell us about saliency models?},
    author   = {Zoya Bylinskii and Tilke Judd and Aude Oliva and Antonio Torralba and Fr{\'e}do Durand},
    journal  = {arXiv preprint arXiv:1604.03605},
    year     = {2016}
}
@InProceedings{Judd_2012,
   title     = {A Benchmark of Computational Models of Saliency to Predict Human Fixations},
   author    = {Tilke Judd and Fr{\'e}do Durand and Antonio Torralba},
   booktitle = {MIT Technical Report},
   year      = {2012}
}
@article{borji2013quantitative,
   title     = {Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study},
   author    = {Borji, Ali and Sihite, Dicky N and Itti, Laurent},
   journal   = {IEEE Transactions on Image Processing},
   volume    = {22},
   number    = {1},
   pages     = {55--69},
   year      = {2013},
   publisher = {IEEE}
}
@article{CAT2000,
   title     = {CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research},
   author    = {Borji, Ali and Itti, Laurent},
   journal   = {CVPR 2015 workshop on "Future of Datasets"},
   year      = {2015},
   note      = {arXiv preprint arXiv:1505.03581}
}

People
Ali Borji, University of Central Florida
Zoya Bylinskii, Massachusetts Institute of Technology
Frédo Durand, Massachusetts Institute of Technology
Laurent Itti, University of Southern California
Tilke Judd, Google Zurich
Matthias Kümmerer, University of Tübingen
Aude Oliva, Massachusetts Institute of Technology
Antonio Torralba, Massachusetts Institute of Technology