Multimedia subjective quality evaluation platform

Abstract—Our platform is a web service that provides a complete solution for conducting subjective quality comparisons. It is designed specifically for comparing image, video, and sound processing algorithms. Apply your algorithm, its modifications, and competing approaches to your test dataset, and upload the output to our server. We display the results produced by the different methods to paid participants in pairs; each participant is asked to choose the better method in each pair. We then convert the pairwise comparison data into final scores and provide you with a detailed report, complete with plots ready for inclusion in your paper.

Main use cases

Conduct comparisons of image, video, and sound processing algorithms (e.g. compression, denoising, inpainting, matting, and stitching)

Fine-tune parameters of your method.

Study which factors affect human quality perception.

Request early access

Fig. 1. Be among the first researchers to try the platform. We will cover the costs of your first study conducted with it.

View sample study reports

Video matting
Image upscaling

Read our recent blog posts

Here’s how it works

What you do

You apply your algorithm and its competitors to your test dataset.

What we provide

We recruit study participants and present your data pairwise to them.

We process all of the collected responses and generate plots for your paper.

Main features

With pairwise comparison, there’s no need to invent a scoring scale and explain it to respondents. Study participants simply choose the better of two options.

Receive a detailed report including plots ready for inclusion in your paper.

Receive all of the raw data you need to conduct your own in-depth analysis.

Save time by letting us find study participants and filter out responses from cheating respondents.
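To give a sense of how pairwise choices become final scores, here is a minimal sketch using the classic Bradley-Terry model. The platform's exact scoring method is not specified on this page, so treat this as an illustration of the general idea; the `wins` matrix and `bradley_terry` function are hypothetical names introduced for the example.

```python
# Illustrative only: one common way to convert pairwise comparison data
# into final scores is the Bradley-Terry model, fitted here with a simple
# minorization-maximization (MM) iteration.
# wins[i][j] = number of participants who preferred method i over method j.

def bradley_terry(wins, iterations=100):
    n = len(wins)
    p = [1.0] * n  # initial strength estimate for each method
    for _ in range(iterations):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of method i
            # Sum over opponents: comparisons played divided by combined strength.
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n) if j != i
            )
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize so scores sum to 1
    return p

# Toy data for three methods; method 0 is preferred most often.
wins = [
    [0, 8, 9],
    [2, 0, 6],
    [1, 4, 0],
]
scores = bradley_terry(wins)
```

The resulting scores are relative strengths on a ratio scale, which is what makes them suitable for the kind of ranked plots described above.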

What scholars are saying about us

Our team is developing methods for generating new views of a given video. It is important for us to know how the end viewer perceives the synthesized videos or images, so subjective evaluation is crucial for us. We wanted each study participant to individually evaluate videos produced by various view-generation methods. The platform is an effective tool that satisfied our needs.

Guibo Luo

PhD student, Peking University

Papers powered by our platform

A semiautomatic saliency model and its application to video compression

International Conference on Intelligent Computer Communication and Processing 2017
346 participants.
A saliency-aware video codec was compared with x264.

Perceptually Motivated Benchmark for Video Matting

2015 British Machine Vision Conference (BMVC)
442 participants.
12 video and image matting algorithms were compared.

Toward an objective benchmark for video completion

Submitted to Signal, Image and Video Processing Journal
341 participants.
13 video and image completion algorithms were compared.

Multilayer semitransparent-edge processing for depth-image-based rendering

2016 International Conference on 3D Imaging (IC3D)
56 participants.
3 depth-image-based rendering methods were compared.
Best Paper Award.

Learn about our upcoming public release! The platform is currently in its private beta testing stage. Be among the first to hear about our public release: simply complete the form below, and you’ll receive notification of this and further updates. We'll never share your e-mail address with outside parties.