Multimedia subjective quality evaluation platform

Abstract—Subjectify is a web platform that provides a complete solution for conducting subjective comparisons. The service is designed specifically for comparing image, video, and sound processing algorithms. Apply your algorithm, its modifications, and competing approaches to your test dataset, then upload the output to our server. We display the results computed by the different methods to paid participants in pairs, and each participant is asked to choose the better method in each pair. We then convert the pairwise comparison data into final scores and provide you with a detailed report, complete with plots ready for inclusion in your paper.

Main use cases

Conduct comparisons of image, video, and sound processing algorithms (e.g., compression, denoising, inpainting, matting, and stitching).

Fine-tune parameters of your method.

Study which factors affect human quality perception.

Get started


Fig. 1. Be among the first researchers to try Subjectify. We will cover the costs of your first study conducted with the platform.

View sample study reports

Video matting
Image upscale

Read our recent blog posts

Here’s how it works

What you do

You apply your algorithm and its competitors to your test dataset.

What we provide

We recruit study participants and present your data pairwise to them.

We process the thousands of collected responses and generate plots for your paper.

Main features

With pairwise comparison, there's no need to invent a score scale and explain it to respondents. Study participants simply choose the better of two options.

Receive a detailed report including plots ready for inclusion in your paper.

Receive all of the raw data you need to conduct your own in-depth analysis.

Save time by letting us find study participants and filter out responses from cheating respondents.
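The conversion from pairwise votes to final scores can be illustrated with the Bradley-Terry model, a standard approach for this task. Note that this is an illustrative sketch only: the text does not specify which scoring model the platform actually uses, and the function and data names below are hypothetical.

```python
# Hypothetical sketch: turning pairwise "A preferred over B" counts into
# per-method scores with the Bradley-Terry model, fitted by the classic
# MM (minorization-maximization) iteration. The platform's real scoring
# pipeline is not described in this text and may differ.

def bradley_terry(wins, n_iters=100):
    """wins[a][b] = number of times method a was preferred over method b.
    Returns scores normalized to sum to 1."""
    methods = list(wins)
    scores = {m: 1.0 for m in methods}
    for _ in range(n_iters):
        new = {}
        for i in methods:
            # Total wins of method i across all opponents.
            w_i = sum(wins[i].get(j, 0) for j in methods if j != i)
            # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j),
            # where n_ij is the number of i-vs-j comparisons.
            denom = sum(
                (wins[i].get(j, 0) + wins.get(j, {}).get(i, 0))
                / (scores[i] + scores[j])
                for j in methods if j != i
            )
            new[i] = w_i / denom if denom else scores[i]
        total = sum(new.values())
        scores = {m: s / total for m, s in new.items()}
    return scores

# Example: "ours" was preferred over "baseline" in 30 of 40 pairs.
votes = {"ours": {"baseline": 30}, "baseline": {"ours": 10}}
print(bradley_terry(votes))  # "ours" receives the higher score
```

With two methods the fitted scores simply recover the observed win rate (30/40 = 0.75 here); with many methods the model infers a consistent global ranking even when not every pair was compared directly.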

What people are saying about us

Subjectify could revolutionize subjective comparisons.

Subjective tests are time-consuming and expensive to produce, yet really are the gold standard. In this regard, Subjectify may be a great alternative for researchers and producers seeking to choose the best codec or best encoding parameters.

Jan Ozer

Streaming media producer and consultant

Our team is developing methods for generating new views of a given video. It is important for us to know how the end viewer perceives the synthesized videos and images, so subjective evaluation is crucial for us. We wanted each study participant to individually evaluate the videos produced by the various view-generation methods. The Subjectify platform is an effective tool that satisfied our needs.

Guibo Luo

PhD student, Peking University

My research required me to choose the pano-processing method with the best visual quality. To conduct my previous subjective tests, I had to ask colleagues to take part as respondents. However, this was convenient neither for my colleagues (it interfered with their work) nor for me. Due to the limited number of respondents, I was able to compare just two pano-processing methods, but I wanted to compare more alternatives. Subjectify helped me overcome all these issues. The platform took care of all the technical aspects and collected all the responses in less than a day—an order of magnitude more responses than I was able to collect in my prior experiments. The platform enabled us to reliably confirm that our automatic pano-processing method outperforms manual processing.

Alexander Zhirkov


Papers powered by Subjectify

A semiautomatic saliency model and its application to video compression

International Conference on Intelligent Computer Communication and Processing 2017
346 participants.
Saliency-aware video codec was compared with x264.

Perceptually Motivated Benchmark for Video Matting

2015 British Machine Vision Conference (BMVC)
442 participants.
12 video and image matting algorithms were compared.

Toward an objective benchmark for video completion

Submitted to Signal, Image and Video Processing Journal
341 participants.
13 video and image completion algorithms were compared.

Multilayer semitransparent-edge processing for depth-image-based rendering

2016 International Conference on 3D Imaging (IC3D)
56 participants.
3 depth-image-based rendering methods were compared.
Best Paper Award.

Learn about our upcoming public release! Subjectify is currently in its private beta-testing stage. Be among the first to hear about our public release: simply complete the form below, and you'll receive notification of this and further updates. We'll never share your e-mail address with outside parties.