Short version: please go to https://pqtest.cascadestream.com and participate in a subjective picture quality evaluation test session. Running now through March 6, 2018.
It’s the evening and the dishes are done. You are relaxing with your favorite person, starting to watch your new favorite drama on your big screen, and... ugh. The picture quality looks terrible! How does this happen? How can we fix it?
Your streaming content service provider actually does care about picture quality. Often the problem is that the process used to create the compressed video distribution files you see is unattended, and without a human checking every program encoded at every bit rate, the provider simply doesn’t have the data on picture quality.
You Can Help!
A number of researchers and developers are working on computational (i.e., automated) models that can evaluate video on a human perceptual quality scale. This is a really hard problem, but the algorithms are getting better every year.
These researchers and developers need to know the true quality of video in order to improve the accuracy of their models. But who can tell them the true quality? Answer: you.
You tell them if it looks good or bad
Cascade Stream is currently hosting a campaign of picture quality evaluation test sessions to determine the true quality of a number of video sequences with various distortions applied to them. We are seeking volunteers to participate and score the video pictures. Your scores will be combined with those of many other observers, resulting in Mean Opinion Scores (MOS): the true measurement of quality for those video sequences and distortions. By definition, according to standard recommended practices for subjective video quality evaluation, what you say is what it is.
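To make the idea concrete, here is a minimal sketch of how raw observer scores could be combined into Mean Opinion Scores. The sequence names and ratings below are purely illustrative, not real campaign data, and the function name is our own invention:

```python
def mean_opinion_scores(raw_scores):
    """raw_scores maps each video sequence to a list of observer
    ratings (e.g. on a 1-5 scale). The MOS for a sequence is simply
    the average of all observers' raw scores for it."""
    return {seq: sum(ratings) / len(ratings)
            for seq, ratings in raw_scores.items()}

# Hypothetical example: five observers rate two versions of a clip.
raw = {
    "drama_low_bitrate": [2, 3, 2, 3, 2],
    "drama_high_bitrate": [5, 4, 5, 4, 5],
}
print(mean_opinion_scores(raw))
# {'drama_low_bitrate': 2.4, 'drama_high_bitrate': 4.6}
```

The averaging is what makes your individual judgment useful even if it is a little noisy: across many observers, the noise cancels out and the MOS converges on a stable quality measurement.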
Do you feel empowered?
You are. Please be an official observer in our picture quality evaluation campaign. To participate, just go to https://pqtest.cascadestream.com. It’s like a focus group that you can do from your home or office. Everything you need to know is on that site. The test session takes about half an hour.
This campaign runs through March 6, 2018, so don’t delay. The library of video pictures, distortions, and quality scores will become a useful resource for developers.
The next time you are watching your favorite new show and it looks great, you may have an automated quality evaluation model that was improved by your scores to thank for it.
I felt like I was being graded for consistency. Might be interesting to get a “score” on how big my spread was on the same images – if any were repeats.
Many observers say they have doubts about the consistency of their scoring when the same image appears more than once with different impairments. The test procedure, which is based on ITU recommendations, is designed to take variability into account by presenting pictures in a different order to different observers and by averaging the raw scores. Our experience is that scores tend to be surprisingly consistent for individual observers, and even between observers. Thanks for doing it!