Temporal Training Certificates in Crowdsourcing Multi-Dimensional Speech Quality Assessment
Abstract:
Subjective speech quality assessment is a traditional way to evaluate quality of experience in communication systems. Such tests can be conducted in a laboratory environment or via crowdsourcing; both methods are standardized (in ITU-T Rec. P.800 and ITU-T Rec. P.808, respectively) and widely employed by researchers. This paper evaluates the effect of forced participant retraining on crowdsourced speech quality judgements. As delays in study participation, pauses, and general work cadence cannot be directly controlled in the crowdsourcing paradigm, it is common to introduce a temporary certificate that is granted to the crowdworker upon successful completion of a training task. In this paper, we analyze the effect of different training certificate expiry times on the quality ratings. We expand on previous research in this area by including the perceptual dimensions of noisiness, coloration, discontinuity, and loudness, as well as by collecting information on, and evaluating the impact of, the devices and hardware used by the crowdworkers. The findings provide new insights into optimizing training protocols for robust quality assessment in remote subjective testing.