Publications

CONFERENCE (INTERNATIONAL)

An Empirical Study on Short- and Long-term Effects of Self-Correction in Crowdsourced Microtasks

Masaki Kobayashi (University of Tsukuba), Hiromi Morita (University of Tsukuba), Masaki Matsubara (University of Tsukuba), Nobuyuki Shimizu, Atsuyuki Morishima (University of Tsukuba)

The 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018)

July 05, 2018

Self-correction for crowdsourced tasks is a two-stage setting that allows a crowd worker to review the task results of other workers; the worker is then given a chance to update their results according to the review. Self-correction was proposed as an approach complementary to statistical algorithms in which workers independently perform the same task. It can provide higher-quality results at little additional cost. However, thus far, its effects have only been demonstrated in simulations, and empirical evaluations are needed. In addition, as self-correction gives feedback to workers, an interesting question arises: whether perceptual learning is observed in self-correction tasks. This paper reports our experimental results on self-correction with a real-world crowdsourcing service. The empirical results show the following: (1) Self-correction is effective for making workers reconsider their judgments. (2) Self-correction is more effective if workers are shown task results produced by higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases. Self-correction can give feedback that shows workers how to provide high-quality answers in future tasks. The findings imply that we can construct a positive feedback loop that effectively improves worker quality. We also analyze the cases in which perceptual learning can be observed with self-correction in crowdsourced microtasks.
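To illustrate the two-stage setting described in the abstract, below is a minimal toy simulation; it is a sketch, not the paper's experimental setup. All function names, the binary-task framing, and the probabilistic revision model (`trust`) are illustrative assumptions.

```python
import random

# Toy sketch of two-stage self-correction (hypothetical model, not the
# paper's method): a worker answers a binary microtask, then reviews
# another worker's answer and may revise their own.

def worker_answer(true_label: int, accuracy: float) -> int:
    """Stage 1: the worker answers correctly with probability `accuracy`."""
    return true_label if random.random() < accuracy else 1 - true_label

def self_correct(own: int, shown: int, trust: float) -> int:
    """Stage 2: if the shown answer disagrees, the worker switches to it
    with probability `trust` (an assumed revision model)."""
    if shown != own and random.random() < trust:
        return shown
    return own

def simulate(n_tasks=1000, acc_self=0.7, acc_shown=0.9, trust=0.5, seed=0):
    random.seed(seed)
    correct_before = correct_after = 0
    for _ in range(n_tasks):
        truth = random.randint(0, 1)
        first = worker_answer(truth, acc_self)
        shown = worker_answer(truth, acc_shown)  # result shown for review
        final = self_correct(first, shown, trust)
        correct_before += first == truth
        correct_after += final == truth
    return correct_before / n_tasks, correct_after / n_tasks

if __name__ == "__main__":
    before, after = simulate()
    print(f"accuracy before self-correction: {before:.3f}")
    print(f"accuracy after  self-correction: {after:.3f}")
```

In this toy model, raising `acc_shown` above `acc_self` improves post-correction accuracy, which mirrors the paper's finding (2) that showing results from higher-quality workers makes self-correction more effective.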

Paper: An Empirical Study on Short- and Long-term Effects of Self-Correction in Crowdsourced Microtasks (external link)