SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation

Workshop Summary

[pdf][bib] Vitor Carvalho, Matthew Lease, and Emine Yilmaz.
Crowdsourcing for Search Evaluation. ACM SIGIR Forum, 44(2):17-22, December 2010.

Workshop Proceedings

[pdf][bib] Entire Volume

Invited Talks

[slides] Omar Alonso, Microsoft Bing
Design of Experiments for Crowdsourcing Search Evaluation: Challenges and Opportunities.
Additional reference: Slides from Alonso's ECIR 2010 Tutorial

[slides] Adam Bradley, Amazon
Insights into Mechanical Turk

[slides] Lukas Biewald, CrowdFlower
Better Crowdsourcing through Automated Methods for Quality Control

Accepted Papers

[pdf][bib][slides] Mohammad Soleymani and Martha Larson
Crowdsourcing for Affective Annotation of Video: Development of a Viewer-reported Boredom Corpus.
Runner-Up: Most Innovative Paper Award ($100 USD prize thanks to Microsoft Bing; see blog post)

[pdf][bib][slides] Julian Urbano, Jorge Morato, Monica Marrero and Diego Martin
Crowdsourcing Preference Judgments for Evaluation of Music Similarity Tasks.
Winner: Most Innovative Paper Award ($400 USD prize thanks to Microsoft Bing)

[pdf][bib][slides] John Le, Andy Edmonds, Vaughn Hester and Lukas Biewald
Ensuring Quality in Crowdsourced Search Relevance Evaluation.

[pdf][bib][slides] Dongqing Zhu and Ben Carterette
An Analysis of Assessor Behavior in Crowdsourced Preference Judgments.

[pdf][bib][slides] Henry Feild, Rosie Jones, Robert C. Miller, Rajeev Nayak, Elizabeth F. Churchill and Emre Velipasaoglu
Logging the Search Self-Efficacy of Amazon Mechanical Turkers.

[pdf][bib][slides] Richard M. C. McCreadie, Craig Macdonald and Iadh Ounis
Crowdsourcing a News Query Classification Dataset.

[pdf][bib][slides] Omar Alonso, Chad Carson, David Gerster, Xiang Ji and Shubha U. Nabar
Detecting Uninteresting Content in Text Streams.

Sponsors

Microsoft Bing (Bing.com) and CrowdFlower (Crowdflower.com)