Informal discussion series on crowdsourcing, jointly hosted by the Human-Computer Interaction Institute and Language Technologies Institute in the School of Computer Science at Carnegie Mellon University
Title: Crowd Agents: Interactive Crowd-Powered Systems in the Real World
Speaker: Jeffrey Bigham (CS, University of Rochester)
Date: Wednesday, Dec 5th
Room: GHC 4405
Over the past few years, I have been developing and deploying interactive crowd-powered systems. For instance, VizWiz has the crowd answer visual questions for blind people in less than a minute, Legion allows outsourcing of desktop tasks to the crowd, and Scribe allows the crowd to caption audio in real-time. Overall, thousands of people have engaged with these systems, providing an interesting look at how end users interact with crowd work in their everyday lives. Collectively, these systems illustrate a new approach to human computation in which the diverse and changing crowd is provided the computational support necessary to act as a single, high-quality actor. The classic advantage of the crowd has been its wisdom, but our systems are beginning to show how crowd agents can surpass even expert individuals on difficult real-time motor and cognitive performance tasks.
Jeffrey P. Bigham is an Assistant Professor in the Department of Computer Science at the University of Rochester where he heads the ROCHCI Group. His work is at the intersection of human-computer interaction, human computation, and artificial intelligence, with a focus on developing innovative technology that serves people with disabilities in their everyday lives. Jeffrey received his B.S.E degree in Computer Science from Princeton University in 2003. He received his M.Sc. degree in 2005 and his Ph.D. in 2009, both in Computer Science and Engineering from the University of Washington working with Richard E. Ladner. Dr. Bigham has won a number of awards, including the Microsoft Imagine Cup Accessible Technology Award, the Andrew W. Mellon Foundation Award for Technology Collaboration, the MIT Technology Review Top 35 Innovators Under 35 Award, and Best Paper Awards at UIST, WSDM, and ASSETS. In 2012, he received the National Science Foundation CAREER Award.
Twenty years ago ubiquitous computing promised a bright future full of intelligent environments. Transit AVL systems, which detect the location of vehicles and provide real-time arrival information, are one example where this vision has come true. These systems improve transit by reducing uncertainty. However, their deployment is not ubiquitous because of high costs.
The recent rise of mobile and social computing offers a new approach to building large-scale sensing systems, one that combines people and their mobile phones as sensors. These socio-technical systems have the human ability to interpret unfolding situations. However, they suffer from sparse coverage, because people are not always present in the places where sensing is needed, and from noise, because people can introduce errors into the system.
To better understand when and how to design socio-technical systems, we built Tiramisu, a mobile app that crowdsources a real-time arrival information system by having transit riders share location traces while commuting. We deployed this system for 10 months and collected data on its use. More than 10,000 different users had almost 300,000 sessions with our app and shared nearly 30,000 location traces. In this talk, I provide an overview of the Tiramisu design, share findings from our deployment, and reflect on what we have learned.
John Zimmerman is an interaction designer and researcher with a joint appointment as an Associate Professor at Carnegie Mellon’s HCI Institute and School of Design. His research has four main themes: (i) the application of possession attachment theory in the design of interactive products and services; (ii) the design of mixed initiative systems that put the power of machine learning into the hands of end-users; (iii) research through design as a research approach in HCI; and (iv) the use of social computing and service design to transform public services. John teaches classes on HCI methods, interaction design, design theory, and service design for mobile services. Prior to joining Carnegie Mellon, he worked at Philips Research, investigating future interactive TV products and services.
Speaker: Anand Kulkarni (University of California, Berkeley)
Date: Wednesday, October 24th
Room: GHC 8102
How can we cultivate a crowd that's always on, highly engaged, understands what you want, and is generally right? I'll discuss ongoing work at MobileWorks, a crowd platform that pairs peer guidance and worker mentorship with a positive social agenda, making it easier than ever to carry out complex tasks with an online crowd. I'll share recent results from using MobileWorks to design better crowd-powered applications that can communicate with us in real time, understand our needs, be creative on demand, and more.
Anand is a senior PhD candidate at the University of California, Berkeley, and CEO of MobileWorks, a crowd computing platform designed to enable new capabilities in crowdsourcing through training, peer interaction, and fair worker treatment. As a former National Science Foundation graduate research fellow at Berkeley, he developed new techniques to introduce and maintain fairness in crowdsourcing, designed novel proof strategies in computational geometry, and taught undergraduate coursework in entrepreneurship.
Speaker: Winter Mason (Stevens Institute of Technology)
Date: Wednesday, October 10th
Room: GHC 6501
In this talk I will describe in detail a technique for running multiplayer studies using Mechanical Turk, which Siddharth Suri and I developed. I will explain the methods we used and the results we observed, using two studies as running examples. In the first study, participants played a game in which they were unknowingly placed in a network and asked to search for a hidden payoff function. In the second study, participants were also placed unknowingly in a network and played a public goods game with their network neighbors. To conclude, I will outline some difficulties that can arise and discuss other studies for which this technique could be useful.
Winter Mason received his
B.S. in Psychology from the University of Pittsburgh, and a Ph.D. in
Social Psychology and Cognitive Science from Indiana University in 2007.
From 2007-2011 he worked as a Visiting Scientist at Yahoo! Research in
the Human and Social Dynamics group, and is currently an assistant
professor in the Howe School of Technology Management at Stevens
Institute of Technology. His research draws from psychology, cognitive
science, sociology, and computer science, with a particular focus on
social influence, group dynamics and crowdsourcing.
Title: DrawAFriend: Crowdsourcing through Social Gaming
Speaker: Alex Limpaecher (CSD, Carnegie Mellon)
Date: Wednesday, May 2nd
Room: GHC 6501
DrawAFriend explores how social game mechanics can be applied to crowdsourcing. DrawAFriend is a socially integrated drawing game that lets users easily create drawings of their friends and share those drawings on the Facebook social graph. The project has two primary goals: first, to create a unique, fun, and artistic experience for professionals and non-professionals alike; second, to elicit a large database of human-created line drawings that we can later analyze. In this talk I will discuss the game design of DrawAFriend and the results that have come from a limited release.
Title: Organizing Online Production without Formal Organization
Speaker: Haiyi Zhu (HCII, Carnegie Mellon)
Date: Wednesday, April 11th
Time: 12p-1p
Room: GHC 6501
Online production communities have successfully aggregated the efforts of millions of volunteers to produce complex artifacts such as GNU/Linux and Wikipedia. Currently most online production communities rely on a paradigm of self-direction in which people work primarily on the tasks they are interested in. However, this approach breaks down when there are conflicts between the interests of the individuals and the goal of the community as a whole. Many people may want to work on the same popular areas while ignoring less popular areas that require work. People may not want to perform cooperative behaviors (e.g., performing maintenance tasks or socializing newcomers), even though these behaviors are important for the healthy functioning of the community. Therefore, the challenge has become how to motivate people to achieve the community goal that transcends individual interest in an environment which lacks hierarchical structure and monetary incentives. I identified particular mechanisms, including group identification and shared leadership, which intrinsically influence people’s actions to achieve a common goal. I empirically examined their effectiveness in the context of Wikipedia. My research has implications for designing more effective, efficient and successful online production communities.
Title: A Review of Tiramisu - Extending Transportation Information Systems with Crowdsourcing
Speaker: Anthony Tomasic (Institute for Software Research, CMU)
Date: Wednesday, Feb 8th
Time: 12p-1p
Room: GHC 6501
Tiramisu ("pick me up" in Italian) is a prototype transportation information system that learns information from its users via crowdsourcing. Through a smartphone interface, transit riders search for transit information. Once on a bus, riders contribute new information to Tiramisu through the same interface. The system also leverages the instrumentation of the smartphone to create an automatic vehicle location (AVL) service. Tiramisu is currently deployed in the Pittsburgh region. The system serves as a test bed for a variety of research areas: crowdsourcing system design, universal (accessibility) design, applied machine learning, and service design. In this talk, we will discuss our initial design rationale, review research results to date, cry mea culpa over design mistakes, and present some new preliminary results. If time permits, we will discuss some future directions.
Joint work with Charlie Garrod (CSD/Swarthmore), Yun Huang (RI), Aaron Steinfeld (RI), John Zimmerman (HCII & Design) and many others.
Title: Shepherding the Crowd Yields Better Work
Speaker: Steven Dow (HCII, CMU)
Date: Wednesday, Jan 25th
Time: 12p-1p
Room: NSH 3305
Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This talk investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.