Saturday, April 22, 2017

Solving Photo Mysteries with Expert-Led Crowdsourcing

Title: Solving Photo Mysteries with Expert-Led Crowdsourcing
Speaker: Kurt Luther, Department of Computer Science, Virginia Tech
Time: 12:30 - 1:30pm
Room: Gates-Hillman Complex 6501

Abstract:

Despite the old adage that a picture is worth a thousand words, images often need context to be meaningful to their viewers. In this talk, I show how expert-led crowdsourcing, a novel approach that combines the relative strengths of experts and amateur crowds, can be used to solve photo mysteries. In one example, I conducted a qualitative study of image verification experts in journalism, national security, and human rights organizations to understand how they perform geolocation, the process of mapping the precise location where a photo or video was taken. This research informed the design of GroundTruth, a system where experts collaborate with crowds to geolocate unknown images. In another example, I partnered with a historical photography magazine to develop Civil War Photo Sleuth, a system that leverages crowdsourcing and computer vision techniques to help experts identify unknown soldier portraits from the 19th century. I also discuss broader challenges and opportunities in crowdsourced investigations, open-source intelligence, and collaborative sensemaking illustrated by these examples.

Bio:

Kurt Luther is an assistant professor of computer science at Virginia Tech, where he is also affiliated with the Center for Human-Computer Interaction, the Department of History, and the Hume Center for National Security and Technology. He directs the Crowd Intelligence Lab (http://crowd.cs.vt.edu), an interdisciplinary research group exploring how crowdsourcing systems can support creativity and discovery. He is principal investigator for over $1.5M in sponsored research, including an NSF CAREER Award. Previously, Dr. Luther was a postdoctoral fellow in the HCI Institute at Carnegie Mellon University. He received his Ph.D. in human-centered computing from Georgia Tech, where he was a Foley Scholar, and his B.S. in computer graphics technology from Purdue University. He has also worked at IBM Research, Microsoft Research, and YouTube/Google.


Saturday, April 15, 2017

Supporting Collective Ideation at Scale

Abstract:
A growing number of online collective ideation platforms, such as OpenIDEO or Quirky, have demonstrated the potential of large-scale collaborative innovation in various domains. However, these platforms also introduce new challenges. People have to wade through a sea of possibly mundane and redundant ideas before encountering genuinely inspiring ones. Further, once all ideas are collected, the communities have to spend considerable time and effort synthesizing them into a few solutions. Alternatively, an intelligent system can select and present ideas to its users, instead of leaving them to search for inspiration haphazardly.


In this talk, I will show how a system can decide which ideas to present to the users and when to do so. I will introduce a computational model of an idea space, two crowdsourcing methods to generate this model and the model's application for creativity-enhancing interventions. I will also present an empirical study on the effects of timing of example delivery on people's idea generation.


Bio:
Pao is a Ph.D. candidate in Computer Science focusing on Human-Computer Interaction (HCI) research at Harvard University. She works with Prof. Krzysztof Gajos in the Intelligent Interactive Systems Group. Her research explores how intelligent technologies and crowdsourcing can enable novel ways for people to come up with creative ideas together. Pao received her B.S. in Electrical Engineering and M.S. in Computer Science from Stanford University, where she worked in the Stanford HCI Group.



Tuesday, April 4, 2017

The Collaboration and Communication Networks within the Crowd

Title: The Collaboration and Communication Networks within the Crowd
Speaker: Siddharth Suri, Microsoft Research, New York City
Time: 12:30-1:30pm
Room: Newell-Simon Hall 1507

Abstract: 
Since its inception, crowdsourcing has been considered a black-box approach to soliciting labor from a crowd of workers. Furthermore, the crowd has been viewed as a group of independent workers dispersed all over the world. One goal of this work is to show that crowdworkers collaborate to fulfill technical and social needs left unmet by the platform they work on. That is, crowdworkers are not the independent, autonomous workers they are often assumed to be, but instead work within a social network of other crowdworkers. Crowdworkers collaborate with members of their networks to 1) manage the administrative overhead associated with crowdwork, 2) find lucrative tasks and reputable employers, and 3) recreate the social connections and support often associated with brick-and-mortar work environments. We also build on and extend these discoveries by mapping the entire communication network of workers on Amazon Mechanical Turk, a leading crowdsourcing platform. We execute a task in which over 10,000 workers from across the globe self-report their communication links to other workers, thereby mapping the communication network among workers. Our results suggest that while a large percentage of workers indeed appear to be independent, there is a rich network topology over the rest of the population. That is, there is a substantial communication network within the crowd. The existence of these networks could have implications for the burgeoning literature that involves conducting behavioral experiments and research on crowdsourcing sites. Overall, our evidence combines ethnography, interviews, survey data, and larger-scale data analysis from four crowdsourcing platforms. This talk draws from an ongoing, longitudinal study of crowdwork that uses a mixed-methods approach to understand the cultural meaning, political implications, and ethical demands of crowdsourcing.
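To make the network-mapping idea concrete, here is a minimal sketch of the kind of analysis the abstract describes: building an undirected communication graph from self-reported links and measuring what fraction of workers are isolated. The data, function names, and the "keep a link if either side reports it" rule are illustrative assumptions, not the study's actual methodology.

```python
from collections import defaultdict

def build_network(reports):
    """Build an undirected communication graph from self-reported links.

    `reports` maps a worker ID to the set of workers they say they
    communicate with; a link is kept if either side reports it."""
    graph = defaultdict(set)
    for worker, contacts in reports.items():
        graph[worker]  # ensure workers with no contacts still appear
        for other in contacts:
            graph[worker].add(other)
            graph[other].add(worker)
    return graph

def isolated_fraction(graph):
    """Fraction of workers with no reported communication links."""
    isolated = sum(1 for contacts in graph.values() if not contacts)
    return isolated / len(graph)

# Hypothetical survey responses: a small connected core, two loners.
reports = {
    "w1": {"w2", "w3"},
    "w2": {"w1"},
    "w3": set(),
    "w4": set(),
    "w5": set(),
}
graph = build_network(reports)
print(isolated_fraction(graph))  # w4 and w5 are isolated -> 0.4
```

On real data, the interesting quantity is the structure over the non-isolated remainder (component sizes, degree distribution), which is where the "rich network topology" in the abstract would show up.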

Bio:
Siddharth “Sid” Suri is a computational social scientist.  His research lies at the intersection of computer science, behavioral economics and crowdsourcing.  Sid is currently writing a book with Mary Gray titled “On-Demand: Crowds, Platform Economies, and the Future of Work in Precarious Times” that combines ethnography and computer science to understand the future of work.

Sid earned his Ph.D. in computer and information science from the University of Pennsylvania in 2007 under the supervision of Michael Kearns. After that, he was a postdoctoral associate working with Jon Kleinberg in the computer science department at Cornell University.  Then he moved to the Human & Social Dynamics group at Yahoo! Research, led by Duncan Watts.  Currently, Sid is one of the founding members of Microsoft Research, New York City.



Wednesday, March 22, 2017

Title: Cognitive modeling explorations with crowdsourced predictions and opinions
Speaker: Michael Lee, Professor of Cognitive Sciences, University of California Irvine
Time: 12:30-1:30
Room: NSH 1507

Abstract:
The analysis of crowdsourced data can be treated as a cognitive modeling problem, with the goal of accounting for how and why people produced the behaviors that were observed. We explore this cognitive approach in a series of examples, involving Thurstonian models of ranking, calibration models of probability estimation, and attention and similarity models of category learning. Many of the demonstrations use crowdsourced data from ranker.com. Some involve "wisdom of the crowd" predictions, while others aim to describe and explain the structure of people's opinions. Throughout the talk, we emphasize the tight interplay between theory and application, highlighting not just when existing cognitive theories and models can help address crowdsourcing problems, but also when real-world applications demand solutions to new basic research challenges in the cognitive sciences.
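As a rough illustration of the "wisdom of the crowd" ranking problem the abstract mentions, the sketch below aggregates individual rankings by mean rank position. This is a Borda-style shortcut, not a Thurstonian model (which would infer latent item strengths probabilistically), and the movie data is invented.

```python
def aggregate_rankings(rankings):
    """Combine individual best-to-worst rankings into a single crowd
    ranking by mean rank position (a simple stand-in for the latent-
    strength inference a Thurstonian model would perform)."""
    positions = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            positions.setdefault(item, []).append(pos)
    mean_pos = {item: sum(p) / len(p) for item, p in positions.items()}
    return sorted(mean_pos, key=mean_pos.get)

# Hypothetical: three people rank four movies best-to-worst.
crowd = [
    ["A", "B", "C", "D"],
    ["B", "A", "C", "D"],
    ["A", "C", "B", "D"],
]
print(aggregate_rankings(crowd))  # ['A', 'B', 'C', 'D']
```

The cognitive-modeling move is to replace the mean with an explicit generative account of how each person's ranking arises from noisy perception of item strengths, so that individual differences and calibration can be modeled rather than averaged away.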

Bio:
Michael Lee is a Professor of Cognitive Sciences at the University of California Irvine. His research focuses on modeling cognitive processes, especially of decision making, and the Bayesian implementation, evaluation, and application of those models. He has published over 150 journal and conference papers, and is the co-author of the graduate textbook "Bayesian cognitive modeling: A practical course". He is a former President of the Society for Mathematical Psychology, a winner of the William K. Estes award of that society, and a winner of the best applied paper award from the Cognitive Science Society. Before moving to the U.S., he worked as a senior research scientist for the Australian Defence Science and Technology Organization, and has consulted for the Australian and US DoD, as well as various universities and companies, including the crowdsourcing platform Ranker.


Friday, March 17, 2017

Title: Human-in-the-loop Analytics
Speaker: Michael Franklin, Liew Family Chair of Computer Science, University of Chicago
Time: 2:30-3:30pm
Room: NSH 1507

Abstract:
The “P” in AMPLab stands for "People" and an important research thrust in the lab was on integrating human processing into analytics pipelines. Starting with the CrowdDB project on human-powered query answering and continuing into the more recent SampleClean and AMPCrowd/Clamshell projects, we have been investigating ways to maximize the benefit that can be obtained through involving people in data collection, data cleaning, and query answering.  In this talk I will present an overview of these projects and discuss some future directions for hybrid cloud/crowd data-intensive applications and systems.
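To give a feel for human-powered query answering of the kind the abstract describes, here is a minimal sketch of a hybrid machine/crowd predicate filter: the machine answers confident cases, and only ambiguous rows are routed to people. The thresholds, `ask_crowd` callback, and data are illustrative assumptions, not CrowdDB's actual interface.

```python
def hybrid_filter(rows, predicate_score, ask_crowd, lo=0.2, hi=0.8):
    """Evaluate a query predicate with machine confidence scores,
    escalating ambiguous rows to a crowd (hypothetical sketch).

    `predicate_score(row)` is the machine's confidence in [0, 1] that
    the row satisfies the predicate; `ask_crowd(row)` returns a
    boolean judgment from human workers."""
    kept = []
    for row in rows:
        score = predicate_score(row)
        if score >= hi:
            kept.append(row)            # machine is confident: keep
        elif score > lo and ask_crowd(row):
            kept.append(row)            # crowd resolved the ambiguity
        # score <= lo: machine is confident the row fails; drop it
    return kept

# Hypothetical usage: "is this a photo of a car?" with a stub crowd.
rows = [{"id": 1, "p": 0.95}, {"id": 2, "p": 0.5}, {"id": 3, "p": 0.05}]
result = hybrid_filter(rows, lambda r: r["p"], ask_crowd=lambda r: True)
print([r["id"] for r in result])  # [1, 2]
```

The design point this illustrates is cost: crowd answers are slow and expensive, so a hybrid system spends them only where the machine's confidence band says they change the outcome.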

Bio:
Michael J. Franklin is the Liew Family Chair of Computer Science and Sr. Advisor to the Provost for Computation and Data at the University of Chicago where his research focuses on database systems, data analytics, data management and distributed computing systems.  Franklin previously was the Thomas M. Siebel Professor and chair of the Computer Science Division of the EECS Department at the University of California, Berkeley.   He co-founded and directed Berkeley’s Algorithms, Machines and People Laboratory (AMPLab), which created industry-changing open source Big Data software such as Apache Spark and BDAS, the Berkeley Data Analytics Stack.   At Berkeley he also served as an executive committee member for the Berkeley Institute for Data Science.  He currently serves as a Board Member of the Computing Research Association and on the NSF CISE Advisory Committee.  Franklin is an ACM Fellow and a two-time recipient of the ACM SIGMOD “Test of Time” award. His other honors include the Outstanding Advisor award from Berkeley’s Computer Science Graduate Student Association, and the “Best Gong Show Talk” personally awarded by Andy Pavlo at this year’s CIDR conference.

For more information about Dr. Franklin, visit https://cs.uchicago.edu/directory/michael-franklin and https://people.eecs.berkeley.edu/~franklin/


Friday, March 3, 2017

Constructing Visual Metaphors: Using the Design Process to Crowdsource a Creative Task

Title: Constructing Visual Metaphors: Using the Design Process to Crowdsource a Creative Task
Speaker: Lydia Chilton, Stanford University (Columbia University starting in Fall 2017)
Date: Tues, March 7
Time: 12:30-1:30pm
Room: NSH 1507

Abstract:
Visual metaphors are a communication tool used to draw viewers' attention in print media, ads, public service announcements, and art. They involve blending two symbols together visually to convey a new meaning. This is a creative problem with many solutions, but some solutions have more impact and meaning for readers than others.

I will introduce the problem of visual metaphors and describe our early-stage work on crowdsourcing it. I will discuss how we had to adapt the design process to microtasks, and the lessons we have learned so far about designing media that speak directly to readers' low-level perceptual processing.

Bio:
Lydia Chilton will join the Computer Science Department of Columbia University in the City of New York as an assistant professor in July 2017. She is currently a post-doc working with Maneesh Agrawala at Stanford University at the intersection of graphics, HCI, and crowdsourcing. She has been doing crowdsourcing research for ten years and is excited to see how the original goals of crowdsourcing are being realized by a large community of talented researchers.




Monday, February 20, 2017

Real-Time Crowdsourcing for Complex Tasks

Title: Real-Time Crowdsourcing for Complex Tasks
Speaker: Walter Lasecki, Assistant Professor, University of Michigan
Date: Tues, Feb 21
Time: 12:30-1:30
Room: NSH 1507
Abstract:
Creating robust intelligent systems that can operate in real-world settings at super-human performance levels requires a combination of human and machine contributions. Crowdsourcing has allowed these human computation systems to scale, but the challenges of mixing human and machine effort remain a limiting factor. My lab's work on modeling crowds as collective agents has helped alleviate some of these challenges at a system level, but how we can create cohesive ecosystems of crowd-powered tools that together solve more complex and diverse needs remains an open question. In this talk, I will discuss initial and ongoing work that aims to create complex crowdsourcing systems for applications that cannot be solved using only a single tool.

Bio:
Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, where he directs the Crowds+Machines (CROMA) Lab. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to exceed the capabilities of either. These systems let people be more productive, and improve access to the world for people with disabilities. Dr. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].