Thursday, September 25, 2014

Title: Crowd-Powered Interactive Systems
Speaker: Walter S. Lasecki (Computer Science, University of Rochester)
Date: Tuesday, October 14
Time: 12-1pm
Room: TBD

Abstract:
I create and deploy interactive systems that use a combination of human and machine intelligence to operate robustly in real-world settings. Unlike prior work in human computation, our “Crowd Agent” model allows crowds of people to support real-time interactive systems. For example, Scribe allows non-experts to caption speech in real time for deaf and hard-of-hearing users, where prior approaches were either not accurate enough or required professionals with years of training; Chorus allows multi-session conversations with a virtual personal assistant; and Apparition allows designers to rapidly prototype new interactive interfaces from sketches in real time. In this talk, I will describe how computationally mediated groups of people can solve problems that neither people nor computers can solve alone, and how they can scaffold AI systems using the real-world data they collect.
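
As a rough illustration of the kind of aggregation a crowd-powered captioning system needs, here is a simplified sketch that merges timestamped partial captions from several workers by time-bucketing and majority vote. This is not Scribe's actual merging algorithm; the input format and all names are assumptions made for illustration.

```python
from collections import Counter, defaultdict

def merge_partial_captions(worker_streams, bucket_ms=500):
    """Naively merge partial captions from several workers.

    worker_streams: list of lists of (timestamp_ms, word) pairs, one list
    per worker. Words are grouped into time buckets and the most common
    word per bucket is kept. A toy stand-in for real caption merging.
    """
    buckets = defaultdict(Counter)
    for stream in worker_streams:
        for ts, word in stream:
            buckets[ts // bucket_ms][word.lower()] += 1
    merged = []
    for bucket in sorted(buckets):
        word, _count = buckets[bucket].most_common(1)[0]
        merged.append(word)
    return " ".join(merged)

# Example: three workers each caught only part of the utterance.
streams = [
    [(0, "the"), (600, "quick"), (1800, "fox")],
    [(50, "the"), (1200, "brown"), (1850, "fox")],
    [(620, "quick"), (1210, "brown")],
]
print(merge_partial_captions(streams))  # -> "the quick brown fox"
```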

Bio:
Walter S. Lasecki is a Computer Science Ph.D. candidate at the University of Rochester, advised by Jeffrey Bigham (CMU). He creates interactive intelligent systems that are robust enough to be used in real-world settings by combining human and machine intelligence to exceed the capabilities of either alone. Mr. Lasecki received a B.S. in Computer Science and Mathematics from Virginia Tech in 2010 and an M.S. from the University of Rochester in 2011. He was named a Microsoft Research Ph.D. Fellow in 2013, has held visiting positions at Stanford and Google[x], and is currently a visiting Ph.D. student at CMU.


Friday, March 28, 2014

Title: Designing Social Computing Systems Around Relationships
Speaker: Eric Gilbert (School of Interactive Computing, Georgia Tech)
Date: Wednesday, April 2
Time: 12-1pm
Room: GHC 6501

Abstract:
Relationships are the heart of social media: they make it *social*. In this talk, I will present two social computing systems that place relationships and networks (i.e., multiple relationships) at the center of their design. First, I will present our work on modeling tie strength (i.e., how strong a relationship is) and how it can act as a tool for both design and analysis. Specifically, I'll present We Meddle, a Twitter app that builds categories of friends by inferring tie strength. Next, I will present a second Twitter app called Link Different that is inspired by the structure of a single social network triad. Link Different lets its users know how many of their followers already saw a link via someone else they follow. Hundreds of thousands of people have used these two systems many millions of times. Grounding my argument in them, I'll conclude the talk by suggesting that this kind of sociological approach to social computing opens up many new problems for design.
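
To make the tie-strength idea concrete, here is a minimal sketch that scores each relationship as a weighted sum of interaction features and splits friends into groups, the way a We Meddle-style app might. The feature names and weights are invented for illustration; the actual model in this line of work is fit to survey data over many more variables.

```python
# Hypothetical feature weights; the real model is learned, not hand-picked.
WEIGHTS = {
    "days_since_last_reply": -0.02,   # recency: older contact -> weaker tie
    "num_mentions": 0.15,             # volume of directed communication
    "num_mutual_friends": 0.05,       # shared network neighborhood
}

def tie_strength(features):
    """Score one relationship as a weighted sum of interaction features."""
    score = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

def friend_groups(friends, strong_cutoff=0.5):
    """Split friends into 'strong' and 'weak' tie lists, as a grouping UI might."""
    strong = [f for f, feats in friends.items() if tie_strength(feats) >= strong_cutoff]
    weak = [f for f in friends if f not in strong]
    return strong, weak

friends = {
    "alice": {"days_since_last_reply": 2, "num_mentions": 5, "num_mutual_friends": 12},
    "bob": {"days_since_last_reply": 90, "num_mentions": 1, "num_mutual_friends": 3},
}
print(friend_groups(friends))  # -> (['alice'], ['bob'])
```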

Bio:
Eric Gilbert is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He joined the Georgia Tech faculty in 2011 after finishing a Ph.D. in CS at Illinois. Dr. Gilbert leads the comp.social lab, a research group that focuses on building and studying social media. His work is supported by grants from Yahoo!, Google, the NSF, and DARPA. Dr. Gilbert has also founded several social media sites, and has received four best paper awards and two nominations from ACM's SIGCHI. His research has recently been featured in The Wall Street Journal, The Atlantic, and MIT's Technology Review, and on CNN and NPR. One of his favorite activities in life is drinking coffee while hanging out on the internet.


Friday, February 28, 2014

Title: Precision Crowdsourcing: Leveraging the Contributory Potential of User Feedback
Speaker: Loren Terveen (CS, University of Minnesota)
Date: Wednesday, March 5
Time: 12-1pm
Room: GHC 6501

Abstract:
I will discuss my group’s emerging work on Precision Crowdsourcing: crafting targeted requests to information consumers with the goal of closing the loop and turning them into contributors. I will describe the general approach, then illustrate it through a project we’ve done in Cyclopath. Specifically, we designed and studied several mechanisms Cyclopath users could use to give feedback on bicycling routes they had requested. We analyzed naturally occurring textual route feedback and found that it contained a significant amount of information that would be useful to the system, e.g., evaluations of road segments and suggestions of alternative routes. We then designed a new UI to let users provide and explain feedback, and ran an experiment whose results suggest that users will give and explain route feedback and that the system can extract useful information from it.
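
As a toy illustration of pulling structured information out of free-text route feedback, the sketch below tags comments with the two feedback categories mentioned above. It is not the analysis pipeline used in the Cyclopath study; the cue words and category names are assumptions for illustration.

```python
import re

# Hypothetical keyword cues; the study analyzed real user text, not a toy rule set.
CUES = {
    "segment_evaluation": [r"\btraffic\b", r"\bpotholes?\b", r"\bbike lane\b", r"\bhill\b"],
    "route_suggestion": [r"\binstead\b", r"\bshould have\b", r"\bvia\b"],
}

def tag_feedback(text):
    """Return the set of feedback categories whose cue words appear in the text."""
    lowered = text.lower()
    return {label for label, patterns in CUES.items()
            if any(re.search(p, lowered) for p in patterns)}

print(tag_feedback("Lots of potholes on 15th Ave; I should have taken the greenway instead."))
# -> {'segment_evaluation', 'route_suggestion'} (set order may vary)
```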

Bio:
Loren Terveen is a Professor of Computer Science at the University of Minnesota. His research interests include a variety of topics in human-computer interaction and social computing. He helped develop one of the early recommender web sites (PHOAKS) and recently has led projects that have: revealed new information about how valuable content is created on Wikipedia and the lifecycle of Wikipedia users, produced and deployed new interface designs to enhance participation in online communities, developed a novel location-aware messaging system, and combined wiki and geographical information systems technologies to create social web sites that let people enter and access information about places in their local communities. 

Prof. Terveen received his Ph.D. in 1991 from the University of Texas at Austin, then spent 11 years at Bell Labs and AT&T Labs before joining the University of Minnesota. He has served the human-computer interaction community in various leadership roles, including as co-chair of the CHI and IUI conferences, program chair of CSCW, and member of the SIGCHI Executive Committee.


Friday, February 7, 2014

Title: Synergy of Machine Intelligence and Human Computation
Speaker: Ece Kamar (Microsoft Research)
Date: Wednesday, February 12
Time: 12-1pm
Room: GHC 8102

Abstract:
Human computation has offered new opportunities for making computer systems smarter and more capable by providing easy access to human intelligence on demand. Making human computation a reliable component of computer systems requires moving away from manual designs and controls towards generalizable automation techniques, algorithms, models, and designs. In this talk, I will present an overview of our recent research efforts towards this goal, which have emerged through our collaboration with the Zooniverse citizen science effort. I will start by showing how machine learning and decision-theoretic reasoning can be used in harmony to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks efficiently. This methodology, which we refer to as CrowdSynth, combines predictive models for inference with efficient algorithms for making effective decisions, and is shown empirically to maximize the efficiency of a large-scale crowdsourcing operation. Next, I will present how predictive modeling can be used to make inferences about the attention and engagement of workers. I will conclude the talk by presenting a study of how different financial incentives provided to paid workers affect their speed, quality, and attention, and how their performance on a difficult citizen science task compares to that of volunteers.
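
To give a flavor of the consult-or-stop decision at the heart of this kind of system, here is a heavily simplified sketch in which a task keeps hiring workers only while the expected gain from one more vote outweighs its cost. The confidence estimate and the fixed per-vote gain are toy assumptions for illustration, not the CrowdSynth models described in the talk.

```python
def majority_confidence(votes):
    """Confidence in the majority label from votes seen so far.

    A toy stand-in for CrowdSynth's learned predictive models: here we just
    use the majority fraction with add-one smoothing.
    """
    if not votes:
        return 0.5
    top = max(votes.count(v) for v in set(votes))
    return (top + 1) / (len(votes) + 2)

def should_hire_another_worker(votes, reward_correct=1.0, cost_per_vote=0.05,
                               expected_gain_per_vote=0.08):
    """Hire another worker only if the expected improvement in the chance of a
    correct answer (assumed fixed per extra vote here) outweighs the cost."""
    current = majority_confidence(votes)
    expected_gain = min(1.0 - current, expected_gain_per_vote)
    return expected_gain * reward_correct > cost_per_vote

votes = ["spiral", "spiral", "elliptical"]
while should_hire_another_worker(votes):
    votes.append("spiral")          # stand-in for collecting one more label
print(len(votes), majority_confidence(votes))
```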

Bio:
Ece Kamar is a researcher in the Adaptive Systems and Interaction group at Microsoft Research Redmond. Ece earned her Ph.D. in computer science from Harvard University. While at Harvard, she received the Microsoft Research Fellowship and the Robert L. Wallace Prize Fellowship for her work on Artificial Intelligence. She currently serves on the program committees of conferences such as AAAI, AAMAS, IJCAI, WWW, UAI, and HCOMP. Her research interests include human-computer collaboration, decision-making under uncertainty, probabilistic reasoning, and mechanism design, with a focus on real-world applications that bring people and adaptive agents together.


Thursday, January 23, 2014

Title: Power to the People: Approaches to Crowdsourcing
Speaker: Ben Bederson (Human-Computer Interaction Lab, University of Maryland)
Date: Wednesday, January 29
Time: 12-1pm
Room: GHC 6501

Abstract:
One of the (many) challenges of crowdsourcing is getting hard tasks done well. In this talk, I will describe several projects that approach this in different ways. 1. MonoTrans brings monolingual crowds together to translate text by improving on machine translation and asking for help from across the language barrier; I will also discuss the challenges in making such crowds successful. 2. AskSheet helps people make decisions efficiently by optimizing a spreadsheet model to determine which information the crowd needs to collect. 3. QA supports active learning in a traditional classroom setting by crowdsourcing within the room: students collectively annotate and aggregate each other’s responses to a question, aiming to create a more thoughtful “classroom response” system. Bring your laptop (or mobile device) to try out QA, and we’ll see if we can learn something from each other.
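
As a rough sketch of the AskSheet idea of collecting only decision-relevant information, the example below skips crowd requests for any spreadsheet cell that cannot change which option wins. The spreadsheet model, bounds, and data are invented for illustration and are not AskSheet's actual optimization.

```python
# Each option has some attribute scores known (floats in [0, 1]) and some
# unknown (None). We only ask the crowd about cells that could still change
# which option wins; cells of options that cannot win are skipped.

def bounds(option):
    """Pessimistic/optimistic totals, assuming unknown scores lie in [0, 1]."""
    known = sum(v for v in option.values() if v is not None)
    unknown = sum(1 for v in option.values() if v is None)
    return known, known + unknown

def cells_worth_asking(options):
    best_pessimistic = max(bounds(o)[0] for o in options.values())
    requests = []
    for name, attrs in options.items():
        lo, hi = bounds(attrs)
        if hi >= best_pessimistic:          # this option could still win
            requests += [(name, a) for a, v in attrs.items() if v is None]
    return requests

options = {  # invented example data
    "apartment_a": {"price": 0.9, "commute": None, "size": 0.4},
    "apartment_b": {"price": 0.6, "commute": 0.7, "size": None},
    "apartment_c": {"price": 0.1, "commute": 0.2, "size": 0.1},  # can never win
}
print(cells_worth_asking(options))
# -> [('apartment_a', 'commute'), ('apartment_b', 'size')]
```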

Bio:
Benjamin B. Bederson is a Professor of Computer Science and former director of the Human-Computer Interaction Lab at the Institute for Advanced Computer Studies and the iSchool at the University of Maryland. He is currently Special Advisor to the Provost on Technology and Educational Transformation. An ACM Distinguished Scientist, he works on digital education, human computation, and interaction strategies. He is also Co-founder and Chief Scientist of Zumobi, a mobile app and advertising company.


Sunday, December 1, 2013

Title: Leveraging Crowds to Inject Perception-oriented Feedback into the Visual Design Workflow
Speaker: Brian Bailey (CS, University of Illinois)
Date: Wednesday, December 4
Time: 12-1pm
Room: GHC 8102

Abstract:
There is rapidly growing interest in leveraging crowds as part of individual creative workflows. In this talk, I will describe the concept, implementation, and evaluation of Voyant, a system that leverages a non-expert crowd to generate perception-oriented feedback from a selected audience as part of the visual design workflow. The system generates feedback related to the elements seen in a design, the order in which elements are noticed, the impressions formed when the design is first viewed, and the interpretation of the design relative to guidelines in the domain and the user’s stated goals. An evaluation of the system showed that users were able to leverage the generated feedback to develop insight and discover previously unknown problems with their designs. This type of system has the potential to tighten feedback cycles in design practice and to contribute to the growing movement toward data-driven design methods. The talk will conclude by outlining intriguing pathways for future work and by highlighting some challenges of using crowds to build end-user applications.
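
As a small illustration of turning raw crowd responses into perception-oriented feedback, the sketch below aggregates free-form impression words and notice-order lists from many viewers. It is a sketch under assumed input formats, not Voyant's implementation.

```python
from collections import Counter, defaultdict

def impression_summary(word_lists):
    """Aggregate free-form impression words from many viewers into top themes."""
    counts = Counter(w.lower() for words in word_lists for w in words)
    return counts.most_common(5)

def salience_order(notice_orders):
    """Average the position at which each design element was noticed;
    a lower mean rank means the crowd noticed it earlier."""
    ranks = defaultdict(list)
    for order in notice_orders:
        for position, element in enumerate(order):
            ranks[element].append(position)
    return sorted(ranks, key=lambda e: sum(ranks[e]) / len(ranks[e]))

# Invented example responses from three and two viewers, respectively.
impressions = [["clean", "modern"], ["modern", "cold"], ["modern", "clean"]]
orders = [["logo", "headline", "photo"], ["headline", "logo", "photo"]]
print(impression_summary(impressions))   # e.g. [('modern', 3), ('clean', 2), ...]
print(salience_order(orders))            # e.g. ['logo', 'headline', 'photo']
```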

Bio:
Brian Bailey is an Associate Professor in the Department of Computer Science at the University of Illinois, where he has been on the faculty since 2002. He conducts research and teaches graduate and undergraduate courses on user interface design and human-computer interaction. Dr. Bailey was a visiting researcher at Microsoft Research in 2008-2009. He earned a B.S. in Computer Science from Purdue University in 1993 and an M.S. and Ph.D. from the University of Minnesota in 1997 and 2002, respectively. His research interests include creativity support tools, design studies, crowdsourcing, and attention management. He holds affiliate academic appointments in Human Factors, the Beckman Institute, and the Graduate School of Library and Information Science. Dr. Bailey received the NSF CAREER award in 2007. His research has been supported by the NSF, Microsoft, Google, and Ricoh Innovations.


Monday, November 18, 2013

Title: Using User Behavior to Evaluate the Quality of Crowd-Generated Content
Speaker: Jeff Rzeszotarski (Human-Computer Interaction Institute, Carnegie Mellon University)
Date: Wednesday, November 20
Time: 12-1pm
Room: GHC 6501 

Abstract:
Users create, contribute, and disseminate an astonishing amount of information online. Yet, not all of it is valuable. While existing approaches can readily identify obvious quality problems in simple content, judging the quality of creative, complex, or subjective work remains a major challenge. In this talk I will describe a novel approach for understanding and evaluating the quality of user-generated content. Rather than look at the final products, I propose examining the way a person works as they create them. I will discuss my work developing predictive models and novel visualizations for the behavior of crowdworkers on Mechanical Turk and volunteers on Wikipedia as they generate content, and explore future applications of this approach towards helping users across the web create better content before they even press “submit”.
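
As a minimal sketch of the behavioral-trace idea, the example below summarizes each worker's event log into a few features and fits an off-the-shelf classifier on sessions with known quality labels. The features, toy event logs, and classifier choice are illustrative assumptions, not the models described in the talk.

```python
from sklearn.linear_model import LogisticRegression

def featurize(events):
    """events: list of (timestamp_sec, event_type) tuples for one task session."""
    times = [t for t, _ in events]
    duration = (max(times) - min(times)) if len(times) > 1 else 0.0
    keypresses = sum(1 for _, kind in events if kind == "keypress")
    scrolls = sum(1 for _, kind in events if kind == "scroll")
    return [duration, keypresses, scrolls]

# Toy labeled sessions: 1 = accepted work, 0 = rejected work.
sessions = [
    [(0, "focus"), (5, "scroll"), (30, "keypress"), (95, "keypress"), (120, "submit")],
    [(0, "focus"), (2, "submit")],                       # suspiciously fast
    [(0, "focus"), (10, "scroll"), (40, "keypress"), (80, "submit")],
    [(0, "focus"), (1, "keypress"), (3, "submit")],
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit([featurize(s) for s in sessions], labels)
new_session = [(0, "focus"), (8, "scroll"), (50, "keypress"), (110, "submit")]
print(model.predict([featurize(new_session)]))  # predicted quality label
```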

Bio:
Jeff Rzeszotarski (rez-oh-tar-ski) is a fourth-year Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University. He is advised by Dr. Aniket Kittur, and his research focuses on crowdsourcing and social computing, studying techniques that support groups of people generating and consuming content online. His work has received a best paper award from ACM UIST, and he is the recipient of a Microsoft Research Fellowship. Jeff holds a BA from Carleton College.