Monday, December 12, 2011

EteRNA - solving the RNA design problem

Title: EteRNA - solving the RNA design problem with 30,000 people (http://eterna.cmu.edu)
Speaker: Jeehyung Lee, CS, CMU
Date: Wednesday, Dec 14th
Room: GHC 4405

Abstract:
We introduce EteRNA, an Internet-based RNA design competition where
players design RNA sequences to match given target shapes, and receive
information-rich wet-lab feedback from high-throughput RNA synthesis and
chemical mapping. We show that players were able to uncover rules for
robust RNA design from continuous wet-lab feedback.
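For readers unfamiliar with the task: the target shape in RNA design is typically given as a dot-bracket secondary structure, and a candidate sequence must be able to pair at every bracketed position. The toy Python check below illustrates only this combinatorial core; EteRNA's actual evaluation relies on thermodynamic folding models and the wet-lab chemical mapping described above.

    # Toy check of a candidate RNA sequence against a target secondary
    # structure in dot-bracket notation. Illustrative only; EteRNA's real
    # scoring uses folding thermodynamics and wet-lab data.

    VALID_PAIRS = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}

    def paired_positions(structure):
        """Return (i, j) index pairs for matching brackets."""
        stack, pairs = [], []
        for i, ch in enumerate(structure):
            if ch == "(":
                stack.append(i)
            elif ch == ")":
                pairs.append((stack.pop(), i))
        return pairs

    def fraction_valid(sequence, structure):
        """Fraction of target base pairs the sequence can actually form."""
        pairs = paired_positions(structure)
        if not pairs:
            return 1.0
        ok = sum((sequence[i], sequence[j]) in VALID_PAIRS for i, j in pairs)
        return ok / len(pairs)

    print(fraction_valid("GGGAAACCC", "(((...)))"))  # 1.0: all three pairs are G-C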

Monday, November 28, 2011

Judging Text-to-Speech by the Wisdom of the Crowd

Title: Judging Text-to-Speech by the Wisdom of the Crowd
Speaker: Prof. Alan Black (http://www.cs.cmu.edu/~awb/)
Date: Wednesday, Nov 30th @Noon
Room: GHC 6501

Abstract:
One of the many hard issues in generating good synthetic speech is the difficulty of evaluating its quality. Objective measures are always useful when optimizing various machine learning algorithms, but in speech generation it is ultimately what the end user actually thinks about the speech that matters. Running human listening tests is expensive and not very reliable.

This talk lays out the techniques we have used to find robust subjective evaluation methods for speech synthesis. These have been implemented in the annual Blizzard Challenge, where teams build synthetic voices from a common dataset and many people judge the quality by listening to them. The results are robust (different subsets of listeners correlate), and there have been interesting findings about the orthogonality of naturalness and intelligibility. However, as we go further into using end users as an evaluation system, we note a number of issues that must be addressed. People prefer voices they have listened to before (for good speech-perception reasons). People are not good at judging subtle differences such as voice quality, intonation, and timing; and naturalness and intelligibility are not the only goals.

This talk will present existing crowdsourcing techniques used to evaluate speech synthesis and propose new techniques that might help us evaluate future directions in speech synthesis.
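As a concrete illustration of the robustness check mentioned above (different subsets of listeners correlating), one can split the listeners into two halves, average each half's scores per system, and correlate the resulting rankings. The sketch below uses fabricated ratings; the Blizzard Challenge analysis itself is more involved.

    # Split-half reliability check for listening-test scores (toy data).

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # scores[system] = one mean opinion score (1-5) per listener, invented
    scores = {
        "system_A": [4.1, 3.9, 4.3, 4.0, 3.8, 4.2],
        "system_B": [3.2, 3.5, 3.0, 3.4, 3.1, 3.3],
        "system_C": [2.6, 2.9, 2.4, 2.8, 2.7, 2.5],
    }

    half1 = [sum(v[:3]) / 3 for v in scores.values()]  # listeners 1-3
    half2 = [sum(v[3:]) / 3 for v in scores.values()]  # listeners 4-6
    print(pearson(half1, half2))  # near 1.0 => the two halves agree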

Wednesday, November 9, 2011

Towards Large-Scale Collaborative Planning using Humans and Machines

Title: Towards Large-Scale Collaborative Planning using Humans and Machines
Speaker: Edith Law
Room: GHC 4405 (@Noon)


Human computation is the study of systems where humans perform a major part of the computation or are an integral part of the overall computational process. There exist several genres of human computation systems -- Games With a Purpose (e.g., the ESP Game) collect data from humans as a by-product of game play; crowdsourcing marketplaces (e.g. Amazon Mechanical Turk) enable algorithmic operations to be outsourced to paid workers in the form of micro-tasks; identity verification tasks (e.g., reCAPTCHA) leverage the help of billions of users who, in the process of gaining access to online content, are engaged in meaningful activities (e.g., digitizing books).

To date, most human computation systems have simple output requirements (e.g., accuracy). In this talk, I will discuss two recent projects that explore human computation tasks with complex output requirements. In the first case study, I present a human computation algorithm called CrowdPlan that, given a high-level search query (e.g., "I want to ...", "I need to ..."), generates a simple plan consisting of a set of goals and web resources that help support each goal. In the second case study, I introduce Mobi, a collaborative itinerary planning environment that allows workers to asynchronously put together a complex plan (consisting of a sequence of ordered actions) that satisfies given qualitative and quantitative constraints. These case studies demonstrate two contrasting solutions - an explicit algorithm versus a social computing platform - for tackling problems with complex output requirements, and reveal the importance of communication between workers and the end users of the system (i.e., requesters) during the computational process.
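To make CrowdPlan's two-stage structure concrete, the sketch below shows a decompose-then-match workflow: one crowd step turns the high-level query into goals, and a second finds resources per goal. Here ask_crowd() is a hypothetical placeholder for posting a micro-task; this is not CrowdPlan's actual API.

    # Schematic of a decompose-then-match human computation workflow in
    # the spirit of CrowdPlan. ask_crowd() is a hypothetical stand-in for
    # posting a micro-task (e.g., on Mechanical Turk) and collecting answers.

    def ask_crowd(instruction, context):
        """Hypothetical: post one micro-task, return a list of answers."""
        raise NotImplementedError("wire this to a real crowdsourcing backend")

    def crowd_plan(query):
        # Stage 1: workers decompose the high-level query into concrete goals.
        goals = ask_crowd("List concrete goals implied by this request.", query)
        # Stage 2: for each goal, workers find supporting web resources.
        return [{"goal": g,
                 "resources": ask_crowd("Find web pages that help with this goal.", g)}
                for g in goals]

    # e.g., crowd_plan("I want to plan a healthier lifestyle")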

Speaker Bio:
Edith Law is a Ph.D. candidate at Carnegie Mellon University, working
with Luis von Ahn and Tom Mitchell on human computation systems that
harness the joint efforts of machines and humans. She co-organized the
Human Computation (HCOMP) Workshop Series (co-located with KDD 2009
and 2010 and with AAAI 2011), co-authored the book "Human Computation"
in the Morgan & Claypool Synthesis Lectures on Artificial Intelligence
and Machine Learning series, and presented a tutorial entitled "Human
Computation: Core Research Questions and State of the Art" at AAAI
2011. Her work is generously supported by a Microsoft
Graduate Research Fellowship.

Monday, October 10, 2011

Jason Hong: Applying the Wisdom of Crowds to Usable Privacy and Security

Title: Applying the Wisdom of Crowds to Usable Privacy and Security
Speaker: Jason Hong, Associate Professor, HCII
http://www.cs.cmu.edu/~jasonh/

Date: Oct 19th, @ Noon
Location: GHC 4405

In this talk, I present an overview of work my colleagues and
I have been doing in applying crowdsourcing techniques to
privacy and security. I will talk about three different projects.
The first looks at how to improve the accuracy and response times
of people in identifying fake phishing web pages. The second
looks at analyzing location data from hundreds of people and
using that to infer friendships on Facebook as well as privacy
preferences. The third, which is ongoing and will be more
speculative, looks at how to use crowdsourcing techniques to
understand people's privacy expectations, and how to use that
to improve privacy in the context of smartphone apps.
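As a minimal sketch of the aggregation behind the first project, consider pooling many quick phishing-or-not judgments per page and accepting the majority only when agreement is high enough. The data and threshold below are invented; the actual work also examines the accuracy and response times of individual users.

    # Toy aggregation of crowd judgments about suspected phishing pages.
    from collections import Counter

    votes = {  # fabricated example data
        "http://paypa1-example.test/login": ["phish", "phish", "legit", "phish"],
        "http://bank-example.test/":        ["legit", "legit", "legit", "phish"],
    }

    def label(url, min_agreement=0.6):
        counts = Counter(votes[url])
        winner, n = counts.most_common(1)[0]
        return winner if n / sum(counts.values()) >= min_agreement else "uncertain"

    for url in votes:
        print(url, "->", label(url))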

Monday, September 19, 2011

Lisa Yu: Structures for crowd creativity

Title: Structures for crowd creativity
Speaker: Lisa Yu, Ph.D. student at Stevens Institute of Technology

Date: Sept 28, 2011 (noon)
Room: GHC 4405

Abstract:
We know crowds can compute; can they create? To study crowd creativity, we need to understand organizational structure. In this talk, I will discuss a structure based on the architecture of genetic algorithms: parent ideas are selected and mated to produce children, and features are passed on from generation to generation. It is essentially a combination-and-selection process. Combination allows advantageous features to propagate and new combinations of features to emerge; selection works as a filtering system through which bad features drop out. I will also describe several experiments that illustrate how collective creativity can be studied.
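Because the proposed structure maps onto a genetic algorithm so directly, a small sketch makes the combination-and-selection loop concrete. Ideas are modeled as feature sets, and rate() is a stand-in for crowd voting; everything here is illustrative rather than the experimental setup itself.

    # Sketch of the genetic-algorithm-style structure for crowd creativity:
    # highly rated parent ideas are mated, features recombine, and weak
    # ideas are filtered out of the population.
    import random

    def rate(idea):
        """Stand-in for crowd ratings of an idea (a set of features)."""
        return len(idea)  # toy scoring, purely for illustration

    def mate(parent_a, parent_b):
        """Child inherits a random subset of the parents' features."""
        child = {f for f in parent_a | parent_b if random.random() < 0.5}
        return child or set(parent_a)

    def generation(ideas, population=6):
        parents = sorted(ideas, key=rate, reverse=True)[: max(2, population // 2)]
        return [mate(*random.sample(parents, 2)) for _ in range(population)]

    ideas = [{"solar"}, {"solar", "modular"}, {"cheap"}, {"cheap", "recyclable"}]
    for _ in range(3):
        ideas = generation(ideas)  # selection filters, combination recombines
    print(ideas)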

Tuesday, March 15, 2011

Greg Little: Human computation algorithms

Speaker: Greg Little

When: noon, Wednesday, March 16
Where: GHC 6501

Abstract:
Research on human computation, crowdsourcing, and outsourcing has been
growing steadily over the last decade. However, they are far from being
fully integrated into our lives. What needs to happen before they are?
What will happen when these new paradigms have become pervasive? This talk
begins by discussing tools and experiments I have worked on in this area
(including TurKit), and ends with a discussion of the current state of
human computation.
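For context on TurKit: its core idea is a "crash-and-rerun" programming model in which each costly crowd step is memoized, so the whole script can be re-executed from the top after every change without re-posting tasks. Below is a rough Python analogue of that idea; TurKit itself is a JavaScript toolkit, and this is not its API.

    # Rough analogue of crash-and-rerun: memoize each costly
    # human-computation step to disk so reruns replay recorded results.
    import json, os

    DB = "hc_memo.json"

    def once(key, thunk):
        """Run thunk() at most once per key; replay its result on reruns."""
        memo = json.load(open(DB)) if os.path.exists(DB) else {}
        if key not in memo:
            memo[key] = thunk()  # e.g., post a task and block until answered
            json.dump(memo, open(DB, "w"))
        return memo[key]

    # Usage (post_hit_and_wait is hypothetical):
    # answer = once("label-image-42", lambda: post_hit_and_wait(image42))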

Friday, February 4, 2011

Crowdsourcing for speech technology

Title: Crowdsourcing for speech technology
Speaker: Gabriel Parent

When: February 9th (12:00 pm to 1:00 pm) in GHC 6501

Abstract: The growing need for speech data in recent decades has led to new guidelines for speech transcription. However, even with these "quick transcription" guidelines, transcribing a large quantity of speech using traditional methods is very costly and slow. The use of crowdsourcing has considerably changed the way speech data is acquired and processed. In this talk, I will give an overview of the research on using crowdsourcing for speech labeling, speech acquisition, and spoken dialog system assessment. I will then present both the design principles we used and the results we obtained using Mechanical Turk to transcribe over 250,000 speech utterances. Important issues such as quality control and throughput will also be addressed.
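One standard quality-control device in this setting is to intersperse gold-standard utterances and score each worker's transcripts by word error rate (WER) against the references. The filter below is a small illustration with invented data, not the exact pipeline from the talk.

    # Gold-standard filter for crowd transcription: score workers by WER
    # on utterances with known references; reject high-error workers.

    def wer(reference, hypothesis):
        """Word error rate via word-level edit distance."""
        r, h = reference.split(), hypothesis.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
                d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(r)][len(h)] / max(len(r), 1)

    gold = {"utt1": "please call stella"}  # invented reference transcript
    submissions = {"worker_a": {"utt1": "please call stella"},
                   "worker_b": {"utt1": "please tell stella"}}

    for worker, transcripts in submissions.items():
        avg = sum(wer(gold[u], t) for u, t in transcripts.items()) / len(transcripts)
        print(worker, "ok" if avg <= 0.2 else "rejected", round(avg, 2))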

Wednesday, January 12, 2011

Panos Ipeirotis: Get Another Label? Improving Data Quality and Machine Learning using Multiple, Noisy Labelers

Title: Get Another Label? Improving Data Quality and Machine Learning
using Multiple, Noisy Labelers

Time: Thursday, January 13th, from 12:00pm to 1:00pm.
Room: GHC 6115
Speaker: Panos Ipeirotis (http://pages.stern.nyu.edu/~panos/)

Abstract: I will discuss the repeated acquisition of "labels" for data
items when the labeling is imperfect. Labels are values provided by humans
for specified variables on data items, such as "PG-13" for "Adult Content
Rating on this Web Page." With the increasing popularity of
micro-outsourcing systems, such as Amazon's Mechanical Turk, it often is
possible to obtain less-than-expert labeling at low cost. We examine the
improvement (or lack thereof) in data quality via repeated labeling, and
focus especially on the improvement of training labels for supervised
induction. We present repeated-labeling strategies of increasing
complexity, and show several main results: (i) Repeated-labeling can
improve label quality and model quality (per unit data-acquisition cost),
but not always. (ii) Simple strategies can give a considerable
advantage, and carefully selecting which points to label does even
better (we present and evaluate several techniques). (iii) Labeler
(worker) quality can be estimated on the fly (e.g., to determine
compensation, control quality, or eliminate Mechanical Turk spammers), and
systematic biases can be corrected. I illustrate the results with a
real-life application from on-line advertising: using Mechanical Turk to
help classify web pages as being objectionable to advertisers. Time
permitting, I will also discuss our latest results showing that mice and
Mechanical Turk workers are not that different after all.
This is joint work with Foster Provost, Victor S. Sheng, and Jing Wang. An
earlier version of the work received the Best Paper Award Runner-up at the
ACM SIGKDD Conference.
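As a toy illustration of the core tradeoff in result (i): with per-label accuracy p, the majority vote over n independent labels is correct with probability sum_{k > n/2} C(n,k) p^k (1-p)^(n-k). The numbers below are illustrative only; the talk's methods go further, estimating per-worker quality and correcting systematic biases.

    # How much does repeated labeling help? (toy, uniform worker accuracy)
    from math import comb

    def majority_accuracy(p, n):
        """P(majority of n odd independent labels is correct), accuracy p."""
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(n // 2 + 1, n + 1))

    for p in (0.6, 0.7, 0.9):
        print(p, [round(majority_accuracy(p, n), 3) for n in (1, 3, 5, 11)])
    # Gains are large for p around 0.7 but small when labels are already
    # accurate -- i.e., repeated labeling helps, "but not always," once
    # per-label acquisition cost is taken into account.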