Some readings on crowdsourcing for research and innovation

1. Guinan, E., Boudreau, K. J., & Lakhani, K. R. (2013). Experiments in open innovation at Harvard Medical School. MIT Sloan Management Review, 54(3), 45-52.

“But in February 2010, Drew Faust, president of Harvard University, sent an email invitation to all faculty, staff and students at the university (more than 40,000 individuals) encouraging them to participate in an “ideas challenge” that Harvard Medical School had launched to generate research topics in Type 1 diabetes. Eventually, the challenge was shared with more than 250,000 invitees, resulting in 150 research ideas and hypotheses. These were narrowed down to 12 winners, and multidisciplinary research teams were formed to submit proposals on them.”

“In May 2008, Harvard Catalyst received a five-year NIH grant of $117.5 million, plus $75 million from the university, the medical school and its affiliated academic health-care centers. These funds were designated to educate and train investigators, create necessary infrastructure and provide novel funding mechanisms for relevant scientific proposals. However, the funds did not provide a way to engage the diversity and depth of the whole Harvard community to participate in accelerating and “translating” findings from the scientist’s bench to the patient’s bedside, or vice versa. Could open-innovation concepts be applied within a large and elite science-based research organization to help meet that goal?”

“Albert Einstein captured the importance of this aspect of research: “The formulation of a problem is far more often essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advances in science.”6”

“Harvard Catalyst offered $30,000 in awards. Contestants were not required to transfer exclusive intellectual property rights to Harvard Catalyst. Rather, by making a submission, the contestant granted Harvard Catalyst a royalty-free, perpetual, non-exclusive license to the idea and the right to create research-funding proposals to foster experimentation.”

“In total, 779 people opened the link at InnoCentive’s website, and 163 individuals submitted 195 solutions. After duplicates and incomplete submissions were weeded out, a total of 150 submissions were deemed ready for evaluation. The submissions encompassed a broad range of therapeutic areas including immunology, nutrition, stem cell/tissue engineering, biological mechanisms, prevention, and patient self-management. Submitters represented 17 countries and every continent except Antarctica. About two-thirds came from the United States. Forty-one percent of submissions came from Harvard faculty, students or staff, and 52% of those had an affiliation with Harvard Medical School. Responders’ ages ranged from 18 to 69 years, with a mean age of 41.”

“Fostering Interdisciplinary Teams
After selecting the ideas, Harvard Catalyst set out to form multidisciplinary teams. While researchers tend to stay within their domains, Harvard Catalyst wanted to learn if scientists from other life-science disciplines and disease specialties could potentially convert their research hypotheses into responsive experimental proposals in the Type 1 diabetes arena.10 Harvard Catalyst reached out to Harvard researchers from other disciplines with associated knowledge and invited them to submit a proposal to address one of the selected questions.”

“The Leona Helmsley Trust put up $1 million in grant funding at Harvard to encourage scientists to create experiments based on these newly generated research questions.”

“In addition to normal advertising of the grant opportunity, Harvard Catalyst used a Harvard Medical School database to identify researchers whose record indicated that they might be particularly well suited to submit proposals. The Profiles system takes the PubMed-listed publications for all Harvard Medical School faculties and creates a database of expertise (keywords) based on the MeSH classification of their published papers. Dr. Griffin Weber, then the chief technology officer of Harvard Medical School and the creator of the Profiles system, assisted Harvard Catalyst in taking the coded MeSH categories for the winning proposals — now imbedded in the thematic areas — and matching them through a sophisticated algorithm to the keyword profiles of the faculty. The intention was to move beyond the established diabetes research community and discover researchers who had done work related to specific themes present in the new research hypotheses but not necessarily in diabetes.”

“The matching algorithm revealed the names of more than 1,000 scientists who potentially had the knowledge needed to create research proposals for these new hypotheses.”
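The matching step lends itself to a small illustration. The sketch below shows one way MeSH-keyword expert matching in the spirit of the Profiles approach could work; the cosine-similarity scoring, the profile data, and every name in it are assumptions made for illustration, not the actual Profiles algorithm.

    # Hedged sketch: rank researchers by the overlap between their MeSH
    # keyword profiles and the MeSH terms coded for a winning hypothesis.
    # Cosine similarity is an assumed scoring choice, not the documented one.
    from collections import Counter
    from math import sqrt

    # Hypothetical keyword profiles: MeSH term -> publication count.
    faculty_profiles = {
        "researcher_a": Counter({"Autoimmunity": 12, "T-Lymphocytes": 8}),
        "researcher_b": Counter({"Tissue Engineering": 9, "Stem Cells": 7}),
        "researcher_c": Counter({"Autoimmunity": 3, "Microbiota": 11}),
    }

    # MeSH categories coded for one winning hypothesis (also hypothetical).
    hypothesis = Counter({"Autoimmunity": 1, "T-Lymphocytes": 1, "Microbiota": 1})

    def cosine(p: Counter, q: Counter) -> float:
        """Cosine similarity between two sparse term-count vectors."""
        dot = sum(p[t] * q[t] for t in set(p) & set(q))
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    # Researchers outside the established diabetes community still rank
    # highly when their published themes overlap the hypothesis.
    for name in sorted(faculty_profiles, key=lambda r: -cosine(faculty_profiles[r], hypothesis)):
        print(name, round(cosine(faculty_profiles[name], hypothesis), 3))

Run over a full faculty keyword database rather than three toy profiles, a ranking of this kind is how a candidate pool like the “more than 1,000 scientists” above could be surfaced.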

“The outreach yielded 31 Harvard faculty-led teams vying for Helmsley Trust grants of $150,000, with the hope that sufficient progress in creating preliminary data would spark follow-on grants. These research proposals were evaluated by a panel of Harvard faculty, with expertise weighted toward Type 1 diabetes and immunology and unaffiliated with Harvard Catalyst administration. Seven grant winners were announced. Core to the mission of the openness program was that the algorithm for potentially contributory investigators had identified 23 of the 31 principal investigators making a submission and 14 of these 23 had no significant prior involvement in Type 1 diabetes research — a core element of the open-innovation experiment. Seven proposals were funded, five of which were led by principal investigators or co-principal investigators without a history of significant engagement in Type 1 diabetes research.”

“Somewhat unexpectedly, Harvard Catalyst discovered that while academic researchers tend to be very specialized and focused on extremely narrow fields of interest, explicit outreach to individuals with peripheral links to a knowledge domain can engage their intellectual passions. Harvard Catalyst uncovered a dormant demand for cross-disciplinary work, which many leaders within Harvard Catalyst doubted existed. However, as soon as bridges were built, individuals and teams started to cross over. The lesson for managers outside academic medicine is that there may be sufficient talent, knowledge and passion for high-impact breakthrough work currently inside their organizations — but trapped in functional or product silos. By creating the incentives and infrastructure that enable and encourage bridge crossing, managers can unleash this talent.”

“The Harvard Catalyst approach to introducing open innovation was to layer it directly on top of existing research and evaluation processes. Harvard Catalyst executives simply added an open dimension to all stages of the current innovation process. Thus, individuals already in the field did not feel they were being systematically excluded. The entire effort could be viewed as a traditional grant solicitation and evaluation process with the exception that all stages were designed so that more diverse actors could participate. This strategic layering of open dimensions on traditional processes positions open innovation as a tweak to currently accepted practice instead of a radical break with the past.”

2. Lakhani, K. R., Boudreau, K. J., Loh, P. R., Backstrom, L., Baldwin, C., Lonstein, E., … & Guinan, E. C. (2013). Prize-based contests can provide solutions to computational biology problems. Nature Biotechnology, 31(2), 108-111.

“To determine whether this approach could solve a real big-data biologic algorithm problem, we used a complex immunogenomics problem as the basis for a two-week online contest broadcast to participants outside academia and biomedical disciplines. Participants in our contest produced over 600 submissions containing 89 novel computational approaches to the problem. Thirty submissions exceeded the benchmark performance of the US National Institutes of Health’s MegaBLAST. The best achieved both greater accuracy and speed (1,000 times greater)”

“It has been projected that by 2018 there will be a shortage of approximately 200,000 data scientists and 1.5 million other individuals in the US economy with sufficient training and skills to conceptualize and manage big-data analyses6.”

 

“To investigate the specific technical approaches developed by contestants, we commissioned three independent computer science Ph.D. researchers to review all submissions and determine what techniques were implemented. Their analyses determined that ten distinct elemental methods (Table 1) were used in 89 combinations in the 654 submissions. As the number of elemental methods in a submission increased, so did its performance (Fig. 2 and Supplementary Methods), with leaderboard scores increasing by 85.3 points for each additional method employed (P < 0.01). Analysis of the benchmark algorithms showed that the methods numbered 2, 3, 5 and 8 were implemented in the MegaBLAST algorithm, and methods 2, 4 and 7 were implemented in the idAb code.”
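The 85.3-points-per-method estimate reads as the slope from a regression of leaderboard score on the number of elemental methods per submission. The sketch below shows that kind of ordinary least-squares fit in Python; the data are synthetic placeholders chosen only so the example runs, since the paper’s submission-level results are not reproduced here.

    # Hedged sketch: OLS fit of leaderboard score on the number of elemental
    # methods per submission. Synthetic placeholder data, not contest data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_methods = rng.integers(1, 8, size=200)          # methods per submission
    scores = 200.0 + 85.3 * n_methods + rng.normal(0.0, 60.0, size=200)

    # Design matrix [methods, intercept]; least squares recovers the slope,
    # i.e., points gained per additional elemental method.
    X = np.column_stack([n_methods, np.ones_like(n_methods, dtype=float)])
    slope, intercept = np.linalg.lstsq(X, scores, rcond=None)[0]
    print(f"score ~ {intercept:.1f} + {slope:.1f} * n_methods")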

Brown, B., Chui, M., & Manyika, J. (2011). Are you ready for the era of ‘big data’? McKinsey Quarterly, 4, 24–35.

Such contests are one part of a decade-long trend toward solving science problems through large-scale mobilization of individuals by what the popular press refers to as ‘crowdsourcing’12.

Howe, J. (2008). Crowdsourcing. New York: Crown Books.

“We ran our contest on the TopCoder.com online programming competition website, a commercial platform that had the advantage of providing us with an existing community of solvers. Established in 2001, TopCoder currently has a community of over 400,000 software developers who compete regularly to solve programming challenges13. Our contest ran for two weeks and offered a $6,000 prize pool, with top-ranking players receiving cash prizes of up to $500 each week. Our challenge drew 733 participants, of whom 122 (17%) submitted software code. This group of submitters, drawn from 69 countries, were roughly half (44%) professionals, with the remainder being students at various levels. Most participants were between 18 and 44 years old. None were academic or industrial computational biologists, and only five described themselves as coming from either R&D or life sciences in any capacity.”

“Consistent with usual practices in algorithm and software development contests, participants were able to make multiple code submissions to enable testing of solutions and participant learning and improvement. Collectively, participants submitted 654 solutions, averaging to 5.4 submissions per participant. Participants reported spending an average of 22 h each developing solutions, for a total of 2,684 h of development time. Final submissions that received cash awards are available for download under an open source license (see Supplementary Notes).”
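As a quick consistency check on these figures: 654 solutions from 122 submitters works out to about 5.4 submissions each, and 122 submitters at an average of 22 h apiece gives the stated 122 × 22 = 2,684 total hours.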


3. Boudreau, K. J., & Lakhani, K. R. (2013). Using the crowd as an innovation partner. Harvard Business Review, 91(4), 60–69.

“Managers remain understandably cautious. Pushing problems out to a vast group of strangers seems risky and even unnatural, particularly to organizations built on internal innovation. How, for example, can a company protect its intellectual property? Isn’t integrating a crowdsourced solution into corporate operations an administrative nightmare? What about the costs? And how can you be sure you’ll get an appropriate solution?”

“These concerns are all reasonable, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. The main reason companies resist crowds is that managers don’t clearly understand what kinds of problems a crowd really can handle better and how to manage the process.”

“Having determined that you face a challenge your company cannot or should not solve on its own, you must figure out how to actually work with the crowd. At first glance, the landscape of possibilities may seem bewildering. But at a high level, crowdsourcing generally takes one of four distinct forms—contest, collaborative community, complementor, or labor market—each best suited to a specific kind of challenge. Let’s examine each one.”

“Today online platforms such as TopCoder, Kaggle, and InnoCentive provide crowd-contest services. They source and retain members, enable payment, and protect, clear, and transfer intellectual property worldwide.”

 

“A contest should be promoted in such a way—with prizes and opportunities to increase stature among one’s peers—that it appeals to sufficiently skilled participants and receives adequate attention from the crowd. The sponsor must devise and commit to a scoring system at the outset. In addition, explicit contractual terms and technical specifications (involving platform design) must be created to ensure the proper treatment of intellectual property.”

“Crowd Collaborative Communities
In June of 1998 IBM shocked the global software industry by announcing that it intended to abandon its internal development efforts on web server infrastructure and instead join forces with Apache, a nascent online community of webmasters and technologists. The Apache community was aggregating diverse inputs from its global membership to rapidly deliver a full-featured—and free—product that far outperformed any commercial offering. Two years later IBM announced a three-year, $1 billion initiative to support the Linux open-source operating system and to work with hundreds of open-source communities to jointly create a range of software products. In teaming up with a collaborative community, IBM recognized a twofold advantage: The Apache community was made up of customers who knew the software’s deficits and who had the skills to fix them. With so many collaborators at work, each individual was free to attack his or her particular problem with the software and not worry about the rest of the components. As individuals solved their problems, their solutions were integrated into the steadily improving software. IBM reasoned that the crowd was beating it at the software game, so it would do better to join forces and reap profits through complementary assets such as hardware and services.”

“To be sure, crowds aren’t always the best way to create complementary products. They make sense only when a great number and variety of complements is important. Otherwise, a few partners or even an internal organization will better serve the goal.”

“There are also advantages to assembling complementor crowds that are specific to a company’s own platform. Think of the enormous ecosystems around Microsoft, Facebook, and Apple, each of which operates on a model that stimulates adoption on both the complementor and customer sides to kick-start positive interactions and initiate growth. (How to get this started is a classic chicken-and-egg problem that has received much research attention in the past 20 years and goes beyond the scope of this article.) The strategies of those companies require considerable industry experience and support and depend on the particulars of the situation. They involve the design of the core product, setting prices for different sides of the platform, setting expectations, and creating a wider set of inducements, among other issues.”

“Kevin J. Boudreau is an assistant professor of strategy and entrepreneurship at London Business School and a research fellow at Harvard’s Institute for Quantitative Social Science. Karim R. Lakhani is the Lumry Family Associate Professor of Business Administration at Harvard Business School and the principal investigator of the Harvard-NASA Tournament Lab at the Institute for Quantitative Social Science.”

 

4. Challenge.gov Wins “Innovations in American Government” Award
Posted by Cristin Dorgelo on January 23, 2014 at 01:10 PM EDT
http://www.whitehouse.gov/blog/2014/01/23/challengegov-wins-innovations-american-government-award

“Since its launch in September 2010 by the General Services Administration (GSA), Challenge.gov has become a one-stop shop where entrepreneurs and citizen solvers can find public-sector prize competitions. The website has been used by nearly 60 Federal agencies to source solutions to over 300 incentive prizes and challenges and to engage more than 42,000 citizen solvers.”

URL (accessed 2014-05-05): http://www.whitehouse.gov/sites/default/files/microsites/ostp/competes_prizesreport_dec-2013.pdf


“Implementation of Federal Prize Authority: Fiscal Year 2012 Progress Report
A Report from the Office of Science and Technology Policy
In Response to the Requirements of the America COMPETES Reauthorization Act of 2010
December 2013”

“A 2009 McKinsey report found that philanthropic and private-sector investment in prizes increased significantly in recent years, including $250 million in new prize money between 2000 and 2007.8 Some of these incentive prizes included the GoldCorp Challenge9, the Ansari X Prize10, the Netflix Prize11, and the Heritage Health Prize Competition12 ”

“See e.g., McKinsey & Company, “And the Winner Is…”: Capturing the promise of philanthropic prizes, 2009, http://www.mckinseyonsociety.com/downloads/reports/Social-Innovation/And_the_winner_is.pdf”

“Pay only for success and establish an ambitious goal without having to predict which team or approach is most likely to succeed.

Reach beyond the “usual suspects” to increase the number of solvers tackling a problem and to identify novel approaches, without bearing high levels of risk.

Bring out-of-discipline perspectives to bear.

Increase cost-effectiveness to maximize the return on taxpayer dollars.”