Some readings on crowdsourcing for research and innovation

1. Guinan, E., Boudreau, K. J., & Lakhani, K. R. (2013). Experiments in open innovation at Harvard Medical School. MIT Sloan Management Review, 54(3), 45-52.

“But in February 2010, Drew Faust, president of Harvard University, sent an email invitation to all faculty, staff and students at the university (more than 40,000 individuals) encouraging them to participate in an “ideas challenge” that Harvard Medical School had launched to generate research topics in Type 1 diabetes. Eventually, the challenge was shared with more than 250,000 invitees, resulting in 150 research ideas and hypotheses. These were narrowed down to 12 winners, and multidisciplinary research teams were formed to submit proposals on them.”

“In May 2008, Harvard Catalyst received a five-year NIH grant of $117.5 million, plus $75 million from the university, the medical school and its affiliated academic health-care centers. These funds were designated to educate and train investigators, create necessary infrastructure and provide novel funding mechanisms for relevant scientific proposals. However, the funds did not provide a way to engage the diversity and depth of the whole Harvard community to participate in accelerating and “translating” findings from the scientist’s bench to the patient’s bedside, or vice versa. Could open-innovation concepts be applied within a large and elite science-based research organization to help meet that goal?”

“Albert Einstein captured the importance of this aspect of research: “The formulation of a problem is far more often essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advances in science.”6”

“Harvard Catalyst offered $30,000 in awards. Contestants were not required to transfer exclusive intellectual property rights to Harvard Catalyst. Rather, by making a submission, the contestant granted Harvard Catalyst a royalty-free, perpetual, non-exclusive license to the idea and the right to create research-funding proposals to foster experimentation.”

“In total, 779 people opened the link at InnoCentive’s website, and 163 individuals submitted 195 solutions. After duplicates and incomplete submissions were weeded out, a total of 150 submissions were deemed ready for evaluation. The submissions encompassed a broad range of therapeutic areas including immunology, nutrition, stem cell/tissue engineering, biological mechanisms, prevention, and patient self-management. Submitters represented 17 countries and every continent except Antarctica. About two-thirds came from the United States. Forty-one percent of submissions came from Harvard faculty, students or staff, and 52% of those had an affiliation with Harvard Medical School. Responders’ ages ranged from 18 to 69 years, with a mean age of 41.”

“Fostering Interdisciplinary Teams
After selecting the ideas, Harvard Catalyst set out to form multidisciplinary teams. While researchers tend to stay within their domains, Harvard Catalyst wanted to learn if scientists from other life-science disciplines and disease specialties could potentially convert their research hypotheses into responsive experimental proposals in the Type 1 diabetes arena.10 Harvard Catalyst reached out to Harvard researchers from other disciplines with associated knowledge and invited them to submit a proposal to address one of the selected questions.”

“The Leona Helmsley Trust put up $1 million in grant funding at Harvard to encourage scientists to create experiments based on these newly generated research questions.”

“In addition to normal advertising of the grant opportunity, Harvard Catalyst used a Harvard Medical School database to identify researchers whose record indicated that they might be particularly well suited to submit proposals. The Profiles system takes the PubMed-listed publications for all Harvard Medical School faculties and creates a database of expertise (keywords) based on the MeSH classification of their published papers. Dr. Griffin Weber, then the chief technology officer of Harvard Medical School and the creator of the Profiles system, assisted Harvard Catalyst in taking the coded MeSH categories for the winning proposals — now imbedded in the thematic areas — and matching them through a sophisticated algorithm to the keyword profiles of the faculty. The intention was to move beyond the established diabetes research community and discover researchers who had done work related to specific themes present in the new research hypotheses but not necessarily in diabetes.”

“The matching algorithm revealed the names of more than 1,000 scientists who potentially had the knowledge needed to create research proposals for these new hypotheses.”

“The outreach yielded 31 Harvard faculty-led teams vying for Helmsley Trust grants of $150,000, with the hope that sufficient progress in creating preliminary data would spark follow-on grants. These research proposals were evaluated by a panel of Harvard faculty, with expertise weighted toward Type 1 diabetes and immunology and unaffiliated with Harvard Catalyst administration. Seven grant winners were announced. Core to the mission of the openness program was that the algorithm for potentially contributory investigators had identified 23 of the 31 principal investigators making a submission and 14 of these 23 had no significant prior involvement in Type 1 diabetes research — a core element of the open-innovation experiment. Seven proposals were funded, five of which were led by principal investigators or co-principal investigators without a history of significant engagement in Type 1 diabetes research.”

“Somewhat unexpectedly, Harvard Catalyst discovered that while academic researchers tend to be very specialized and focused on extremely narrow fields of interest, explicit outreach to individuals with peripheral links to a knowledge domain can engage their intellectual passions. Harvard Catalyst uncovered a dormant demand for cross-disciplinary work, which many leaders within Harvard Catalyst doubted existed. However, as soon as bridges were built, individuals and teams started to cross over. The lesson for managers outside academic medicine is that there may be sufficient talent, knowledge and passion for high-impact breakthrough work currently inside their organizations — but trapped in functional or product silos. By creating the incentives and infrastructure that enable and encourage bridge crossing, managers can unleash this talent.”

“The Harvard Catalyst approach to introducing open innovation was to layer it directly on top of existing research and evaluation processes. Harvard Catalyst executives simply added an open dimension to all stages of the current innovation process. Thus, individuals already in the field did not feel they were being systematically excluded. The entire effort could be viewed as a traditional grant solicitation and evaluation process with the exception that all stages were designed so that more diverse actors could participate. This strategic layering of open dimensions on traditional processes positions open innovation as a tweak to currently accepted practice instead of a radical break with the past.”

2.  Lakhani, K. R., Boudreau, K. J., Loh, P. R., Backstrom, L., Baldwin, C., Lonstein, E., … & Guinan, E. C. (2013). Prize-based contests can provide solutions to computational biology problems. Nature biotechnology, 31(2), 108-111.

“To determine whether this approach could solve a real big-data biologic algorithm problem, we used a complex immunogenomics problem as the basis for a two-week online contest broadcast to participants outside academia and biomedical disciplines. Participants in our contest produced over 600 submissions containing 89 novel computational approaches to the problem. Thirty submissions exceeded the benchmark performance of the US National Institutes of Health’s MegaBLAST. The best achieved both greater accuracy and speed (1,000 times greater)”

“It has been projected that by 2018 there will be a shortage of approximately 200,000 data scientists and 1.5 million other individuals in the US economy with sufficient training and skills to conceptualize and manage big-data analyses6.”

 

“To investigate the specific technical approaches developed by contestants, we commissioned three independent computer science Ph.D. researchers to review all submissions and determine what techniques were implemented. Their analyses determined that ten distinct elemental methods (Table 1) were used in 89 combinations in the 654 submissions. As the number of elemental methods in a submission increased, so did its performance (Fig. 2 and Supplementary Methods), with leaderboard scores increasing by 85.3 points for each additional method employed (P < 0.01). Analysis of the benchmark algorithms showed that the methods numbered 2, 3, 5 and 8 were implemented in the MegaBLAST algorithm, and methods 2, 4 and 7 were implemented in the idAb code.”
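For readers who want to see the shape of the analysis in that excerpt, here is a small sketch of regressing leaderboard score on the number of elemental methods per submission. The data are synthetic placeholders, and the ordinary least-squares fit is only a stand-in for whatever specification the authors actually used.

```python
# Sketch of regressing leaderboard score on the count of elemental methods
# used in a submission. Data are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_methods = rng.integers(1, 8, size=200)                       # methods per submission
score = 400 + 85.0 * n_methods + rng.normal(0, 60, size=200)   # toy leaderboard scores

slope, intercept = np.polyfit(n_methods, score, deg=1)
print(f"Estimated points gained per additional method: {slope:.1f}")
```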

Brown, B., Chui, M. & Manyika, J. McKinsey Q. 4, 24–35 (2011).

Such contests are one part of a decade-long trend toward solving science problems through large-scale mobilization of individuals by what the popular press refers to as ‘crowdsourcing’12.

Howe, J. Crowdsourcing (Crown Books, New York; 2008).

“We ran our contest on the TopCoder.com online programming competition website, a commercial platform that had the advantage of providing us with an existing community of solvers. Established in 2001, TopCoder currently has a community of over 400,000 software developers who compete regularly to solve programming challenges13. Our contest ran for two weeks and offered a $6,000 prize pool, with top-ranking players receiving cash prizes of up to $500 each week. Our challenge drew 733 participants, of whom 122 (17%) submitted software code. This group of submitters, drawn from 69 countries, were roughly half (44%) professionals, with the remainder being students at various levels. Most participants were between 18 and 44 years old. None were academic or industrial computational biologists, and only five described themselves as coming from either R&D or life sciences in any capacity.”

“Consistent with usual practices in algorithm and software development contests, participants were able to make multiple code submissions to enable testing of solutions and participant learning and improvement. Collectively, participants submitted 654 solutions, averaging to 5.4 submissions per participant. Participants reported spending an average of 22 h each developing solutions, for a total of 2,684 h of development time. Final submissions that received cash awards are available for download under an open source license (see Supplementary Notes).”
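The headline figures in that excerpt are easy to check with a line or two of arithmetic:

```python
# Back-of-envelope check of the excerpt's figures.
submitters = 122
solutions = 654
hours_each = 22   # self-reported average per submitter

print(solutions / submitters)     # ~5.36, i.e., the reported ~5.4 submissions per participant
print(submitters * hours_each)    # 2,684 total development hours, as reported
```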

“Karim R Lakhani1,2,*, Kevin J Boudreau2,3,*, Po-Ru Loh4, Lars Backstrom5, Carliss Baldwin1, Eric Lonstein1, Mike Lydon5, Alan MacCormack1, Ramy A Arnaout6,7,* & Eva C Guinan7,8,*. 1Harvard Business School, Boston, Massachusetts, USA. 2Harvard-NASA Tournament Lab, Institute for Quantitative Social Science. 3London Business School, London, UK. 4Department of Mathematics and Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 5TopCoder.com, Glastonbury, Connecticut, USA. 6Department of Pathology and Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA. 7Harvard Medical School, Boston, Massachusetts, USA. 8Department of Radiation Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA. *These authors contributed equally. e-mail: eva_guinan@dfci.harvard.edu”

 

3. Boudreau, K. J., & Lakhani, K. R. (2013). Using the crowd as an innovation partner. Harvard business review, 91(4), 60-9.

“Managers remain understandably cautious. Pushing problems out to a vast group of strangers seems risky and even unnatural, particularly to organizations built on internal innovation. How, for example, can a company protect its intellectual property? Isn’t integrating a crowdsourced solution into corporate operations an administrative nightmare? What about the costs? And how can you be sure you’ll get an appropriate solution?”

“These concerns are all reasonable, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. The main reason companies resist crowds is that managers don’t clearly understand what kinds of problems a crowd really can handle better and how to manage the process.”

“Having determined that you face a challenge your company cannot or should not solve on its own, you must figure out how to actually work with the crowd. At first glance, the landscape of possibilities may seem bewildering. But at a high level, crowdsourcing generally takes one of four distinct forms—contest, collaborative community, complementor, or labor market—each best suited to a specific kind of challenge. Let’s examine each one.”

“Today online platforms such as TopCoder, Kaggle, and InnoCentive provide crowd-contest services. They source and retain members, enable payment, and protect, clear, and transfer intellectual property worldwide.”

 

“A contest should be promoted in such a way—with prizes and opportunities to increase stature among one’s peers—that it appeals to sufficiently skilled participants and receives adequate attention from the crowd. The sponsor must devise and commit to a scoring system at the outset. In addition, explicit contractual terms and technical specifications (involving platform design) must be created to ensure the proper treatment of intellectual property.”
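The advice to “devise and commit to a scoring system at the outset” becomes concrete if you imagine the sponsor publishing the scoring rule itself before the contest opens. A minimal sketch, assuming submissions are scored on accuracy against a hidden test set with a mild speed bonus; the names and weights are invented for illustration and do not describe any particular platform.

```python
# Minimal sketch of a pre-committed contest scoring rule: the sponsor publishes
# this function (and the evaluation-data policy) before the contest opens, so
# every entrant is judged by the same, unchanging yardstick.
from dataclasses import dataclass

@dataclass
class Submission:
    entrant: str
    accuracy: float          # fraction correct on the hidden test set, 0..1
    runtime_seconds: float

def score(sub: Submission, max_runtime: float = 600.0) -> float:
    """Accuracy dominates; a mild bonus rewards faster solutions."""
    speed_bonus = max(0.0, 1.0 - sub.runtime_seconds / max_runtime)
    return 1000.0 * sub.accuracy + 100.0 * speed_bonus

def leaderboard(subs):
    return sorted(subs, key=score, reverse=True)

entries = [Submission("team_a", 0.92, 120.0), Submission("team_b", 0.95, 550.0)]
for s in leaderboard(entries):
    print(s.entrant, round(score(s), 1))
```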

“Crowd Collaborative Communities
In June of 1998 IBM shocked the global software industry by announcing that it intended to abandon its internal development efforts on web server infrastructure and instead join forces with Apache, a nascent online community of webmasters and technologists. The Apache community was aggregating diverse inputs from its global membership to rapidly deliver a full-featured—and free—product that far outperformed any commercial offering. Two years later IBM announced a three-year, $1 billion initiative to support the Linux open-source operating system [and has since worked with] hundreds of open-source communities to jointly create a range of software products. In teaming up with a collaborative community, IBM recognized a twofold advantage: The Apache community was made up of customers who knew the software’s deficits and who had the skills to fix them. With so many collaborators at work, each individual was free to attack his or her particular problem with the software and not worry about the rest of the components. As individuals solved their problems, their solutions were integrated into the steadily improving software. IBM reasoned that the crowd was beating it at the software game, so it would do better to join forces and reap profits through complementary assets such as hardware and services.”

“To be sure, crowds aren’t always the best way to create complementary products. They make sense only when a great number and variety of complements is important. Otherwise, a few partners or even an internal organization will better serve the goal.”

“There are also advantages to assembling complementor crowds that are specific to a company’s own platform. Think of the enormous ecosystems around Microsoft, Facebook, and Apple, each of which operates on a model that stimulates adoption on both the complementor and customer sides to kick-start positive interactions and initiate growth. (How to get this started is a classic chicken-and-egg problem that has received much research attention in the past 20 years and goes beyond the scope of this article.) The strategies of those companies require considerable industry experience and support and depend on the particulars of the situation. They involve the design of the core product, setting prices for different sides of the platform, setting expectations, and creating a wider set of inducements, among other issues.”

“Kevin J. Boudreau is an assistant professor of strategy and entrepreneurship at London Business School and a research fellow at Harvard’s Institute for Quantitative Social Science. Karim R. Lakhani is the Lumry Family Associate Professor of Business Administration at Harvard Business School and the principal investigator of the Harvard-NASA Tournament Lab at the Institute for Quantitative Social Science.”

 

4. Challenge.gov Wins “Innovations in American Government” Award
Posted by Cristin Dorgelo on January 23, 2014 at 01:10 PM EDT
http://www.whitehouse.gov/blog/2014/01/23/challengegov-wins-innovations-american-government-award

“Since its launch in September 2010 by the General Services Administration (GSA), Challenge.gov has become a one-stop shop where entrepreneurs and citizen solvers can find public-sector prize competitions. The website has been used by nearly 60 Federal agencies to source solutions to over 300 incentive prizes and challenges and to engage more than 42,000 citizen solvers.”

URL (accessed 2014-05-05): http://www.whitehouse.gov/sites/default/files/microsites/ostp/competes_prizesreport_dec-2013.pdf


“Implementation of Federal Prize Authority: Fiscal Year 2012 Progress Report. A Report from the Office of Science and Technology Policy, in Response to the Requirements of the America COMPETES Reauthorization Act of 2010. December 2013”

“A 2009 McKinsey report found that philanthropic and private-sector investment in prizes increased significantly in recent years, including $250 million in new prize money between 2000 and 2007.8 Some of these incentive prizes included the GoldCorp Challenge9, the Ansari X Prize10, the Netflix Prize11, and the Heritage Health Prize Competition12 ”

“See e.g., McKinsey & Company, “And the Winner Is…”; Capturing the promise of philanthropic prizes, 2009, http://www.mckinseyonsociety.com/downloads/reports/Social-Innovation/And_the_winner_is.pdf ”

“• Pay only for success and establish an ambitious goal without having to predict which team or approach is most likely to succeed.
• Reach beyond the “usual suspects” to increase the number of solvers tackling a problem and to identify novel approaches, without bearing high levels of risk.
• Bring out-of-discipline perspectives to bear.
• Increase cost-effectiveness to maximize the return on taxpayer dollars.”

 

Some articles relevant to building bio-inspired design cognitive assistants

I just read the three articles by Prof. Ashok Goel (Georgia Tech):

Goel, A. K., Vattam, S., Wiltgen, B., & Helms, M. (2012). Cognitive, collaborative, conceptual and creative—four characteristics of the next generation of knowledge-based CAD systems: a study in biologically inspired design. Computer-Aided Design, 44(10), 879-900.

As I read this article, the first thing I thought of was that I needed to add R1/XCON and intelligent configurators and CAD systems to the list of instances in the brief history of cognitive systems.

Second, the blending of next generation computer-aided design and cognitive assistants made me think of Tony Stark’s computer assistant (in the “Iron Man” science fiction movie) and this implementation by Elon Musk: http://www.saratechinc.com/future-of-design/

Third, this article made me think more about the question: what is the difference between a “good tool” and a cognitive assistant? CAD systems may be “good tools,” but if the user – or, better, multiple users – can talk and gesture with the system, with the system understanding the input and changing what is displayed; if the system can learn by being shown examples or asked to refer to relevant examples in the literature; and if the system can provide levels of confidence in proposed solutions (see the discussion thread “A Very Brief History of Cognitive Assistants”) – then we have moved from a “good tool” to something more clearly recognizable as a cognitive assistant for engineers working on bio-inspired designs.

Next, I liked this section in the paper….

“Finally, the fourth C is creativity. It has been said many times in the design literature that design can be routine, innovative, or creative [108], even though these categories often are imprecise. Brown and Chandrasekaran [6], for example, suggested that (1) in routine design, both the basic structure of the desired system and the plans for selecting the parametric values of each component were known, (2) in innovative design, only the structure of the system was known and the plans for selecting component parameter values were unknown, and (3) in creative design, the structure of the design itself was unknown. In our own earlier work on case-based design [31,32], we have proposed that (1) in routine design, the modifications needed to adapt a known design into the desired design are limited to values of parameters of components in the design, (2) in innovative design, the needed modifications pertain to the components of the design, and (3) in creative design, the modifications entail changes to the topology of the design itself.”

I have talked about incremental, radical and super-radical innovation as follows: (1) incremental innovation changes the numeric values of existing measures – for example, 50 miles per gallon instead of 45 miles per gallon; (2) radical innovation changes the combinations of units of measure used to understand the innovation – for example, bits/joule for mobile phone communications; and (3) super-radical innovation changes the units of measure themselves – for example, when eBay developed a measure of reputation for people using its system to sell items to others. As instances of types of systems are designed or evolve over time, we can see examples of incremental, radical, and super-radical innovation.
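One way to make the parallel between the two framings explicit is to treat both as three levels of change to a design representation: values of parameters, the set of components, and the topology connecting them. The sketch below does exactly that; the class names, the mapping between the two vocabularies, and the detection rules are all my own illustrative assumptions.

```python
# Sketch of the three levels of design change discussed above. The enum values
# map the case-based-design categories to the incremental/radical/super-radical
# framing; both the mapping and the detection rules are illustrative only.
from enum import Enum

class ChangeLevel(Enum):
    PARAMETER = "routine / incremental"      # new values for existing parameters
    COMPONENT = "innovative / radical"       # components swapped or added
    TOPOLOGY = "creative / super-radical"    # connections between components change

def classify_change(old_design: dict, new_design: dict) -> ChangeLevel:
    """Designs look like {'components': {name: {param: value}}, 'connections': set}."""
    if old_design["connections"] != new_design["connections"]:
        return ChangeLevel.TOPOLOGY
    if old_design["components"].keys() != new_design["components"].keys():
        return ChangeLevel.COMPONENT
    return ChangeLevel.PARAMETER

car_v1 = {"components": {"engine": {"mpg": 45}}, "connections": {("engine", "wheels")}}
car_v2 = {"components": {"engine": {"mpg": 50}}, "connections": {("engine", "wheels")}}
print(classify_change(car_v1, car_v2))   # ChangeLevel.PARAMETER (incremental: 45 -> 50 mpg)
```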

Finally, I like the discussion of SBF (Structure-Behavior-Function) and the DANE system very much. I think cognitive assistants in some ways mirror textbooks: they must include concepts, relationships, case studies, and problem-solution/question-answer pairs. So a good cognitive assistant should be able to operate in a mode that helps users earn certifications for demonstrated competencies and skills. A good cognitive assistant should also be able to act as a personal coach or mentor for learners trying to become more competent or to master a domain of study and practice.
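For readers new to SBF, a minimal sketch of the kind of Structure-Behavior-Function record a system like DANE organizes may help; the field names below are shorthand of my own, not DANE's actual schema.

```python
# Minimal sketch of a Structure-Behavior-Function (SBF) record, the kind of
# design representation DANE organizes. Field names are shorthand, not DANE's schema.
from dataclasses import dataclass
from typing import List

@dataclass
class SBFModel:
    name: str
    structure: List[str]    # physical parts and their configuration
    behavior: List[str]     # causal steps linking structure to function
    function: str           # what the system is for

lotus_leaf = SBFModel(
    name="Lotus leaf self-cleaning surface",
    structure=["micro-scale bumps", "nano-scale wax crystals"],
    behavior=["water beads up on the rough hydrophobic surface",
              "rolling droplets pick up dirt particles"],
    function="keep the surface clean without external energy",
)
print(lotus_leaf.function)
```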

Vattam, S. S., & Goel, A. K. (2013). Biological Solutions for Engineering Problems: A Study in Cross-Domain Textual Case-Based Reasoning. In S. J. Delany & S. Ontañón (Eds.), ICCBR 2013, LNAI 7969 (pp. 343–357). Springer-Verlag Berlin Heidelberg.

Again, this one made me think about the difference between a “good tool” and a cognitive assistant. Perhaps search engines need to be added to the brief history of cognitive assistants. Textual case-based reasoning systems face the challenges of findability, recognizability, and understandability. The Biologue interactive system was an interesting exploration of some of these challenges in web-based retrieval of documents.
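Findability in cross-domain textual case-based reasoning is, at bottom, a retrieval problem: an engineering problem statement has to surface biology papers written in a completely different vocabulary. The sketch below shows only the plain lexical TF-IDF baseline that such systems have to improve on; the corpus and query are toy stand-ins, and the weak overlap between “hull friction” and “shark denticles” is exactly the cross-domain gap at issue.

```python
# Sketch of the lexical-retrieval baseline for cross-domain textual CBR:
# TF-IDF cosine similarity between an engineering problem statement and
# biology abstracts. Toy texts only; not the Biologue system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

biology_abstracts = [
    "Shark skin denticles reduce drag by controlling boundary layer turbulence.",
    "Lotus leaves stay clean because wax nanocrystals make the surface superhydrophobic.",
    "Termite mounds maintain stable internal temperature through passive ventilation.",
]
query = "How can we reduce friction on the hull of a ship moving through water?"

vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(biology_abstracts + [query])
query_vec = vectors[len(biology_abstracts)]
scores = cosine_similarity(query_vec, vectors[:len(biology_abstracts)]).ravel()

for abstract, s in sorted(zip(biology_abstracts, scores), key=lambda p: -p[1]):
    print(f"{s:.2f}  {abstract[:60]}")
```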

Goel, A., Zhang, G., Wiltgen, B., Zhang, Y., Vattam, S., & Yen, J. (2014). The Design Study Library: Compiling, Analyzing and Using Biologically Inspired Design Case Studies. In J. S. Gero (Ed.), Design Computing and Cognition DCC’14. Springer. pp. xx-yy.

Regarding the Design Study Library (DSL), an interactive system that provides access to a digital library of case studies of biologically inspired design: I liked the two-level design of projects and documents. T-charts are also a nice innovation for comparing the problem domain and the biological solution domain. An impressive set of case studies for a textbook/cognitive assistant is also presented. Prof. Goel’s classes are a rich source of data on learning, too!
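Here is a minimal sketch of that two-level organization (projects containing documents) together with a T-chart record; the field names are guesses at the spirit of DSL, not its actual data model.

```python
# Sketch of a two-level organization (projects containing documents) and a
# T-chart pairing problem-domain and biological-solution-domain observations.
# Field names are illustrative guesses, not DSL's actual data model.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Document:
    title: str
    kind: str                      # e.g., "problem statement", "final report"

@dataclass
class TChart:
    rows: List[Tuple[str, str]]    # (engineering problem side, biological solution side)

@dataclass
class Project:
    name: str
    documents: List[Document] = field(default_factory=list)
    tchart: TChart = field(default_factory=lambda: TChart(rows=[]))

p = Project("Self-cleaning building facade")
p.documents.append(Document("Problem definition", "problem statement"))
p.tchart.rows.append(("Dirt accumulates on painted surfaces",
                      "Lotus leaf sheds dirt via superhydrophobic microstructure"))
print(p.name, len(p.documents), len(p.tchart.rows))
```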

After reading the papers, I decided to do some searches on O*NET Online (the Occupational Information Network). I searched for these keywords and found the number of occupations interesting:

Manage – appears in 715 occupation descriptions
Communicate – 428 occupations
Design – 415 occupations
Engineer – 339 occupations
Collaborate – 253 occupations

As we think about building cognitive assistants for all occupations, it will be important to have cognitive systems components related to design. Building cognitive assistants from textbooks that define concepts and relationships, enumerate important case studies, and assemble problem-solution and question-answer pairs, both correct and incorrect, will be helpful. Prof. Goel’s articles, courses and systems are a gold mine of information. Also, O*NET is a good source of information about the tasks that professionals in an occupation must be able to perform – see http://www.onetonline.org/

A Very Brief History of Cognitive Assistants

Hopefully, someone will start a better discussion thread on the history of cognitive assistants, but here is a starting point or baseline discussion. This very short history will examine instances of cognitive assistants; by instances, we simply mean named systems, projects, or challenges. However, first we must drive a stake in the ground concerning the question, “What is a cognitive assistant?” For example, what capabilities help us distinguish a cognitive assistant from just a “good tool”?

How should we define cognitive assistant?

We will not try to summarize the extensive literature on cognitive assistants and their evaluation measures in this short discussion (see Steinfeld et al. (2007) for more). For our purposes, the difference between a “good tool” and a cognitive assistant can be a fine line. We will use three cognitive capabilities to help distinguish “good tools” from cognitive assistants: language, learning, and levels (confidence levels in responses). Language refers to natural language communication in words and sentences, but ultimately includes more – gestures, gaze, diagrams, and all the other ways people communicate with each other. Learning refers to the ability to learn from positive and negative examples of questions and responses, but ultimately includes more – direct user feedback and teaching dialogue, analogical reasoning, and so on. Levels refers to the ability to provide estimates of confidence in multiple possible responses, but ultimately includes more, such as explanation, debating, and argumentation capabilities. We might want to add a fourth capability, “limbs,” if we want to talk about embodied cognitive assistants – robots.
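The three L’s translate naturally into an interface contract, which is one way to make the “good tool” versus cognitive assistant distinction operational. The sketch below is illustrative only; the class and method names are invented and do not describe any real product’s API.

```python
# Sketch of the "three L's" as an interface: language in, ranked answers with
# confidence levels out, plus a feedback channel for learning. All names here
# are invented for illustration; no real assistant exposes exactly this API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class RankedAnswer:
    text: str
    confidence: float   # "levels": 0..1 confidence in this candidate response

class CognitiveAssistant(ABC):
    @abstractmethod
    def ask(self, utterance: str) -> List[RankedAnswer]:
        """Language: accept a natural-language request, return ranked candidates."""

    @abstractmethod
    def give_feedback(self, utterance: str, chosen: RankedAnswer, correct: bool) -> None:
        """Learning: incorporate positive/negative examples from the user."""
```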

Paralleling a short four-stage history of Artificial Intelligence, we will examine instances of potential cognitive assistants from four eras: the formative era, the micro-worlds era, the expert systems era, and the real-world era.

Formative era

1945 MEMEX (Bush)
1962 AUGMENT (Engelbart)

These two early systems provided much of the vision for “cognitive assistants for knowledge workers” by using technology to augment human intellect, especially with respect to symbolic processing of text and networks of interrelated concepts.

Micro-worlds era

1955 Logic Theorist (Newell and Simon)
1956 Checker Player (Samuel)
1966 Eliza (Weizenbaum)

There are of course many more examples of pioneering systems from this era, but these three provide a nice illustration of three categories. The Logic Theorist can be seen as being on the path leading to systems such as WolframAlpha that deal with computable knowledge. Samuel’s checker-playing program was the forerunner of many game-playing programs, culminating in Deep Blue, which fulfilled an early AI prophecy by defeating the world champion in chess, and more recently Watson Jeopardy!. All of these game-playing programs can be used in a performance-support mode to aid people learning and playing games. Finally, Eliza illustrates that simple tricks and deception can make for entertaining aspects of cognitive assistants – linking to somewhat surface-level solutions or deception-oriented versions of the Turing Test grand challenge.

Expert systems era

1965-1987 DENDRAL
1974-1984 MYCIN
1987 Cognitive Tutors (Anderson)
1987 Knowledge Navigator System

The early expert systems led corporations to envision cognitive assistants for professionals that would improve the productivity and creativity of knowledge workers across a wide range of jobs. Cognitive Tutors are a proxy for all the intelligent tutoring, learning support, and performance support systems of this era – too numerous to mention. The Knowledge Navigator video updated MEMEX and AUGMENT in many ways, with natural language dialogue and multimedia in an envisioned executive assistant and collaborative research assistant.

Real world era

2009 WolframAlpha
2011 Watson Jeopardy!
2011 SIRI
2014 Watson Solutions

WolframAlpha: Is WolframAlpha a “good tool” or a cognitive assistant? WolframAlpha provides a natural language interface and computational engine for a wide range of natural language and mathematical queries – pushing the limits of computable knowledge (WolframAlpha 2014). It is a “good tool” evolving toward becoming a cognitive assistant, once users and others can help it learn, and once it provides levels of confidence in its answers. Currently, WolframAlpha tries to provide one “right” answer, or simply gives up and replies that no answer was found, rather than offering several possible, ranked answers.

Watson Jeopardy!: Like all game-playing systems, Watson Jeopardy! has a use case in which a person could use it to compete against others to win a game. Watson Jeopardy! clearly demonstrated some language, learning, and levels capabilities. If it were packaged as an app, or in some other way used to help people play and win Jeopardy! games, then it could be considered a cognitive assistant.

SIRI: SIRI is clearly marketed as an intelligent or cognitive assistant. It exhibits language capabilities, but not so much learning or levels of confidence in alternative answers; like a search engine, though, it will often return alternatives if asked for a list of possibilities. The history of SIRI traces back to CALO and PAL, DARPA-funded projects at SRI International (Bosker 2013).

Watson Solutions: Engagement Advisor, Discovery Advisor, Watson Chef, and other Watson Solutions target specific market needs where human expertise needs to be augmented or scaled to improve productivity and quality of specific occupational tasks.   These systems use language, learn, and provide confidence levels for alternative responses.   Still, building these systems is complex and difficult.

Concluding Remarks

This discussion thread aims to collect comments about instances of cognitive assistants throughout history.  Beyond a workable definition of cognitive assistants in terms of capabilities, this discussion thread can ultimately contribute to the discussion of streamlined development and evaluation methodologies for cognitive assistants for all professions.  The Cognitive Systems Institute is motivated by the vision of augmenting and scaling human expertise – providing everyone eventually with an executive assistant, personal coach, and mentor for any and all occupations (Spohrer 2014).

References

Bosker B (2013) SIRI RISING: The Inside Story Of Siri’s Origins — And Why She Could Overshadow The iPhone
January 22, 2013 URL: http://www.huffingtonpost.com/2013/01/22/siri-do-engine-apple-iphone_n_2499165.html

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.

Colligan B (2011) How the Knowledge Navigator video came to be.
November 20, 2011 URL: http://www.dubberly.com/articles/how-the-knowledge-navigator-video-came-about.html

Engelbart, D. C. (1995). Toward augmenting the human intellect and boosting our collective IQ. Communications of the ACM, 38(8), 30-32.

Newell, A., & Simon, H. A. (1956). The logic theory machine: A complex information processing system. IRE Transactions on Information Theory, 2(3), 61-79.

Spohrer, J. (2014) Cognitive Systems: Vision and Directions.  NUS Cognitive Colloquium.
September 12, 2014 URL: http://www.slideshare.net/spohrer/cognitive-20140912-v3

Steinfeld, A., Quinones, P. A., Zimmerman, J., Bennett, S. R., & Siewiorek, D. (2007, August). Survey measures for evaluation of cognitive assistants. In Proceedings of the 2007 Workshop on Performance Metrics for Intelligent Systems (pp. 175-179). ACM.

WolframAlpha (2014) Timeline of Systematic Data and the Development of Computable Knowledge.
September 12, 2014 URL: http://www.wolframalpha.com/docs/timeline/computable-knowledge-history-6.html

A short talk

Here is a short talk I give at universities, university incubators, and conferences that bring such groups together.

I lead IBM’s Global University Programs and the Cognitive Systems Institute, and I am also active in ISSIP.org and the service science/service innovation areas.

IBM priorities are CCAMSS = Cognitive, Cloud, Analytics, Mobile, Social, Secure – and the service innovations that tie them all together and make them work for customers.

Regarding startups, our primary interest is in (1) companies built on our platform (IBM’s platform), and (2) companies that sell to the Forbes Global 2000 (IBM’s primary customers).

In general, IBM is not interested in licensing IP from universities – we create nearly 7,000 patents a year in CCAMSS and related areas, and we have been the #1 company in the world for patent creation for 21 years, which amounts to roughly a $1B-a-year licensing business for us. We do have tools to help universities license their patents to others, though – see the IBM SIIP tool, now Watson Discovery Advisor.

IBM has acquired over 140 companies in the last 14 years – about one a month – with an average age of 15 years at acquisition; about 66% of them started in a university ecosystem (e.g., SPSS), and average revenue at acquisition is on the order of $100M per year.

IBM is very interested in helping universities create more successful startups that can go from zero to a billion in revenue. We have programs to help startups grow that are built on our platform and sell to our customers. IBM also has programs that help startups sell to big companies – Supplier Connection.

We see one of the largest opportunities for startups in developing enterprise mobile apps, including cognitive assistants for all occupations as part of smart service systems.

To accelerate collaborations with IBM,  a university might ask these maturity of relationship questions:
(1) does IBM (or IBM customers) recruit students from the university?
(2) do the faculty teach with IBM tools and platform – freely available through the academic initiative?
(3) does the university create startups based on IBM platform?
(4) does the university participate in Smart Camps & Global Entrepreneurship program?
(5) do the startups as they mature make use of the IBM Supplier Connect or other platforms?
(6) does the university and broader ecosystem use any IBM solutions from HPC to asset management?
(7) are there opportunities to pursue collaborative research projects together?
(8) is there a regional economic development play (e.g.,  NY state with RPI, OH state with OSU, LA state with LSU, etc.)?
(9) does the university have a full IBM team engaged – PEP, Client Exec, Academic Initiatives Lead, IBMers on Campus, etc.?

The best relationships have a full IBM team engaged in regional economic development with universities at the center.

Human-Side of Service Engineering, Las Vegas, NV USA July 26-30, 2015

AHFE HSSE-2015 is less than a year away.

This is a multi-conference with over 2000 participants, with human-factors as an overall theme, and the human-side of service engineering as one of the conferences.

I hope to organize the following session as part of AHFE HSSE-2015, so let me know if you would like to contribute a presentation or a paper (send email to spohrer@us.ibm.com).

 

Title: Smart Service Systems: Augmenting and scaling human expertise with cognitive assistants

Abstract: Cognitive assistants are beginning to appear for more and more occupations – from doctors to chefs to biochemists – boosting the creativity and productivity of workers. Given this important trend, a better understanding of the role of cognitive assistants in the design of smart service systems will be needed. The speakers in this session will explore this trend and topic from multiple perspectives – academic, industry, government, foundation, and professional association – as well as the transformation of professions and industries.

References:

Bassett, J. 2014. Memorial Sloan Kettering Trains IBM Watson to Help Doctors Make Better Cancer Treatment Choices.  April 11, 2014.
URL http://www.mskcc.org/blog/msk-trains-ibm-watson-help-doctors-make-better-treatment-choices

Bilow, R. 2014. How IBM’s Chef Watson Actually Works. Bon Appetit. June 30, 2014.
URL: http://www.bonappetit.com/entertaining-style/trends-news/article/how-ibm-chef-watson-works

Simonite, T. 2014. Software Mines Science Papers to Make New Discoveries. MIT Technology Review. November 25, 2014.
URL: http://m.technologyreview.com/news/520461/software-mines-science-papers-to-make-new-discoveries/

CFP: IT-Enabled Business Innovation @ IEEE IT Professional – September 1, 2014

http://www.computer.org/portal/web/computingnow/itcfp2

  • IT-Enabled Business Innovation
  • Submission deadline: 1 Sept. 2014
  • Publication: March/April 2015

IT is a key enabler of business innovation. The impact on businesses in every industry has never been greater. IT is a key source of innovations that drive growth. Indeed, it’s rare to find a product or service that’s not touched by, or enabled by, IT in some manner.

This special issue of IT Professional seeks to provide readers with an overview of the current issues and practices as well as a look into the future as IT professionals and technologies become indispensable as key enablers of product and service development and the creation of new markets. We seek articles from industry, business, academia, and government.

Topics of interest include the following:

  • IT as an innovation platform
  • IT as a driver of business transformation
  • IT innovation best practices
  • Innovative IT-enabled business models
  • IT as an innovation accelerator
  • IT and open innovation
  • IT-enabled service innovation
  • Radical and incremental innovation for digital services
  • IT-enabled business strategy
  • Leveraging big data & analytics in the innovation process
  • The role of IT in facilitating value co-creation and co-innovation
  • Defining new markets with IT innovation
  • Getting IT ready to innovate
  • IT professionals as business strategists
  • From IT projects to business strategy
  • Innovation of digital services
  • Emerging social technologies
  • The next wave of mobility technology
  • The future of wearable IT
  • Ideation and the wisdom of crowds
  • IT and the autonomous future
  • Internet of things
  • Medical IT innovation
  • Servitization with IT

Submissions

Feature articles should be no longer than 4,200 words with no more than 20 references (with tables and figures counting as 300 words). Illustrations are welcome. For author guidelines, including sample articles, please see: www.computer.org/portal/web/peerreviewmagazines/acitpro

Submit your article at https://mc.manuscriptcentral.com/itpro-cs

Questions?

For more information, please contact the Guest Editors:

Service Science and Big Data Analytics

Companies often ask about IBM’s efforts in the area of Service Science and Big Data Analytics, so here are a few useful pointers:

 

Service Science
IBM works with over 500 universities worldwide on service science related courses and programs.
An overview of service science was created with Cambridge University in 2008, and can be downloaded here:
http://www.ifm.eng.cam.ac.uk/resources/service/succeeding-through-service-innovation/
Companies interested in working with IBM (and other companies, universities, government agencies) are encouraged to join ISSIP.org – the International Society of Service Innovation Professionals.  Contact Yassi Moghaddam, Executive Director (yassi@issip.org).
Additional information is available here:
https://www-304.ibm.com/connections/wikis/home?lang=en-us#!/wiki/IBM%20Global%20University%20Programs
http://researcher.watson.ibm.com/researcher/view_group.php?id=1230

Big Data Analytics
IBM works with over 1000 universities worldwide on big data analytics related courses and programs.
An overview of big data analytics applied to enterprise operations can be found in this recent book:
http://www.amazon.com/Analytics-Across-Enterprise-Realizes-Business/dp/0133833038
Companies interested in working with IBM (and other companies, universities, and government agencies) are encouraged to contact the IBM Research – Almaden Accelerated Discovery Lab. IBM is also active in INFORMS, the Operations Research and Management Sciences professional association.
Additional information is available here and here:
http://www-304.ibm.com/ibm/university/academic/pub/page/ban_predictive_analysis
http://researcher.watson.ibm.com/researcher/view_group.php?id=144

Cognitive Systems Institute

The Cognitive Systems Institute, a new set of IBM university programs run in conjunction with IBM Research and the Watson Business Unit, will focus faculty collaborators on building and evaluating cognitive assistants for every profession. Artificial cognitive systems, or cognitive systems for short, exhibit capabilities and/or perform tasks deemed intelligent by natural cognitive systems, such as people. Professional cognitive assistants are cognitive systems designed to boost the productivity and creativity of professionals. Cognitive systems researchers belong to a special profession devoted to improving the building and evaluation of cognitive systems, working on teams with other professionals such as computer and information research scientists, human factors engineers and ergonomists, sociologists, operations research analysts, mathematicians, statisticians, industrial engineers, and others.

The Cognitive Systems Institute will focus on professional cognitive assistants that exhibit the three L’s – language, learning, and levels. Professional cognitive assistants should interact via natural language, learn by ingesting documents, and make recommendations with confidence levels. For example, the IBM Watson Jeopardy!-winning cognitive system answered natural language questions, ingested Wikipedia and other sources to learn, and provided confidence levels with its answers. The IBM Watson group is working on cognitive systems to help doctors, financial planners, researchers, and even chefs. IBM Research is also working on a cognitive system that will spar with debaters and politicians to help boost their performance.

The new Cognitive Systems Institute programs will be launched in late August and are designed to (1) help prepare faculty to set up Watson/Cognitive-aligned courses, with the goal of enabling student teams to develop cognitive apps as part of the Watson Ecosystem; (2) help prepare faculty and their top graduate students to submit aligned collaborative research proposals to funding agencies, with the goal of developing cognitive systems that boost the productivity and creativity of specific types of professionals and professional teams (potentially a key part of what NSF calls “smart service systems”); and (3) help create linkages between faculty and IBM Researchers to define cognitive system grand challenges with clear, measurable business and societal impact that might be achievable in the next 3-5 years, with the goal of further advancing the field of cognitive systems research and increasing aligned national research investments. As an example of a grand challenge, IBM’s Watson Jeopardy! system (2011) required close collaboration with seven universities to develop, and it had a very clear, measurable set of performance metrics that focused everyone across organizations and helped make decision-making easier. For this last item, we are also (4) exploring academic interest in having IBM Researcher(s)-in-Residence at their universities, with the goal of accelerating collaborative research and achieving measurable grand challenge objectives.

Please let me know which of items 1 – 4 might be of most interest, or if your interests are in some other direction, and we can help guide you to the right IBMers to follow-up (for example, some universities already have data sets ready to be ingested, and investors interested in developing specific applications, so they are exploring the Watson Developer Cloud for Enterprise, as a path to get on site training and developer licenses more quickly).  Building cognitive systems to boost the performance of professionals, including research and teaching faculty at universities, is likely to be an important application area, and will have associated research grand challenges.

Jim Spohrer (spohrer@us.ibm.com)

Time to re-read “As We May Think” and “Augmenting Human Intellect”

It is time to re-read  Vannevar Bush’s “As We May Think” and Douglas Engelbart’s “Augmenting Human Intellect.”

Bush (1945) wrote: There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear.

Engelbart (1962) wrote: By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.  Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble.

What advances are on the verge of reshaping how we may think and augmenting our intellect?  How might these advances contribute to a Moore’s Law for service science and smart service systems?  What are the related challenges and opportunities to this vision?

Smart phones are an everyday reality (not quite as envisioned in the 1930s, but close enough)…

[Embedded tweet: a 1930s futurist’s vision of the smart phone]

Regarding how we may think and augmenting human intellect, my colleagues and I are working on a virtual community called the Cognitive Systems Institute (we plan to launch the next revision in September 2014), with one important goal being the creation of points of view (POVs) to boost government and venture funding for cognitive systems research and startup development. “Cogs” (cognitive systems/cognitive assistants) for boosting regional economic development will be appearing at an accelerating pace. Gartner predicts that 10% of all computers will be learning by 2017. Many industries and professions will be disrupted, including higher education.

The gist of the vision for the Cognitive Systems Institute is to have university researchers building “Cogs” for every profession, for every region and in every language, with student teams launching startups using the Watson Developer Cloud. IBM’s BlueMix on SoftLayer is the beginning of the API economy leading to Cognition as a Service, which provides access to much cognitive componentry, including Natural Language Processing on OpenPower and pattern recognition on SyNAPSE. “Cogs” become our “cognitive bulldozers in the era of big data and the Internet of Things” because “Cogs” know individuals (you) and know professions (your job), boosting productivity and creativity (how to measure productivity and creativity for many professions is a challenge as well – I recommend this book). “Cogs” learn, use language to interact more naturally, and have levels of confidence in what they know and do not know – learning, language, and levels. The Forbes Global 2000 companies generate nearly one-third of world GDP in just 2,000 publicly traded companies’ revenue, and together they have about 100M employees. If there were a “Cog” to help each of their employees be twice as productive and creative, you would move the needle on the GDP of the planet. People enjoy being more productive and creative, so quality of life might also be improved, if done right. “Cogs” may well be a key part of a Moore’s Law for smart service systems.
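The back-of-envelope logic behind that last claim can be made explicit. In the sketch below, the world-GDP figure is an assumed round number for the period, the one-third share and the 100M employees come from the paragraph above, and the productivity uplift is the hypothetical, not a forecast.

```python
# Back-of-envelope version of the claim above. world_gdp_usd is an assumed round
# figure (~$75T circa 2014); the 1/3 share and 100M employees come from the text;
# the productivity lift is a hypothetical, not a prediction.
world_gdp_usd = 75e12
global_2000_share = 1 / 3
employees = 100e6

revenue = world_gdp_usd * global_2000_share
revenue_per_employee = revenue / employees
print(f"~${revenue_per_employee/1e3:,.0f}K revenue per employee")          # ~$250K

# Even a 10% productivity lift (far short of 2x) on that base is large:
print(f"~${revenue * 0.10 / 1e12:.1f}T of additional output at +10%")
```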

For those interested in helping, after re-reading Bush (1945) and Engelbart (1962), a practical next step would be to read this:
https://www.ibmdw.net/watson/docs/5-steps-powered-watson-application/

IBM has developed enabling technologies and proof-points, and is investing billions to realize this vision imagined by Bush and Engelbart. Aligning government funding and venture funding will enable university faculty researchers and student entrepreneurs to play a major role in making this vision a reality in the coming decade.

Competing for Collaborators

Competing for collaborators is the new normal in a highly interconnected, innovation-driven global economy.

Especially, when it comes to winning the hearts and minds of faculty and students to build startups on industry platforms.

Platforms (sometimes referred to as solution stacks, or HW/SW stacks) include the Apple iPhone and Google Android. IBM has platforms as well – including the Watson, Big Data Analytics, and Smarter Planet platforms. In the case of Watson, IBM is eager to encourage university startups to compete to build “Cogs” (cognitive assistants with question-answering abilities) on the Watson Developer Cloud and to join the Watson Ecosystem (see http://www.ibm.com/watson, as well as https://www.ibmdw.net/watson/docs/5-steps-powered-watson-application/).

Startups are key to regional economic development, and industry platforms can provide both a starting point and foundation for startups.

Industry is looking for top academic brand partners whose courses reach millions of entrepreneurially minded faculty and students globally with research-based curricula. Industry would like one or two lectures in those courses to be geared toward teaching local/regional faculty and student teams about industry platforms: how to be certified as a developer on those platforms, how to develop business plans for startups that build on those platforms, and so on.

The easy integration of industry content with the globally available courses of top academic brand partners is what is key.

The KPI from an industry perspective is revenue and profit growth driven by successful startups reaching customers with valuable new offerings (in many cases service innovations – hence the connection to ISSIP, the International Society of Service Innovation Professionals). The sustainable and viable business model is based on a small percentage of profits going toward the cost of including the industry content in the global courses and helping the newly minted startups be successful.

Competing for collaborators is the new normal, and the easy integration of industry content into globally available courses is key.