Join discussions in order to build understanding of concepts in service science. Here is our curriculum guide.
Follow Jim (@JimSpohrer) on Twitter
About this site & registering.
The 10th Annual Meeting of the
Service Management and Science Forum
June 11-13, 2015
Co-creating the Customer Service Experience with High Tech and High Touch
The Service Management and Science Forum is a truly transdisciplinary meeting involving academics and practitioners from all disciplines and organizations that focus on service delivery processes and the service systems that support them. The conference has attracted a number of established researchers across operations, marketing, information technology, design, engineering, and human resource management from domestic and international higher education institutions and businesses.
In today’s highly competitive environment, there is a growing emphasis on providing customers with a truly memorable experience as a way to increase both customer satisfaction and long-term customer loyalty. The customer service experience that results from interaction with the service provider requires a combination of high tech and high touch; the right mix depends on the type of service being provided, and is therefore generally heterogeneous in nature. For instance, the experience at a Disneyland theme park is quite different from the experience at an Apple or Microsoft store. Technology and service design have pushed customer experience management toward a new era.
Information for Contributors:
Individuals from academia, business and government are invited to submit refereed research papers, non-refereed research abstracts, and proposals for workshops, panels, and symposia. All submissions should have a clear focus on enhancing the customer’s experience and are encouraged to be transdisciplinary in nature; that is, they should involve more than a single traditional discipline.
The submission deadline for refereed research papers is February 15, 2015. The submission deadline for non-refereed research abstracts and proposals is March 15, 2015.
Additional details about the 2015 Forum will be forthcoming. In the interim, please mark the dates on your calendar, and for more information contact:
David Xin Ding
University of Houston
Houston, TX 77004
Mark M. Davis
Waltham, MA 02154
The Cognitive Systems Institute is a virtual institute to support global university, government, industry, and foundation collaborations in the area of next generation cognitive systems. The vision is to augment and scale human expertise with cognitive assistants for all occupations in smart service systems.
A key question: How have researchers gone about augmenting themselves and their teams with powerful tools?
This moves up the abstraction tree one level, from “custom analytics tools” to “general-purpose cognitive assistants” for all occupations. The vision is to augment and scale human expertise with cognitive assistants for all occupations in smart service systems. “All occupations” is a moving target; O*NET OnLine is a first approximation, cataloging nearly a thousand occupations, many with task breakdowns, such as biochemical engineers. I have a short presentation that explains this in more detail. Cognitive assistants that have ingested all the literature in a field, and that know the publications, talks, and other output of a person or team, can provide significant creativity and productivity boosts. What architecture will allow industry and academia to collaborate and take on this grand challenge? How can we make the path to building and improving these systems rigorous and systematic, so that government, foundations, and industry can allocate more investment in these areas?
Some other institutes that study cognitive systems are:
Institute for Cognitive Systems – TUM
Technische Universität München
The Institute for Cognitive Systems deals with the fundamental understanding and creation of cognitive systems. As our research interests fall in line with the …
Cognitive Systems Research Institute (CSRI)
The Cognitive Systems Research Institute (CSRI) is a research organisation, in Athens, Greece. The institute establishes highly interdisciplinary research teams …
University of British Columbia
The Institute for Computing, Information and Cognitive Systems (ICICS) is a multidisciplinary research institute that promotes collaborative research in advanced …
Building cognitive assistants is hard work. How might we make it easier by working together?
1. Do the benefits of building cognitive assistance for boosting creativity and productivity justify the costs?
2. What occupations might provide the best ROI?
3. Where can one join discussions about this topic?
4. Where can documents and other information be shared?
5. Can cognitive assistants contribute to smart service systems?
6. Given that some faculty are expert at creating textbooks (e.g., chapters introducing concepts and relationships, case studies illustrating concepts and relationships, problem sets with questions and answers, etc.), how can this existing faculty expertise be shifted or transformed with appropriate tools to make building cognitive assistants easier?
As I research the history of cognitive assistants, these blog posts by Franz Dill, and one more by D.J. Power, are interesting. They relate to executive information systems that helped CEOs in the pre-spreadsheet era of computing, as well as executive decision support systems:
Early Business Intelligence Needs
A Slice of the History of Executive Information Systems
A Brief History of Decision Support Systems
Broadly, Franz is interested in how the executive (and every decision maker, beyond the analyst/researcher) uses data, analytics, and intelligence systems to make better decisions. Cognitive assistants with the three L’s – language, learning, and levels (of confidence) – provide a next step and a new set of pathways to explore.
Franz emailed me about his work at P&G developing and deploying early advisory systems – “The consultant to all of our advisory systems in the 90s was Stanford and Teknowledge. Using Prolog style systems, like M1. Edward Feigenbaum and others were involved. The most famous of our systems was the Coffee Blending advisory system, which was used by green coffee blenders, and in use for the next decade, saved P&G millions. Another famous system was the Copy Owl, which let Ad experts, use, reuse, modify and apply company advertising assets. We even played with learning systems, I wrote an early neural net based induction system that matched advertising campaigns to new product initiatives. A related system gathered cases and then used case based reasoning (CBR) to find best fits. It was used for several years, then the task was outsourced. Another system adapted or ‘learned’ supply chain solutions from traffic and inventory data. Early big data. Except for the four examples of executive systems I sent you, and to a certain extent the Copy Owl, these were all used by corporate experts to augment their own specialty expertise. Sometimes to replace their own expertise for easier problems and also often to scale their expertise more broadly.”
Clearly some of these systems worked better than others, and their fates depended on many factors. Franz is also interested in why some worked for a while, why some did not work, and how they could have been made to work in a specific corporate executive culture. One comment that stuck with him was when an executive said something like “I am most interested in useful creativity, not keeping alive the expertise of past executives, even my own ….” So successful cognitive assistants will require more than the expert-system process of accurate expertise capture; more effort is needed to find the true dividing line between creativity and handling routine tasks efficiently. Cognitive assistants have to be collaborative partners that handle routine tasks, but also engage in natural and creative exploration of new ideas and possibilities.
Franz also commented that “What ultimately worked was extreme focus. Narrow understanding of what was needed to be done, simple or complex. And making sure the user (exec or analyst) was willing to follow the lead. And their organization also would follow the recommendation. And that the data involved was trusted for the given purpose by all … All of this worked better with focus. In three years we did perhaps thirty projects, only three in the executive space. Many other projects in marketing, manufacturing, R&D, Supply Chain, HR, Sales … With a wide variance in success. The successes paid for the effort, but did not sustain the AI function.”
The three systems in the executive space were part of a C-Suite Advisory Development effort at P&G from 1989-1991:
Automated CEO – led by Bob Herbold (later COO at Microsoft)
New Initiative Advisor – led by Tom Moore
Major Capital Appropriation Screener – led by Bob Hunt
Across the industry, it was estimated that annual spending on artificial intelligence projects was approaching $1B, with nearly ten thousand employees involved. Both major firms and startups were experimenting with better AI tools and techniques, decision-support systems, advisory systems, performance-support systems, and a variety of other intelligent systems – but none with the language, learning, and levels (of confidence) capabilities of the cognitive assistants that are beginning to appear today.
Smart service systems will depend increasingly on the people inside and outside them equipped with cognitive assistants. The ability to serve external and internal customers in the business world will likely depend on cognitive assistants more and more, as our tools come to know us.
See “A Brief History of Cognitive Assistants” for more in this ongoing discussion:
Jim Spohrer DRAFT 05/25/2014 07:13 AM
Cognitive Systems for Every Profession
The Dictionary of Occupational Titles (DOT) contains hundreds of short paragraph descriptions of occupations. For example, Architect (001.061-010):
Researches, plans, designs, and administers building projects for clients, applying knowledge of design, construction procedures, zoning and building codes, and building materials: Consults with client to determine functional and spatial requirements of new structure or renovation, and prepares information regarding design, specifications, materials, color, equipment, estimated costs, and construction time. Plans layout of project and integrates engineering elements into unified design for client review and approval. Prepares scale drawings and contract documents for building contractors. Represents client in obtaining bids and awarding construction contracts. Administers construction contracts and conducts periodic on-site observation of work during construction to monitor compliance with plans. May prepare operating and maintenance manuals, studies, and reports. May use computer-assisted design software and equipment to prepare project designs and plans. May direct activities of workers engaged in preparing drawings and specification documents.
A system of professions exists in business and society (Abbott 1988). Professions have jurisdiction over types of problems and their solutions, and reflect the division of expert labor. All professions compile a body of knowledge and practices that define them, and university faculty extend and teach that codified knowledge.
We are at the dawn of an era where every professional will have one or more associated cognitive systems (see Appendix I below). Cognitive systems ingest massive amounts of data, learn, permit natural language interactions, and provide levels of confidence in their recommendations.
In the era of smart systems of systems, cognitive computing and other advances at long last make it possible to build cognitive systems capable of being true cognitive assistants. Anyone lucky enough to have (or to have had) a great executive assistant knows the amazing boost for measures like productivity, quality, compliance, and innovativeness.
Productivity: From travel to organizing meetings and emails, to finding that presentation or contact from last year or three years ago.
Quality: Organizing and integrating feedback on draft documents and presentations, following up with thank-you’s, and closing the loop.
Compliance: Any organization runs on dozens of details, annual reports, and double-checking compliance with processes, procedures, policies, and even regulations.
Innovativeness: By helping on all the above items, there is more time to think and interact with others on new topics of potential future value.
Up to this point in history, most people have never had the benefit of a great executive assistant, so they have had to do it all themselves, adopting good organizational skills and disciplines to stay on top of everything needed in their professional and private lives. A very few people have been lucky enough to afford both executive assistants in their professional/public lives and personal concierges in their home/private lives. Most people have both public and private roles; instead of a dedicated assistant for each societal role, they have colleagues, friends, and/or family who lend a hand when needed. Trusted relationships provide needed assistance for most people.
The best assistants also help those they serve to “up their games” and adopt better organizational skills and disciplines over time. Assistants who allow those they serve to remain scattered and undisciplined can “cover it up” for a short period, but ultimately, without elevating the efficiency and effectiveness of those they serve, growth, development, innovation, and advancement suffer. Like good coaches, good assistants understand that in the long run creating debilitating dependency is not what excellent service is about; growing capabilities on both sides of the service relationship is what leads to sustainable relationships and rewarding lives. Anything less is merely transactional, not relational. The exception, of course, is end-of-life assistance, where the capabilities of the one being served dwindle over time, and the assistant must do more and more right up to the end.
In this short essay, the focus is “ProfessionCogs” designed to augment the intelligence of doctors, lawyers, engineers, journalists, and other professionals. First, however, let’s understand “Cogs” a little better.
Cogs: Learning, Language, and Levels
“Cogs” are human-made cognitive systems capable of being true cognitive assistants. Cognitive assistants have three types of basic capability that we can summarize as learning, language, and levels. We expect our cognitive assistants not only to remember the past, but to learn from experience as well. Furthermore, we expect to interact with cognitive assistants in natural ways: in written language at a minimum, and ideally in spoken language with the ability to use prosody, gestures, and even facial expressions to convey meaning naturally in multi-person interaction contexts, such as meetings and conference calls. Finally, cognitive assistants must help us weigh alternatives and understand the level of confidence associated with different possibilities and recommendations.
So learning, language, and levels get cognitive systems to the point where they can be really useful. For example, IBM’s Jeopardy!-winning Watson system demonstrated all three capabilities: learning from previous correct and incorrect responses (training data), using a broad range of natural language terms and phrases (Wikipedia-type breadth), and reporting levels of confidence in its responses (shown to the audience during the competition). It is interesting to note that while no single IBMer could defeat the two most successful Jeopardy! champions, the Watson Jeopardy! system – a very specialized “Cog” developed by IBM and several university partners – could and did win. “Cogs” can do some things that their developers or owners cannot do.
Cognitive Systems: A Diverse Set of Entities
In general, the set of all cognitive systems is a very broad set of entities. The term “cognitive systems” itself is somewhat hard to define clearly, simply, and precisely. The cognitive system entities that one could study include both biologically evolved entities and human-made, or artificially intelligent, entities. Cognitive science, the interdisciplinary field that studies the mind and its processes, includes researchers with expertise in both the evolution and function of brains and the design and practical applications of increasingly sophisticated artificial intelligence componentry. Both fields provide methods for studying the physical structures that give rise to diverse cognitive processes (functions and behaviors). Cognitive science and artificial intelligence are both also concerned with the collective capabilities of interconnected networks of simpler cognitive systems that form larger structures – from social insects to cities to multi-agent systems of flying smart robots.
ProfessionCogs for Diverse Professions
In large organizations, ranking the performance of individuals and teams is a common practice. From the best performers in a role to the weakest performers in a role, quantitative and qualitative assessments are used to evaluate performance. Those with the lowest ranking have the greatest potential for improved performance, and therefore boosting their performance or replacing them with more qualified employees can be key to improving overall organizational performance.
This brings us to the question: what is the best way to make progress going forward?
Definitions of cognitive systems make reference to human-level capabilities, such as the abilities to sense and respond to the world in intelligent ways. However, cognitive science and artificial intelligence researchers include ants and thermostats as examples of cognitive systems, because they sense and respond to their environment.
Human-made systems with capabilities to sense and react to their environment have been around for years.
In sum, ProfessionCogs are cognitive systems/cognitive assistants designed to help professionals do their jobs better.
There is a lot out there. And more all the time.
Abbott, A. (1988). The system of professions: An essay on the division of expert labor. University of Chicago Press. URL: http://psycnet.apa.org/psycinfo/1988-97883-000
Appendix I: Cognitive Computing Overview, by Stephen Hamm
Cognitive Computing for Smarties, by Stephen Hamm
What is cognitive computing?
Cognitive computing enables the next level of partnership between people and computers to augment human intelligence, boosting the productivity and creativity of individuals and teams, thereby transforming industries and professions.
These systems ingest vast amounts of data, learn from their interactions with people and information sources, reason about their level of confidence in derived knowledge, and interact using language and other means that are more natural to us.
Cognitive computing techniques provide the building blocks for iteratively developing increasingly sophisticated systems which help us to make better, faster decisions in our personal and professional lives.
The science behind cognitive computing:
To demonstrate cognitive computing in action, IBM built the now famous Jeopardy! TV game show winning machine known as Watson.
This history-making machine is capable of searching encyclopedic collections of information for potential answers to questions, ranking answers based on its confidence level in them, and pressing a button if it has enough confidence in its top-rated answer—all in less than 3 seconds.
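The buzz-or-pass decision described above can be sketched as a simple threshold rule: rank the candidate answers by confidence and “press the button” only if the top-rated answer clears a threshold. This is an illustrative toy, not IBM’s actual scoring pipeline; the candidate scores and the threshold value below are made up.

```python
def should_buzz(candidates, threshold=0.5):
    """Return the top-ranked answer if its confidence clears the
    threshold, otherwise None (i.e., don't press the button)."""
    if not candidates:
        return None
    # Rank candidates by confidence score and take the best.
    best_answer, best_score = max(candidates.items(), key=lambda kv: kv[1])
    return best_answer if best_score >= threshold else None

# Hypothetical confidence scores from upstream evidence-scoring stages.
scores = {"Toronto": 0.14, "Chicago": 0.83, "New York": 0.21}
choice = should_buzz(scores, threshold=0.5)  # "Chicago"
```

The key design point the Watson demonstration made visible is that the system reasons about *when not to answer*: with no candidate above threshold, the rule returns `None` and the machine stays silent rather than guessing.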
IBM scientists worked for years to combine and innovate techniques from a number of computer science-related fields including machine learning, data mining, natural language processing, knowledge representation, text-to-speech synthesis, operations research, decision-making, game theory, cognitive science, psychology, linguistics, and more.
In universities, scholars typically pursue these fields in relative isolation. The Watson breakthrough came, in part, because IBM scientists and engineers combined the disciplines in new ways with a grand challenge goal always in mind. Nevertheless, the breakthrough would not have been possible without the assistance of university researchers, and open frameworks such as Apache UIMA.
The annals of artificial intelligence now include three game-playing machines from IBM: Watson (Jeopardy!, 2011), Deep Blue (chess, 1997), and Samuel’s Checkers Player (1956).
However, the best is yet to come.
Cognitive systems will be able to…
– Understand multiple languages.
– Reason about levels of confidence in their derived knowledge.
– Converse with people in spoken dialogues.
– Engage in multimodal interactions (sight, gaze, facial expressions, hearing, smell, taste, touch, feel, empathy).
– Understand how professionals think – such as doctors and lawyers.
– Understand facial expressions, voice, and sensory information, and build deeper user models.
– Help people make better decisions, learn complex material faster, make discoveries, and create new knowledge.
And, they will get smarter over time.
Of all these capabilities, learning is key.
Like people, cognitive systems exhibit three types of learning over time.
Optimization: Learning to use existing knowledge more efficiently for specific tasks.
Education: Learning from other knowledge sources, people, books, the web, and other cognitive systems.
Discovery: Learning new surprising derived knowledge.
For now, most of the algorithms (cognitive computing techniques) for optimization, education, and discovery are provided manually by research scientists and engineers programming machines. However, as the amount of knowledge in machine-readable form grows, knowledge itself will become a new form of big data for cognitive systems to use to derive new knowledge and algorithms, in partnership with the people and organizations that can benefit from this new knowledge.
As cognitive computers for all professions get smarter over time they will help people improve their performance as well. With cognitive systems everyone can eventually have a combined expert tutor and cognitive assistant. As professionals exhibit new best practices, their cognitive assistants will notice and learn, ultimately contributing to the body of knowledge for every profession.
In sum, cognitive computing enables the next level of partnership between people and computers to augment human intelligence, boosting productivity and creativity of individuals and teams, thereby transforming industries and professions.
Appendix II: Building Useful “Cogs” Is Hard Work, But No Longer Impossibly Hard
Why now? Over the years, AI (Artificial Intelligence) researchers have come to appreciate just how hard the problems they have been working on actually are. Overly optimistic claims in the past contributed to so-called “AI Winters,” when funding dried up for most AI proposals. So, rightly, many skeptics are asking “Why now?” in response to claims that an AI Renaissance is underway. Building these systems is still very hard work, but no longer impossibly hard.
Building “Cogs” (cognitive systems/cognitive assistants) that can answer questions is hard work. However, for the first time in history, Linked Data on the web makes it feasible for many tasks. As the amount of Linked Data on the web increases, some of the traditional very hard natural language processing tasks that early AI (artificial intelligence) systems attempted can be addressed. The WWW’s Linked Data provides a practical solution to several early AI problems, such as (1) comprehensive online data sets, (2) knowledge representation at scale, (3) combinatorial explosion of inferences, and (4) insufficient memory or compute resources.
Jeopardy! is an interesting test of natural language and breadth of knowledge. The correct response is always an entity (given in the form of a question, “What is X?”) that fits within a category (e.g., “Famous People”) and is referred to by a clue. In the age of Wikipedia, Wiktionary, WordNet, web pages, and Linked Data, the words and phrases in the typical clue link to documents that contain the correct response, in most cases within one hop, or in some cases within a relatively small number of hops – three to five (see the Wikipedia article on Linked Data for a better understanding of what is meant by a “hop” from one web page to the next). Linked Data is a practical solution to the “combinatorial explosion” problem faced by early AI (Artificial Intelligence) systems. Sometimes the correct response requires finding a set and performing some linguistic or numeric calculation (e.g., Jeopardy! category: Letters of the Alphabet; clue: Most Common State Name First Letter; correct response: “M”). This procedural knowledge, common in word puzzles, is part of what makes Jeopardy! challenging for contestants. Most Jeopardy! correct responses are entities for which there is a web page; a small number are entities for which there is no web page, such as the result of a linguistic or numeric calculation.
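The “within a small number of hops” idea can be sketched as a breadth-first search over a link graph: start from the pages a clue’s words link to and expand outward, one hop at a time. The graph below is a hypothetical toy (real Linked Data traversal operates over URIs at web scale); it only illustrates which candidate entities are reachable within a given hop budget.

```python
from collections import deque

# Toy linked-data graph: each page maps to the pages it links to.
# All page names here are illustrative, not real URIs.
LINKS = {
    "clue:32nd_president": ["Franklin_D._Roosevelt", "President_of_the_United_States"],
    "President_of_the_United_States": ["White_House", "Franklin_D._Roosevelt"],
    "Franklin_D._Roosevelt": ["New_Deal", "White_House"],
    "White_House": [],
    "New_Deal": [],
}

def pages_within_hops(start, max_hops):
    """Breadth-first search: return {page: hop distance} for every
    page reachable from `start` in at most `max_hops` link traversals."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        if seen[page] == max_hops:
            continue  # hop budget exhausted; don't expand further
        for nxt in LINKS.get(page, []):
            if nxt not in seen:
                seen[nxt] = seen[page] + 1
                queue.append(nxt)
    return seen

# Candidate entities reachable within one hop of the clue's links.
candidates = pages_within_hops("clue:32nd_president", max_hops=1)
```

Because the frontier is bounded by the hop budget, the candidate set stays small even when the underlying graph is huge; this is one way to see how link structure tames the combinatorial explosion that plagued early AI search.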
In multiple-choice grade level reading comprehension tasks, the correct response is most often an entity that is directly referred to in the answer passage, or in a web page document linked to a word or phrase in the answer passage by one hop, or in some cases a relatively small number of hops – three to five. Again, Linked Data is a practical solution to the “combinatorial explosion” problem.
Professional certification can be thought of as grade level reading comprehension for a very long answer passage (one or two dozen books).
Professionals or experts are also often called on to present arguments for and against multiple future options. Unlike tasks that require finding a single entity answer, there is no single best or right answer, only a list of possibilities with pro/con statements of support rank ordered for relevance to each option. For example, expert debaters perform these types of tasks when they research issues and build cases. Lawyers also perform these types of tasks.
Also, unlike tasks that require finding a single entity response, some tasks require summarization of events, meetings, publications, or bodies of research. For example, journalists are often confronted with these tasks, as are textbook writers creating material that is grade-level appropriate.
Problems in pattern recognition and robotics are also still very hard, but many specific tasks are no longer impossibly hard.
Appendix III: Non-Technical Issues Are Also Hard Work, But Not Impossibly Hard
As the range of tasks that cognitive systems can address grows, three types of non-technical concerns arise: (1) threats to privacy, safety, and security (by governments, businesses, criminals, terrorists, etc.), (2) threats to job security (Brynjolfsson & McAfee), and (3) threats to species security (Weizenbaum, Joy, etc.). Based on the law of comparative advantage, getting all entities to “up their game” creates the most individual and collective benefits.
1. Guinan, E., Boudreau, K. J., & Lakhani, K. R. (2013). Experiments in open innovation at Harvard Medical School. MIT Sloan Management Review, 54(3), 45-52.
“But in February 2010, Drew Faust, president of Harvard University, sent an email invitation to all faculty, staff and students at the university (more than 40,000 individuals) encouraging them to participate in an ‘ideas challenge’ that Harvard Medical School had launched to generate research topics in Type 1 diabetes. Eventually, the challenge was shared with more than 250,000 invitees, resulting in 150 research ideas and hypotheses. These were narrowed down to 12 winners, and multidisciplinary research teams were formed to submit proposals on them.”
“In May 2008, Harvard Catalyst received a five-year NIH grant of $117.5 million, plus $75 million from the university, the medical school and its affiliated academic health-care centers. These funds were designated to educate and train investigators, create necessary infrastructure and provide novel funding mechanisms for relevant scientific proposals. However, the funds did not provide a way to engage the diversity and depth of the whole Harvard community to participate in accelerating and ‘translating’ findings from the scientist’s bench to the patient’s bedside, or vice versa. Could open-innovation concepts be applied within a large and elite science-based research organization to help meet that goal?”
“Albert Einstein captured the importance of this aspect of research: ‘The formulation of a problem is far more often essential than its solution, which may be merely a matter of mathematical or experimental skill. To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advances in science.’”
“Harvard Catalyst offered $30,000 in awards. Contestants were not required to transfer exclusive intellectual property rights to Harvard Catalyst. Rather, by making a submission, the contestant granted Harvard Catalyst a royalty-free, perpetual, non-exclusive license to the idea and the right to create research-funding proposals to foster experimentation.”
“In total, 779 people opened the link at InnoCentive’s website, and 163 individuals submitted 195 solutions. After duplicates and incomplete submissions were weeded out, a total of 150 submissions were deemed ready for evaluation. The submissions encompassed a broad range of therapeutic areas including immunology, nutrition, stem cell/tissue engineering, biological mechanisms, prevention, and patient self-management. Submitters represented 17 countries and every continent except Antarctica. About two-thirds came from the United States. Forty-one percent of submissions came from Harvard faculty, students or staff, and 52% of those had an affiliation with Harvard Medical School. Responders’ ages ranged from 18 to 69 years, with a mean age of 41.”
“Fostering Interdisciplinary Teams: After selecting the ideas, Harvard Catalyst set out to form multidisciplinary teams. While researchers tend to stay within their domains, Harvard Catalyst wanted to learn if scientists from other life-science disciplines and disease specialties could potentially convert their research hypotheses into responsive experimental proposals in the Type 1 diabetes arena. Harvard Catalyst reached out to Harvard researchers from other disciplines with associated knowledge and invited them to submit a proposal to address one of the selected questions.”
“The Leona Helmsley Trust put up $1 million in grant funding at Harvard to encourage scientists to create experiments based on these newly generated research questions.”
“In addition to normal advertising of the grant opportunity, Harvard Catalyst used a Harvard Medical School database to identify researchers whose record indicated that they might be particularly well suited to submit proposals. The Profiles system takes the PubMed-listed publications for all Harvard Medical School faculties and creates a database of expertise (keywords) based on the MeSH classification of their published papers. Dr. Griffin Weber, then the chief technology officer of Harvard Medical School and the creator of the Profiles system, assisted Harvard Catalyst in taking the coded MeSH categories for the winning proposals — now imbedded in the thematic areas — and matching them through a sophisticated algorithm to the keyword profiles of the faculty. The intention was to move beyond the established diabetes research community and discover researchers who had done work related to specific themes present in the new research hypotheses but not necessarily in diabetes.”
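The matching step described above can be sketched as a simple keyword-overlap ranking. This is purely illustrative: the Jaccard measure, the function names, and the sample MeSH terms below are assumptions, not the actual Profiles algorithm, which the excerpt describes only as "sophisticated".

```python
# Illustrative sketch: rank faculty by overlap between their MeSH keyword
# profiles and the MeSH terms of a winning proposal. The Jaccard measure
# and the data below are assumptions, not the real Profiles algorithm.

def jaccard(a: set, b: set) -> float:
    """Intersection size over union size; 0.0 for two empty sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_faculty(proposal_terms: set, faculty_profiles: dict) -> list:
    """Return (name, score) pairs sorted by descending keyword overlap."""
    scores = [(name, jaccard(proposal_terms, terms))
              for name, terms in faculty_profiles.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Hypothetical keyword profiles derived from publication records.
profiles = {
    "immunologist": {"T-Lymphocytes", "Autoimmunity", "Cytokines"},
    "nephrologist": {"Kidney Diseases", "Dialysis", "Hypertension"},
    "geneticist":   {"Autoimmunity", "Genome-Wide Association Study"},
}
proposal = {"Autoimmunity", "T-Lymphocytes", "Insulin-Secreting Cells"}
ranking = rank_faculty(proposal, profiles)
```

Note how the ranking surfaces the geneticist, who has never worked on diabetes but shares the "Autoimmunity" keyword — the same effect the program used to reach beyond the established diabetes community.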
“The matching algorithm revealed the names of more than 1,000 scientists who potentially had the knowledge needed to create research proposals for these new hypotheses.”
“The outreach yielded 31 Harvard faculty-led teams vying for Helmsley Trust grants of $150,000, with the hope that sufficient progress in creating preliminary data would spark follow-on grants. These research proposals were evaluated by a panel of Harvard faculty, with expertise weighted toward Type 1 diabetes and immunology and unaffiliated with Harvard Catalyst administration. Seven grant winners were announced. Core to the mission of the openness program was that the algorithm for potentially contributory investigators had identified 23 of the 31 principal investigators making a submission and 14 of these 23 had no significant prior involvement in Type 1 diabetes research — a core element of the open-innovation experiment. Seven proposals were funded, five of which were led by principal investigators or co-principal investigators without a history of significant engagement in Type 1 diabetes research.”
“Somewhat unexpectedly, Harvard Catalyst discovered that while academic researchers tend to be very specialized and focused on extremely narrow fields of interest, explicit outreach to individuals with peripheral links to a knowledge domain can engage their intellectual passions. Harvard Catalyst uncovered a dormant demand for cross-disciplinary work, which many leaders within Harvard Catalyst doubted existed. However, as soon as bridges were built, individuals and teams started to cross over. The lesson for managers outside academic medicine is that there may be sufficient talent, knowledge and passion for high-impact breakthrough work currently inside their organizations — but trapped in functional or product silos. By creating the incentives and infrastructure that enable and encourage bridge crossing, managers can unleash this talent.”
“The Harvard Catalyst approach to introducing open innovation was to layer it directly on top of existing research and evaluation processes. Harvard Catalyst executives simply added an open dimension to all stages of the current innovation process. Thus, individuals already in the field did not feel they were being systematically excluded. The entire effort could be viewed as a traditional grant solicitation and evaluation process with the exception that all stages were designed so that more diverse actors could participate. This strategic layering of open dimensions on traditional processes positions open innovation as a tweak to currently accepted practice instead of a radical break with the past.”
2. Lakhani, K. R., Boudreau, K. J., Loh, P. R., Backstrom, L., Baldwin, C., Lonstein, E., … & Guinan, E. C. (2013). Prize-based contests can provide solutions to computational biology problems. Nature biotechnology, 31(2), 108-111.
“To determine whether this approach could solve a real big-data biologic algorithm problem, we used a complex immunogenomics problem as the basis for a two-week online contest broadcast to participants outside academia and biomedical disciplines. Participants in our contest produced over 600 submissions containing 89 novel computational approaches to the problem. Thirty submissions exceeded the benchmark performance of the US National Institutes of Health’s MegaBLAST. The best achieved both greater accuracy and speed (1,000 times greater)”
“It has been projected that by 2018 there will be a shortage of approximately 200,000 data scientists and 1.5 million other individuals in the US economy with sufficient training and skills to conceptualize and manage big-data
“To investigate the specific technical approaches developed by contestants, we commissioned three independent computer science Ph.D. researchers to review all submissions and determine what techniques were implemented. Their analyses determined that ten distinct elemental methods (Table 1) were used in 89 combinations in the 654 submissions. As the number of elemental methods in a submission increased, so did its performance (Fig. 2 and Supplementary Methods), with leaderboard scores increasing by 85.3 points for each additional method employed (P < 0.01). Analysis of the benchmark algorithms showed that the methods numbered 2, 3, 5 and 8 were implemented in the MegaBLAST algorithm, and methods 2, 4 and 7 were implemented in the idAb code.”
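The "85.3 points per additional method" figure comes from a regression of leaderboard score on method count. A minimal sketch of that kind of analysis appears below; the data points are invented for illustration (the paper's actual per-submission data are not reproduced here), so only the computation, not the numbers, reflects the study.

```python
# Sketch of the reported analysis: ordinary least-squares slope of
# leaderboard score on the number of elemental methods per submission.
# The data points below are invented; the paper's estimated slope was
# 85.3 points per additional method.

def ols_slope(xs, ys):
    """Slope of the least-squares regression line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

methods = [1, 2, 3, 4, 5]            # elemental methods in a submission
scores  = [120, 210, 290, 385, 460]  # hypothetical leaderboard scores
slope = ols_slope(methods, scores)   # points gained per extra method
```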
Brown, B., Chui, M. & Manyika, J. McKinsey Q. 4, 24–35 (2011).
Such contests are one part of a decade-long trend toward solving science problems through large-scale mobilization of individuals by what the popular press refers to as ‘crowdsourcing’.
Howe, J. Crowdsourcing (Crown Books, New York; 2008).
“We ran our contest on the TopCoder.com online programming competition website, a commercial platform that had the advantage of providing us with an existing community of solvers. Established in 2001, TopCoder currently has a community of over 400,000 software developers who compete regularly to solve programming challenges. Our contest ran for two weeks and offered a $6,000 prize pool, with top-ranking players receiving cash prizes of up to $500 each week. Our challenge drew 733 participants, of whom 122 (17%) submitted software code. This group of submitters, drawn from 69 countries, were roughly half (44%) professionals, with the remainder being students at various levels. Most participants were between 18 and 44 years old. None were academic or industrial computational biologists, and only five described themselves as coming from either R&D or life sciences in any capacity.”
“Consistent with usual practices in algorithm and software development contests, participants were able to make multiple code submissions to enable testing of solutions and participant learning and improvement. Collectively, participants submitted 654 solutions, averaging 5.4 submissions per participant. Participants reported spending an average of 22 h each developing solutions, for a total of 2,684 h of development time. Final submissions that received cash awards are available for download under an open source license (see Supplementary Notes).”
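The contest figures quoted above are internally consistent, which a few lines of arithmetic confirm (all numbers taken directly from the excerpt):

```python
# Consistency check of the contest statistics quoted in the excerpt.
participants = 733   # total entrants
submitters = 122     # entrants who submitted code
solutions = 654      # total code submissions
avg_hours = 22       # self-reported hours per submitter

avg_per_submitter = solutions / submitters       # should round to 5.4
total_hours = submitters * avg_hours             # should equal 2,684 h
submit_rate = 100 * submitters / participants    # should round to 17%
```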
“Karim R Lakhani1,2,*, Kevin J Boudreau2,3,*, Po-Ru Loh4, Lars Backstrom5, Carliss Baldwin1, Eric Lonstein1, Mike Lydon5, Alan MacCormack1, Ramy A Arnaout6,7,* & Eva C Guinan7,8,*. 1Harvard Business School, Boston, Massachusetts, USA. 2Harvard-NASA Tournament Lab, Institute for Quantitative Social Science. 3London Business School, London, UK. 4Department of Mathematics and Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 5TopCoder.com, Glastonbury, Connecticut, USA. 6Department of Pathology and Division of Clinical Informatics, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA. 7Harvard Medical School, Boston, Massachusetts, USA. 8Department of Radiation Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA. *These authors contributed equally.”
3. Boudreau, K. J., & Lakhani, K. R. (2013). Using the crowd as an innovation partner. Harvard business review, 91(4), 60-9.
“Managers remain understandably cautious. Pushing problems out to a vast group of strangers seems risky and even unnatural, particularly to organizations built on internal innovation. How, for example, can a company protect its intellectual property? Isn’t integrating a crowdsourced solution into corporate operations an administrative nightmare? What about the costs? And how can you be sure you’ll get an appropriate solution?”
“These concerns are all reasonable, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. The main reason companies resist crowds is that managers don’t clearly understand what kinds of problems a crowd really can handle better and how to manage
“Having determined that you face a challenge your company cannot or should not solve on its own, you must figure out how to actually work with the crowd. At first glance, the landscape of possibilities may seem bewildering. But at a high level, crowdsourcing generally takes one of four distinct forms—contest, collaborative community, complementor, or labor market—each best suited to a specific kind of challenge. Let’s examine each one.”
“Today online platforms such as TopCoder, Kaggle, and InnoCentive provide crowd-contest services. They source and retain members, enable payment, and protect, clear, and transfer intellectual property worldwide.”
“A contest should be promoted in such a way—with prizes and opportunities to increase stature among one’s peers—that it appeals to sufficiently skilled participants and receives adequate attention from the crowd. The sponsor must devise and commit to a scoring system at the outset. In addition, explicit contractual terms and technical specifications (involving platform design) must be created to ensure the proper treatment of intellectual property.”
“Crowd Collaborative Communities
“In June of 1998 IBM shocked the global software industry by announcing that it intended to abandon its internal development efforts on web server infrastructure and instead join forces with Apache, a nascent online community of webmasters and technologists. The Apache community was aggregating diverse inputs from its global membership to rapidly deliver a full-featured—and free—product that far outperformed any commercial offering. Two years later IBM announced a three-year, $1 billion initiative to support the Linux open-source operating system, working with hundreds of open-source communities to jointly create a range of software products. In teaming up with a collaborative community, IBM recognized a twofold advantage: The Apache community was made up of customers who knew the software’s deficits and who had the skills to fix them. With so many collaborators at work, each individual was free to attack his or her particular problem with the software and not worry about the rest of the components. As individuals solved their problems, their solutions were integrated into the steadily improving software. IBM reasoned that the crowd was beating it at the software game, so it would do better to join forces and reap profits through complementary assets such as hardware and services.”
“To be sure, crowds aren’t always the best way to create complementary products. They make sense only when a great number and variety of complements is important. Otherwise, a few partners or even an internal organization will better serve the purpose.”
“There are also advantages to assembling complementor crowds that are specific to a company’s own platform. Think of the enormous ecosystems around Microsoft, Facebook, and Apple, each of which operates on a model that stimulates adoption on both the complementor and customer sides to kick-start positive interactions and initiate growth. (How to get this started is a classic chicken-and-egg problem that has received much research attention in the past 20 years and goes beyond the scope of this article.) The strategies of those companies require considerable industry experience and support and depend on the particulars of the situation. They involve the design of the core product, setting prices for different sides of the platform, setting expectations, and creating a wider set of inducements, among other issues.”
“Kevin J. Boudreau is an assistant professor of strategy and entrepreneurship at London Business School and a research fellow at Harvard’s Institute for Quantitative Social Science. Karim R. Lakhani is the Lumry Family Associate Professor of Business Administration at Harvard Business School and the principal investigator of the Harvard-NASA Tournament Lab at the Institute for Quantitative Social Science.”
4. Challenge.gov Wins “Innovations in American Government” Award
Posted by Cristin Dorgelo on January 23, 2014 at 01:10 PM EDT
“Since its launch in September 2010 by the General Services Administration (GSA), Challenge.gov has become a one-stop shop where entrepreneurs and citizen solvers can find public-sector prize competitions. The website has been used by nearly 60 Federal agencies to source solutions to over 300 incentive prizes and challenges and to engage more than 42,000 citizen solvers.”
URL 20140505: http://www.whitehouse.gov/sites/default/files/microsites/ostp/competes_prizesreport_dec-2013.pdf
Implementation of Federal Prize Authority: Fiscal Year 2012 Progress Report. A Report from the Office of Science and Technology Policy, in Response to the Requirements of the America COMPETES Reauthorization Act of 2010. December 2013.
“A 2009 McKinsey report found that philanthropic and private-sector investment in prizes increased significantly in recent years, including $250 million in new prize money between 2000 and 2007. Some of these incentive prizes included the GoldCorp Challenge, the Ansari X Prize, the Netflix Prize, and the Heritage Health Prize Competition.”
“See e.g., McKinsey & Company, “And the Winner Is…”; Capturing the promise of philanthropic prizes, 2009, http://www.mckinseyonsociety.com/downloads/reports/Social-Innovation/And_the_winner_is.pdf ”
“Pay only for success and establish an ambitious goal without having to predict which team or approach is most likely to succeed.
Reach beyond the “usual suspects” to increase the number of solvers tackling a problem and to identify novel approaches, without bearing high levels of risk.
Bring out-of-discipline perspectives to bear.
Increase cost-effectiveness to maximize the return on taxpayer dollars. ”
I just read the three articles by Prof. Ashok Goel (Georgia Tech):
Goel, A. K., Vattam, S., Wiltgen, B., & Helms, M. (2012). Cognitive, collaborative, conceptual and creative—four characteristics of the next generation of knowledge-based CAD systems: a study in biologically inspired design. Computer-Aided Design, 44(10), 879-900.
As I read this article, the first thing I thought of was that I needed to add R1/XCON and intelligent configurators and CAD systems to the list of instances in the brief history of cognitive systems.
Second, the blending of next generation computer-aided design and cognitive assistants made me think of Tony Stark’s computer assistant (in the “Iron Man” science fiction movie) and this implementation by Elon Musk: http://www.saratechinc.com/future-of-design/
Third, this article made me think more about the question: what is the difference between a “good tool” and a cognitive assistant? CAD systems may be “good tools,” but if one user (or, even better, multiple users) can talk and gesture with the system understanding the input and changing what is displayed, and the system can learn by being shown examples or asked to refer to relevant examples in the literature, and the system can provide levels of confidence in proposed solutions (see the discussion thread “A Very Brief History of Cognitive Assistants”), then we have moved from a “good tool” to something more clearly a cognitive assistant for engineers working on bio-inspired designs.
Next, I liked this section in the paper:
“Finally, the fourth C is creativity. It has been said many times in the design literature that design can be routine, innovative, or creative, even though these categories often are imprecise. Brown and Chandrasekaran, for example, suggested that (1) in routine design, both the basic structure of the desired system and the plans for selecting the parametric values of each component were known, (2) in innovative design, only the structure of the system was known and the plans for selecting component parameter values were unknown, and (3) in creative design, the structure of the design itself was unknown. In our own earlier work on case-based design [31,32], we have proposed that (1) in routine design, the modifications needed to adapt a known design into the desired design are limited to values of parameters of components in the design, (2) in innovative design, the needed modifications pertain to the components of the design, and (3) in creative design, the modifications entail changes to the topology of the design.”
I have talked about incremental, radical, and super-radical innovation as follows: (1) incremental innovations change the numeric values – for example, 50 miles per gallon instead of 45 miles per gallon; (2) radical innovations change the combinations of units of measure used to understand the innovation – for example, bits/joule for mobile phone communications; and (3) super-radical innovations change the units of measure themselves – for example, when eBay developed a measure of reputation for people using its system to sell items to others. As instances of types of systems are designed or evolve over time, we can see examples of incremental, radical, and super-radical innovation.
Finally, I like the discussion of SBF (Structure-Behavior-Function) and the DANE system very much. I think cognitive assistants in some ways mirror textbooks. Cognitive assistants must include concepts, relationships, case studies, problems-solutions/questions-answers. So a good cognitive assistant should be able to operate in a mode that allows it to help users receive certifications for demonstrated competencies and skills. A good cognitive assistant should also be able to act as a personal coach or mentor for learners trying to become more competent or master a domain of study and practice.
Vattam, SS & Goel, AK (2013) Biological Solutions for Engineering Problems: A Study in Cross-Domain Textual Case-Based Reasoning. In S.J. Delany and S. Ontañón (Eds.): ICCBR 2013, LNAI 7969, Springer-Verlag Berlin Heidelberg pp. 343–357.
Again, this one made me think of what is the difference between a “good tool” and a cognitive assistant. Perhaps search engines need to be added to the list of a brief history of cognitive assistants. Textual Case Based Reasoning systems have the challenges of findability, recognizability, and understandability. The Biologue interactive system was an interesting exploration of some of these challenges in web-based retrieval of documents.
Goel, A., Zhang, G., Wiltgen, B., Zhang, Y., Vattam, S., & Yen, J. (2014). The Design Study Library: Compiling, Analyzing and Using Biologically Inspired Design Case Studies. In Design Computing and Cognition DCC’14. J.S. Gero (ed), Springer. pp. xx-yy.
Regarding the Design Study Library (DSL), an interactive system that provides access to a digital library of case studies of biologically inspired design: I liked the two-level design of projects and documents. T-charts are also a nice innovation for comparing the problem domain and the biological solution domain. An impressive set of case studies for a textbook/cognitive assistant is also presented. Prof. Goel’s classes are a rich source of data on learning too!
After reading the papers, I decided to do some searches on O*NET Online (the Occupational Information Network). I searched for these keywords and found the number of occupations interesting:
Manage – appears in 715 occupation descriptions
Communicate – 428 occupations
Design – 415 occupations
Engineer – 339 occupations
Collaborate – 253 occupations
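A tally like the one above can be reproduced over any local extract of occupation descriptions. The sketch below is hypothetical: the tiny `occupations` dictionary stands in for real O*NET data, and the function name is my own (the O*NET site provides its own search).

```python
# Hypothetical sketch of the keyword tally above: count how many
# occupation descriptions mention a given keyword (case-insensitive).
# The occupations dict is a stand-in for a real extract of O*NET data.
import re

occupations = {
    "Software Developers": "Design, develop, and test software applications.",
    "Civil Engineers": "Design and supervise construction projects.",
    "Registered Nurses": "Provide and coordinate patient care.",
}

def count_occupations(keyword: str, descriptions: dict) -> int:
    """Number of descriptions containing the keyword, ignoring case."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return sum(1 for text in descriptions.values() if pattern.search(text))

design_count = count_occupations("design", occupations)
```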
As we think about building cognitive assistants for all occupations, it will be important to have cognitive systems components related to design. Building cognitive assistants from textbooks that define concepts and relationships, enumerate important case studies, and assemble problem-solution and question-answer pairs, both correct and incorrect, will be helpful. Prof. Goel’s articles, courses and systems are a gold mine of information. Also, O*NET is a good source of information about the tasks that professionals in an occupation must be able to perform – see http://www.onetonline.org/
Hopefully, someone will start a better discussion thread on the history of cognitive assistants, but here is a starting point or baseline discussion. This very short history will examine instances of cognitive assistants. By instances, we simply mean named systems, projects, or challenges. However, first we must drive a stake in the ground concerning “what is a cognitive assistant?” For example, what capabilities help us distinguish a cognitive assistant from just a “good tool”?
How should we define cognitive assistant?
We will not try to summarize the extensive literature on cognitive assistants and evaluation measures for cognitive assistants in this short discussion, but see Steinfeld (2007) for more. For our purposes, the difference between a “good tool” and a cognitive assistant can be a fine line. We will use three cognitive capabilities to help distinguish “good tools” from cognitive assistants: language, learning, and levels (confidence levels in responses). Language refers to natural language communication in words and sentences, but ultimately includes more – gestures, gaze, diagrams, and all the other ways people communicate with each other. Learning refers to the ability to learn from positive and negative examples of questions and responses, but ultimately includes more – direct user feedback, teaching dialogue, analogical reasoning, and more. Levels refers to the ability to provide estimates of confidence levels in multiple possible responses, but ultimately includes more, such as explanation, debating, and argumentation capabilities. We might want to add a fourth capability, “limbs,” if we want to talk about embodied cognitive assistants – robots.
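The three-capability test can be stated as a minimal check: a system counts as a cognitive assistant only when all three L's are present. The boolean-set representation and function name below are illustrative simplifications, not a formal evaluation measure.

```python
# Minimal sketch of the three-L test: a system qualifies as a cognitive
# assistant only if it exhibits language, learning, and confidence
# levels. The set-of-flags representation is an illustrative assumption.

REQUIRED = {"language", "learning", "levels"}

def is_cognitive_assistant(capabilities: set) -> bool:
    """True when language, learning, and levels are all present."""
    return REQUIRED <= capabilities

good_tool = {"language"}                       # talks, but doesn't learn or rank answers
assistant = {"language", "learning", "levels"}
robot     = assistant | {"limbs"}              # embodied assistant: the optional fourth L
```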
Paralleling a short four-stage history of Artificial Intelligence, we will examine instances of potential cognitive assistants from these four eras: the formative era, the micro-worlds era, the expert systems era, and the real world era.
Formative era
1945 MEMEX (Bush)
1962 AUGMENT (Engelbart)
These two early systems provided much of the vision for “cognitive assistants for knowledge workers” by using technology to augment human intellect, especially with respect to symbolic processing of text and networks of inter-related concepts.
Micro-worlds era
1955 Logic Theorist (Newell and Simon)
1956 Checker Player (Samuel)
1966 Eliza (Weizenbaum)
There are of course many more examples of pioneering systems from this era, but these three provide a nice illustration of three categories. The Logic Theorist can be seen on the path leading to systems such as WolframAlpha dealing with computable knowledge. Samuel’s checker-playing program was the forerunner of many game-playing programs, culminating in Deep Blue, which fulfilled an early AI prophecy by defeating the world champion in chess, and more recently Watson Jeopardy!. All these game-playing programs can be used in a performance-support mode to aid people learning and playing games. Finally, Eliza illustrates that simple tricks and deception can make for entertaining aspects of cognitive assistants – linking to somewhat surface solutions or deception-oriented versions of the Turing Test grand challenge.
Expert systems era
1987 Cognitive Tutors (Anderson)
1987 Knowledge Navigator System
The early expert systems led corporations to envision cognitive assistants for professionals to improve the productivity and creativity of knowledge workers across a wide range of jobs. Cognitive tutors are a proxy for all the intelligent tutoring, learning support, and performance support systems of this era – too numerous to mention. The Knowledge Navigator video provided an update on MEMEX and AUGMENT in many ways, with natural language dialogue and multimedia in an envisioned executive assistant and collaborative research assistant.
Real world era
2011 Watson Jeopardy!
2014 Watson Solutions
WolframAlpha: Is WolframAlpha a “good tool” or a cognitive assistant? WolframAlpha provides a natural language interface and computational engine for a wide range of natural language and mathematical language queries – pushing the limits of computable knowledge (WolframAlpha 2014). It is a “good tool” evolving towards becoming a cognitive assistant, once users and others can help it learn, and once it provides levels of confidence on answers. Currently, WolframAlpha tries to provide one “right” answer, or simply replies that no answer was found, rather than offering several possible, ranked answers.
Watson Jeopardy!: Like all game-playing systems, Watson Jeopardy! has a use case where a person could use it to compete against others to win a game. Watson Jeopardy! clearly demonstrated some language, learning, and levels capabilities. If it were packaged as an app, or in some other way to help people play and win Jeopardy! games, then it could be considered a cognitive assistant.
SIRI: SIRI is clearly marketed as an intelligent or cognitive assistant. It exhibits language capabilities, but not so much learning or levels of confidence in alternative answers, although like a search engine it will often return alternatives if asked for a list of possibilities. The history of SIRI traces back to CALO and PAL, DARPA-funded projects at SRI International (Bosker 2013).
Watson Solutions: Engagement Advisor, Discovery Advisor, Watson Chef, and other Watson Solutions target specific market needs where human expertise needs to be augmented or scaled to improve productivity and quality of specific occupational tasks. These systems use language, learn, and provide confidence levels for alternative responses. Still, building these systems is complex and difficult.
This discussion thread aims to collect comments about instances of cognitive assistants throughout history. Beyond a workable definition of cognitive assistants in terms of capabilities, this discussion thread can ultimately contribute to the discussion of streamlined development and evaluation methodologies for cognitive assistants for all professions. The Cognitive Systems Institute is motivated by the vision of augmenting and scaling human expertise – providing everyone eventually with an executive assistant, personal coach, and mentor for any and all occupations (Spohrer 2014).
Bosker B (2013) SIRI RISING: The Inside Story Of Siri’s Origins — And Why She Could Overshadow The iPhone
January 22, 2013 URL: http://www.huffingtonpost.com/2013/01/22/siri-do-engine-apple-iphone_n_2499165.html
Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.
Colligan B (2011) How the Knowledge Navigator video came to be.
November 20, 2011 URL: http://www.dubberly.com/articles/how-the-knowledge-navigator-video-came-about.html
Engelbart, D. C. (1995). Toward augmenting the human intellect and boosting our collective IQ. Communications of the ACM, 38(8), 30-32.
Newell, A., & Simon, H. A. (1956). The logic theory machine–A complex information processing system. Information Theory, IRE Transactions on, 2(3), 61-79.
Spohrer, J. (2014) Cognitive Systems: Vision and Directions. NUS Cognitive Colloquium.
September 12, 2014 URL: http://www.slideshare.net/spohrer/cognitive-20140912-v3
Steinfeld, A., Quinones, P. A., Zimmerman, J., Bennett, S. R., & Siewiorek, D. (2007, August). Survey measures for evaluation of cognitive assistants. In Proceedings of the 2007 Workshop on Performance Metrics for Intelligent Systems (pp. 175-179). ACM.
WolframAlpha (2014) Timeline of Systematic Data and the Development of Computable Knowledge.
September 12, 2014 URL: http://www.wolframalpha.com/docs/timeline/computable-knowledge-history-6.html
Here is a short talk I give at universities, university incubators, and conferences that bring such groups together.
IBM priorities are CCAMSS = Cognitive, Cloud, Analytics, Mobile, Social, Secure – and the service innovations that tie them all together and make them work for customers.
Regarding startups, our primary interest is in (1) companies built on our platform (IBM’s platform), and (2) companies that sell to the Forbes Global 2000 (IBM’s primary customers).
In general, IBM is not interested in licensing IP from universities – we create nearly 7,000 patents a year in CCAMSS and related areas, have been the #1 company in the world for patent creation for 21 years, and patent licensing is about a $1B/year business for us. We do have tools to help universities license their patents to others, though – see the IBM SIIP tool, now Watson Discovery Advisor.
IBM has acquired over 140 companies in the last 14 years – about one a month – with an average age of 15 years at acquisition; about 66% of them started in a university ecosystem (e.g., SPSS), and average revenue at acquisition is on the order of $100M/year.
IBM is very interested in helping universities create more successful startups that can go from zero to a billion in revenue. We have programs to help startups grow that are built on our platform and sell to our customers. IBM also has programs, such as Supplier Connection, that help startups sell to big companies.
We see one of the largest opportunities for startups in developing enterprise mobile apps, including cognitive assistants for all occupations as part of smart service systems.
To accelerate collaborations with IBM, a university might ask these maturity of relationship questions:
(1) does IBM (or IBM customers) recruit students from the university?
(2) do the faculty teach with IBM tools and platform – freely available through the academic initiative?
(3) does the university create startups based on IBM platform?
(4) does the university participate in Smart Camps & Global Entrepreneurship program?
(5) do the startups as they mature make use of the IBM Supplier Connect or other platforms?
(6) does the university and broader ecosystem use any IBM solutions from HPC to asset management?
(7) are there opportunities to pursue collaborative research projects together?
(8) is there a regional economic development play (e.g., NY state with RPI, OH state with OSU, LA state with LSU, etc.)?
(9) does the university have a full IBM team engaged – PEP, Client Exec, Academic Initiatives Lead, IBMers on Campus, etc.?
The best relationships have a full IBM team engaged in regional economic development with universities at the center.
AHFE HSSE-2015 is less than a year away.
This is a multi-conference with over 2,000 participants, with human factors as an overall theme and the human side of service engineering as one of the conferences.
I hope to organize the following session as part of AHFE HSSE-2015, so let me know if you would like to contribute a presentation or a paper (send email to email@example.com).
Title: Smart Service Systems: Augmenting and scaling human expertise with cognitive assistants
Abstract: Cognitive assistants are beginning to appear for more and more occupations – from doctors to chefs to biochemists – boosting creativity and productivity of workers. Given this important trend a better understanding of the role of cognitive assistants in the design of smart service systems will be needed. The speakers in this session will explore this trend and topic from multiple perspectives, including academic, industry, government, foundation, professional association – as well as the transformation of professions and industries.
Bassett, J. 2014. Memorial Sloan Kettering Trains IBM Watson to Help Doctors Make Better Cancer Treatment Choices. April 11, 2014.
Bilow, R. 2014. How IBM’s Chef Watson Actually Works. Bon Appetit. June 30, 2014.
Simonite, T. 2014. Software Mines Science Papers to Make New Discoveries. MIT. November 25, 2014.