Graphs: Cost of Digital Workers and GDP per Employee USA

Thinking about smarter/wiser service systems with digital workers.  Feedback is welcome on the diagrams below: the decreasing cost of digital workers due to Moore’s Law, and the increasing GDP per employee in the USA from 1960 to 2080 (projected).
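As a rough illustration of the "decreasing cost" curve, here is a minimal sketch of a Moore’s-Law-style projection. The starting cost, base year, and two-year halving period are placeholder assumptions for illustration, not data taken from the diagrams.

```python
# Illustrative sketch: project the declining cost of a fixed amount of
# compute under a Moore's-Law-style assumption of halving every 2 years.
# The $100,000 starting cost in 2020 is a made-up placeholder, not data.

def projected_cost(year, base_year=2020, base_cost=100_000.0, halving_years=2.0):
    """Cost of a fixed compute workload in a given year, under the halving assumption."""
    return base_cost * 0.5 ** ((year - base_year) / halving_years)

for year in (2020, 2030, 2040, 2050):
    print(year, round(projected_cost(year), 2))
```

Under these assumed parameters, the same workload that costs $100,000 in 2020 costs under $100 by 2040 – the kind of deflation curve the diagrams are meant to show.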

Digital Workers

Full presentation here:

… and please read more about cognitive opentech here:

Thanks to Marius Ciortea for the “revenue per employee” idea – I could only find GDP/employees as an approximation, but will keep looking for data.  Thanks to Dan Gruhl and Daniel Pakkala for help with Moore’s Law and the decreasing “cost of digital workers” data. Horst Simon sent me some power information for supercomputers, which I hope to get into the next version.

Learning in an age of accelerating technological change re-visited

This week I got emails from a number of colleagues pointing me to papers, books, and startup ideas about learning, education, skills, and the future…. I finally had a chance to read through some of them more carefully this Saturday morning.

Thanks for the emails from Obinna Anya, Ralph Badinelli, Bill Rouse, and just a few hours ago Ted Kahn, of DesignWorlds for Learning….

Jim’s response to Ted’s email about a new book….

Thanks Ted – I have seen it, but have not ordered and read it yet – I need to, so thanks for the reminder…

Also, when we think about the importance of technology skills relative to humanities and coordination skills, check out this piece from MIT Technology Review on skills:

My thoughts….  At IBM, we talk about T-shapes, with depth and breadth, not just tech or humanities in isolation. Hopefully we all know that the deep part (technical and problem-solving skills) and the broad part (coordination and communication skills) of the T-shape are both very important.

Nick Donofrio gets it:

Jim Corgel gets it:

Have a good weekend.  I am traveling quite a bit the next two weeks, but have not forgotten that we should have lunch or afternoon coffee sometime to catch up; apologies – just still a bit too busy these days.

Post Script….

…how much of the world is two seemingly polar opposites arguing about which is more important, when the dynamic balance of the two is what is needed?  In a wise society we focus on appropriate investment in both while keeping a dynamic balance, I think.  Understanding the evolving ecology of service system entities – their capabilities, constraints, rights, and responsibilities – is what service science is about.

One day I got an email from an IBM executive who wanted our service professionals to have the equivalent of a super-power suit they could put on that would augment their cognitive capabilities… certainly close to what is now called the Cognitive Enterprise, and a focus of our cognitive opentech efforts today…   I almost advocated for calling “service science,” “augmentation science” in honor of Doug Engelbart, but he said call it whatever IBM would embrace best at the time, so I advocated for Service Science Management and Engineering (SSME).

Lately, I think this CSLS line of thinking gets it most “right” as a launch vector for the future – see cyber-social-learning-systems (CSLS). I miss talking with Doug Engelbart and many others about these and other topics.

Perhaps we are on the verge of becoming a wiser society, as we confront the enormous responsibilities that come with so much technological capability.   We are complex systems embedded in complex systems, with capabilities, constraints, rights, and responsibilities.   Learning is what we do in our quest for wisdom, living first in a physical environment and now in a socially constructed environment of accelerating systems change.

All this in the context of the Atlantic article that Bill Rouse sent – How America Went Haywire –

I liked this summary: “The short answer is because we’re Americans—because being American means we can believe anything we want; that our beliefs are equal or superior to anyone else’s, experts be damned. Once people commit to that approach, the world turns inside out, and no cause-and-effect connection is fixed. The credible becomes incredible and the incredible credible.”

Paul Maglio, co-creator of service science at IBM, wrote:

I like this summary: “Mix epic individualism with extreme religion; mix show business with everything else; let all that ferment for a few centuries; then run it through the anything-goes ’60s and the internet age. The result is the America we inhabit today, with reality and fantasy weirdly and dangerously blurred and commingled.”

And “the beat goes on” as creative artists embrace technology to create new entertainment realities for people to engage with and eventually populate for many, many hours every day:

See also WorldBoard:

Collectively, our episodic dynamic memories are the world we populate.  Episodic memory is like a time machine that takes us back to any part of our past to re-live it as a memory. We can also project into possible futures we would like to experience, public or private.  Our identities are strings of memories about ourselves and our interactions with others, including imagined future possible interactions.  Some interactions are direct interactions in the physical world, but more and more we are detached, interacting indirectly in a socio-technical world of artificial intelligences mixed with, wrapped around, and augmenting natural intelligences – and it is getting harder to tell the difference.

Individually and collectively we are learning to take responsibility for all this. Civilization works because we take responsibility for our actions, and can reason about institutional facts.   See Searle’s book “The Construction of Social Reality.”

For a number of reasons, I think the future of learning and education is about individuals on small teams learning to rapidly rebuild everything from scratch to explore alternative futures sequentially and in parallel.  My attempt to work with some colleagues to reframe this idea about the future of learning and education in a more serious format is here:  My attempt at less serious reframing is here:

From the more serious reframing paper:

“The world is a rich and wonderful place, full of many possibilities for how history might have unfolded differently. Service science with its emphasis on service system entities and value-cocreation interaction can provide perspective for attempting a new definition of what progress is and if there is a speed limit to progress, what that speed limit is.”
From the less serious reframing blog post:
How quickly can an individual engineering student or team of students rebuild from scratch the advanced technology infrastructure of society?  From raw materials to simple tools; from simple tools and steam engines to more advanced energy systems (force multipliers); from metals and glass lenses to photography and sensors (perception multipliers); from energy systems and sensors to more precise measurement and control systems (precise production scale-up); from lithography and printing and computers and software to self-replicating machines, as envisioned by John von Neumann as a real-world follow-on to the symbolic world’s Universal Turing machines.

Linguistically Fluent – Kurzweil and Schank

Reaction to Kurzweil’s Linguistically Fluent posting…

Kurzweil Linguistically Fluent:

Kurzweil’s plan is a good plan from a Google Deep Learning perspective…

The plan is an extension of the capabilities of seq2seq – sequence to sequence – which is good for DL translation between languages, for example.
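To make the encoder-decoder idea concrete, here is a minimal numpy sketch of the seq2seq structure: an encoder RNN compresses a source sequence into one fixed-size state vector, and a decoder RNN unrolls that state into an output sequence. The weights are random and untrained, and the sizes and token ids are arbitrary placeholders of my own, not anything from Kurzweil’s posting or Google’s implementation.

```python
import numpy as np

# Minimal seq2seq sketch: encoder RNN -> fixed-size state -> decoder RNN.
# Random (untrained) weights; all sizes and token ids are placeholders.

rng = np.random.default_rng(0)
vocab, hidden = 10, 8
E = rng.normal(0, 0.1, (vocab, hidden))        # token embeddings
W_enc = rng.normal(0, 0.1, (hidden, hidden))   # encoder recurrence
W_dec = rng.normal(0, 0.1, (hidden, hidden))   # decoder recurrence
W_out = rng.normal(0, 0.1, (hidden, vocab))    # hidden -> vocab logits

def encode(tokens):
    h = np.zeros(hidden)
    for t in tokens:                   # read the source sequence token by token
        h = np.tanh(E[t] + W_enc @ h)
    return h                           # fixed-size "thought vector"

def decode(h, steps):
    out = []
    for _ in range(steps):             # emit one output token per step
        h = np.tanh(W_dec @ h)
        out.append(int(np.argmax(h @ W_out)))
    return out

summary = encode([1, 4, 2, 7])
print(decode(summary, 3))
```

In a real translation system, the weights would be trained end-to-end so that the decoder emits the target-language sequence; the point here is only the shape of the pipeline.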

However, no one has solved Reasoning, which is required for fluent conversation.

Episodic Memory is required for solving reasoning.

(Anyone who doubts this needs to read Schank’s Dynamic Memory, as well as Scripts, Plans, Goals, and Understanding, Inside Computer Understanding, and Conceptual Information Processing – find them and check them out from your local library.)

Video Understanding is required for solving episodic memory.

Video understanding can be solved with DL like seq2seq – since it depends on several sub-problems, including speech recognition, image recognition, language translation, and some of the capabilities in driverless cars (learning from watching – seeing and doing – a shallow type of simulation, or mirror-neuron capability).

So we are getting close to what Kurzweil wants to do – certainly possible by 2025.

Possible timeline to solution of linguistically fluent AI for the enterprise = digital worker/cognitive collaborator…

2018-2020 video understanding solved for simple world-task model actions + context
2019-2021 video understanding solved for simple social model interactions + context
2020-2022 episodic memory solved
2021-2023 reasoning solved
2022-2024 partial fluent conversation solved
2023-2025 learning from watching (and doing) solved
2024-2026 learning from reading solved
2025-2027 full linguistically fluent = digital worker/cognitive collaborator solved

2035-2037 $1000 per year for digital worker achieved for 2017 top 1000 occupations

2055-2057 $10 per year for digital worker achieved for 2027 top 1000 occupations

Of course, it will not actually evolve this way, but this is a linear-thinking projection to provoke alternative scenarios.

Each of the above solutions is likely to spawn an open source code community on GitHub with an associated open data set – open AI code + data + models + stacks.

Kurzweil Linguistically Fluent:

Quoc V. Le (Google) Introduction to Seq2Seq:

Schank Dynamic Memory:

Schank Scripts Plans Goals and Understanding:

Schank Inside Computer Understanding:

Schank Conceptual Information Processing:

CFP: Journal of Enterprise Transformation – Role of Analytics

Submissions Welcome: The Role of Analytics in Enabling Enterprise Transformation

Advanced analytics – from big data, to machine learning, to cognitive computing – is poised to transform, and in many cases already is transforming, enterprises in fundamental ways.  Analytics is both an enabler and a driver of enterprise transformation.  The objective of this special issue is twofold: (i) to publish rigorous and innovative research on the role that analytics plays in enabling enterprises to transform by adopting these capabilities at scale in the digital era, and (ii) to gauge the pace of adoption of advanced analytics (e.g., big data, machine learning, cognitive computing) to the above end.

We welcome papers that explore how analytics, in all its forms, is being adopted to transform processes across functions or entire organizations.  Analytics can take many forms, including data visualization, descriptive analysis, predictive alerts/recommendations, dashboards, machine learning algorithms, and cognitive computing.  Analytics can occur at many levels of an organization, from the boardroom to the shop floor.  Of particular interest are not the specific applications or technical solutions but rather the adoption of these capabilities across the organization to transform decision-making processes at scale, while the organizations themselves are being disrupted in the digital era.  We encourage submissions that draw from diverse theoretical backgrounds such as engineering, computer science, decision science, creative visual design, organizational design, and behavioral economics. We are open to a wide set of methodological approaches, including empirical research, case-based research, field studies, and behavioral decision-making experiments, among others.  We encourage collaboration between academia and industry and welcome submissions that are diverse by both industry and geography.

Some prospective topics include:

Value of Analytics 

  • Where is the value?  Targeting, focusing, measuring, and realizing the value from analytics while anticipating disruptive threats in the industry.
  • Optimizing the business you have while planning with agility for the business you want to become in a dynamic uncertain market.

The Data Economy

  • Migrating to the Data Economy by taking advantage of the wealth of data being generated both internally and externally to provide more effective decision support.
  • Leveraging cloud-based digital platforms to enable speed to capability, address latency challenges and provide right-time insights.
  • Striking the right balance between this natural migration and tradeoffs around data security, data privacy and compliance.

Adoption of Advanced Analytics

  • Adoption of advanced analytics and machine learning to solve problems in new ways.
  • Cognitive is coming – adoption is slower than you think and the use case universe is sparse.
  • When will we reach the tipping point from over-hyped and inflated expectations to wide scale adoption of advanced analytics methods (e.g. explainable AI)?

Decision Science

  • Embedded analytics in decision making (e.g. fully automated, predictive alerts, role-based decision cockpits etc.).
  • Adoption of evidence-based decision making – the convergence of technology, behavioral science and organization design.
  • Impacts of interactive visualizations on acceptance and use of analytics.
  • The cusp of a new wave in the fields of management and decision science.

Organization Design

  • Organizing for success – bridging the gap between the business decision makers, the data science community and the information technology organization.  What are the tradeoffs between agility and industrialization?
  • Adapting the culture of the organization to be insight driven, especially when the insights challenge established and accepted norms.

Submission Instructions

  • Submission Deadline: September 1, 2017
  • First Round Review: November 1, 2017
  • Second Round Review (if needed): January 1, 2018
  • Publication: Summer 2018

For further information regarding this special issue, please email the Special Issue Editors:

Editorial information

  • Guest Editor: John Casti, X-Center, Vienna
  • Guest Editor: Tim Chou, Stanford University
  • Guest Editor: Alex Kass, Accenture
  • Guest Editor: Richard Larson, MIT
  • Guest Editor: Paul Maglio, University of California, Merced
  • Guest Editor: Harold Sorenson, University of California, San Diego
  • Guest Editor: James Tien, University of Miami



CFP: Cognitive Assistance in Public Sector – AAAI Fall Nov 9-11, DC

Cognitive Assistance in Government and Public Sector Applications

November 9-11, 2017 Washington, DC

Cognitive Assistance is an important focus area for AI. While it has several facets and still lacks a precise definition (one of the reasons for this Symposium!), it has in the past been called augmented intelligence, the automation of knowledge work, intelligence amplification, cognitive prostheses, and cognitive analytics. It is generally agreed [1] that even while fully automated AI is still being developed, there are many aspects in which people can (and already do) benefit from automated support, when it is appropriately and intelligently provided.

This symposium solicits innovative contributions to the research, development, and application of Cognitive Assistance technology for use in government (executive agencies, legislative, and judicial branches), education, and healthcare. These areas differ considerably, but they all share characteristics that make them prime candidate application areas for Cognitive Assistance: complex knowledge interdependencies that take years to master; situations where human experts provide support to less-informed clients with urgent needs; and legal and social requirements for accurate and timely help.

This year we will expand the dialog between the user, academic, and industry communities to discuss the following topics:

  • Public Sector problems where cognitive assistance may be desirable due to the potential for human-machine synergy, and where the human-machine team may be uniquely suited to the problem space. Identify how human and machine complement one another and how this co-dependency will evolve over time.
  • Reports from the field on the adoption of cognitive assistance, including best practices, lessons learned, costs/benefits, productivity results, barriers to adoption and issues that require further study.
  • Policies, regulations, and practices necessary to accelerate the opportunities and mitigate the risks of cognitive assistance
  • Skills and education necessary to obtain benefits from cognitive assistance and mitigate the impacts on displaced workers
  • Fairness, safety, dependability, ethics, transparency, trust, risk management, and other cross-cutting issues around the use of cognitive assistance in the public sector
  • Standards and open source technologies for cognitive assistance
  • Implications of cognitive assistance for all facets of government (e.g., economy, security, demographics)
  • Highlights of advancements and results that have occurred since FSS-16.

[1] It’s been noted that “Humans will likely be needed to actively engage with AI technologies throughout the process of completing tasks” [“Artificial Intelligence, Automation, and the Economy”, Executive Office of the President, December 2016].

We solicit ideas for and participation in panel discussions among public sector representatives to articulate their needs for and concerns about the use of cognitive assistance in their domains. We hope also to have panels with users and technologists exploring common problems faced by users, the opportunities for a cognitive assistant to help, what information is available, and what would be measures of success for a solution.

We also invite students and researchers to propose demonstrations of state-of-the-art approaches to cognitive assistance technology and ideas relevant to the public sector.

Submission instructions:

The symposium will include presentations of accepted papers in both oral and panel discussion formats. Potential symposium participants are invited to submit either a full-length technical paper or a short position paper for discussion. Full-length papers must be no longer than eight (8) pages, including references and figures. Short submissions can be up to four (4) pages in length and describe speculative work, work in progress, system demonstrations, or panel discussions.

Please submit directly to with FSS-17 in the subject line. Please submit by July 21.

Organizing Committee: Frank Stein, IBM (Chair); Lashon Booker, MITRE; Chris Codella, IBM; Eduard Hovy, CMU; Chuck Howell, MITRE; Anupam Joshi, UMBC; Andrew Lacher, MITRE; Jim Spohrer, IBM; John Tyler, IBM

The Future

Cheap AI/DL in the cloud leads to technology deflation, driving costs out of service systems at all scales – so…  (a) AI will rapidly become a commodity, decreasing the cost of nearly everything; (b) large companies will be transformed by automation and augmentation; (c) small companies will flourish, and people will be involved in several simultaneously; and (d) individuals will learn to protect and monetize their personal data – so in the long run, it will all be OK…  everyone who wants to learn will be able to do so rapidly in a highly personalized way.  This sweeping disruption of societal norms happened previously when the steam engine transformed physical work and re-ordered society (people moved from working on small farms to larger and larger factories, and people learned in schools set up like factories).  Today, the cognitive engine is transforming mental work and will also profoundly re-order society (people will move from one large company to many simultaneous startups, and people will learn in schools set up like startup incubators, where students work in teams to solve real-world challenges because the building blocks are so cheap and powerful).


Event – Tuesday May 23, 2017:

Event Recording:


CFP HICSS 51 – Smart Service Systems: Analytics, Cognition and Innovation

Dear Colleagues,
Hello! We are serving as co-chairs of the “Smart Service Systems: Analytics, Cognition and Innovation” minitrack in the Decision Analytics, Mobile Services and Service Science Track of the upcoming 51st Hawaii International Conference on System Sciences (HICSS), and are writing to scholars such as you with expertise in various areas of service systems, analytics, mobile systems, and cognition in hopes that you will consider submitting a paper to our minitrack. The deadline for submitting papers to HICSS-51 is June 15, 2017. Please consider submitting your work if it is related to any of the specific topics listed and/or if you feel it addresses visions of the future of this track. We expect a range of concepts, tools, methods, philosophies, and theories to be discussed. We thank you, in advance, for your valuable contribution to HICSS-51. Please let us know if you have any questions or need additional information. We look forward to receiving your submission!
Best Regards,
Haluk Demirkan –
Jim Spohrer –
Ralph Badinelli –
January 3-6, 2018, Hilton Waikoloa Village, Big Island, Hawaii
Additional detail may be found on HICSS primary web site:
Smart service systems can be characterized by: (1) the types of offerings to their customers and/or citizens, (2) the types of jobs or roles for people within them, and (3) the types of returns they offer investors interested in growth and development, through improved use of technology, talent, or organizational and governance forms, which create (dis)incentives that (re)shape behaviors. Entrepreneurs and policymakers can be viewed as innovators working to improve quality-of-service for customers and quality-of-life for citizens, respectively, as well as quality-of-returns for investors.  Ideally, smart service systems are ones that continuously improve (e.g., productivity, quality, compliance, sustainability, etc.) and co-evolve with all sectors (e.g., government, healthcare, education, finance, retail and hospitality, communication, energy, utilities, transportation, etc.). Regional service systems include nations, states, cities, universities, hospitals, and local businesses. Global service systems include multi-national businesses, professional associations, and other global organizations.  Natural and human-made disasters, technology failures, criminal activities, and political collapse can disrupt service systems and negatively impact quality-of-life for the people living and working in them.
There is a need to apply robust research findings in the appropriate management and organizational contexts related to innovation of smart service systems. An important trend in smart service systems is the increasing availability of cognitive assistants (e.g., Siri, Watson, Jibo, Echo, etc.) to boost the productivity and creativity of all the people inside them. The goal of this mini-track is to explore the challenges, issues, and opportunities related to smart service systems, analytics, cognitive assistants, and digital innovations. Possible topics of applied, field, and empirical research include, but are not limited to:
  • Theories, approaches and applications for innovation of smart service systems and smart devices
  • Value co-creation processes, metrics and analytics for smart innovation processes
  • Methods that scale the benefits of new knowledge globally, rapidly, and profitably
  • Service-oriented agile IT realization platform for smart service co-creation
  • The place of cognitive systems, cognitive computing, systems engineering, and cloud in smart service systems
  • Innovation ecosystems with internet and internet-of-things
  • Theories and approaches for integrating analytical and intuitive thinking, and deep learning
  • Open innovation and social responsibility
  • Planning, building and managing design and innovation infrastructures and platforms
  • Technology and organizational platforms that support rapid scaling processes (smart phones, franchises, etc.)
  • Smart service systems include the customer, provider, and other entities as sources of capabilities, resources, demand, constraints, rights, responsibilities in value co-creation processes, and includes current applications of human and cognitive systems
  • Analytics models, tools, and engines for analytics support
  • Agile business development platforms for operational enablement: business processes, rules, real-time event management
  • The commoditization of business processes (e.g., out-tasking, ITIL, SCORE), software (e.g., the software-as-a-service model, service-oriented architecture, application service providers), and hardware (e.g., on-demand, utility computing, cloud computing, service-oriented infrastructure with virtualized resources, infrastructure service providers) for innovations
  • Self-service and smart technologies & management for sustainable innovations
  • Services implications to value chains, networks, constellations and shops
  • Collaborative innovation management in B2B and B2C e-commerce
April 1, 2017: Paper submission begins.
June 15, 2017, 11:59 pm HST: Paper submission deadline
August 17, 2017: Notification of acceptance/rejection
September 22, 2017: Deadline for authors to submit final manuscript for publication
October 1, 2017: Deadline for at least one author of each paper to register for HICSS-51

Visit Almaden

The best address is 900 Bernal Road, San Jose, CA 95120 – head to the guard station at the top of the hill and give your name as a visitor.  Hosts must register all visitors’ names, titles, organizations, and citizenship with IBM Security in advance of any visit. Visitor badges can be picked up at the reception desk; look for the flagpole in front for the visitor reception entrance. Best travel times: 30 minutes from San Jose Airport, 50 minutes from Stanford, 70 minutes from San Francisco Airport, 90 minutes from Berkeley – all depending on traffic conditions; double if traffic is heavy.



IBM Almaden Snowy Day

Almaden Foggy Day

IBM Almaden Aerial View

Ponder this

I am trying to think hard about the 10 million minutes of experience that people go through to develop adult capabilities, as well as the 2 million minutes of experience from adult novice to adult expert when people transition between professions.
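Reading the two figures as round totals of elapsed time (an assumption on my part: counting all 24 hours of every day, not just waking or practice time), they convert to years as follows:

```python
# Back-of-the-envelope conversion of the two experience budgets into years,
# assuming the figures count total elapsed minutes (24 hours a day).
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

adult_capabilities_years = 10_000_000 / MINUTES_PER_YEAR
novice_to_expert_years = 2_000_000 / MINUTES_PER_YEAR

print(round(adult_capabilities_years, 1))  # roughly 19 years
print(round(novice_to_expert_years, 1))    # roughly 3.8 years
```

So 10 million minutes is roughly the first 19 years of life, and 2 million minutes is roughly a four-year professional transition.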

Mapping these developmental progressions into capabilities, and then capabilities into technologies is quite challenging – but fun.

Also, going in the other direction: from technological capabilities to specific applications, and from specific applications to the general capabilities of intelligence.

(1) Grand Challenges: General person at an age level -> development of general capabilities on tasks -> universal architecture – > open source technologies
(2) Practical Applications: Specific open source technologies -> specific capabilities for tasks/applications  -> role in universal architecture
(3) Data sets for benchmarking performance improvements over time
(4) Rapidly rebuilding open source technologies from scratch -> booting up the universal architecture with minimal data/code (including synthetic data from simulations)

Building Blocks to the Future: Cognitive OpenTech

As context, consider (1) the rapid pace of development of Cognitive OpenTech and (2) the remaining Grand Challenges of AI/CogSci.

Part 1: Cognitive OpenTech Progress

Now consider the relative importance of big Data, Cloud compute power, and new Algorithms as a Service in making progress… we can call these factors the DCAaaS drivers of progress.

In 2011, IBM Watson’s Jeopardy! victory on the TV quiz game show would not have been possible without the existence of Wikipedia – big data that was crowdsourced, and that represents a compilation of knowledge across human history, including recent movies, sports events, political changes, and other current events as well as historic events.   Wolfram had an interesting analysis of how close “brute force” approaches were coming to this type of Q&A task, based on compiled human knowledge or facts.   In many ways this is an example of GOFAI – Good Old-Fashioned AI – with a twist.  GOFAI includes “people built giant knowledge graphs,” such as ConceptNet.  The modern twist that is now available, but was not available in the 1980s, is crowdsourcing the construction of the “big data.”
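To make the knowledge-graph side of GOFAI concrete, here is a toy sketch: facts stored as (subject, relation, object) triples and queried by pattern matching. The triples are invented examples of my own, not actual ConceptNet data, and real systems hold many millions of them.

```python
# Tiny hand-built knowledge graph in the spirit of GOFAI systems like
# ConceptNet: facts as (subject, relation, object) triples, queried by
# pattern matching. The triples below are invented examples, not real data.

triples = {
    ("Watson", "is_a", "question-answering system"),
    ("Watson", "built_by", "IBM"),
    ("Wikipedia", "is_a", "crowdsourced encyclopedia"),
    ("Jeopardy!", "is_a", "TV quiz show"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given pattern (None matches anything)."""
    return [
        (s, r, o) for (s, r, o) in triples
        if subject in (None, s) and relation in (None, r) and obj in (None, o)
    ]

print(query(subject="Watson"))   # everything the graph knows about Watson
print(query(relation="is_a"))    # all type assertions
```

The crowdsourcing twist is simply that the `triples` set is built by millions of contributors rather than a small team of knowledge engineers.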

In 2016, Google/DeepMind’s AlphaGo victory in the game of Go would not have been possible without synthetic data – massive amounts of data generated by brute-force simulated game playing.  In 2017, CMU’s Libratus victory in poker (Texas Hold’em) was also dependent on big data from simulated game playing.   Generating synthetic data sets based on foundational crowdsourced data sets has been key to many recent ImageNet Challenge annual performance improvements/victories.  Additional “big data” that is synthetic data generated from crowdsourced data is a hot topic with OpenAI’s Universe project (background: generated data) as well.

Three speakers explain the importance of big Data, Cloud compute, and Algorithm advances as a Service (DCAaaS) – or simply “better building blocks” – see:

Andrej Karpathy (OpenAI)
Richard Socher (Salesforce)
Quoc V. Le (Google)

In addition to “big data” that is (1) crowdsourced, like Wikipedia and ImageNet, and (2) machine generated (“synthetic data”), as in AlphaGo, Libratus, and OpenAI Universe, each of us has a stockpile of (3) personal data on our computers, smartphones, social media accounts, etc.

Rhizome’s blog has an interesting post about the Web Recorder tool.   Web Recorder is a tool for greatly expanding the amount of personal data, while also aggregating it as part of a type of internet archive and our personal browsing history of things we find interesting on the web.   A type of collective, digital social memory is emerging.

In sum, more and better data, compute, and algorithms are fueling the rapid pace of Cognitive OpenTech developments.

Part 2: Grand Challenges of AI/CogSci Progress

A universal architecture for machine intelligence is beginning to emerge.  The universal architecture that is emerging is a dynamic memory.  Imagine a dynamic memory that stores and uses information to predict possible futures better and more energy-efficiently than any previously known process.   This capability provides a type of episodic memory of text, pictures, and videos for question answering (see minute 50+ in the Socher video above).   The dynamic memory includes both RNN (Recurrent Neural Net) models and large knowledge-graph models (as found in GOFAI) for making inferences, answering questions, or taking other types of appropriate actions.

What is a dynamic memory good for? Most of us have taken a standardized test with story questions.  The test taker is asked to read a story, look at a sequence of pictures, or watch a video and then answer some simple questions.  In grade school, these “story tests” are simple commonsense reasoning tasks, where the answer is always explicit in the story.  As we get older, the stories get harder, inference is required beyond commonsense knowledge, tapping into “book learning” and “expert knowledge” that has been compiled for centuries.  Some story questions we can answer based on short-term memory (STM), and others require long-term memory (LTM).  A universal architecture that is a dynamic memory can combine appropriately both STM and LTM for question-answering.

For example, to get a sense of where machine capabilities currently stand for very simple stories, consider the Story Cloze Test and ROCStories Corpora.

Context: Gina misplaced her phone at her grandparents. It wasn’t anywhere in the living room. She realized she was in the car before. She grabbed her dad’s keys and ran outside.
Right ending: She found her phone in the car.
Wrong ending: She didn’t want her phone anymore.

The example above is interesting, and the ConceptNet5 website FAQ (at the very end) reports: “Natural language AI systems, including ConceptNet, have not yet surpassed 60% on this test.”
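To make the task concrete, here is a trivial word-overlap baseline of my own (not one of the published systems): pick the ending that shares more words with the context. It happens to get the Gina example right, but shallow heuristics like this are exactly what the test is designed to defeat.

```python
# Trivial Story Cloze baseline for illustration only: choose the candidate
# ending with more word overlap with the context. Shallow heuristics like
# this are what the benchmark is designed to defeat.

def words(text):
    """Lowercase, strip periods and apostrophes, split into a set of words."""
    return set(text.lower().replace(".", "").replace("'", "").split())

def pick_ending(context, endings):
    ctx = words(context)
    return max(endings, key=lambda e: len(words(e) & ctx))

context = ("Gina misplaced her phone at her grandparents. It wasn't anywhere "
           "in the living room. She realized she was in the car before. "
           "She grabbed her dad's keys and ran outside.")
endings = ["She found her phone in the car.",
           "She didn't want her phone anymore."]

print(pick_ending(context, endings))  # → She found her phone in the car.
```

The right ending wins here only because it shares six words with the context versus three for the wrong ending; an ending that required actual inference would defeat the heuristic.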

As highlighted above in the Karpathy, Socher, and Le videos, data in the form of sequences of text, sequences of images, and sections of videos (and audio recordings) are all being used as input to tell simple stories.   These stories (data) are snippets of external-reality representation – with some measure of internal-model representation feedback loops – so they are approaching (1) an experience representation and (2) an episodic memory – what Schank called “dynamic memory” – that is beginning to be used in story processing and question-answering tasks – what Schank called “scripts, plans, goals, and understanding.”

The remaining grand challenge problems of AI/CogSci are being worked on by university, industry, and government research labs around the world, and rapid progress is expected, thanks in part to cognitive opentech – data, cloud (compute), and algorithms offered as a service, and very easy to access, including from the smartphones that never leave our side as we operate in today’s world.   The models being generated will have more and more universal applicability over time, and should boost the creativity and productivity of end-users who use these technologies to solve new and interesting problems, as advocated by Garry Kasparov.  Kasparov, the world champion grandmaster, lost chess games to Deep Blue in 1996 and again in 1997.  Today, noteworthy in the news, Garry Kasparov is learning to love machine intelligence.

IA (Intelligence Augmentation) is a long-standing grand challenge that involves both people and machine intelligence together – thinking better together.  IA is the key to what the NSF, JST, VTT, OECD, and other organizations have started referring to as smarter/wiser service systems.   IBM has made many contributions to intelligence augmentation; both intelligence augmentation and collaborative intelligence will benefit the world.

The past, present, and future of measuring AI progress is becoming an important area of research.