The eight unsolved grand challenges of AI/CogSci are readily apparent if one watches a child grow up to adulthood, noting the capabilities that arrive at different ages.
The list below provides a set of goals for students building open source AI software for smartphones. Moreover, to stimulate university startups, students should be learning to build, understand, and work with an open source cognitive assistant on their smartphones that helps them learn, plan their careers, and develop cognitive assistants for the occupations they plan to enter.
The above student projects will become much easier, once these eight unsolved grand challenges of artificial intelligence and cognitive science have been solved:
People see these capabilities developing from 0-5 years of age:
(1) minute experience (Fiore)
(2) episodic memory (Schank, Socher)
Oddly enough, (2+) deep learning for perception and action is getting pretty well solved (with some super-human capabilities) – and this is why so many people think AI is solved, or nearly about to be solved: they assume this one thing, near the bottom-middle of the cognitive capability hierarchy, allows everything else to be solved quickly – and it does not. There are things lower and higher in the stack of cognitive capabilities that remain very challenging – grand challenges, in fact.
People see these capabilities developing from 5-10 years of age:
(3) commonsense reasoning (Lenat)
(4) social interactions (Forbus)
People see these capabilities developing from 10-15 years of age:
(5) fluent conversations (Klein)
(6) ingest textbooks (Etzioni)
People see these capabilities developing from 15-20 years of age:
(7) ingest regulations (Searle)
(8) collaboration augmentation (Engelbart)
People require about 10 million minutes of experience to acquire these capabilities and become adults in society.
Then adults require about 2 million minutes of experience to go from novice to expert in an occupation or social role where experts already exist to learn from.
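As a rough sanity check on these figures, a few lines of arithmetic (assuming calendar minutes, sleep included – an illustrative assumption, not a claim from the original) show the totals line up with familiar timescales:

```python
# Rough arithmetic check of the experience-time claims above.
# Assumes calendar minutes (including sleep); the constants are illustrative.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600 minutes in a year

years_to_adult = 10_000_000 / MINUTES_PER_YEAR   # childhood to adulthood
years_to_expert = 2_000_000 / MINUTES_PER_YEAR   # novice to expert

print(round(years_to_adult, 1))   # 19.0
print(round(years_to_expert, 1))  # 3.8
```

So 10 million minutes is roughly the first two decades of life, consistent with the 0-20 developmental windows above; counting only waking minutes would stretch both figures over more calendar years.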
In slightly more detail:
(1) minute experience (Fiore) – requires representing both external inputs and internal inputs – no one really knows how one minute of experience works in a person.
(2) episodic memory (Schank) – requires building a dynamic memory that experience can be added to, so that performance on certain tasks improves, and does not degrade, with additional experiences.
See minute 50 for dynamic memory by Socher (Salesforce): https://www.youtube.com/watch?v=oGk1v1jQITw
“Cartoonization” might be a good approach to explore – see the work of Devi Parikh – https://filebox.ece.vt.edu/~parikh/CVL.html
Cartoonization is summarizing a long series of videos into a cartoon that then becomes part of a rank-and-retrieve question-answering system – an episodic dynamic memory. The killer app for smart cameras might be cartoonization: a TensorFlow-based system taught to perform the "killer end-user app – cartoonization, or summary cartoons" from a smart camera that builds an episodic memory of the last decade, year, month, week, day, and hour of views – reduced to a two-minute cartoon of "expectation violations" or "interesting incidents" over those time periods. Rank-and-retrieve Q&A on the cartoon might also be good.
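The rank-and-retrieve step can be sketched in a few lines. This is a toy illustration, not the Parikh or TensorFlow work described above: it assumes each cartoon frame has already been reduced to a text caption, and it ranks stored events by simple word overlap with a question (a real system would use learned video/text embeddings). All names here are hypothetical.

```python
# Toy rank-and-retrieve Q&A over an episodic memory of captioned events.
# Real systems would use learned embeddings, not keyword overlap.
from collections import Counter

class EpisodicMemory:
    def __init__(self):
        self.events = []          # list of (timestamp, caption) pairs

    def add(self, timestamp, caption):
        self.events.append((timestamp, caption))

    def retrieve(self, question, k=2):
        """Rank stored events by word overlap with the question."""
        q_words = Counter(question.lower().split())
        def score(event):
            _, caption = event
            return sum((q_words & Counter(caption.lower().split())).values())
        return sorted(self.events, key=score, reverse=True)[:k]

memory = EpisodicMemory()
memory.add("day 1", "dog chased the mail carrier down the street")
memory.add("day 2", "package delivered at the front door")
memory.add("day 3", "dog slept on the porch all afternoon")

# "Interesting incident" retrieval over the stored episodes:
print(memory.retrieve("dog chased mail carrier", k=1))
# → [('day 1', 'dog chased the mail carrier down the street')]
```

The point of the sketch is the shape of the system – append-only episodic storage plus ranked retrieval at question time – which is what the cartoon summaries would feed.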
(3) commonsense reasoning (Lenat) – requires reasoning changes to be compiled for rapid memory lookup.
(4) social interactions (Forbus) – requires animal level and then beyond animal level awareness and modeling of others.
(5) fluent conversations (Klein) – very hard, how do people do it?
(6) ingest textbooks (Etzioni) – very hard, especially when diagrams are included, etc.
(7) ingest regulations (Searle) – very hard, to go beyond social rules and manners, to become good at understanding the laws and institutions that shape behavior.
(8) collaboration augmentation (Engelbart) – very hard; it requires first that people know how to collaborate, and then that people with cognitive assistants can interact fluidly on tasks as well.
Rob Farrell (IBM Research) and Paul Maglio (IBM Research, UC Merced) looked at the list of capabilities above and were not satisfied with it – they wanted clear tasks that a machine would have to perform, not loose descriptions of vague capabilities like commonsense reasoning.
Is the idea to break these down according to cognitive functions? [JIM: Actually developmental capabilities sequence, mental simulation of a child growing up]
I feel like AI challenges should be obvious, verifiable, optimal, or indistinguishable from humans. [JIM: Yes, AI systems need to accomplish clear tasks to be evaluated.]
One way to do this is to have the challenges not just focus on the early ("ingest") or later ("common sense reasoning") parts of the process, but go end-to-end, driving research in the various cognitive functions while including the target function. Another reason for this is that cognitive functions tend to be more intertwined than we think (Lakoff etc.). [JIM: yes – they are highly intertwined]
So the challenges could be things roughly in the categories you proposed but elaborated in this way. Here is a quick crack at it: [JIM: Thanks so much Rob – great first crack at it]
1) experience representation – generating a natural language description of a complex physical object (e.g. car) from a video of it performing a range of functions (e.g., avoidance)
2) episodic memory – writing a diary every day for a year based on online chat interactions, social network messaging, etc. with a simulated "life" and then answering questions about these "personal" experiences. Also measure the degree of connection to others before and after.
3) common sense reasoning – carrying on a dialogue (speech) about the nuances of complex concepts, such as what counts as conservative or liberal politics, or whether an outdoor scene is 'beautiful'
4) social interactions – robotically navigating a complex physical space (e.g., a Disney park) by interacting with people (e.g., "excuse me") and objects (e.g., pushing a turnstile).
5) natural conversation – learning another language from audio and text examples, well enough to respond to questions from native speakers about a complex activity (e.g., what is sold by this store?) with sufficient accent, grammar, word choice, etc. to be understood by the askers
6) reading (textbook) – learn math from textbooks and apply it to a novel domain by reading different textbooks about that domain (e.g., physics)
7) reading (lawbook) – make judgements on legality similar to human judges on complex decisions
8) collaboration augmentation – participate in a collaborative activity with two other people to speed up the activity and improve its quality
1) This proposal is too hard, much like an expert system – I am looking for data traces through time that include external environment data as well as hidden internal data – a seeing/hearing and thinking data trace. What I am asking for is much more difficult, and more fundamental. Robot learning at CITRIS People and Robots (Ken Goldberg), and the work of Pieter Abbeel, is getting close.
2) I like this one – the auto-diary idea – converting a trace of a person's behavior into something detailed
3) This proposal is too hard, like a debater – I am looking for something that is quickly able to be surprised by commonsense reasoning violations
Commonsense approaches: http://www.kdnuggets.com/2016/08/common-sense-artificial-intelligence-2026.html
Startups looking at deep learning and commonsense reasoning: http://www.technomontreal.com/en/news-center/news/microsoft-acquires-deep-learning-startup-maluuba-yoshua-bengio-to-have-advisory
A long history with Lenat and Cyc: https://www.wired.com/2016/03/doug-lenat-artificial-intelligence-common-sense-engine/
4) Sort of OK, like a robot dog at a theme park
5) I kind of like this learning-another-language idea – vocabulary and context matter – but I am looking for something better than keyword search of the web, with commonsense reasoning and social interaction playing a role.
6) The Allen Institute has a project to ingest a textbook and answer the questions at different grade levels – this one is OK.
7) This is more of 6) but for law books, with reasoning about institutions and laws – ingesting and answering law-book questions is the right direction. RegTech (Regulation Technologies) is on the rise.
8) Right – being able to think about a collaborative project when different people with different skills are available – managers have to do this a lot.
To read about task versus ability evaluation of AI systems read this:
To read about Psychometric AI testing read this:
To see a nice CHC diagram, check this: