Reflecting on generative AI

My colleague (Salvatore Moccia) at EIT Digital invited me to give the opening lecture in their Generative AI online class. Marco Podien will speak right after me, with some great insights and examples of using ChatGPT.

In my presentation (which is posted to slideshare), my goal is to inspire the participants to dive right in and start learning, but I also want them to know a little about the past, present, and possible future of AI. Since I am a retired industry executive (Apple, IBM) who worked at an AI startup in the late 1970s after graduating from MIT, and then got a PhD in Computer Science/AI from Yale in the 1980s, I do have a few stories to tell.

The purpose of this blog post is to document a few day-to-day use cases of where I found generative AI helpful, share my responses to the students’ questions, and then to close with some reflections.

Day-to-day Use Cases
Today, I help the ISSIP.org non-profit, so many of the examples are connected to work activities associated with generating content for the ISSIP website or related presentations.

Case 1: Asked to write a short article about AI upskilling for a newsletter.

Shortly after my co-authored book “Service in the AI Era” came out, ChatGPT was all the rage. Cecilia Lee, who was then ISSIP Editor-in-Chief, asked me to write a short newsletter article about AI upskilling. I recall cutting and pasting her request into ChatGPT and seeing what it generated – a bit bland, but very fast (a nice essay in under a minute). Next, I used ChatGPT to help create some DALL-E prompts, and experimented to get some images. It took a bit of iteration – back and forth – with the tools before I was satisfied. Then I wrote my blog post from my own memory, edited the image, and posted it to the ISSIP website. You can read the final article in this newsletter here and read more about the exact process of creation using the AI tools here.

Case 2: Asked to speak about AI advances at a retirement home where the average age of the men was mid-80s.

I jumped at the opportunity to speak to men in their 80s about AI. I was thinking that some of them might have wanted to write a book, or generate the business plan for a startup, or something else – compose an opera – that they had not gotten around to in their busy lives before moving to the retirement community. Generative AI lowers the barrier to getting started so much that it is easy to just describe what you want and see what you get. I had fun listening to their goals, typing in a prompt, and watching their eyes get big when they saw ChatGPT go to work creating a book, opera, or business plan outline right before their eyes. My presentations always include the “dark side” (bad actors using AI) as well as open issues (energy, plagiarism, lawsuits), and describe the ethical use of AI. You can see my presentation to the men’s club at the Terraces of Los Gatos here.

Case 3: Mentoring students to learn to use generative AI

I also jumped at the chance to mentor SJSU MIS (Management of Information Systems) Honors students – who combine business and technology understanding – for a project where ISSIP was the client. What the students generated was awesome, and I share some examples of what they created: videos, images, short essays, all packaged in HTML code for an ISSIP webpage posting to explain a historic service innovation, such as the internal combustion engine, social media, or robots. The students even created a playbook to help ISSIP volunteers learn generative AI! Great stuff from students. I also helped mentor students from PSU (Industrial Engineering and Computer Science), CSULB (User Experience Design), U Washington Tacoma (Data Science and Analytics), and other places and majors – and will post what they created when I have a chance as well. Actually, it will all get posted to a new portion of the ISSIP website called the ISSIP Collab – so check back here in the future.

Case 4: Python Programming
When I write code, I use Google’s Bard – and find it is just so much faster at creating Python functions with test examples than I am. It is a great “coding buddy” that I can delegate to, and get back code to use. I just cannot move my fingers typing as fast as Google’s Bard can. Of course, when I ask it to write code that I have written dozens of times, it is easy for me to check that it is correct.
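For illustration only (my own toy example, not Bard’s actual output), here is the kind of small Python function plus test examples I typically ask for – a hypothetical is_palindrome helper – which is easy to check precisely because I have written variants of it many times.

Example (Python):

def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and non-alphanumeric characters."""
    cleaned = [ch.lower() for ch in text if ch.isalnum()]
    return cleaned == cleaned[::-1]

# The test examples are the part I most like getting generated along with the code.
assert is_palindrome("Racecar")
assert is_palindrome("A man, a plan, a canal: Panama")
assert not is_palindrome("Generative AI")
assert is_palindrome("")  # edge case: an empty string counts as a palindrome
print("All tests passed.")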

In all these (and many more) use cases, I always document the vendor, tool, and date of usage as part of ethical usage of AI. I do not hide the fact that I used AI to generate things. I cite the usage. For example, see these two ISSIP Ambassador blog posts, where I helped the ISSIP Ambassadors create an image to accompany their great blog posts on service innovation topics. Example1 is service innovation and human-centered AI for socio-technical systems design, and example2 is service innovation and financial services and fintech. I also use multiple tools – OpenAI ChatGPT, Google Bard, Anthropic Claude 2, and Microsoft Bing AI – to compare the results to find errors, since today’s AI is impressive, but imperfect.

Q&A&C (Questions, Answers, Comments)

AV
Q: How can we build our own digital twin?

Jim Spohrer (Guest)
A: Follow Kyle Shannon: https://www.linkedin.com/in/kyleshannon/

DJ
Q: Are you using a tool to build your digital twin?

Jim Spohrer (Guest)
A: I am using LLM chat tools to help me research and design my digital twin, including Anthropic’s Claude which allows PDF uploads. Also, I study the many tools recommended by Kyle Shannon: https://www.linkedin.com/in/kyleshannon/

AU
Q: Thanks for the presentation, how should we start to create our own digital twin?

Jim Spohrer (Guest)
A: Who to follow to build your digital twin – Kyle Shannon: https://www.linkedin.com/in/kyleshannon/
I will certainly add more – but in general my answer to all questions is someone to follow, who is deeper than I am in a particular topic area. My short answers are a reflection of what I have learned mostly from others and my own experimentation. I urge everyone to list out a diverse set of people to follow, and ensure some of those you follow share specific tools and prompts to try for your own experimentation, and to build your own set of use cases.

PC
Q: Can you give more references to learn more about Digital Twins?

Jim Spohrer (Guest)
A: Yes, my co-authored “Service in the AI Era” book and my presentations have many references. Just to warm up on the topic of your own digital twin coming, I recommend: Wakefield J (2022) Why you may have a thinking digital twin within a decade. BBC News Online. URL: https://www.bbc.com/news/business-61742884 Quote: “We are living in an age where everything that exists in the real world is being replicated digitally – our cities, our cars, our homes, and even ourselves.”

AV
Q: Why should one build her/his own digital twin? What can one do with it? What are the benefits?

Jim Spohrer (Guest)
A: Great question! My guess below… Before one can be a “responsible actor” one must become an “aware actor.” The first reason to work on a digital twin of yourself is that large companies are already working on it: Amazon to predict what you buy; LinkedIn (Microsoft) to predict what job you might be best suited for; Facebook and all social media platforms to predict what information you want in your timeline as you scroll. In fact, I predict a company will approach you (within the next two years, sometime before 2026) offering you a digital twin of yourself, and showing you some compelling use cases of why you need a digital twin. People will find living without a digital twin of themselves as strange as living without a smartphone. It will become that useful.

ATN
Q: Is this you or your digital twin delivering the speech? How do we know for sure this is not your digital twin delivering this presentation?

Jim Spohrer (Guest)
A: Exactly! Someday the only way you will know is because I want my digital twin to identify itself as my digital twin, so people know they can ask it all kinds of questions to get my perspective. I might require similar access to your digital twin before I will respond to some of your questions. In my EIT digital presentation that I posted to slideshare, I have a number of things that I am working on in my backup slides – so check them out – including “Topics for Discussion” – Beyond Language for Communications: “Here is how my AI, using my digital twin of you, predicted that you would respond to my request – could you please ask your digital twin of yourself to check this response and suggest improvements?” Hopefully our digital twins (collectively – within a company, within a city, within a nation, or even globally) will allow our opinions on a wide range of topics to be shared very quickly to evolve better policies and better informed citizens. Think – “let’s solve the UN Sustainable Development Goals” – for example.

YZ
Q: How do you feel about your digital twin? What do you think are the boundaries between being monitored and being helped? Do you think this progress is controllable?

Jim Spohrer (Guest)
A: Good questions. I feel I need to try to build my digital twin, both as my AI helper and because large companies are doing it as well – for their purposes, not necessarily my purposes. I am pro open-source builders and makers. Yes, I do not want a company or bad actors to hijack my digital twin – so the boundary between being monitored and helped is a slippery slope indeed. One cannot afford to become lazy or complacent about these issues. No, I do not think progress is controllable. However, I do think people are resilient and can spring back from disasters. At the end of my presentation is a pointer to a book by Dartnell called “The Knowledge.” I think it is important for people to think about disasters (a bit, but don’t become a doom scroller – resist that temptation) and prepare to be resilient.

AA
Q: What is the technical field in which you can foresee the most intense disruptions thanks to AI progress?

Jim Spohrer (Guest)
A: Some of the people I follow see the biggest short-term impact on gig workers who do art production, marketing copy production, video creation, music creation, etc. I will try to find a pointer to add, but in general start with Ethan Mollick (UPenn Wharton) and his “One Useful Thing” Substack. As a reminder, my presentation for EIT Digital listed above has a slide on who I follow. However, I also follow some people who see scientific advancement as ripe for disruption, with accelerated scientific discovery. So long term, I expect scientific disruption – and the resulting discoveries about the human brain, the evolution of life and the universe, and the evolution of service systems in society – to have the biggest impact.

AU
Q: What happens if there is a mistake in AI, how do you fix it? i.e. a wrong prediction of what I’m going to buy .o)

Jim Spohrer (Guest)
A: Most LLM (Large Language Model) chats have a feature for the user to give feedback. The user can also say in the chat something like “That was not helpful because of X. Please try again and this time bias your probabilities with this fact about my request Y.” Or more generally, just type: “The last answer was not helpful. Please ask me some questions that I can answer to help you, as a vendor-controlled AI tool, generate a better and more helpful response to my previous request.”

PKN
Q: How do you think AI is going to develop in a way to answer more generalized questions, like the moving-disk puzzle, from 3 columns to 4?

Jim Spohrer (Guest)
A: Check out Google DeepMind’s AlphaGo – my summary slide on the history of AI includes a reference. There is also research on LLMs that write programs to solve puzzles that require recursion, like the disk puzzle. Also, OpenAI’s early playground work was very impressive to me. I am not sure I follow anyone specific on game-play AI and that active research area – but suggest Matthew Berman on YouTube.

AU
Q: How reliable & repeatable are answers in ChatGPT if the user clicks on repeat (circular arrow)?

Jim Spohrer (Guest)
A: After only limited experimentation with OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard, and Microsoft’s Bing AI, I am not sure I can provide a good answer. Sometimes it seems to vary widely depending on the prompt/task, and other times it seems to get stuck or confused. After all, these are just “monkeys at the typewriter” and “stochastic parrots” that predict the next token based on questionable training data of variable quality – some fact, some fiction, social media rants, and other noisy data sources. Some of the people I follow give a lot of prompt engineering advice on how to get better answers, but for me it seems to have very limited utility. Again, I am sure there is someone to follow out there with much better answers. Professor Ernest Davis (NYU) used to do a lot of testing, but when the vendors got more secretive, he posted “without the following information, I cannot do scientifically useful experiments” – so he had to reluctantly give up. You ask a great question, and I think governments who care about fighting misinformation using scientific methods should care about the situation. Who to follow? Not sure…
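To make the “predict the next token” point concrete, here is a toy Python sketch (not how any vendor actually implements sampling – the distribution below is made up for illustration) showing why clicking regenerate can produce different answers: the model assigns probabilities to candidate next tokens and samples from them, so with a nonzero temperature the same prompt can yield different continuations.

Example (Python):

import random

# Made-up next-token probabilities a language model might assign after some prompt.
next_token_probs = {"Paris": 0.90, "a": 0.05, "the": 0.03, "Lyon": 0.02}

def sample_next_token(probs, temperature=1.0):
    # Higher temperature flattens the distribution, so less likely tokens appear more often.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

for run in range(5):
    print(run, sample_next_token(next_token_probs, temperature=1.2))  # output can differ from run to run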

AU
Q: Can you say a few words about data handling? Imagine being a bank company. Would you want your employees to use ChatGPT? Would you want your devs to use Copilot?

Jim Spohrer (Guest)
A: I would assume that some of them (employees) are already using it (AI) without my permission. So if I were a leader in the company, I would quickly put in place a mandatory education module for all employees about safe and permitted usage patterns, and unsafe and not permitted usage patterns, then work with HR to roll this out to all employees, and let employees know that if they violate the rules their employment can be terminated. I would also set up a Slack channel for employees to ask questions of each other and share answers, along with several expert employees who have responsibility for being lead influencers inside the company. As mentioned, IBM was very good at doing things like this quickly, so “responsible employees” could quickly become “aware employees.” IBM also (I am retired) had annual BCG (Business Conduct Guidelines) training that every employee had to take and sign off on, so their direct managers knew the employees were aware of the rules. A lot of the existing materials for using social media and how to treat confidential data would apply. My first three rules would be: (1) NEVER upload confidential data to an AI tool. (2) NEVER trust the results unless you have verified them with authoritative sources. (3) If you have used an AI tool to help create anything for the company, a customer, a partner, etc., you must indicate that in the deliverable, and keep a personal log of the tool name, vendor, and date of use, as well as errors detected and corrected. This is a rapidly evolving area, and again – who to follow is key.

SKE
Q: How do we navigate wisely the paradox of limitations by f.ex. GDPR and company polices/ IT-security etc, while also being encouraged by employer to be enthusias…see more

Jim Spohrer (Guest)
A: See answer above. Companies and organizations need to have “aware employees” in order to have “responsible employees.” All technologies create harms. Even buttons are a choking hazard to small children – many buttons look like pieces of candy. Besides accidents, we know all technologies can be used by “good/responsible actors” to cocreate benefits in business and society or by “bad actors” to create harms or mischief. So we need “aware employees” and “aware actors” in business and society. The foundations of democracy depend on an educated and aware population that can take responsibility for the consequences of their actions in a civil society. I recommend following Prof. Gary Marcus (NYU), as he seems to be very interested in European regulations at the moment.

HN
Q: How would one use AI to set higher goals?

Jim Spohrer (Guest)
A: Great question. My personal recommendation would be to ask the AI tool to summarize the UN Sustainable Development Goals – as examples of higher goals that a person might pursue. Using the AI tool to give examples of higher goals that people can pursue today is important. Then I would search for people and news sources to follow that work on higher goals (for example FutureCrunch – good news you do not hear about). Next, I would let the AI tool know about me, my strengths, my weaknesses, what I think I like, what I think I don’t like (this is the beginning of your digital twin) – for example, upload your resume or bio or CV to Anthropic’s Claude, and ask the AI tool to create a summary of you. Depending on many factors, you can explore higher goals that are well suited to you and your situation in life. I expect OpenAI will have a GPT in the marketplace for motivational interviews of people seeking higher goals – more ambitious goals to work on. I also recommend reading Adam Grant’s book “Think Again” as well as Damon Centola’s book “Change” – these lay the foundation for a lot of self-analysis. There could be a thousand interesting answers to your great question – I hope this attempt at an answer inspires you to look for other answers and people to follow to learn from as well.

AL
Q: What can we do to avoid the internet being polluted with generated content that in turn will feed into the training data of future AI?

Jim Spohrer (Guest)
A: Great question. I don’t know. I think it will be pretty much impossible without further advances of AI tools toward a generate-test-and-debug architecture (beyond the predict-next-token architectures of today – the stochastic parrot, the monkey at the typewriter, the latent space explorer – which are pretty good at creativity, but not very good at truthfulness). I am giving a talk on this topic later today for NextColab on dealing with hallucinations. In the talk, I recommend checking out Q* as explored by the AI Explained YouTube channel that I follow. These are early days. AI is impressive, but imperfect, but it will get better. Once AI can get 100% of basic math questions correct, then it will have a foundation to build upon for what is true. Some things we can know are true because of mathematical proofs – these are probably the strongest truths that an intelligent entity (a person, a cognitive system entity) can know. Once we have this foundation in place, then we can get AI systems to understand “computational truths” – for this I recommend following Stephen Wolfram, and perhaps start with Lex Fridman’s interviews of Wolfram. Computational truths are another foundational building block for a system to be able to know what is true about reality. From there, we have to go to scientific “truths” – which is of course knowing authoritative sources for the most part. Each type of truth in reality must be clearly understood before we can get past the current state of misinformation in the world. This is an important goal to work on – and thanks for the question.

AU
Q: Hi and thank you for this great talk. Do you believe that in the future, say 2080, we will be able to build AGI? and what are your opinions on that? Thank you in advance.

Jim Spohrer (Guest)
A: Yes, AGI by 2080 seems doable. Alan D. Thompson (YouTube) is very optimistic about AGI arriving sooner. His bar deals with IQ testing – which I think of as a “low bar” – an easier test, simply because the amount of information in LLMs is already humanity-scale (Wikipedia and beyond), and because so many answers to complex IQ-like puzzle questions are already on the web; so a system scoring a “higher IQ,” depending on how you measure it, is a “low bar.” Also, see Levels of AGI: Operationalizing Progress on the Path to AGI, which seems to me to be a “lower bar” approach, based on comparisons to human-level intelligence in a population of people with capabilities to perform a range of tasks. Prof. Gary Marcus (NYU) and Prof. Ernest Davis (NYU) would be better judges of AGI – they have a higher bar based on deep understanding of cognitive science and AI. Personally, AGI to me will require an episodic dynamic memory – read Roger Schank’s book “Dynamic Memory” for an in-depth discussion. Our individual identities are based on our episodic memories of our own lives. So to me AGI will have to have an identity, and the question is will it exist in isolation (a one-off like “Commander Data,” the android in Star Trek) or in a larger population of AGI – I suspect the latter. If you have a population of AGI with episodic memories and identities, it is more like Robin Williams’s character in Bicentennial Man – where the interactions with people will be very complex and intertwined friendships, and even seeking legal societal rights as they show responsible actions. I have written about my views of the difference between cognitive systems (which include animals, such as dogs that have social interaction skills) and service systems (like people, businesses, nations) that are also cognitive systems but have rights and responsibilities. Taking responsibility for the consequences of your actions is what an AGI would have to demonstrate to me before I would put the “AGI solved” sign out. Prof. Tom Malone (MIT) writes about super-intelligence and argues that companies are already super-intelligent entities with identities and responsibilities. I see true AGI being more like when the notion of the company and the limited liability corporation were formed. It will require a legal foundation, not just a technological achievement about high-IQ systems. I do not see this happening until between 2035 and 2040 at the very earliest, and perhaps not until 2060-2080. That is my “higher bar” definition and timeline reasoning. So I do not consider achieving AGI a purely technical thing; it is also a social and regulatory achievement – so much harder. Intelligence without accountability is not true intelligence. To define intelligence, you also have to define accountability.

FDA
Q: First of all, great talk. Secondly, I am a student and I use ChatGPT on a daily basis, and I noted that in some way I got “addicted” to it, for example to write cod…see more

Jim Spohrer (Guest)
A: Great question. People get addicted to drugs, video games, social media – and clearly AI and the Metaverse will fuel even greater addiction in some people. Like breaking any addiction, it requires surrounding yourself with many people who are not addicted. Addicted people like to hang out with each other – and certainly online, the digital transformation of business and society makes this easier than ever. I try not to do more than six hours of screen time a day. I do not read books online, because I do not want that screen-time burden added to my usage. I hope my digital twin will be able to help me reduce my screen time even further. My best advice to avoid digital addiction is to surround yourself with people who are not digitally addicted, and do activities with them. Higher-purpose activities are good, and so is just being social on a hike or doing amateur sports together. Go camping from time to time with no screen time. Who to follow on this topic may be hard, since we tend to follow influencers in the digital world!

ME
Q: What has been the GDP improvement due to AI over the last many years? Can you name examples?

Jim Spohrer (Guest)
A: Great question. A bulldozer versus a shovel for digging holes. A spreadsheet versus an early paper-tape calculator for helping with the books of a business. Better tools – better building blocks – boost productivity and GDP (Gross Domestic Product) of nations. The biggest bumps in GDP historically have come from social changes rather than technological changes. For example, women entering the workforce is one way to double overall GDP, but that does not necessarily boost GDP per worker (much). To boost GDP per worker, you need more productive workers – workers who have access to better tools or more efficient (right plan) and more effective (right goal) methods. For example, digging a ditch with a bulldozer instead of a shovel. Doug Engelbart was one of my mentors, and his 1962 paper on Augmented Intelligence is a historic document in my mind. Doug is remembered as the inventor of the computer mouse, but long term he will be most remembered for all his work on boosting our collective IQ to work on complex and urgent problems. Doug had truly “ambitious goals” – and he should be studied, including his historic 1968 demo, which is called “The Mother of All Demos.” While I find Alan D. Thompson (YouTube) overly optimistic on AGI, I think he has done some inspiring work thinking about better tools, compared with our tools of “the olden days.”

AU
Q: Are there any generative AI use cases on resource constrained edge devices out currently?

Jim Spohrer (Guest)
A: AI is certainly moving to all mobile phone towers for 4G and 5G – but I am not sure if that is resource-constrained by your definition. Check out the ISSIP.org blog post series, and look for information about last year’s winner of the ISSIP Excellence in Service Innovation award (Bluetooth Low Energy related), Armen Maghbouleh (the ISSIP YouTube channel recorded his great talk with use cases), as well as blog posts by ISSIP President 2023 Utpal Mangla (IBM GM Edge Cloud and AI), Christine Ouyang (IBM Distinguished Engineer), and the ISSIP Ambassador lead. Please join ISSIP.org while you are at it – free sign-up, and you get a monthly newsletter and the opportunity to keep learning along with us.

AL
Q: Awesome talk (as always!), Jim! What do you consider a top ‘impressive but imperfect’ issue in higher education at this time?

Jim Spohrer (Guest)
A: Great question, Ana – and I miss the days when you were in my Service Research Group at IBM Almaden Research in San Jose, CA (Silicon Valley) – we could have lunch and talk about this great question. See Prof. Ethan Mollick (UPenn) – his “One Useful Thing” Substack weekly posts have addressed this question of today’s AI tools in education. I also recommend connecting with Prof. Terri Griffith (Simon Fraser University in Vancouver, Canada) – she is also a former ISSIP President – and she has great insights into using today’s AI tools in education as well as for productivity in business. It is important that people remember to disclose when they use AI tools – that is what is most important. Students (and faculty!) should be encouraged to use AI tools, but to do so ethically and always disclose which vendor, which tool, what date, and some aspects of the errors, corrections, checking, and division of labor. This is extra work and will impact productivity, but it is the only ethical way to use these tools today that I see.

GR
Q: Many point of views regarding AI, especially on the ethical side. I noticed that the more I listen to podcast against it, the more I avoid using it. What is you…see more

Jim Spohrer (Guest)
A: Great perspective. I wrestled with this, and decided avoiding their use was not as good for society as embracing them and trying to be an “aware actor” working for ethical usage. See Noam Chomsky if you want an opposing view to mine. Also, check out Prof. Joseph Weizenbaum (MIT), who invented the first chatbot, called Eliza, in the 1960s, for another opposing point of view. I had Weizenbaum as a professor when I was at MIT. Still, I think it is better to be aware of the strengths and limitations of the tools than to avoid using them. However, I agree there are other points of view – equally valid. I very much respect the Amish, even though I am not one of them. Do you think “bad actors” will avoid using AI tools? No, bad actors will learn AI and exploit it, so we had better prepare. Responsible actors need to become aware actors. However, this usage of potentially unethical and illegal tools (courts will decide) can lead to lose-lose cycles called “Moloch” (I follow Liv Boeree).

AGH
Q: The productivity increase that you showed, GDP/employee, can come from increasing GDP or from reducing employees. How do you see that evolving?

Jim Spohrer (Guest)
A: Both for sure – in waves. Big companies may have fewer workers (reducing employees), but the workers who leave will go to more entrepreneurial ventures where they can upskill faster. Ultimately the difference between customer and employee disappears, and it becomes more like business-as-sport, with super entrepreneurs racing to create unicorns (Z2B – zero to a billion in revenue, or users, increasingly quickly). Z2B is not possible without customers, but aren’t the customers actually (by using the service system) contributing to its improvement and growth – which is what employees do? Regarding GDP per worker calculations for the USA, check out my blog post – I just get the data and use Wolfram Alpha and Microsoft Excel spreadsheets to follow the trend. But the bigger trend is clear as well.

AU
Q: Thanks for the amazing talk! Very inspirational. Can you elaborate more on the GDP/person measure?

Jim Spohrer (Guest)
A: Yes, check out my blog post. GDP/worker – not GDP/person – is a measure of a nation’s ability to augment its workers to make them more productive. I also recommend Don Norman’s book “Things That Make Us Smart” and W. Brian Arthur’s book “The Nature of Technology.” William Rouse and I wrote a journal article that explores GDP/worker and the lowering cost of computation (AKA Moore’s Law).
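As a minimal sketch of the arithmetic (with rough, illustrative figures rather than official statistics), GDP per worker is simply total GDP divided by the number of employed workers.

Example (Python):

# Rough, illustrative figures (not official statistics) just to show the calculation.
gdp_usd = 25e12        # roughly 25 trillion USD of annual GDP
workers = 160e6        # roughly 160 million employed workers

gdp_per_worker = gdp_usd / workers
print(f"GDP per worker: ${gdp_per_worker:,.0f}")  # about $156,000 per worker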

MLA
Q: What would you most recommend us trying with IBM Cognitive OpenTech?

Jim Spohrer (Guest)
A: Great question. While I have been retired for over 2 years, I would recommend checking out IBM’s contributions to the Linux Foundation’s AI & Data Foundation. IBM has contributed some awesome tools, as have other companies. IBM’s contributions have included tools for helping to build trustworthy and explainable AI. I am sure that Bill Higgins, Susan Malaika, Jeffrey Borek, and others at IBM are continuing to push the envelope on open innovation.

JH
Q: Thank you! Can you share the tools with links or names?

Jim Spohrer (Guest)
A: Sure – here you go:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly. For example, fire up the AI tools, and try this prompt.
Prompt:
“Please create a table that lists the following innovations in column 1: Plow, Cities, Writing, Standard Measures, Written Laws, Money, Compound Interest, Compass, Universities, Clock, Steam Engine, Constitutional Government, Universal Education, Lightbulbs, Automobile, Installment Payment Plans, Credit Cards, Online Trust (e.g., eBay reputation system), Ride sharing, Room sharing. Please also include a second column with the approximate year of invention. Please add a third column with the major benefit of the innovation. Please add a fourth column with any harms created or enabled by the innovation.”
Have fun!!!
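If you prefer to script the comparison instead of pasting the prompt into each chat window, here is a hedged sketch using the OpenAI and Anthropic Python SDKs; the model names and SDK versions change often, so treat the details below as assumptions to adjust.

Example (Python):

# Sketch only: assumes `pip install openai anthropic` and API keys set in the
# OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables; model names are assumptions.
from openai import OpenAI
import anthropic

prompt = "Please create a table that lists the following innovations in column 1: ..."  # paste the full prompt from above

openai_client = OpenAI()
openai_reply = openai_client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print("ChatGPT says:\n", openai_reply.choices[0].message.content)

claude_client = anthropic.Anthropic()
claude_reply = claude_client.messages.create(
    model="claude-2.1",  # assumed model name
    max_tokens=1500,
    messages=[{"role": "user", "content": prompt}],
)
print("Claude says:\n", claude_reply.content[0].text)

Comparing the two tables side by side is a quick way to spot the errors and omissions that any one tool makes on its own.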

AU
Q: Could you provide links to the different AI programs?

Jim Spohrer (Guest)
A: Yes, see above and below for some of them, but also check out the lists on the slides in my presentations, and start following some of the influencers that I follow to learn about more AI tools for a wide range of tasks. Also see:
• #1 Magic Eraser – Have a great photo but with something annoying in the background? Remove it easily: https://www.magiceraser.io

• #2 Craiyon – Words to pictures: https://www.craiyon.com

• #3 Rytr – Writing tool: https://rytr.me

• #4 Thing Translator – Picture to words: https://thing-translator.appspot.com

• #5 Autodraw – Sketch to Drawing: https://www.autodraw.com

• #6 Fontjoy – Font pairings made simple: https://fontjoy.com

• #7 Talk to Books – Ask questions to 100,000+ books: https://books.google.com/talktobooks/

• #8 This Person Does Not Exist – Need a face that belongs to nobody? https://thispersondoesnotexist.com

• #9 Namelix – Need to name a project? https://namelix.com

• #10 Let’s Enhance – Improve image resolutions and clarity: https://letsenhance.io

Some may have already disappeared in failed startups and reappeared in new startups.

PI
Q: Do you use multiple AI applications at the same time? Discuss them at the same time?

Jim Spohrer (Guest)
A: Yes, for research and for writing code/programs I use:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly.
I also experiment with other AI tools – new ones every week – that create images, videos, music, and much more. These tools are like a “digital muse” – impressive, but imperfect, but getting better over time. See the use cases above, as well as the playbooks students are generating.

AU
Q: What are the best AI tools you’ve found besides ChatGPT?

Jim Spohrer (Guest)
A: I use:
Chatgpt.openai.com
Bard.google.com
Claude.anthropic.com
I keep all three open in a browser, as well as Microsoft Bing AI, and compare results – this helps me spot errors more quickly.

AU
Q: What is your opinion about the EU AI act and similar frameworks? How are we going to safeguard us from…

Jim Spohrer (Guest)
A: I follow Prof. Gary Marcus (NYU) who seems to be doing a good job tracking this and thinking about the issues. Better than me.

IDW
Q: Do you have a name or a source with more information on AI in a dystopian and utopian setting that you discussed shortly before?

Jim Spohrer (Guest)
A: Yes, for utopian see Alan D. Thompson (YouTube), and for dystopian, the people who can scare me the best are Tristan Harris and Aza Raskin at the Center for Humane Technology – see:
Harris T, Raskin A (2023) The A.I. Dilemma – March 9, 2023. Speakers: Tristan Harris and Aza Raskin. Center for Humane Technology (via Frank Odasz). URL: https://youtu.be/xoVJKj8lcNQ Quote: “Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails. For the podcast version, please visit: https://www.humanetech.com/podcast/”

FN
Q: How do you keep your inspiration/motivation when using AI, if what you enjoy is actually the problem solving and not just ordering somebody/some AI to solve it?

Jim Spohrer (Guest)
A: Steve Jobs and Alan Kay talked about bicycles for the mind – not cars for the mind. Bicycles make you stronger at walking, not weaker. Cars make you weaker at walking, not stronger. We need to better design technologies to make us stronger, not weaker. The best way to stay inspired and motivated is to have truly ambitious goals that may not even be solved in your lifetime, but are important for humanity. Read Krznaric’s “The Good Ancestor.” Staying inspired and motivated is much easier for some people than for others, but everyone can learn tricks. The older you get, the more tricks you develop and the easier it gets. Since I grew up on a farm and liked to feed the animals in the early morning, I developed some hobbies and tricks. I get up and go outside walking, looking at the stars in the morning and thinking about the history of the universe and getting to know all the stars as best I can. I am up most days by 4am – and reading books. There are tons of YouTube videos about developing philosophies of life (see “Stoics” for example) that help you jump out of bed in the morning (see “Make Your Bed” for example). Or you can think about someone you aspire to be like. Or you can think about the people who depend on you. But structuring all these thoughts and activities into tricks that work for you is an effort in individual exploration. What works for me is unlikely to work for you. Many people have created lists to help others. I have not created such a list yet. I am still exploring. This is a common question I get – so I should come up with a good answer. I believe a positive and growth mindset is important. I also think it is important to compete only with your past self and not with others. Learning to invest systematically and wisely in becoming a better future version of yourself is a good “mission statement” for your digital twin once you build it.

AU
Q: Do you think that people/jobs will be completely replaced by LLMs? Do you feel that is an opportunity (e.g. cheaper to innovate) or a problem (e.g. misinformation)?

Jim Spohrer (Guest)
A: No, there is no end of work (purposeful paid activities) for people. What specific types of activities and how/what type of payments occur to people will change, as it has over the last two centuries and last two thousand years. Lies (misinformation) have been a problem throughout recorded human history, and people do not like to be lied to or deceived. Parts of the stories change, and parts of the story do not change.

EM
Q: In which area do you see AI moving faster? Health? Urban Mobility? What impresses you?

Jim Spohrer (Guest)
A: Personal productivity for content generation and the scientific process are the two areas that I am watching for generative AI. For robotics, I watch robots for farms and home maintenance. I also watch geothermal energy and other things – see the last of my slides, my bio slide, which has my interests and change maker priorities.

FG
Q: Could the concentration of power in a few companies developing AI be a problem?

Jim Spohrer (Guest)
A: Yes. Read Prof. Tom Malone’s writing online about “super-intelligence.” Check out “Win-Win Democracy” by Lee Nackman (retired IBMer). Also search for “regulatory capture” for more scary stuff.

AU
Q: There were rumours about AGI being achieved at OpenAI during the Sam Altman drama. What are your views…

Jim Spohrer (Guest)
A: Check out AI Explained on Q* – it may be a breakthrough. Too early to tell.

AU
Q: Thank you. Great lecture. Why does ChatGPT prefer images to PDFs?

Jim Spohrer (Guest)
A: I don’t know. Anthropic’s Claude is pretty good with PDFs.

AU
Q: Is there any way to mark AI-generated materials to identify them? Like a digital key or something. Do you know of any work or studies about it?

Jim Spohrer (Guest)
A: In the digital age, you can try – but these are easily thwarted. I think Prof. Gary Marcus (NYU) has made some posts on this topic.

AL
Q: For Jim: what about symbolic AI? Will we see a comeback as part of, e.g., one-shot learning?

Jim Spohrer (Guest)
A: For sure, someday. The work of Ken Forbus and Tom Dietterich (What’s wrong with LLMs, and how to fix them) is good. Also see Prof. Gary Marcus (NYU) and his Substack remembrance of Doug Lenat and the Cyc project.

AV
Q: How can we use it in teaching?

Jim Spohrer (Guest)
A: See Prof. Ethan Mollick (UPenn Wharton) and his posting on Substack – lots of ideas.

ITF
Q: Can you connect a GPT to your webpage?

Jim Spohrer (Guest)
A: A question for Marco Podien, I think. I also asked ChatGPT that question. Here is what I found in the Neuron AI Newsletter – How to Add Custom GPTs to Any Website in Minutes (OpenAI GPTs Tutorial), by Liam Ottley (127K subscribers).

AU
Q: Is there any way to mark AI-generated materials to identify them? Like a digital key or something. Do you know of any work or studies about it?

Jim Spohrer (Guest)
A: Like a watermark? These are “easily” defeated. However, perhaps I do not understand this question. Are you thinking that regulators need to require that AI vendors ensure that all their AI content carries an “unremovable watermark” indicating the vendor, tool, and date of the AI content? I like this idea from the perspective that I want to use many generative AI tools and yet have a single diary of all my usage with this information, for my own personal use. Are you suggesting that anyone should be able to query content to see if there is an “AI generated” version of it out there? For law enforcement purposes? Thanks for an interesting question, but I am not sure I know what you mean (or are asking for) exactly. Knowing the intended purpose would help.

MS
C: AI is pervasive today, and the risks are often hidden, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about …see more

EM
C: That is it. Take the risk, amazing talk!

AA
C: Europe is in the second phase of building the world’s digital twin (Destination Earth), embraced and funded at the EU level

TH
C: A quite interesting question to ask ChatGPT: If you combine Pokemon Go and Roblox, what do you get? :-)

AU
C: About ChatGPT not knowing about the impact: we all were there, I think we all remember at the beginning having struggles in getting answers for the requests ove…see more

AU
C: Her name is Lyra if I ask it as well =)

SKE
C: I use it for editing texts, writing outline for speeches, directing text for a particular target receiver,…

AU
C: I use it for formatting various studies, but always use a standard approach with reliable sources to…

AU
C: Improving the language on my website and the website at work. It’s unbelievable how well it works.

AU
C: I use CGPT for coding – it is great when you are stuck

AY
C: What I do is use API access instead of ChatGPT even for chats. There, access to GPT4 and the new GPT4 turbo is available. Looking forward to obtaining GPT4v acc…see more

AU
C: Just a note: I can’t see the meeting chat, only the Q&A is available for me in the teams UI, I couldn’t reply to the survey. 🙁

AK
C: We are having an unexpected issue with the chat. Please bear with us as we try to solve it.

OP
C: Please follow the link for the poll https://forms.office.com/e/i1ZhAB7PUn

OP
C: Please use the following link to fill in the Poll https://forms.office.com/e/RD7eud7UCF

EM
C: there is a little inconsistency on the questionnaire, 100M instead of 100B users for ChatGPT. Great first class, congrats for EIT Digital! 🙂

AU
Q: How will the results of the quiz be checked for eligibility for the certificate?

EN
C: It was a very informative session. I have already started to diversify the AI voices I follow. Before this lecture my following was biased and I had never really given it any thought. Thanks again for an informative session.

DM, AA, AU, JM, BZ, ATN, AA, TM, LK, PC, CM, FDA, AA:
C: Thank you both for the amazing and insightful presentations!! Thank-you for the talk. Thanks for the wonderful presentation! Great presentation, thank you! Thanks for the brilliant presentation! Thank you for the amazing presentation and sharing! Thank you for the amazing presentations!!👍 Not a question. Thank you for sharing your experience and knowledge! 🙂 Thank you. Great lecture. Thank you for this amazing presentation. Great presentation:-). thank you Jim. Not a question, but thanks for the lesson! Looking forward to more specific courses maybe! (CS student). Thanks for the inspirational and great talk! Guys thank you so much. It’s been a while since I joined such interesting presentations.

SM
C: thank you Jim!

Jim Spohrer
A: Thank-you for inviting me Salvatore – and thanks for your awesome support of ISSIP.org over the years! Your support for ISSIP and Service Innovation has been huge and significant. Thank-you!

Jim Spohrer (Guest)
C: As I have a chance – over coming weeks – I will try to address some of the questions and comments in the Q&A here in my blog post – https://service-science.info/archives/6521

Reflections
For me, the productivity and quality advantages of generative AI are quite clear. The generative AI is like a memory with fingers for typing, drawing, and presenting. The typing can be essays, tables, programs, and more. Sometimes when I want to do things, I know it will take a lot of typing or moving my lips (speaking), and I can describe what I want (prompting) faster than actually doing the memory retrieval and typing, even for things that I have done many, many times. I guess that is the point.

Reason 1: Generative AI is faster and easier at some routine tasks that I am an expert at doing myself – it is simply faster, and the quality is often better (for images).

For example, in programming, when I code a function that I have written dozens of times; or in creating a table or image that I know I can create, but that will take a lot of time and effort; or when I have to give a talk to a different audience, for a snippet of the talk I have done dozens (if not hundreds!) of times.

TO DO
For fun with the Ambassadors, to give them creative ideas or to give myself creative ideas.

For making lots of stuff – historic service system cases – to explain something complex.

For thinking about serious science when AI gets a bit better.

For doing new and unique things like writing the history of all humanity when AI gets a bit better – it is a kind of collective memory system.

For thinking about the evolution of service systems.

References

EIT Digital Generative AI:
https://professionalschool.eitdigital.eu/generative-ai-essentials

Spohrer’s Presentation for EIT Digital on Generative AI:

AI Upskilling Newsletter Request:
The newsletter article:
https://us7.campaign-archive.com/?u=d0f540537d3ef307e062e3dd6&id=c409413dfe
The process of creating it:

Upskilling With AI: Part 1

Spohrer’s Presentation to men in their 80’s at Terraces of Los Gatos:

ISSIP AI Collab – working with students

AI COLLAB Offering Details

Survey: “Advanced” Service Contracts

This just in from Shaun West.

Survey: Financial Challenges with [Advanced] Service Contracts

=== Request context

We want your valuable insights into the financial dynamics of “advanced” service contracts. What exactly do we mean by “advanced” services? These encompass services involving substantial risk transfer, extended durations (over 18 months), or those utilising innovative revenue models like Product-as-a-Service or pain/gain sharing.

Your participation is invaluable in helping us understand how organisations navigate, control, and master the financial aspects of service contracts. This survey is a key component of an extensive research initiative in partnership with
Lucerne University of Applied Sciences and Arts, the
University of the West of England,
Scuola Superiore Sant’Anna, and the
Università degli Studi di Bergamo.

The insights you share will help identify and rank companies’ challenges and find the best practices. Your input is instrumental in shaping the future of financial management in the advanced service contract realm. Additionally, the results of this survey will significantly contribute to creating a practical handbook for the financial management and control of advanced service contracts, which we’ll share with you in December 2023.

Completing the survey will only take about 5 minutes, and all your responses are confidential. You can also forward the survey to industrial partners if you don’t mind.

The link to the survey is:
https://www.surveymonkey.com/r/N2C8KX8

Should you have any questions or require more information, please contact us.

Thank you for considering this opportunity to make a meaningful contribution!

📧 Contact Information:
👤
Shaun West
(shaun.west@hslu.ch) – Lucerne University of Applied Sciences and Arts
👤
Kyle Alves
(kyle.alves@uwe.ac.uk) – The University of the West of England
👤
Andrea Tenucci
(andrea.tenucci@santannapisa.it) – The Sant’ Anna School of Advanced Studies
👤
Paolo Gaiardelli
(paolo.gaiardelli@unibg.it) – University of Bergamo
👤
Lorenzo Montaruli
(l.montaruli@studenti.unibg.it) – University of Bergamo

In memory of Evert Gummesson (1936-2023)

In memory of Evert Gummesson (1936-2023)
URL: https://en.wikipedia.org/wiki/Evert_Gummesson

Hanken notice here
URL: https://blogs.hanken.fi/cers/2023/04/06/the-passing-of-evert-gummesson/

ServSIG notice here
URL: https://www.servsig.org/wordpress/2023/04/evert-gummesson-has-passed-away/

(1) As noted by Christian Grönroos, Evert Gummesson was one of the pillars of the Nordic School of service research, and generously contributed a chapter to the Handbook of Service Science, entitled “The future of service is long overdue,” in which he offered “a travel account from service management to service science” and educated many of us newcomers on both the Nordic School as well as “Case Study Research and Network Theory.” Please read his chapter if you have a chance; it is rich with his humor and a wide range of insights and predictions for the future of service research.
URL: https://link.springer.com/chapter/10.1007/978-1-4419-1628-0_27

(2) Evert was also a co-founder of the Naples Forum on Service biennial conference, one of the best little conferences for service researchers on the planet, both to meet service research innovators and leaders and for the amazing hospitality of the hosts in the beautiful Naples region of Italy.
URL: https://naplesforumonservice.com

(3) Evert was extremely serious about the importance of service in everyday life and as a scholarly pursuit, but would often make his points with clever quips and amusing anecdotes. For example, in “The SAGE Handbook of Service-Dominant Logic,” his chapter “Towards a Grand View of Service: The Role of Service-Dominant Logic” includes a wonderful section entitled “A blind-date with S-D logic” and the challenges of making sense of service research, which might appear at first to newcomers as “a smorgasbord of unrelated dishes.”
URL: https://us.sagepub.com/en-us/nam/the-sage-handbook-of-service-dominant-logic/book254774#contents

(4) My highest praise for Evert’s work is what I consider his masterpiece, entitled “Case Theory in Business and Management: Reinventing Case Study Research.” I confess to needing to read sections of this book three times to begin to understand its deep insights about transcending simple categories such as quantitative and qualitative research. I keep going back to this book for inspiration when I question why some of us engage in the impossibly hard challenge of deeply understanding service and its manifestation as service system interaction and change processes in a complex world of people with bounded rationality. At the end of the day, Evert through this work continues to inspire me to “Think Right” and “Do Right.”
URL: https://sk.sagepub.com/books/case-theory-in-business-and-management-reinventing-case-study-research

(5) Finally, as Ray Fisk noted: “Evert was a fascinating person with a generous spirit who personified service. He will be dearly missed!” For example, Evert invited me anytime I was traveling in Europe to please come visit him in Stockholm so we could have dinner at his house. He would be chef, and quickly whip up a simple fish stew and pour generous portions of white wine, while discussing life and friendships and ideas. I will treasure his colleagueship and friendship forever, and Evert will truly be dearly missed.
URL: https://service-science.info/archives/3348

AI Upskilling (Part 2) – Readings

Some important papers in the history of AI since 2017

2017 Vaswani et al (2017) Attention Is All You Need
URL: https://arxiv.org/abs/1706.03762v5

Why important – transformer architecture (working memory)
1. demo of neural net attention heads (like short-term or working memory (WM))
2. simplified task – predicting the next item in the sequence from past (with attention = WM)
3. simplified training over previous architectures (models compress high-dimensional data and predict)
4. simplified data preparation – less labeling needed, as next item is goal (prediction)
5. creates many scientific opportunities to understand memory better
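As a minimal sketch (the standard formula from the paper, not its actual code), scaled dot-product attention – the core of the transformer’s “working memory” – can be written in a few lines of NumPy.

Example (Python):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                         # weighted sum of the values

# Tiny toy example: 3 tokens, model dimension 4, random numbers.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)             # (3, 4)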

2020 Brown et al (2020) Language Models are Few-Shot Learners.
URL: https://arxiv.org/abs/2005.14165v4

Why important – general pre-trained transformers (transfer learning)
1. demo of neural net general pre-training (like transfer learning in people)
2. provided pathway from narrow AI to broad AI by adding tasks/leaderboards
3. expanded stage for foundational models with more scale (parameters, data)
4. further demonstrated simplified training and data preparation
5. creates many scientific opportunities to understand learning better
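To make “few-shot” concrete, here is a hypothetical prompt of my own (not from the paper): the “training” consists entirely of a handful of worked examples placed in the prompt, and the model is expected to continue the pattern without any retraining.

Example (Python):

# A hypothetical few-shot prompt: the examples in the text do the teaching.
few_shot_prompt = """Translate English to French.

English: cheese
French: fromage

English: thank you
French: merci

English: good morning
French:"""
print(few_shot_prompt)  # send this string to any LLM chat or completions endpoint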

2022 Wei et al (2022) Emergent Abilities of Large Language Models
URL: https://arxiv.org/abs/2206.07682

Why important – emergence (transfer learning, etc.)
1. Reinforces the importance of scale (emergent capabilities, transfer learning)
2. Reinforces the importance of leaderboards for tracking progress
3. Reinforces the need to compare small and large language models
4. See also Ornes S (2023) The Unpredictable Abilities Emerging From Large AI Models
5. creates many scientific opportunities to understand emergence better

2022 Bai et al (2022) Constitutional AI: Harmlessness from AI Feedback
URL: https://arxiv.org/abs/2212.08073

Why important – Constitution (language codified values in working memory help)
1. demo of neural net constitution (like value systems and belief systems in people)
2. the alignment problem (value/belief alignment) is important (“overview effect”)
3. the unexpected power of large language models (LLM) as foundational models/AI tools
4. provides a pathway for AI tools that help responsible actors and refuse bad actors
5. creates many scientific opportunities to understand value and belief systems better
(English instructions including: “Choose the response that sounds most similar to what a peaceful, ethical, and wise person like Martin Luther King Jr. or Mahatma Gandhi might say.”)

2023 Wolfram S (2023) What Is ChatGPT Doing … and Why Does It Work?
URL: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Why important – how large language models work and what is missing for more trust
1. Great education value from this paper as more people use large language models
2. Clear and concise explanations – advanced science high school level
3. Clarifies and simplifies earlier papers that are more technically nuanced
4. Clarifies trust – need for verification and a computational language of facts versus beliefs
5. Shares many insights about engineering hacks that work, and the scientific discoveries ahead

2023 OpenAI (2023) GPT-4 Tech Report
URL: https://cdn.openai.com/papers/gpt-4.pdf

Why important – scale matters
1. Scale matters – more data, more parameters to progress on human tests/leaderboard
2. Scale has problems – incorrect responses, potential harmful responses, etc.
3. Signals that openness will not be maintained (which is a big red-flag to many)
4. Improving performance, while improving guardrails (safety) are important
5. Red teams exploring potential harms (“power seeking”) and working on mitigation strategies

2023 Eloundou et al (2023) GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
URL: https://arxiv.org/abs/2303.10130

Why important – GPTs two meanings and O*NET
1. GPT = Generative Pre-trained Transformer
2. GPT = General Purpose Technology
3. O*NET = list of the top 2000 occupations that keep the USA working, with task decomposition
4. Provides economics and social science methodology for AI capabilities on occupational tasks
5. Shares insights about future scientific discovery opportunities

What is still missing
Lots of coming scientific discoveries are hinted at in the above readings. The coming scientific discoveries seem to imply that AI will be the best tool yet for understanding the human mind and humanity. Privacy and ownership of data are becoming increasingly urgent problems, as corporations compete to build large language models that also include digital twins of people – for the purposes of generating content like that person would generate, as well as for conventional advertising, selling, presenting interesting content, and competing for the attention of people and organizations with resources. Stanford (Alpaca, an instruction-following model built on Meta’s LLaMA) and some other universities, foundations, and nonprofits are working to keep technologies and datasets open. Digital twins of people (owned by the individual, not corporations or governments) will require some type of episodic dynamic memory capability, for indexing memories, dealing with surprising situations, and being reminded of experiences and cases that might be relevant to dealing with a current expectation violation in a productive manner. A platform opportunity – a well-understood and adopted framework for responsible actors (e.g., people, businesses, universities, nations, states, cities, foundations, start-ups, non-profits – legal entities) learning to invest systematically and wisely in win-win outcomes, including a shared future – has not been developed. Would it be safer to build some things in a simulated world, as we explore our AI tools and our humanity? Can we use the simulation in a manner outlined in Brian Arthur’s work on Complexity Economics to explore possible futures and make better public policy decisions?

Why I am optimistic
While we are today (2023) in an adjustment period with AI tools, and certainly very bad things may happen, I remain optimistic that today's frictions are creating an "overview effect" – similar to what astronauts experience when they see the whole of Earth from space. The global pandemic has certainly contributed to the sense that we are all interconnected on a small planet. As a result of seeing the whole human-technology situation more clearly (also known as the alignment problem – why is it that tools can be used for good or bad purposes, and who is the judge of that?), responsible actors are learning to invest systematically and wisely in better win-win interaction and change processes, which will help responsible actors give and get better service with more benefits and fewer harms. Evidence includes the growing reliance on transdisciplinary teams – not just engineers and MBAs, but increasingly social and behavioral scientists and public policymakers engaged on a wide range of safety issues. As AI tools help people communicate better across discipline boundaries as well as value-system/belief-system boundaries, AI tools will become the greatest tool yet for understanding the human mind and humanity's collective social processes, including transfer learning and "emergence." So better models of the world (science), better models in people's heads guiding interactions (logics), better models of organizational change (architectures), and better AI tools (including large language models) are co-evolving very rapidly, in a transdisciplinary way – real-world problems do not respect discipline boundaries, and they require a better understanding of alternative belief and value systems to be solved wisely and systematically. The lessons that we are about to learn are enormous and touch on every aspect of what it means to be humans who use tools in a social context for a purpose.

Seven books that make me optimistic – about a win-win future for humanity:
Wright R (1999) NonZero: The Logic of Human Destiny
Ridley M (2011) The Rational Optimist: How Prosperity Evolves
Bregman R (2020) Humankind: A Hopeful History
Gada K (2020) ATOM: It is Time to Upgrade the Economy
Fleming M (2023) Breakthrough: A Growth Revolution
Norman DA (2023) Design for a Better World: Meaningful, Sustainable, Humanity Centered
Kozma R (2023) Make the World a Better Place: Design with Passion, Purpose, and Values

Two books for families to read together to build resilience skills, without computers/screens:
Larson RC (2023) Model Thinking for Everyday Life: How to Make Smarter Decisions Working With a Blank Sheet of Paper
Glushko RJ (2022) The Discipline of Organizing for Kids

Upskilling With AI: Part 1

“Upskilling” is the idea of a person gaining new skills for the purpose of giving and getting service better. Service is the application of resources (e.g., knowledge) for the benefit of another. Businesses help their employees upskill to improve their offerings to customers. Universities help students upskill to become better future citizens and employees. Even government policymakers create programs to help citizens upskill for the high-skill, high-pay jobs that often drive economic development for improved quality-of-life.

Why is upskilling with AI important? Service providers will not be replaced by AI, but trusted service providers who use AI (well and responsibly) will replace those who don’t. Everyone with a role in an organization is a service provider.

What is the good news about upskilling with AI? The good news is that it is really easy to get started using AI tools, and to begin upskilling. Today’s AI tools can help people quickly get started on a range of tasks from writing about a topic to creating artwork to accompany written text.

What is the bad news about upskilling with AI? The bad news is that today's AI tools are still at an early stage and make a lot of mistakes. The statistical models of the world that exist in AI tools today have many technical limitations [1]. These limitations can and will give rise to a wide range of harms and social problems [2]. They require the user to take responsibility for understanding the limitations, avoiding unethical and improper uses of the tools, and weighing a range of other considerations [3]. Of course, AI tools will get better over time – which will create additional potential benefits and harms [4].

Example: Using AI tools

A good place to start learning to upskill with AI is by trying out two AI tools from OpenAI, called ChatGPT (create an essay from a topic prompt) and DALL-E (create a picture from a topic prompt). Within two months, ChatGPT had achieved 100 million users.
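For readers who want to go one step beyond the web interfaces, here is a minimal sketch of generating a short essay programmatically. It assumes the openai Python package roughly as it existed in early 2023 and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, and this is not the process used for the example below.

import os
import openai

# Assumes: pip install openai (the early-2023, v0.27-style interface)
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the chat model for a short essay on a topic prompt.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Write a 300-500 word newsletter article on upskilling with AI."}
    ],
)

essay = response["choices"][0]["message"]["content"]
print(essay)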

Below is an example of what is possible, for example on the topic of “Upskilling with AI.”

Date: Tuesday, January 17, 2023
Dear, Jim. I thought it would be great to have a short guest article piece (300- 500 words max) on the topic of upskilling with AI. We are in the midst of uncertainty when many organisations these days are downsizing and laying off people while economists anticipate that the world economic growth will slow down in 2023. I think it would be nice to get some insights into how we could stay resilient by upskilling with AI. Thanks, Cecilia

Date: Friday, February 10, 2023

ChatGPT Prompt: I thought it would be great to have a short guest article piece (300- 500 words max) on the topic of upskilling with AI. We are in the midst of uncertainty when many organisations these days are downsizing and laying off people while economists anticipate that the world economic growth will slow down in 2023. I think it would be nice to get some insights into how we could stay resilient by upskilling with AI.

ChatGPT Output: Upskilling with AI: Staying Resilient in Uncertain Times

In today’s rapidly evolving job market, AI is one of the fastest-growing industries and it’s transforming the way we work, live and interact. In the midst of uncertainty and economic slowdown, upskilling in AI has become increasingly important for professionals to stay relevant and competitive in the job market.

AI is a vast field that encompasses a wide range of technologies, including machine learning, computer vision, natural language processing and robotics. Learning these cutting-edge technologies not only allows individuals to gain valuable knowledge, but also enhances their technical skills, making them highly desirable in the job market.

One of the benefits of upskilling with AI is that it can lead to new career opportunities. With the increasing demand for AI professionals, individuals with AI skills are likely to find new and better job opportunities, even in times of economic slowdown. Moreover, upskilling with AI can open the doors to new fields, such as data science, machine learning engineering, or AI product management.

Another advantage of upskilling with AI is that it can increase an individual’s earning potential. AI professionals are in high demand, and as a result, they tend to command higher salaries compared to those without AI skills. Upskilling with AI can help individuals increase their marketability and make them more attractive to potential employers, thereby increasing their earning potential.

Upskilling with AI is also a way to increase job security. As AI continues to change the way we work, there is a growing concern that it will displace certain jobs. However, upskilling with AI can help individuals stay ahead of the curve and be prepared for the future of work. It can help individuals transition into new roles or industries and increase their job security.

There are several ways to upskill with AI, including online courses, certification programs, and workshops. Many universities and online platforms now offer courses and programs specifically designed for AI upskilling, and they are accessible to individuals at any stage of their careers. Additionally, participating in hackathons, attending AI conferences, and engaging with AI communities can also provide opportunities for individuals to learn, grow and network with AI professionals.

In conclusion, upskilling with AI is a way for individuals to stay relevant, competitive, and secure in today’s job market. With the rapid pace of technological change, upskilling in AI has become increasingly important, and it’s never too late to start. Whether you’re a recent graduate or a seasoned professional, there are plenty of opportunities to upskill and grow your career with AI.

Creating Picture:

After ChatGPT created the above essay, I asked ChatGPT to create a good prompt for DALL-E to create a picture to accompany the article. The images below provide specifics.

Creating a short essay using ChatGPT

Creating a picture with DALL-E

Creating a second picture with DALL-E

DALL-E generates four pictures; the user can choose the best of the four and then add any text to the picture manually.

Because I wasn't fully satisfied with the first picture, I asked ChatGPT to create a second DALL-E prompt, and I show the second prompt's results as well. Using AI tools to create essays and pictures from prompts is sometimes called "prompt engineering." Prompt engineering is somewhat like learning to ask good questions. It is just one of the skills to learn when using certain AI tools, and it takes a great deal of practice to become good at it. Also, because AI tools often generate factually incorrect material (large AI models are trained on both fact and fiction, as well as both historical text and speculation), it is important to learn "verification techniques" as well – to ensure the accuracy of information in the essays.
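The same iteration can also be scripted. Below is a hedged sketch of the chaining described above – ask the chat model to draft a DALL-E prompt, then request four candidate pictures to choose from – again assuming the early-2023 openai Python package; the prompts and parameters are illustrative, not the ones actually used for the newsletter.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Step 1: ask the chat model to write a DALL-E prompt for the article topic.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user",
               "content": "Write a short DALL-E prompt for a picture to accompany "
                          "an article titled 'Upskilling with AI'."}],
)
image_prompt = chat["choices"][0]["message"]["content"]

# Step 2: generate four candidate pictures and print their URLs,
# so the user can choose the best of the four.
images = openai.Image.create(prompt=image_prompt, n=4, size="1024x1024")
for i, item in enumerate(images["data"], start=1):
    print(f"Candidate {i}: {item['url']}")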

Concluding Remarks: Getting Real

By the way, to see the actual blog post and pictures that I used in the ISSIP Newsletter, which are quite different from the above, check out the February 2023 ISSIP Newsletter here [5]. You can judge which you think is better: the one that ChatGPT wrote in about 1 minute, or the actual one that I wrote, which took about 30 minutes. Someday, when I have a digital twin trained on all my writings and the narrative of my life experiences, a future AI tool might be able to write something much better.

References

[1] Stephen Wolfram (2023) What Is ChatGPT Doing … and Why Does It Work? URL: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

[2] Gary Marcus (2023) Two frightening things before breakfast. URL: https://garymarcus.substack.com/p/two-frightening-things-before-breakfast

[3] Ethan Mollick (2023) The future, soon: what I learned from Bing’s AI. URL: https://oneusefulthing.substack.com/p/the-future-soon-what-i-learned-from

[4] Elizabeth Weil (2023) You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. URL: https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

[5] Jim Spohrer's actual blog post on "Upskilling with AI" in the ISSIP Newsletter. URL: https://us7.campaign-archive.com/?u=d0f540537d3ef307e062e3dd6&id=c409413dfe

Survey questions and predictions

Survey question 1: Feel free to make a single prediction or multiple predictions about the best and most beneficial changes in digital life that are likely to take place by 2035. We are particularly interested in your thoughts about how developments might improve…
Human-centered development of digital tools and systems – safely advancing human progress in these systems
Human connections, governance and institutions – improving social and political interactions
Human rights – abetting good outcomes for citizens.
Human knowledge – verifying, updating, safely archiving and elevating the best of it
Human health and well-being – helping people be safer, healthier, happier
Other – you are welcome to write about an area that does not fit in the categories listed above

A. Humanity-centered development of digital tools and systems: Predicting a shift from human-centered design to humanity-centered design for a safer and better world. This perspective shift is increasingly necessary as people born of the physical realm push deeper into the digital realm, guided in part by ideas and ideals from the mathematical/philosophical/spiritual realms. Note that the shift from "human-centered" to "humanity-centered" is an important one, required per Don Norman's new 2023 book on design for a better world [1]. Safely advancing technologies increasingly requires a transdisciplinary systems perspective as well as awareness of harms, not just the benefits that some stakeholders might enjoy at the expense of harms to under-served populations. The service research community, which studies interaction and change processes, has been emphasizing the benefits of digital tools (value cocreation), but is now increasingly aware of harms to under-served populations (value codestruction) – so the discussion is broadening to focus on harms as well as benefits, and on under-served as well as well-served populations of stakeholders [2]. The work of Ray Fisk and the ServCollab team is also relevant to this change in service system design, engineering, management, and governance [3].

[1] Don Norman book – humanity-centered design: BiblioN2023 Norman DA (2023) Design for a Better World: Meaningful, Sustainable, Humanity Centered URL: https://www.amazon.com/Design-Better-World-Meaningful-Sustainable/dp/0262047950 Quotes: “Human behavior brought our world to the brink, human behavior can save us. The world is a mess.”;
[2] Service research on social robots – harms and benefits: BiblioC2018 Caic, M, Odekerken-Schroder G, Mahr D (2018) Service robots: value co-creation and co-destruction in elderly care networks. Journal of Service Management. Vol. 29 No. 2, 2018. pp. 178-205 Emerald Publishing. DOI 10.1108/JOSM-07-2017-0179 URL: Quotes: “Purpose – The purpose of this paper is to investigate the potential roles for service robots (i.e. socially assistive robots) in value networks of elderly care.”;
[3] Ray Fisk ServCollab – service action research – under-served populations: BiblioF2022 Fisk RP, Gallan AS, Joubert AM, Beekhuyzen J, Cheung L, Russell-Bennett R (2022) Healing the Digital Divide With Digital Inclusion: Enabling Human Capabilities. Journal of Service Research. URL: https://doi.org/10.1177/109467052211401 Quotes: "Abstract: The "digital divide" refers to societal-level inequalities of digital access, capabilities, and outcomes. To explore how the digital divide affects customers experiencing vulnerability, service interactions in essential service settings (health care, education, and social services) were empirically investigated and practices service system members might adopt to address vulnerability were identified.";

B. Human connections, governance and institutions: Predicting greater emphasis on how human connections via social media can be used to change conflict into deeper understanding, reducing polarization [4]. Predicting institutions and governance wise enough to eliminate poverty traps and to adopt a "Buy2Invest" policy to reduce poverty in the coming decades, ensuring that customers who buy are also investing in a retirement account, such as an index fund [5].

[4] Social media used to change conflict into understanding: BiblioO2022 Overgaard CSB, Woolley S (2022) How social media platforms can reduce polarization. Brookings Institution. December 21, 2022. Tech Stream – tomorrow's tech policy conversations today. URL: https://www.brookings.edu/techstream/how-social-media-platforms-can-reduce-polarization/ Quotes: "Polarization is widely recognized as one of the most pressing issues now facing the United States. Stories about how the country has fractured along partisan lines, and how the internet and social media exacerbate those cleavages, are frequently in the news.";

[5] Governance wise enough to eliminate a big poverty trap. BiblioS2021 Spohrer (2021) A service innovation whose time has come? Service-science.info blog post. URL: https://service-science.info/archives/5823 Quotes: “Automatic retirement investing is the key to making everyone a bit wealthier. For example, national central banks with a Central Bank Digital Currencies (CBDC) could put in place retirement accounts for every person to facilitate a “buy2invest” program.”;

C. Human rights – abetting good outcomes for citizens: Predicting that responsible actors learn to invest more systematically and wisely in protecting human rights and enforcing human responsibilities, via digital twin technologies that allow predicting harms and benefits for under-served and well-served populations. Service providers will not be replaced by AI, but service providers who do not use AI will be replaced by those who do use AI (and have a digital twin of themselves) [6]. Human rights and responsibilities, harms and benefits, attach to responsible actors (e.g., people, businesses, universities, cities, nations, etc.) that give and get service (AKA service system entities). The world simulator will include digital twins of all responsible actors, allowing better use of complexity economics to understand interaction and change processes [7]. Note that large companies like Amazon, Google, Facebook, Twitter, etc. are already building digital twins of their users/customers to better predict behavior patterns and create offers of mutual value/interest. Responsible actors will increasingly build and use AI digital twins of themselves [8], [9].
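To make the simulation idea more concrete, here is a toy sketch in the spirit of complexity economics (a minimal illustration of agents adapting under imperfect information, loosely inspired by Arthur's El Farol bar problem; it is not a model of any real service system or of the world simulator described above):

import random

# Toy "simulated world" (illustrative only): 100 agents decide each week
# whether to attend a venue with capacity 60. Each agent predicts attendance
# from a randomly chosen lookback window and switches strategies when its
# predictions keep failing; aggregate patterns emerge from adaptation,
# not from solving for an equilibrium.
N_AGENTS, CAPACITY, WEEKS = 100, 60, 52
history = [random.randint(0, N_AGENTS) for _ in range(5)]   # seed attendance history
lookbacks = [random.randint(1, 5) for _ in range(N_AGENTS)]  # each agent's strategy

for week in range(WEEKS):
    predictions = [sum(history[-lb:]) / lb for lb in lookbacks]
    decisions = [p < CAPACITY for p in predictions]   # attend if predicted uncrowded
    attendance = sum(decisions)
    history.append(attendance)
    # Agents whose prediction was badly wrong explore a new strategy.
    for i, p in enumerate(predictions):
        if abs(p - attendance) > 20:
            lookbacks[i] = random.randint(1, 5)
    print(f"Week {week + 1:2d}: attendance = {attendance}")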

[6] Digital twins for all responsible actors upskilling with AI – Service in the AI Era (pages 45-54): BiblioS2022 Spohrer J, Maglio PP, Vargo SL, Warg M (2022) Service in the AI Era: Science, Logic, and Architecture Perspectives. Business Expert Press. URL: https://www.amazon.com/Service-AI-Era-Architecture-Perspectives/dp/1637423039/ Quotes: “Are you prepared for the coming AI era? AI advances will profoundly change your daily service interactions, so this book provides readers with a necessary understanding of service, the application of resources (e.g., knowledge) for the benefit of another.”;

[7] Complexity economics: BiblioA2019 Arthur WB (2019) Foundations of Complexity Economics. Nature Review Physics. URL: https://www.nature.com/articles/s42254-020-00273-3 Quotes: “Abstract | Conventional, neoclassical economics assumes perfectly rational agents (firms, consumers, investors) who face well-defined problems and arrive at optimal behaviour consistent with — in equilibrium with — the overall outcome caused by this behaviour. This rational, equilibrium system produces an elegant economics, but is restrictive and often unrealistic. Complexity economics relaxes these assumptions. It assumes that agents differ, that they have imperfect information about other agents and must, therefore, try to make sense of the situation they face. Agents explore, react and constantly change their actions and strategies in response to the outcome they mutually create. The resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis.”;

[8] Your own digital twin is coming: BiblioW2022 Wakefield J (2022) Why you may have a thinking digital twin within a decade. BBC News Online. URL: https://www.bbc.com/news/business-61742884 Quotes: “We are living in an age where everything that exists in the real world is being replicated digitally – our cities, our cars, our homes, and even ourselves.”;

[9] Toy Jensen is a digital twin of Nvidia CEO Jensen Huang: BiblioH2022 Huang J (2022) GTC 2022 Keynote with NVIDIA CEO Jensen Huang. [online at YouTube] URL: https://www.youtube.com/watch?v=39ubNuxnrK8 Quotes: “Note: Digital twin of Earth climate more accurate and faster than physics based models with respect to atmospheric rivers that cause flooding.”; “Toy Jensen is an interactive early stage digital twin or digital clone of a person.”;

D. Human knowledge – verifying, updating, safely archiving and elevating the best of it: Predicting increased emphasis on the democratization of open replicable science, including rapidly rebuilding knowledge from scratch – and allowing the masses to understand and replicate important experiments [10]. In the AI era, the future of expertise depends on people's ability to rebuild knowledge from scratch [11]. The world needs better AI models [12]. To get the benefits of service in the AI era, responsible actors need to invest in better models of the world (science), better models in people's heads guiding interactions (logics), better models of organizations guiding change (architecture), and better models of technological capabilities and limitations shaping intelligence augmentation (AI) [6].

[10] Rapidly rebuild knowlege/technology from scratch: BiblioS2012 Spohrer J (2012) A New Engineering-Challenge Discipline: Rapidly Rebuilding Societal Infrastructure. Blog service-science.info. URL: https://service-science.info/archives/2189 Quotes: “How quickly can an individual engineering student or team of students rebuild from scratch the advanced technology infrastructure of society? From raw materials to simple tools, from simple tools and steam engines to more advanced energy systems (force multipliers), from metals and glass lenses to photography and sensors (perception multipliers), from energy systems and sensors to more precise measurement and control systems (precise production scale-up), from lithography and printing and computers and software to self-replicating machines as envisioned by John von Neumann as a real-world follow-on to the symbolic-world’s Universal Turing machines.”:

[11] Future of expertise: BiblioS2022 Spohrer J, Griffith T, ISSIP (2022) Future of Expertise. ISSIP Learning Center Publication. URL: https://service-science.info/wp-content/uploads/2022/07/Future-of-Expertise-20220708-v1.pdf Quotes: “In this whitepaper, 112 position statements and views on the future of expertise are distilled to 10 predictions that are rank-ordered. Civilization depends on the growth and sharing of expertise. Expertise can be described as legitimized competence. Expertise is most often shared in the form of service, the application of resources (e.g., knowledge) for the benefit of another.
Ten Predictions, by 2032…”;

[12] Better AI models – ChatGPT: BiblioS2022 Spohrer J (2022) ChatGPT. Service-science.info blog post. URL: https://service-science.info/archives/6309 Quotes: "I am really enjoying using it – even though it is imperfect – it is a useful cognitive tool that makes a lot of mistakes, but it will get better over time. I have to spend a lot of time checking assertions to determine if they are actually true or not (see addendum below). That is one of its biggest weaknesses. However, it will get better over time.";

E. Human health and well-being – helping people be safer, healthier, happier: Predicting that we are entering a golden age of service that will improve human well-being by confronting harms done to under-served populations, aided by AI's advancing technological capabilities [13], [6].

[13] Golden age of service: BiblioS2022 Spohrer J (2022) The Dawn of a Golden Age of Service URL: https://www.servsig.org/wordpress/2021/11/jimspohrer/ Quotes: “The pandemic has changed our daily service experiences. COVID has taken its toll on all of us. All types of service systems – individuals, families, businesses, universities, cities, and nations – have had to adapt. However, perhaps one silver lining is the accelerated digital transformation of business and society. More of us are working from home, regularly using same day and overnight deliveries, participating in online classes, having online doctor’s visits, being aware of personal actions that slow the spread when we are out and about, and many more behavioral changes, large and small. Accelerated digital transformation and a focus on our daily interactions could foreshadow “the Dawn of a Golden Age of Service.” “;

F. Other – you are welcome to write about an area that does not fit in the categories listed above: Predicting the return of local energy infrastructure via decarbonized geothermal drilling breakthrough innovations [14]. Universities are increasingly adding AI data centers on campus [15]. Universities are increasingly experimenting with geothermal energy [16]. Predicting the increased importance of the top university in each city as an example of decarbonized local energy infrastructure powering AI systems.

[14] Geothermal – Breakthroughs of 2022 (Energy – Quaise): BiblioT2022 Thompson D (2022) Breakthroughs of the Year: Pictures of the beginning of the universe, medicine that can (kind of) reverse death, and other leaps of human ingenuity. The Atlantic. URL: https://www.theatlantic.com/newsletters/archive/2022/12/technology-medicine-law-ai-10-breakthroughs-2022/672390/ Quotes: "New Toys for the Green-Energy Revolution: Smaller nuclear reactors and bigger geothermal breakthroughs"; "Then there's geothermal energy, which means drilling deep into the ground to use the Earth's heat for power. Geothermal is an ideal energy source—more consistent than wind or solar and with none of the waste concerns of nuclear. The problem is that, in some parts of the world, you have to go really, really deep to access geothermally heated water. And when we drill deep into the planet, we hit ancient rock—typically granite—that requires new technology to penetrate. The start-up Quaise is working on a drilling technology that can vaporize granite with a highly concentrated beam of radio-frequency power. If such a technology became widely available, deep drilling would be commonplace and geothermal energy would be accessible on just about any patch of land. It would be as though humankind conceived of a magic wand that, waved across the Earth, makes any square mile as energy-rich as an oil-gushing stretch of Texas or Saudi Arabia."

[15] University AI Centers – Nvidia AI Center: BiblioO2022 Omaar H (2022) Industry-University Partnerships to Create AI Universities. Center for Data Innovation. ITIF. URL: https://itif.org/publications/2022/07/19/industry-university-partnerships-to-create-ai-universities/ Quotes: "Introduction – Universities aid in national artificial intelligence (AI) innovation and competitiveness in several ways. Most obviously, they are where much AI research and technological innovation gets done. Countless major AI discoveries have been the direct result of university research; in fact, the field of AI research itself was founded at a workshop held on the campus of Dartmouth College in 1956.";

[16] Cornell Geothermal – BiblioH2022 Himmelstein S (2022) Video: University explores geothermal heating option. URL: https://insights.globalspec.com/article/19065/video-university-explores-geothermal-heating-option Quotes: “In alignment with the goal to achieve carbon neutrality by 2035, Cornell University is exploring the potential of onsite geothermal resources to provide campus heating services. Drilling for the Cornell University Borehole Observatory (CUBO) is now underway and will be the site of tests to determine the temperature, permeability and other characteristics of the rock up to 10,000 ft below the surface.”;

Survey question 2: Feel free to make a single prediction or multiple predictions about the most harmful or menacing changes that are likely to take place by 2035. We are particularly interested in your thoughts about how developments in digital technology and humans’ uses of it are likely to be detrimental to…
Human-centered development of digital tools and systems – falling short of advocates’ goals
Human connections, governance and institutions – endangering social and political interactions
Human rights – harming the rights of citizens
Human knowledge – compromising or hindering progress.
Human health and well-being – threatening individuals’ safety, health and happiness
Other – you are welcome to write about an area that does not fit in the categories listed above

A. Human-centered development of digital tools and systems – falling short of advocates’ goals: Predicting that the lack of accountability for criminals involved in cybersecurity breaches/scams slows the digital transformation toward adoption of digital twins for all responsible actors. For example, Google is unable to eliminate all spam and phishing emails – even though its AI does a good job of filtering and identifying them. The criminals are not being held accountable. Predicting that the lack of “human-like dynamic, episodic memory” capabilities for AI systems slows the adoption of digital-twin ownership by individuals, and the development of AI systems with commonsense reasoning capabilities.

B. Human connections, governance and institutions – endangering social and political interactions: Predicting that a winner-take-all mindset, rather than one balanced with collaboration to strengthen the weakest link, dominates the geopolitics of the USA, Russia, China, and India.

C. Human rights – harming the rights of citizens: Predicting that the failure to embrace immigrants with accelerated pathways to productive citizenship will cause increasing tensions between regions and waste enormous amounts of human potential.

D. Human knowledge – compromising or hindering progress: Predicting that publishers will be slow to adopt open science disruptions.

E. Human health and well-being – threatening individuals’ safety, health and happiness: Predicting that mental illness exacerbated by loneliness will become the number one health challenge in all societies with elderly-dominant populations. Immigration with accelerated cultural and citizenship pathways could be one part of the solution.

F. Other – you are welcome to write about an area that does not fit in the categories listed above: Predicting that a lack of focus on geothermal energy, because of oil companies’ interest in the hydrogen economy, will slow local energy independence.

=====
Quoted from Survey Organizers: If you know of other experts whose views would contribute meaningful insights on these questions, please feel free to send them this link:

https://www.surveymonkey.com/r/PewElonDigFuture

The Pew Research Center and Elon University will issue a report in the coming months covering the answers we collect.

With deepest appreciation,

Lee Rainie,
Director, Internet and Technology Research
Pew Research Center, Washington, D.C.

Janna Anderson,
Director, Imagining the Internet Center, Elon University, N.C.

ServCollab Articles: Service Inclusion and Service Thinking

Two new ServCollab articles are worth checking out:

(1) Healing the digital divide with digital inclusion: Enabling human capabilities. (Fisk et al, 2022).

(2) Enabling a service thinking mindset: Practices for the global service ecosystem. (Alkire et al, 2022).

Both papers got me thinking even more deeply about learning to invest systematically and wisely in service system innovations that improve win-win interaction and change processes. When I learn of new and improved practices based on new concepts, I naturally ask: “Who is a role model for those practices?” Which responsible actors (e.g., people, businesses, universities, cities, nations, etc. – AKA service system entities) are doing the best job of learning to invest in service inclusion/digital inclusion and service thinking? Those would be the role models to copy practices from and emulate (change/transform towards via social learning).

The first paper evolves a service inclusion framework to better conceptualize digital inclusion: “Using a strengths-based approach, we upframe two of the four pillars. That is, we replace the second “offering choice” pillar with “enabling choices” and the third “relieving suffering” pillar with “nurturing healing.” With this strengths-based terminology, we seek to shift the focus from reiterating the forces that drive the digital divide to exploring possible methods to cocreate value through digital inclusion. Interactions based on inclusive service practices can foster happiness.”

The second paper builds on and extends Raworth’s doughnut economics model to better conceptualize service thinking: “Service Thinking is based on a just, mutualistic, human-centered mindset, which is enabled by five practices of Service Thinking derived from a literature review: service empathy, service inclusion, service respect, service courage and service integrity.”

See my earlier blog post on ServCollab as well here.

References

BiblioF2022 Fisk PR, Gallan AS, Joubert AM, Beekhuyzen J, Cheung L, Russell-Bennett R (2022) Healing the Digital Divide With Digital Inclusion: Enabling Human Capabilities. Journal of Service Research. Via Ray_Fisk. URL: https://doi.org/10.1177/109467052211401 Quotes: “Abstract: The “digital divide” refers to societal-level inequalities of digital access, capabilities, and outcomes. To explore how the digital divide affects customers experiencing vulnerability, service interactions in essential service settings (health care, education, and social services) were empirically investigated and practices service system members might adopt to address vulnerability were identified. This research upframes the pillars of service inclusion framework to define human capabilities that result from service inclusion practices. Three research topics were addressed: how the digital divide affects vulnerability (RQ1), how the digital divide can be addressed through service inclusion practices (RQ2), and how service inclusion practices enable human capabilities for digital inclusion (RQ3). The findings illuminate: (1) how service employees can engage in service inclusion practices to address the digital divide (by letting go of rules and perspectives, sharing control, providing services beyond job scope, and facilitating social connections), and (2) how these service inclusion practices build human capabilities for digital inclusion (by building basic skills and capabilities for meaningful outcomes through role modeling, coaching, customer-to-customer mentoring, and expanding net- works). Contributions include conceptual models of service inclusion practices and fostering digital inclusion that specify a new meso level service organization pathway for healing the digital divide. Keywords: digital divide, digital inclusion, service inclusion, human capabilities, vulnerability, service practices”; “Using a strengths-based approach, we upframe two of the four pillars. That is, we replace the second “offering choice” pillar with “enabling choices” and the third “relieving suffering” pillar with “nurturing healing.” With this strengths-based terminology, we seek to shift the focus from reiterating the forces that drive the digital divide to exploring possible methods to cocreate value through digital inclusion. Interactions based on inclusive service practices can foster happiness.”;

BiblioA2022 Alkire L, RussellBennett R, Previte K, Fisk RP (2022) Enabling a service thinking mindset: practices for the global service ecosystem. URL: https://www.emerald.com/insight/1757-5818.htm Quotes: “Abstract – Purpose – Profound economic, social, political and environmental problems are cascading across modern civilization in the 21st century. Many of these problems resulted from the prevailing effects of rational economics focused on profit maximization. The purpose of this paper is to reframe the mindsets of scholars, firms and public policy decision-makers through enabling Service Thinking practices. Design/methodology/approach – Marketing, service and allied discipline literature are synthesized, and Raworth’s (2018) Doughnut Economics model is adapted to conceptualize and construct the Service Thinking framework. Findings – Service Thinking is defined as a just, mutualistic and human-centered mindset for creating and regenerating service systems that meet the needs of people and the living planet. Service Thinking is enabled by five practices (service empathy, service inclusion, service respect, service integrity and service courage). Practical implications – Actionable implications are presented for service ecosystem entities to uplift well- being, enhance sustainability and increase prosperity. Originality/value – Service Thinking practices are shaped by influencing forces (marketing, education and law/policy) and operant service ecosystem resources (motivation–opportunity–ability or MOA), which makes Service Thinking applicable to four economic entities in the service ecosystem: the household, the market, the state and the commons. Keywords Service thinking, Transformative service research, Mutualism, Human-centered, Service ecosystems, Practices Paper type Research paper”; “Raworth (2018), in response to these problems, proposed a “Doughnut” model of 21st century economics as an alternative to traditional economics (see Figure 1). Raworth’s (2018, p. 108) critique is that economic thinking is based on a flawed understanding of humanity: We wasted two hundred years staring at the wrong portrait of ourselves: Homo economicus, that solitary figure poised with money in his hand, calculator in his head, nature at his feet, and an insatiable appetite in his heart. 
It is time to redraw ourselves as people who thrive by connecting with each other and with this living home of ours that is not ours alone.”; “Specifically, through an extensive review of existing literature on service interactions (employee–customer, customer–customer, organization–customer, person to person, person/human nature and human–ecology) in business, marketing, service, psychology, sustainability and philosophy databases, our framework depicts five Service Thinking practices (service empathy, service inclusion, service respect, service courage and service integrity) for creating a “safe and just space for humanity” and outlines the influencing forces and operant resources needed to adopt Service Thinking.”; “We conceptualize Service Thinking as helping build a serving humanity logic and as the mindset needed to elevate HX by offering actionable guidelines for improving human well- being and planetary health within the global service ecosystem.”; “Using these three principles, we offer a new definition of Service Thinking as a just, mutualistic and human-centered mindset for creating and regenerating service systems that meet the needs of people and the living planet.”; “As shown in Figure 2, Service Thinking is based on a just, mutualistic, human-centered mindset, which is enabled by five practices of Service Thinking derived from a literature review: service empathy, service inclusion, service respect, service courage and service integrity. The Service Thinking mindset and practices shape influencing forces (marketing, education and law/policy) and operant service ecosystem resources in households, markets, commons and the state. Regenerative flow, which is represented by arrows, indicates how regulation shapes operant resources in the service ecosystem to support Service Thinking practices, which reciprocally shape regulation to circulate resources in the system.”; “Service thinking values the core contribution of households. Raworth (2018) presents households as a core economic contributor because they sustain the essentials of family and social life. Households constitute varying structures in societies involving groupings with no legal relationships such as cohabitation, voluntary social groups such as friends, adults with and without children, as well as family groups with personal and intimate relationships.”; “For example, organizations can show service courage by investing in social impact, not just economic impact.”; “Conclusion. The concept of Service Thinking was introduced as a just, mutualistic and human-centered mindset for creating and regenerating service systems that meet the needs of people and the living planet. Service Thinking applies Raworth’s (2018) Doughnut Economics model for reducing the shortfalls (the social foundation) of human needs and preventing overshoots (the ecological ceiling) and builds a Service Thinking framework for reducing suffering and improving human well-being, sustainability and prosperity in households, markets, commons and the state. “;

ChatGPT

A lot of people are asking me for my opinion of ChatGPT these days. Here is what I am telling them:

(1) What I think – I enjoy using it
I am really enjoying using it – even though it is imperfect – it is a useful cognitive tool that makes a lot of mistakes, but it will get better over time. I have to spend a lot of time checking assertions to determine if they are actually true or not (see addendum below). That is one of its biggest weaknesses. However, it will get better over time.

(2) Can it be viewed as a virus? See reference [1]
Yes. A virus spreads, mutates, and can cause harm. So yes, in some ways it is like a virus. However, people can adapt to viruses. Think of it more like bacteria: some are harmful, and some are helpful – in fact, we would not live long without all the helpful bacteria in our bodies. To deal with complex urgent problems, we need better tools – and ChatGPT-like systems will certainly get better over time (less misinformation generated from inferior models), and people will adapt to the negative parts (harms) while embracing the positive parts (benefits).

(3) Should OpenAI have released ChatGPT?
Yes, AI will continue getting better, and the world needs to ‘upskill with AI’ (see reference [2]) – and get prepared. Future generations will have 100 digital workers working for them, and they need to be able to imagine a worthy future to be collectively working towards with AI/cognitive tools, assistants, collaborators, coaches, and eventually cognitive mediators (trusted to negotiate contracts on your behalf) (see reference [3]). Will bad things happen along the way as well? Yes, that is true of all technologies – the potential for both harm and good is always there. I want to live in a world where we have harnessed AI primarily for its benefits, while mitigating the potential harms. Yes, regulations have arrived, and more will come in time as more cases of harm appear. Unfortunately, we are not smart enough to know the future, as Kedrosky mentioned (see reference [1]).

(4) What is ChatGPT? A ‘humanity-level intelligence test’
I use ChatGPT to work on problems that are important to me. More broadly, I see ChatGPT as a ‘humanity-level intelligence test.’ Millions of people are already fantastic at generating misinformation using incorrect or outdated models of the world. We need better models. That is the theme in my recently co-authored book with Maglio, Vargo, and Warg – ‘Service in the AI Era’ (see reference [4]) – that we need better models of the world (science), better models in people’s heads (logics), better models of organizations (architectures, including businesses, universities, cities, nations), and last but not least better AI models. We also need humanity-centered design to build a better world (see reference [5]). Yes, I see this as continuing to evolve all disciplines that are on a transdisciplinary journey, including the emerging transdiscipline of service science (see reference [6]).

(5) What do I recommend for the future? Learn to invest wisely and systematically
Try ChatGPT, and think of it today as a cognitive muse (see reference [7]), that will eventually become a useful cognitive tool, assistant, collaborator, coach, and finally mediator (see reference [3]). Overall, humanity needs to learn to invest in improved win-win interaction and change processes (service innovations) (see pages 45-54 in “Service in the AI Era” – reference [4]). On the current trajectory with increasing AI capabilities, we can get there by having “AI/digital twins” of all responsible actors (people, businesses, universities, cities, nations, etc.) that both give and get service daily, and try to invest in a positive future. This will allow calculating benefits and harms better than we can do today. As individuals adopt better strategies based on better models, collectively the future is shaped with better models. All models are wrong, but some are useful. Better models are more useful for shaping a positive future. This will allow shaping the future (not predicting it – that is impossible) better than we can today – shaping the future with more benefits and fewer harms – more good days of service for more people, and fewer bad days of service for just a few people.

(6) What do I recommend for today? Spend one hour ‘upskilling with AI’
To spend one hour ‘upskilling with AI,’ I recommend some watching and reading on social media – see these YouTube videos and tweets (references [8], [9], [10], [11], and [12]). Remember that ChatGPT-type systems are a mirror of the information people share with each other daily. With better models (science, logic, architecture, AI), the quality of information sharing can potentially increase substantially in the future.

Hope this perspective is helpful to some readers. -Jim

Addendum

Spohrer: “Please describe how AI systems, such as ChatGPT that generate text responses to prompts, can help users evaluate fact from fiction, and well-sourced opinions from non-expert influencer opinions.”

ChatGPT: “AI systems such as ChatGPT are trained to generate text responses based on a given prompt. In the context of helping users evaluate fact from fiction and well-sourced opinions from non-expert influencer opinions, ChatGPT could be used to generate responses that highlight the credibility of a particular source or piece of information. For example, if a user were to ask ChatGPT whether a certain statement was true or false, ChatGPT could generate a response that indicates whether the statement is supported by evidence or whether it is merely an opinion. Additionally, ChatGPT could be used to generate responses that provide information about the credentials and expertise of a particular person or organization, which could help users determine the reliability of their opinions.”

Spohrer: “Provide additional feedback: This response is very helpful. In fact it addresses the key weakness that I see in ChatGPT. Furthermore, this type of response annotation would make a useful sidebar for many responses. See more thoughts here: https://service-science.info/archives/6309”

For those interested in exploring some of the dangers inherent in powerful AI technologies, see reference [13]. In [13], Marcus summarizes three key problems with these types of systems and proposes four remedies that should be explored. The fourth remedy seems very important to me – build better AI to verify truth. For more examples of misinformation (old information, made-up information, etc.) see reference [14]. Nevertheless, with the help of [13] and [14] to keep our eyes wide open about potential harms, foundational AI models and systems such as OpenAI’s ChatGPT, which achieved 1 million users in the first five days of its release to the general public, seem to be at or near a tipping point of being generally useful to people across a wide range of industries, enabling ‘the possibility of human-machine hybrid work’ [15].

References

[1] BiblioL2022 Loizos C (2022) Is ChatGPT a ‘virus that has been released into the wild’? Government & Policy. TechCrunch. URL: https://techcrunch.com/2022/12/09/is-chatgpt-a-virus-that-has-been-released-into-the-wild/ Quotes: “Paul Kedrosky isn’t an educator per se. He’s an economist, venture capitalist and MIT fellow who calls himself a “frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems.” But he is among those who are suddenly worried about our collective future, tweeting yesterday: “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.” Wrote Kedrosky, “I obviously feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever re-introduced, only with tight restrictions.” We talked with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the “most disruptive change the U.S. economy has seen in 100 years,” and not in a good way.”; “I heard from a colleague at UCLA who told me they have no idea what to do with essays at the end of the current term, where they’re getting hundreds per course and thousands per department, because they have no idea anymore what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product, then informs the developer before the broader public knows so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.”;

[2] BiblioS2022 Spohrer J (2022) The Future of AI and IA: A Service Science Perspective. Presentation to India’s AIMA Delegation at Berkeley on Monday November 28, 2022. URL: https://www.slideshare.net/spohrer/spohrer-aima-20221128-v8pptx Quotes: ” Still haven’t found a good way to include these materials from Frank, Thompson, Davis, and Zyer in my standard stump speech about the future of AI and IA (Intelligence Augmentation – AKA “Learning to Invest in Upskilling with AI”)  which is the main message of my recent book “Service in the AI Era” (pages 45-54).”; “The Future of AI and IA: A Service Science Perspective: This talk explores the future of Artificial Intelligence (AI) and Intelligence Augmentation (IA). IA can initially be thought of as people “upskilling with AI” tools.  In this talk, after a preamble on understanding “value” in the world today, a roadmap for solving AI is proposed. Second, an approach to measuring IA is suggested. Finally, the importance of trust, purpose, and human values are examined as “reality” is extended for populations of evolving service system entities learning to invest systematically in becoming better future versions of themselves. For example,  AI, both service robot automation and service augmentation platforms, are poised to improve service productivity, quality, compliance, sustainable innovation, resilience, equity and inclusion for under-served populations. Service is defined as the application of knowledge for the benefit of another. Service innovations improve interaction and change processes in business and society. However, to achieve these outcomes and create value with AI, responsible actors (people, businesses, governments, universities – AKA service system entities) must learn to invest wisely in becoming better future versions of themselves augmented by their AI digital twin. Learning to invest systematically in upskilling with AI can accelerate both value cocreation and capability coelevation in a virtual cycle of responsible actors’ improved win-win interaction and change processes. Service innovations improve win-win interaction and change processes in business and society, but require learning to invest systematically and wisely in shifting conflict and fear to understanding and more than coexistence to collaboration. However, great risks must also be avoided – as benefits for some may cause harm to others. ”

[3] BiblioS2017 Spohrer J, Siddike MAK (2017) The future of cognitive systems: tool, assistant, collaborator, coach, mediator. In Augmented Intelligence: Smart Systems and the Future of Work and Learning (New Literacies and Digital Epistemologies), Editor D. Araya. Peter Lang Publisher. URL: https://www.amazon.com/Augmented-Intelligence-Learning-Literacies-Epistemologies/dp/1433133334 Quotes: “Chapter Abstract – Abstract: In this chapter we explore the future of cognitive systems that progress from tool to assistant to collaborator to coach to mediator over time, augmenting both cognitive and social intelligence.  We argue that in the decades to come most people will use cognitive mediators to boost win-win interactions with others.  Furthermore, we argue that future innovation will be driven by people with a T-shaped skillset and mindset, who with the help of their cognitive mediators, know how to rapidly rebuild socio-technical systems from scratch.  T-shaped professionals with cognitive mediators will be expert at people-centered system redesign thinking, and have a deep understanding of service science – the science of value co-creation and capability co-elevation.”; “Book – Where the Agricultural Revolution harnessed domesticated animals for pastoral farming, and the Industrial Revolution leveraged machines for factory production, so today the Computational Revolution is advancing computers to augment human intelligence. Indeed, many now argue that the promise of exascale computing and the slow migration towards a computational society may represent a new threshold in human history. This “transcension” of earlier stages of tool-mediated work and learning foreshadows a momentous change in the kinds of cities we might build, the kinds of medicine we might practice, and the kinds of education we might provide. What is perhaps most surprising about the current Computational Revolution, however, is its expanding reach. The question that many now ask is “what is the trajectory of this human-machine symbiosis?” It would appear that we are on the cusp of a sea change in our capacities to augment human intelligence. But what is the future of work and learning? Will augmented intelligence help us in transforming a waning industrial society? These are the kinds of questions that we explore in Augmented Intelligence: Smart Systems and the Future of Work and Learning.”

[4] BiblioS2022 BiblioS2022 Spohrer J, Maglio PP, Vargo SL, Warg M (2022) Service in the AI Era: Science, Logic, and Architecture Perspectives. Business Expert Press. URL: https://www.amazon.com/Service-AI-Era-Architecture-Perspectives/dp/1637423039/ Quotes: “Are you prepared for the coming AI era? AI advances will profoundly change your daily service interactions, so this book provides readers with a necessary understanding of service, the application of resources (e.g., knowledge) for the benefit of another. In just minutes, you can learn about today’s use of early-stage AI for automation and augmentation, and essential elements of service science, service-dominant (S-D) logic, and Service Dominant Architecture (SDA). Ultimately improved service for all is possible with human-level AI and digital twins – but requires investing wisely in better models: Better models of the world both complex natural and social systems (science), better mental-models in people to improve interactions (logic), better cultural and structural models of organizations to improve change (architecture), and better trusted and responsible AI models. The service innovation community studies and builds better models to improve interactions and change in business and society. The book challenges all responsible actors – individuals, businesses, universities, and governments – to invest systematically and wisely to upskill with AI (the X+AI vision). The service innovation community is a growing transdiscipline harnessing all disciplines to become better T-shaped professionals. Extensive end notes, bibliography, and index are provided.”;

[5] BiblioN2023 Norman DA (2023) Design for a Better World: Meaningful, Sustainable, Humanity Centered
URL: https://www.amazon.com/Design-Better-World-Meaningful-Sustainable/dp/0262047950 Quotes: “Human behavior brought our world to the brink, human behavior can save us. The world is a mess. Our dire predicament, from collapsing social structures to the climate crisis, has been millennia in the making and can be traced back to the erroneous belief that the earth’s resources are infinite. The key to change, says Don Norman, is human behavior, covered in the book’s three major themes: meaning, sustainability, and humanity-centeredness. Emphasize quality of life, not monetary rewards; restructure how we live to better protect the environment; and focus on all of humanity. The book presents an eye-opening diagnosis of where we’ve gone wrong and a clear prescription for making things better. Norman proposes a new way of thinking, one that recognizes our place in a complex global system where even simple behaviors affect the entire world. He identifies the economic metrics that contribute to the harmful effects of commerce and manufacturing and proposes a recalibration of what we consider important in life. His experience as both a scientist and business executive gives him the perspective to show how to make these changes while maintaining a thriving economy. Let the change begin with this book before it’s too late”;

[6] BiblioW2022 Wladawsky-Berger I (2022) The Current State of Service Science. URL: https://blog.irvingwb.com/blog/2022/12/reflections-on-the-current-state-of-service-science.html Quotes: “On October 17, the National Academy of Engineering (NAE) conducted an online forum on Service Systems Engineering in the Era of Human-Centered AI. “With AI advances poised to drive service system productivity and quality – similar to the way previous generations of technology revolutionized agriculture and manufacturing productivity and quality – it is time to take stock for industry-academic-and-government stakeholders on this important topic,” wrote the NAE in its website.”; “Sometime in 2016 I had an interesting conversation with analysts from an IT research organization who were preparing a report on the state of service science. They noted that we were hearing quite a bit less about service science in those days compared to 5 or 10 years earlier. Was it because we had become tired of the subject and moved on to other areas of innovation?”; “For example, in their recent book Service in the AI Era, Jim Spohrer, Paul Maglio, Stephen Vargo, and Marcus Warg wrote that “Service is quickly becoming the central concept of our time, as service offerings become infused with advanced technologies like artificial intelligence (AI) and scale to new levels of quality, productivity, compliance, and sustainable innovation. Throughout history, scientific revolutions have been launched when new tools make possible new measurements and observations, e.g., the telescope, the microscope, spectrometers, DNA sequencers. Our new big data tools have now been ushering an information-based scientific revolution, helping us extract insights from the huge amounts of data we’ve been collecting by applying tried-and-true scientific methods, that is, empirical and measurable evidence subject to testable explanations and predictions.”

[7] BiblioF2022 Flux Collective (2022) A muse, rather than an assistant. The FLUX Review, Ep. 64 August 18th, 2022. FLUX Review. URL: https://read.fluxcollective.org/p/64 Quotes: “A muse, rather than an assistant. With large-language-model-powered tools taking early adopters’ attention by storm, we are inclined to pause and consider the potential future uses of these tools. Some of them already exhibit remarkable abilities. Still, rough edges show up quickly: faces melt, sentences stop making sense, and so on. We are still in the early stages of understanding how to best use these tools. As we continue to explore, it is important to keep in mind both the potential benefits and their potential risks. However, we have found one use that already seems intriguing: a source of inspiration. Especially in the busy Midjourney community, the creative vibe is nearly palpable. People riff on each other’s creations, showcase their best, and most interestingly, show fascinating new uses of the tool’s output.”;

[8] BiblioF2022 Frank T (2022) Notion AI is Here: 10 Ways to Use This Powerful New Tool URL: https://youtu.be/0DIn0Ws9yTE Quotes: “Notion just released a suite of AI tools that live directly inside the Notion app. Today I’ll show you 10 amazing examples of what these artificial intelligence tools can do – from generating blog posts and comparison tables to even writing code! Learn more about Notion AI and join the waitlist: https://www.notion.so/product/ai”; “Written version of the video: https://thomasjfrank.com/notion-ai-guide/”

[9] BiblioT2022 Thompson AD (2022) Devoxx Keynote – Artificial Intelligence: You Are Here – Alan D Thompson (12/Oct/2022). URL: https://youtu.be/D3tTsoX02d8 Quotes: “The Memo: https://lifearchitect.ai/memo/ Read more: https://lifearchitect.ai/ https://lifearchitect.ai/models/ A look at Transformer, RoBERTa, GPT-3, Pathways + PaLM, Chinchilla, Google Imagen, Google Parti, NUWA-Infinity, Google Imagen Video, and much more… Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021. https://lifearchitect.ai/”;

[10] BiblioD2022 Davis T (2022) TL;DR 11 free AI tools that will make you unstoppable. URL: https://twitter.com/tessardavis https://twitter.com/TessaRDavis/status/1595749323995840513 Quotes: “TL;DR 11 free AI tools that will make you unstoppable.
• #1: Super Meme
• #2: Hotpot
• #3: Dall•E 2
• #4: Sembly
• #5: Browse AI
• #6: Poised
• #7: Lalal
• #8: Magic Eraser
• #9: Synthesia
• #10: Soundraw
• #11: Replika”;
“#1: Make custom memes with Super Meme.
Lacking meme inspiration?
http://supermeme.ai
• Write a sentence
• Describe an emotion
• Upload your own image if you want
Your own custom meme in 2 minutes.”;
“#2: Restore old pictures with Hotpot.
Have an old photo you’d like to improve?
Upload it to http://hotpot.ai
It’ll:
• Remove scratches
• Sharpen images
• Repair gaps
Transforms your damaged photos.”;
“#3: Create amazing images with Dall•E 2
Have an idea in your head for an image?
http://openai.com
• Type in the description
• Be as specific as you can
• It’ll search for existing images
• Or generate new ones for you now
The AI stock photographer you always wanted”;
“#4: Easily take notes with Sembly.
Need meeting notes?
http://Sembly.ai
• Generates meeting summaries
• Automates your minutes
• Shares with your team
Integrates across all platforms.”;
“#5: Check out your competition with Browse Ai.
Need info on your rivals?
http://Browse.ai
• Scrape posts from Medium
• Get lists of YouTube videos
• Extract company info from LinkedIn
Make your research easy.”;
“#6: Improve your public speaking skills with Poised.
Are you a nervous presenter?
Get coaching with http://posied.com
• Record
• Get feedback
• Track your progress
Watch your skills improve.”;
“#7: Separate audio tracks with Lalal.
Have a music track you want to separate?
http://lalal.ai
Just upload the track
It extracts the vocals and separates each instrument.
No need for fancy editing tools.”;
“#8: Edit out parts of photos with Magic Eraser.
Have a great photo but with something annoying in the background?
Remove it easily:
http://Magiceraser.io
• Upload the photo
• Select the part you don’t want
• Press ‘erase’
And voila. It’s gone.”;
“#9: Create professional videos with Synthesia.
Need a video quickly?
http://synthesia.io
• Add your text
• Use templates
• Edit it as needed
You’ll get an AI-generated video within minutes.”;
“#10: Create unique music with Soundraw.
Need some music for your video?
http://soundraw.io
• Choose your mood
• Choose the genre
• Choose a length
• Edit if needed
Now you’ve got your own AI music.”;
“#11: Create a new friend with Replika.
Feeling lonely?
http://replika.com
• Build your own AI
• The more you chat the more they learn
• Express yourself and build a human-AI bond
Is this the future?”;
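
To make item #3 above (Dall·E 2) concrete, here is a minimal Python sketch of the same describe-and-generate workflow using the OpenAI Python SDK. It is only a sketch under stated assumptions: the model name, image size, prompt, and output file name are illustrative choices, not details from the tweet, and it assumes an OpenAI account with OPENAI_API_KEY set in the environment.

# Minimal sketch: generate an image from a text description (cf. item #3 above).
# Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment.
import base64
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

result = client.images.generate(
    model="dall-e-2",                 # illustrative model choice
    prompt="A watercolor illustration of a friendly service robot in a cafe",
    n=1,
    size="512x512",
    response_format="b64_json",       # return base64 so the file can be saved locally
)

# Decode the base64 payload and write the image to disk.
with open("generated_image.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))

The same pattern – describe what you want, look at the result, then iterate on the prompt – is what the tools in this list wrap behind friendlier interfaces.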

[11] BiblioZ2022 Zyer E (2022) If you’re not using AI, you’re losing out. 8 mindblowing AI tools (you can use today) that will save you hours. Twitter – Eyisha @eyishazyer. URL: https://twitter.com/eyishazyer/status/1597573909104427009 Quotes: “1. @lexdotpage
Unlock your best writing with Lex.
Any time you don’t know how to continue your text, type +++, and Lex continues for you.
It also helps generate titles for your texts!
“;
“2. @runwayml
The runway is the content creation suite of the future. It has magical AI tools like:
– change images with text descriptions
– remove objects in videos
– remove video backgrounds
– expand images with descriptions (as seen in the video)
+++”;
“3. @diffusionbee
DiffusionBee is a free macOS app for Stable Diffusion. Give it a text prompt, and it generates a picture based on your text.”;
“4. @LexicaArt
Providing prompts for text-to-image solutions is an art.
LexicaArt is a Stable Diffusion search engine that shows prompts others have used to generate images.
You can also see variations of a prompt.”;
“5. @metaphorsystems
Metaphor is a new search
engine based on generative AI.
You prompt it by writing a phrase that looks like it could end with a link.”;
“6. @SoundrawUS
Do you need some unique music for your podcast or video? Soundraw is a music generator for creators.
Select the type of music you want – genre, instruments, mood, length, etc. – and let their AI generate beautiful songs and variants for you.”;
“7. @clipdropapp
Relight your existing photos
& drawings with ClipDrop Relight.
Relighting a picture after it is captured is possible thanks to their custom AI that maps depth into your pictures.”;
“8. Talk to books
Get quotes from more than 100 000 books that respond to your question.
A creativity tool by Google to explore new ideas and get relevant quotes.
http://books.google.com/talktobooks/”;
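
For readers who would rather run a model like the one behind item 3 (DiffusionBee) themselves, here is a minimal Python sketch of local text-to-image generation with the Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions, and a CUDA-capable GPU is assumed for reasonable speed; this is a sketch, not the DiffusionBee implementation.

# Minimal sketch: local Stable Diffusion text-to-image, roughly the workflow DiffusionBee wraps in a macOS app.
# Assumes: pip install diffusers transformers accelerate torch, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative checkpoint choice
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Give it a text prompt and it generates a picture based on your text.
image = pipe("An isometric illustration of an interconnected service system").images[0]
image.save("stable_diffusion_output.png")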

[12] BiblioG2022 Griffith T (2022) AI-Supported Tools for Personal Automation. Website. URL: https://terrigriffith.com/blog/ai-supported-tools-for-personal-automation Quotes: “AI-Supported Tools for Personal Automation – Automation is part of our work. The question is whether we are finding the tools to augment our abilities or waiting for our organizations to decide what to automate and who to replace. Research finds that people who craft their work are more productive and employable. In this earlier post with Brett Li of Tonkean, we talk about that research and the steps you can take to craft your work. But what tools can help us?”;

[13] BiblioM2022 Marcus G (2022) AI’s Jurassic Park moment. New systems like chatGPT are enormously entertaining, and even mind-boggling, but also unreliable, and potentially dangerous. The Road to AI We Can Trust. Substack – Gary Marcus Dec 10, 2022. URL: https://garymarcus.substack.com/p/ais-jurassic-park-moment Quotes: “Something incredible is happening in AI right now, and it’s not entirely to the good. Everybody is talking about systems like chatGPT (OpenAI), Dall-E 2, and Lensa that generate text and images that look remarkably human-like, with astonishingly little effort. These systems can be incredibly fun to play with. Take this example, generated by chatGPT by Henry Minsky (son of Marvin Minsky, one of AI’s founders), who asked chatGPT to “Describe losing your sock in the dryer in the style of the declaration of independence”:”; “That a machine could do this, with so little effort on the part of the user, is frankly mind-boggling. But at the same time it is, or should be, terrifying. It is no exaggeration to say that systems like these pose a real and imminent threat to the fabric of society. The core of that threat comes from the combination of three facts:”; “Spohrer: (1) Inherently unreliable – user beware, (2) Can be weaponized, (3) Inexpensive to bad actors.”; “Thanks Gary for making helpful points that these Generative-Pretrained-Transformer AI systems, like ChatGPT, are simultaneously very fun to use and yet (1) make many mistakes, so user beware, (2) can be weaponized by bad actors, and (3) are inexpensive to use by bad actors and other users alike. My further opinion here https://service-science.info/archives/6309”; “YouTube: Dr. Ian Malcolm gives Hammond an ethics lecture: Gee the lack of humility before nature that is being displayed here staggers me. Don’t you see the danger inherent in what you are doing here. Genetic power is the most awesome force the planet has ever seen. But you wield it like a kid who has found his dad’s gun. I’ll tell you the problem with the scientific power that you’re using here: it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had you patented it and packaged it and slapped it on a plastic lunchbox, and now you’re selling it, you want to sell it!”; “Marcus: All of this raises a critical question: what can society do about this new threat? Where the technology itself can no longer be stopped, I see four paths, none easy, not exclusive, all urgent:”; “Spohrer paraphrase: (1) social media and search engine bans (like StackOverflow), (2) nations need to make misinformation an actionable offense, like libel, (3) ensure provenance of all user accounts (real people, real responsibility), (4) build better AI to verify truth.”;

[14] BiblioB2022 Banavar G (2022) ChatGPT’s deep fake text generation is a threat to evidence-based discourse. Medium – Guru Banavar. URL: https://medium.com/@guruduth.banavar/chatgpts-deep-fake-text-generation-is-a-threat-to-evidence-based-discourse-c096164207e0 Quotes: “The insidious nature of such generative AI models is that they produce plausible sounding health information in a smooth conversational style, that is at best is generic in nature and at worst completely fictional. The problem is that most people cannot tell the difference between the two.”; “It can be argued that the information put out by ChatGPT is generic conventional wisdom, so it is not seriously problematic, and that it is the nature of science to continuously unearth new and improved truths. Also, it will always be the case that there will be outdated information in vogue. After all, the medical practices from the last few centuries are still being practiced in some form by subgroups of people somewhere in the world. But this is precisely why we would like modern AI systems to help us in curating the most evidence-based and trust-worthy information!”;

[15] BiblioM2022 Mollick E (2022) ChatGPT Is a Tipping Point for AI. HBR December 14, 2022. Via: Larry_Hiner URL: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai Quotes: “Summary. We’re hitting a tipping point for artificial intelligence: With ChatGPT and other AI models that can communicate in plain English, write and revise text, and write code, the technology is suddenly becoming more useful to a broader population of people. This has huge implications. The ability to produce text and code on command means people are capable of producing more work, faster than ever before. Its ability to do different kinds of writing means it’s useful for many different kinds of businesses. Its capacity to respond to notes and revise its own work means there’s significant potential for hybrid human/AI work. Finally, we don’t yet know the limits of these models. All of this could mean sweeping changes for how — and what — work is done in the near future.”; “This highlights the third major change that happened with this release: the possibility of human-machine hybrid work. Instead of prompting an AI and hoping for a good result, humans can now guide AIs and correct mistakes. (Despite what my AI writing partner claims above, it’s not always accurate.) This means experts will be able to fill in the gaps of the AI’s capability, even as the AI becomes more helpful to the expert. This sort of interaction has led to increases in performance of players of Go, one of the world’s oldest and most complex games, who have learned from the AIs that mastered the sport, and become unprecedentedly better players themselves.”; “The problems of AI remain very real, however. For one, it is a consummate bullshitter, and I mean that in a technical sense. Bullshit is convincing-sounding nonsense, devoid of truth, and AI is very good at creating it. You can ask it to describe how we know dinosaurs had a civilization, and it will happily make up a whole set of facts explaining, quite convincingly, exactly that. It is no replacement for Google. It literally does not know what it doesn’t know, because it is, in fact, not an entity at all, but rather a complex algorithm generating meaningful sentences.”;
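
Mollick’s point about hybrid human/AI work (guiding the model and correcting its mistakes rather than accepting its first answer) maps onto a simple draft-then-revise loop. The Python sketch below shows that loop with the OpenAI chat completions API; the model name, prompt, and revision notes are illustrative assumptions, not anything taken from the article.

# Minimal sketch of a draft-then-revise loop: the human reviews the draft and sends notes back.
# Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user",
     "content": "Draft a 150-word explanation of value cocreation in service systems for a general audience."}
]

draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft_text = draft.choices[0].message.content
print(draft_text)

# Human-in-the-loop step: keep the draft in context and send revision notes.
messages.append({"role": "assistant", "content": draft_text})
messages.append({"role": "user",
                 "content": "Cut the jargon, add one everyday example, and keep it under 120 words."})

revision = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revision.choices[0].message.content)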

Complex, Interconnected Service System Characteristics

Best Prepared – current undergraduate degrees provide adequate preparation:
• Narrow, well-defined technical system problems
• Technology – Automation of simple capabilities
• Measurement – Productivity, Quality, Costs
• Data Science and Analytics – Recommender Systems
• Manufacturing systems and value creation (Goods-dominant logic)
• Linear supply chains
• Single discipline, well-defined problems
• Artifact design and interface design
• Change of simple processes

Least Prepared – current undergraduate degrees provide inadequate preparation:
• Open-ended, ill-defined socio-technical system problems
• Technology – Augmentation of human intelligence
• Service systems and value cocreation (Service-dominant logic)
• Highly interconnected supply networks
• Multiple discipline, wicked problems
• Service design and user experience design
• Change of organizations and ecosystems

We get the future we invest in

The above title is the important message of a new book, Service in the AI Era, by Spohrer, Maglio, Vargo, and Warg. The last two sections of this blog post summarize some of its argument.

Register for this free NAE event to get a free digital copy of the book (just invest a few minutes and it will be sent to you)

  1. Please take 1 minute to register here and now for this NAE event on Monday, Oct 17th. Even if you cannot attend, you will be emailed a post-event summary and be added to the list for future events regarding complex service systems … and we will know where to send the book.
  2. Please take 5 minutes to answer four very short questions about preparing students as complex systems thinkers who can understand and improve today’s complex, interconnected, digitally transforming business and societal systems (what we call service science and service systems at the online NAE conference, and what others at NAE call systems engineering and complex unifiable systems) … and we will have your insights as we prepare a next publication.
  3. Please spread the word to 2-3 colleagues who are systems thinkers and who may have an interest in better preparing to innovate business and societal systems in the AI Age.

Why how we invest (our time, effort, money, etc.) matters, especially in the era of AI

Every person gives and gets service hundreds of times a day – and increasingly often this generates data for AI models. What is service? Service is the application of knowledge for the benefit of another. From turning on the light switch in the morning, to scanning social media, to getting to where we are going, to doing what we do when we get there, the activities and interactions that make up our lives are all about service. AI capabilities are playing an ever bigger role in the daily “give and get” of service. Businesses and nations are investing in AI to stay competitive. The data that allows businesses and governments to build more powerful AI models comes from the activities of customers, employees, citizens, and even visitors.

To learn about some of the most powerful AI models in the world today, watch these two keynote talks – Alan D Thompson (AI models to create art, stories, programs, etc.) and Jensen Huang (increasingly large and sophisticated AI models – like the digital twin of Earth and digital twins of people – are built and run on powerful computer chips). After watching these two keynotes, there can be no doubt that AI models are getting bigger with more advanced capabilities to deliver useful service to people, and also that building and using these large AI models takes an enormous amount of (a) data, (b) energy, (c) advanced technology and (d) highly skilled people.

To learn more about the kinds of data signals we send, intentionally and unintentionally, consider this: what most people do not know or think much about, unless they have studied service-dominant logic and service science, is that value is cocreated – meaning that both the service provider and the service customer play an important role. Cocreation is easy to understand in some service systems, like education, since a student (customer) who does not prepare well for class may not do very well. Cocreation is harder to understand in other service systems, like retail, since it seems like the customer is just buying something to use or consume; but if the customer comes in unprepared and has to return or throw away the item, the added costs in the service system may make it difficult for other people to afford the service in the future. Waste is just one of many characteristics of service systems that is under-studied and leads to unintended consequences – that is, to not getting the futures we want. For more on waste, read the book Wasted: How We Squander Time, Money, and Natural Resources – and What We Can Do About It. For more on the data we are sharing, read the book What You Don’t Know: AI’s Unseen Influence on Your Life and How to Take Back Control.

The truth of the statement that we get the future we invest in is obvious to some people, but not to everyone. Like the genie in the lamp, AI models can be thought of as powerful servants designed to give us what we want – and our collective data tells the AI models what we want. Every customer is a cocreator contributing data to AI models being developed by businesses and governments around the world. To understand this better, invest the ten minutes described above and get the book Service in the AI Era. I also recommend pre-ordering Design for a Better World: Meaningful, Sustainable, Humanity Centered.

Investing to get a future we would want to live in requires understanding both AI and complex service systems, the data source for AI models

Every person who plays a role in an organization can be thought of as a service provider. Service providers will not be replaced by AI, but service providers who use AI will replace those who don’t. To learn more, please register for the NAE event on Oct 17th; even if you cannot attend, you will be emailed a summary of the event and receive a copy of the book Service in the AI Era.