EP 146: Roger Spitz - Disruptive Futures

A return interview with Roger Spitz discussing the Disruptive Futures Institute, their new guidebook, existentialism, and all points in between.

Interviewed by: Peter Hayward

Listen to Roger’s other FuturePod interview

Episode 70 - The Essence of Being Human

About Roger

Based in San Francisco, Roger Spitz is President of Techistential (Global Foresight Strategy) and Chairman of the Disruptive Futures Institute. Roger is author of the four-volume Collection “The Definitive Guide to Thriving on Disruption”. 

He sits on a number of Advisory Boards of companies, Climate Councils, Venture Capital (VC) funds and academic institutions worldwide. Roger is a sought-after advisor and speaker on systemic change and climate strategy, and is an inaugural member of Cervest’s Climate Intelligence Council.

Roger is also a partner at Vektor Partners (Palo Alto, London, Berlin), a VC firm investing in next-generation sustainable transport, as well as a member of the Advisory Council and an LP investor in the Berkeley SkyDeck fund. The funds he advises support green innovative tech in Silicon Valley, Israel, the UK, and Europe.

He has lived in 10 different cities across three continents.

 More about Roger

Show Notes for the Conversation

 Artificial Intelligence & Existential References

Books on Artificial General Intelligence (AGI), Existential Considerations & Singularity

●      Existentialism is a Humanism, Jean-Paul Sartre

●      Novacene: The Coming Age of Hyperintelligence, James Lovelock

●      Team Human, Douglas Rushkoff

●      Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark

●      Heartificial Intelligence: Embracing Our Humanity to Maximize Machines, John C. Havens

●      Homo Deus: A Brief History of Tomorrow, Yuval Noah Harari

●      The Master Algorithm, Pedro Domingos

●      Superintelligence: Paths, Dangers, Strategies, Nick Bostrom

●      The Singularity Is Near, Ray Kurzweil

●      Future Minds: The Rise of Intelligence from the Big Bang to the End of the Universe, Richard Yonck

 

Organizations focusing on AI / AGI

●      Center for Human-Compatible AI

●      Center for Humane Technology (CHT)

●      Future of Life Institute (FLI)

●      OpenAI

●      Stanford Institute for Human-Centered Artificial Intelligence

 

Organizations Focusing on Existential Risks

●      Centre for the Study of Existential Risk (CSER)

●      Center on Long-Term Risk (CLTR)

●      Future of Humanity Institute (FHI), University of Oxford, UK

●      Future of Life Institute (FLI)

●      Global Catastrophic Risk Institute (GCRI)

●      Machine Intelligence Research Institute (MIRI)

●      The Bulletin of the Atomic Scientists

●      The Millennium Project

 

Audio Transcript

 

Peter Hayward: Most people would accept that we live in disruptive times, and that all decision makers are struggling with making foresightful decisions. Does it follow then that humans are stupid enough to build machines that are thought to be more intelligent than they are, and then rely on the decisions of those machines?

Roger Spitz: I actually consider that the questions around whether and when computers will become more clever than humanity are not necessarily the most important, or only, questions today. My real concern is: what are humans doing to upgrade their abilities and their capabilities for decision making in our complex world? How do they deal with complexity? How do humans understand that our educational systems, our governance systems, and the alignment (or lack thereof) with the wrong incentives are basically not resilient?

Peter Hayward: That is today’s guest, Roger Spitz, whom we have spoken to in an earlier FuturePod chat. Roger is the Chairman of the Disruptive Futures Institute and he has just co-written The Definitive Guide to Thriving on Disruption.

Welcome back to FuturePod, Roger.

Roger Spitz: Wonderful to be here again. Peter, my favorite podcast.

Peter Hayward: Oh, that’s very nice to know, that you have a favorite and we are it. So how was Covid for Roger Spitz?

Roger Spitz: I’ve been very fortunate, I think, in terms of myself and direct family. The effects have been a nuisance and obviously very impactful for everyone, but we haven’t had any major consequences in terms of health or otherwise. The degree of uncertainty and unpredictability actually put a lot of spotlight on our activities. And for me personally, it came as I was emerging from 20 years of Investment Banking, which is pretty much a stealth activity, especially since I covered Mergers and Acquisitions (M&A). In M&A you’re completely stealth. Everything you work on is confidential; you work on behalf of clients and a bank, so you’re not going to be communicating those things.

So I came out of that stealth period three years ago to focus on the areas of interest, which were Complexity, Systems Thinking, Foresight, Unpredictability. And as I started going a bit more public with some of these topics, the interest grew quite considerably. At some point we developed a full program on decision making under uncertainty and unpredictability, zoomed in on a lot of these topics, and ended up writing a book, building an Institute, and building course programs and Executive Education, which we’re scaling this year. So I shouldn’t really say that, but I say it for context: it’s actually been very conducive to the activities and the interest in the work.

Peter Hayward: Yes, it’s bittersweet, isn’t it? We haven’t done a scenario set over the last 20 years that hasn’t talked about dramatic disruptions to business and the need to plan for things that really can be foreseen, just not timed. And yet, time and time again, we see people in senior roles really neglecting to do that provisioning and preparation work. And then we see a lot of other people suffer because our leaders were just not prepared to face into some of the dark possibilities that organizations and cultures face.

Roger Spitz: Yeah, it’s very serious. It’s really a fundamental issue in terms of governance and incentives and systems and structures, including around education. And to your point, you’re saving billions, in the case of the pandemic for instance, to waste trillions. Devastation. And foresight is not just for the negatives or the risks, it’s also the missed opportunities that come from thinking more broadly. It’s the reliance on assumptions that’s the issue. We all make assumptions every day. It’s relying with absolute certainty on knowing what not to worry about or what to focus on, those kinds of narrow perceptions of the world, which are the issue of course, as you and most listeners will know, because we’re all cabled like that. But I’m talking unfortunately about the rest of the world, and those who lead governments and countries, who may not be as tuned in to foresight and futures thinking.

Peter Hayward: So tell the listeners about the book, or rather the four-part book, and of course the Institute that’s emerged from this.

Roger Spitz: Thanks for asking, Peter. It’s actually a lifelong ambition, which I developed more recently into something tangible. As we discussed with great interest, because I know these are topics dear to you as well: in my initial life as a student, before working for a living and becoming an investment banker and all that, I was extremely interested in philosophy, existential philosophy in particular, and Zen Buddhism. Then I spent quite some time with a specific focus area in M&A, and more recently, over the past six or seven years I would say, I reconnected with those areas of interest and broadened them to understand the field of foresight, or at least try to get more involved with it, and complexity. And as I did, I started realizing that those were not just areas of interest to me; most of my clients were asking the same questions.

For the past five years or so, I would say, gone are the days where a CEO, a Venture Capital fund, or an Institutional investor can follow a cookie-cutter playbook and hope that things come right. So there’s clearly, at all levels, an understanding that there’s a degree of difference in terms of change and systemic disruption, which became obvious with Covid and which geopolitical and other events have amplified. And so I started three years ago writing a book on these topics, trying to connect the dots between different areas of interest, and getting a team of people I’d worked with and whom I respect to contribute, including Lidia Zuin, a great futurist and journalist from Brazil whom you had on your podcast as well over the past two years. And I started working on a book and giving talks around those topics, and the interest was so strong for courses and executive programs, and basically for more than just a book, that we decided to move to doing a guidebook.

So it’s a little bit ridiculous, but we did four volumes of it, because it represents all the courses we give. If someone takes all the topics from an executive program, dozens and dozens of hours, those are basically the slides, the case studies, all aspects. It’s meant to be extremely practical. The volumes are Foundations (what you need to know, what’s happening in the world, how you make sense of our complex world); Frameworks to deal with change and disruption and uncertainty; then one which focuses specifically on Individuals, Beta Your Life; and one that focuses very specifically on Business Aspects and Strategies. So that’s how we package it, whether as a book, a guidebook, a textbook, or courses in print form. We felt from the demand we had that we might as well release it to the general audience.

Peter Hayward: And how has it been received? You sent me some early drafts of some of the chapters. It’s a very synthetic book, in the sense that it covers many authors and many people’s ideas. It’s at a high level that tries to synthesize a whole lot of people’s thinking around disruption, and it also tries to deliver it in what I’d say are bite-size chunks. Is that a fair description of it?

Roger Spitz: That’s a generous description. We tried to make it bite-size. Invariably, there’s a lot going on. There are many chapters and many themes, but what we tried to do, and put a huge effort into, is that whether you take a specific slide (we have more than 500 proprietary slides, visuals, and illustrations), two pages on a subsection, or one particular chapter, it’s self-contained and standalone. So we hope that people get something out of it. We try to focus on bullet points. We have margin icons if there’s a definition or key insight, and we have examples. And indeed we think it’s as broad as there’s ever been. You’re going to be laughable, and under the scrutiny of the judgmental world out there, for calling something a definitive guide. But we haven’t seen something as broad in terms of really looking at systemic change in the broadest possible sense and connecting the dots in so many ways. The reception has been very good so far. Now, is that just the people who like you-

Peter Hayward: You send it to people who like you.

Roger Spitz: Exactly.

People who are tuned in to these topics, either like me, or are tuned in to these topics, or are being generous, or what have you. But so far it’s been well received. And genuinely, the team that contributed to this, we feel that these are topics which are extremely important, and that they lend themselves to a practical guidebook that goes beyond the narrow definitions of disruption, or just focusing on futures and foresight and practices, and really extends that to the different aspects. I think it helps to have Lidia, who has such a complementary background to mine, with science fiction and humanities. I myself spent 20 years advising boards, CEOs, and venture capital funds, and I think that doesn’t hurt for understanding senior leadership decision making. And some of the people who got involved have been in Special Ops in Israel; they understand a thing or two about disruption. People in the DoD. Nae [Hayakawa], who’s amazing and who’s extremely focused on Zen Buddhism and Eastern Philosophy. And we really tried to avoid the trap of: oh, you write this chapter, I write this chapter. It’s really hundreds and hundreds of hours of discussions, 20-plus years of experience, and whatever’s relevant on a given topic is addressed in that particular place.

It’s not like this is Nae’s chapter, this is Lidia’s chapter. We think it connects the dots in a way that’s not conventional, and so far people seem to have appreciated that aspect. But listen, time will tell. Hopefully it contributes helpful tools for conditions in which even those who are well versed are probably not as comfortable as we should be, in terms of complexity and unpredictability.

Peter Hayward: I’m enthusiastic to get into some of the stuff, because it is such a rich field and, as you said, it’s a field that I’m very interested in. So I might start digging into it. There are a couple of things I always want to talk about, because I’m interested in them and I’m interested in how you take them on. The first one you actually just touched on, Roger: this notion of getting comfortable or thriving in disruption. And it’s an odd notion, because in some respects disruption is always around us. I’m interested in your notion of the signal strength of disruption, to say to us that perhaps what we are doing in our lives or in our businesses is not actually working. So rather than living with disruption, isn’t disruption in some situations also a signal to say: because you are getting disrupted, maybe you should ask again what you are doing? Rather than just coping with it or thriving with it, you actually take it as a signal to ask, what am I doing?

Roger Spitz: Yeah, I couldn’t agree more. At the end of the day, everything boils down to your question, right? What am I doing? Which in a sense is: I’m human and I make decisions and choices every day, and the “what am I doing” is an outcome of those decisions. And maybe just to reframe how we use disruption: we define it as what we call Disruption 3.0. For Disruption 1.0 we take Joseph Schumpeter, and we think about Creative Destruction, something slightly more economic or macro or institutional (e.g. after World War Two). Certain regions had to reconstruct and rebuild, and Schumpeter goes to great lengths to explain how that is an element of renewal, hence the creative destruction concept and term he uses. And that obviously brings change. Some of it is good and effective, some is negative, and some is creative and brings improvements to health and many different aspects.

And then Disruption 2.0 is what most of the world thinks of when you hear Disruption: it’s Clayton Christensen, it’s Disruptive Innovation, from his work in The Innovator’s Dilemma. And that looks at a very specific recipe for what may constitute disruption or not. This is often a company doing something different, and especially targeting a different client or customer base from the incumbent, and then, over time, competing. So that’s Netflix. Interestingly, it’s a recipe which is quite specific, so it’s often associated with new companies or startups or product innovation and product development, and there are certain cases where it will apply and certain ones where it will not. So it’s isolated and discrete and replicable and definable. And so Clayton Christensen, before he passed away, in what I think was the 2015 HBR paper, revisited that with his team and said: okay, Uber, for instance, is not disruptive innovation, because although it’s slightly different and you have an app, you’re actually still pretty much providing a car that drives people, which is like a taxi, and you’re competing just on price. And actually you’re targeting precisely the same customer base as taxis. So that, for him, is not Disruptive Innovation, whereas Netflix is, because at the time it went for people who were more arty, et cetera.

Now we use Disruption 3.0 in terms of Systemic Disruption. In other words, it’s not an isolated or discrete event that you can separate from anything else. It’s not necessarily related to innovation; it’s systemic. In other words, change is constant in a sense. It has to be a meaningful change for it to be Disruptive, but it affects, it spills over, it cascades, it has the next-order implications all of us futurists think about. It connects in ways that are not expected. It’s in complex systems, so you can’t necessarily anticipate it. It’s often non-linear. So you look at the features of complexity and the systemic nature of disruption, and so that’s the reframing.

Then to answer your question, and I completely agree, which is the point around what am I doing wrong or what should I be doing: it all boils down to the question of freedom, agency and choice. That applies to the preparation for change, whatever that change might be. And again, disruption is not necessarily positive or negative. We use it in a neutral way, actually. You want to avoid existential risks and other issues, and you want to benefit from opportunities, and one person’s positive disruption may be another person’s negative disruption, et cetera. So we use it in a neutral way, whereby fundamentally, the way we think about disruption is that it’s a constant, but it does create uncertainty and unpredictability.

But we see those as positive, and hence the agency and the existential philosophy. We see it as positive because if you didn’t have unpredictability and uncertainty and constant change, it would mean that basically things are predetermined. And this for me is very important, because you want to have Agency. So unpredictability or disruption or change, whatever you want to call it, simply means that there’s something yet to be created or invented that hasn’t been, and that includes your beingness. If you think of the existential philosophers, like Jean-Paul Sartre in Existentialism is a Humanism, we exist and then create our Essence. So for me, all we are saying is that disruption, or uncertainty, or unpredictability, or the way you frame the question, which is what should we be doing or not, really boils down to that process of Freedom, Agency and Choice. Of inventing and building, despite not having that certainty or not knowing what’s going to happen or what the impact will be. It’s in that sense that we try to take a different perspective on it. So yes, we’re positive, and that’s why Thriving on Disruption is the title, because, number one, it’s not just defensive and adapting to change.

You’re thinking about how to prepare, being anticipatory. So that’s our foresight work and everything, the way we are cabled as futurists. But it’s also how you react when things happen, and what your foundations are. How fragile or antifragile are your foundations when things happen? And then, how do you react in response? What’s your agility in a complex world to constantly shift between your aspirations and the decision making here and now? Because the only thing that matters is exactly your point: what could or should I be doing? And so it’s that constant play between anticipatory, antifragility, and agility, which we call our AAA, which allows you to not just adapt or be resilient to change, but actually thrive on it, because that is precisely what allows you to have Agency, Freedom and Choice, in our book.

Peter Hayward: You’ve used the phrase from Nassim Nicholas Taleb, Antifragile, and I just want to drill into it. For people who haven’t read Antifragile, he made the point that there are certain things that disrupt us and make us stronger. He uses the metaphor of exercising with weights: we put our physical body under stress and the body becomes stronger through the stress, not weaker. And so for him, the opposite of fragile was antifragile. As you evoke it for disruption, if in fact the way that we approach and understand and respond to disruption is antifragile, then it must be making us stronger, more able to deal with disruption. So how does this approach actually do that? Rather than just making us robust, so that we can resist change, how does it make us stronger and better because of the disruption?

Roger Spitz: Sure. The one thing I would say in relation to that observation and question is that, in our book, resilience, a strong degree of resilience and adaptability, is a prerequisite for Antifragility. In other words, it’s not a substitute; it builds on top of it. It’s not: oh, don’t worry about being resilient, just be antifragile. No. To be Antifragile, you need to be very resilient and very adaptable. So that’s the starting point; we are not taking anything away from that. Then, to your question, what makes the difference in getting the additional asymmetrical upside when things hit really comes down to a number of features.

The first one is how you anticipate and how you constantly think about agency and the choices. You can have something that’s very resilient, that covers a broad set of possibilities, but it’s a little bit passive versus active. So you don’t necessarily have a huge amount of agency. You have agency to prepare, to be adequately resilient, but you might be less constantly looking for new opportunities, for instance. So, concrete examples, and listen, not to make a fuss about Silicon Valley, because I think there are a lot of non-eventful things that happen here and innovation is not the prerogative of a particular region of the world. But whether it’s the most innovative part of India, whether it’s Israel, wherever innovation happens, you’ll find certain features of constant experimenting and testing and trying something new and being willing to fail. And you’re reframing constantly with first principles, or challenging assumptions around what is possible versus impossible. All these features go beyond just resilience, because you’re building optionality. You might stumble upon something interesting. You might create value from the change by hacking things. With resilience, you just don’t want something to break.

So let’s take the video conference business. The pandemic hit, and suddenly overnight those video conferencing platforms, whether it’s Zoom, whether it’s Google Meet, were needed to deal with 10, 20, 100 times what they had. Some of them were resilient enough, but Zoom was the most Antifragile. Zoom was the one that had so much buffer that it could take on an incredibly greater proportion of demand and bandwidth than it had thought. Yes, it had its issues with privacy or security or whatever early on. But basically it had it in its DNA and did a number of things which meant that when the pandemic hit, not just the scaling and having buffer and other things, but everything it had thought about allowed it to instantaneously multiply its usage by God knows how many hundreds and become the number one. Better than a Google, better than everyone. Not to mention the Telecom operators, who were not even resilient. They didn’t even have platforms for that.

Peter Hayward: It struck me, Roger, that one of our standard responses to disruption, and you’ve talked about it and your book covers it, is this need to see and to use Agency. But I’m actually struck, because you introduce Buddhist philosophy to the book, by this notion of Acceptance. That there is actually a choice to simply accept the disruption rather than trying to either get around it or do something about it. To simply say, it is, and then that becomes the choice that creates the freedom.

Roger Spitz: If you look at the different concepts of Zen Buddhism, and, not to be exhaustive and not to pretend that I’m an expert, but if you look at the different aspects of why we bring that in, in the context of agency and how we think about it for disruption, and in particular for thriving on disruption, there are a number of elements. First of all, one very important one is Shoshin, which is beginner’s mind. Here it’s like first principles; it’s really helpful in terms of getting rid of the bad habit of relying on assumptions. So that’s Shoshin. The point you’re mentioning is more about accepting Transience, accepting that nothing is permanent and therefore you shouldn’t attach yourself to anything, because change is a constant. That’s a concept in Zen Buddhism called Mujo, and for those who read Yuval Harari or Osamu Tezuka or many others, there’s a very significant focus on Mujo. Then you have concepts such as Wabi Sabi, which is Acceptance, again acceptance of Transience and Imperfection. Wabi Sabi, in a way, is the decaying barn: it’s beautiful, it’s elegantly weathered. There might be some solitude or decay in it, but it’s imperfect; it’s not trying to have the perfection of Western philosophy or Western habits, where everything is perfect. Which again ties into the concept of Kintsugi pottery, where you’re embracing flaws and imperfections, there’s constant trial and error, and failure is almost critical to novelty. And just maybe to wrap up, if I may, we actually build this into what we call the six i’s, which for us are Intuition, Inspiration, Imagination, Improvisation, Invention, and Impossible. Zen Buddhism and Eastern philosophy really capture all of those. Not only that, we also have a case study with science fiction, and one using Israel, for those six i’s.

But we feel that the idea of these six i’s, if you use them and understand what they mean and have practical tips around how to frame these ideas, is the edge and the difference between adapting, being resilient, and not being destroyed by Disruption or Systemic Disruption, versus actually thriving in change. Because you can apply Intuition and Improvisation, and think about the Possible versus taking for granted what people tell you is Impossible, thinking that you can achieve it. And again, we’re not just looking at the Silicon Valley meaning; you look at the Impossible as finding mitigations for climate challenges, finding cures for cancer, et cetera. So if you think about these things, it’s actually pretty good that the world is not predetermined, that there is change and that you can leverage it. And it’s in that respect that we really try our best to make it very tangible, very positive, as opposed to just feeling, okay, shit happens, how do we protect ourselves from that and build a stronger house.

Peter Hayward: Thanks, Roger. Another area I just want to scratch at: I’m interested in how you frame the notion of existential, civilizational disruption. What about those disruptions that potentially go to the level of our being, our relationship to one another on a planet? Do existential disruptions exist at 3.0? And to those, does our response have to be more than just an antifragile acceptance, but actually a need to take direct action, political action even, to arrest or change or do something about them?

Roger Spitz: So the short answer is yes, there are major existential risks. And it is a very significant focus of ours in two respects, but present throughout the book, in probably every single chapter. We approach it in two specific ways. One is there’s a dedicated chapter which we call Existential Risks as the Ultimate Disruption. And so we have probably 60, 70 pages going through very much the topics covered by organizations which focus on Existential Risks, so that includes the Centre for the Study of Existential Risk at the University of Cambridge, the Center for Humane Technology, the Future of Life Institute and many other organizations that think about existential risk, which covers anything that threatens Humanity itself, or the ability for humans to be sustainable without major deterioration. Because worse than the end of humanity is probably that we’re so degraded and damaged, or what have you, that life is absolutely atrocious.

So we do cover existential work very significantly. And then, to your point, there’s a different nature of disruption for those types of risks. What we do is, in our drivers of disruption, we have a category which we call Irreversibility, and those drivers of disruption are different by nature, because what happens with irreversibility is that they could become irreversible, at which point any mitigation, any building of resilience, any adaptability, any governance or legislation could arrive too late. And the three specific drivers of disruption we put in that category of Irreversibility are Climate, AI, and Technology. We separate Technology from AI for reasons we can talk about a bit later, around the impact on decision making. But those three are drivers of disruption which we categorize as irreversible. It doesn’t mean that there aren’t, unfortunately, many other existential risks, such as nuclear, but either they fall into those three categories, such as CRISPR or gene editing, which you could argue is a form of technology or biotechnology, or they are ad hoc events, which indeed are existential but are not necessarily a driver of disruption.

So take nuclear. It’s an existential risk. But day to day, is it a driver of disruption? It’s more binary, unfortunately. Whereas AI, technology, and climate are constant, every day, affecting everybody, every business, every country in the world. So that’s our distinction in how to think about it. I’ve had hundreds of hours of discussions with people who focus on these. And we even believe that there’s a need for a Chief Existential Officer, a kind of CEO2, who thinks about that. And it comes back to Antifragile, because with Antifragile we looked at the positive side: if things go well, then we can benefit more than just being resilient. But on the negative side, the issue with Antifragility and with our systems is that if things don’t go okay in certain situations, they can literally be existential, either existential at the level of an individual, country or company, but existential also for humanity. And so the Chief Existential Officer obviously has less say on policy for nuclear or what have you, but has a direct say, day to day, on the existential nature of the opportunities for its stakeholders and entities.

Peter Hayward: This notion of artificial intelligence, or artificial general intelligence and artificial super intelligence. I tend to follow other bloggers and groups that are talking about it, and I must say that I see and hear a lot of discussion between programmers, engineers, and ethicists talking about the existential risks and how they could respond to them, or prevent them, or find workarounds or governance. Generally I don’t find the futurist community broadly and actively engaged in understanding it or even discussing it. Do you think, as a Futures community, we are cognizant and fluent enough in understanding AGI and even Artificial Super Intelligence, and what might people in our community do to become more fluent?

Roger Spitz: Yeah, that’s really a pretty existential topic, no doubt about that. At my end, as you say, if you take purely the amount of time I spend on activities, how long I’ve been, as you say, outside, not as a futurist, and how recently I’ve become a futurist, five, six years at best and professionalized only a few years, much of my time, historically and even today, is spent on these topics which relate to AI. So day to day I write a lot about the future of decision making, which involves some of the things we’ve talked about, existential risk and AI. I spend time with John Havens at IEEE; we have a workshop around disclosure and ethics and a number of things. I spend time with the Stanford Institute for Human-Centered Artificial Intelligence, HAI. I’m on the Climate Intelligence Council of an AI startup, and I’m also a partner in a venture capital fund which invests in deep tech, including AI, in Israel and elsewhere, looking at future mobility and very capable technologies.

So that’s my bias. Now, having said that, if you then take the broader futurist ecosystem, or what I know of it, I’m not claiming to have a complete 360 view of what everybody does, but my sense is that you probably have two or three clusters where people are extremely well tuned in to these topics. To your point, maybe outside of these the focus is not as deep on some of the important themes? I don’t know. I can’t speak for how tuned in generalist futurists who don’t focus specifically on these activities are. But what I do know is that you have a reasonable number of futurists who are very tuned in to these. So it’s either going to be futurists like Thomas Frey, who I’m doing a podcast with in a few weeks’ time, who runs the Futurati podcast, futurists like Rohit Talwar at Fast Future, or Richard Yonck, who wrote Future Minds. Those are futurists who are quite active, who publish and write quite a bit, and who have a strong technology focus. Again, I’m not saying that they are technologists; they definitely have a futurist hat and no doubt cover everything broadly, 360, but they have a particular interest and affinity, and write and talk a lot about these topics.

You then have a second cluster. And these are not distinct, of course; like all futurists, we cover many things, and it’s all connected and multidisciplinary. But just to put them in flavors, if it helps people follow up if they’re interested: you then have the existential topics, as we talked about. For instance, Jerome Glenn from the Millennium Project was driving an initiative with the UN to have an Office of Strategic Threats. So here it comes back to what we were talking about, and we refer to it in our existential chapter. In fact, I was one of the 200 signatories to establish a UN Office of Strategic Threats, because the observation of Jerome and many people is that there was no single point of collaboration in the UN that systemically addresses such long-term threats to human survival. Now, AGI, Artificial General Intelligence, or ASI, Artificial Super Intelligence, is not the only focus of Jerome Glenn or that initiative, but it’s certainly very present in it. Then you have people like Allison Duettmann at the Foresight Institute, and in fact I’ll be with her, and a few hundred other people no doubt, at the Foresight Institute’s big event in early December, and there again there’s a strong focus on these topics. So that’s the Existential bucket.

And I’d say there’s a third bucket, which is very interesting as well, which Lidia [Zuin] is also very tuned into, probably more than me, which is the Transhumanist angle. So people like David Wood, people like José Luis Cordeiro, who focuses on the Singularity, people like Ben Goertzel on Decentralized AGI and the like. And then of course you have a million other people and celebrities, the Ray Kurzweils, et cetera. But if you take some of the futurist names that you and I interact with, probably many of whom you’ve had on FuturePod, I would say that often the focus comes through one of those three: Futurists with a Technology focus, a strong Existential focus, or a Transhumanist hat. The second aspect, if I may, is simply for people who are interested in these topics, whether they’re very well versed or not, but just want to pick up a bit more: there are very good organizations and people to follow in addition to the individuals I mentioned. And I actually categorize AI opinions in three different boxes, and they each have different individuals focusing on them.

The first is what I call DystopiA.I., which is those who acknowledge that AI is a serious existential threat for humanity. So we talked about Jerome at the Millennium Project. Here you have the Center for Human-Compatible AI, the Center for Humane Technology, the Future of Life Institute, the Stanford Institute for Human-Centered Artificial Intelligence, and of course people like Kay Firth-Butterfield at the World Economic Forum or John Havens at IEEE. I wouldn’t say they’re Dystopian, but I would say they acknowledge the Existential Risk and are careful with it. The most Dystopian, for the anecdote, for those interested, are people like Nick Bostrom, the author of Superintelligence, and these are very important works; I think that is actually one of Bill Gates’ favorite books. Stephen Hawking, Elon Musk, Professor Stuart Russell, Max Tegmark, who wrote Life 3.0. And I understand the debate. Everybody’s going to take these characters with a pinch of salt. Elon Musk, okay, he says it’s the scariest thing, but he’s busy developing the same aspects. Bill Gates created Microsoft, the second or third most valuable company in the world, and is basically a very strong proponent. So it’s all very well to say they’re acknowledging the serious risk, but I think it’s still a start to acknowledge that versus ignoring it.

The middle bucket is what I call PragA.I.matic, as in pragmatic in relation to AI. And this is probably most technologists and most technology companies. The thesis is: AI has benefits, you need safeguards, but we understand that it may not be that obvious to have anticipatory governance that anticipates all possible outcomes, and therefore we are pragmatic. So that’s people to follow like Oren Etzioni, the CEO of the Allen Institute for AI, and John Havens, who I mentioned earlier at IEEE. He has a very good book called Heartificial Intelligence, where he says humanity’s options range between Dystopia, robot dominance, and Utopia, tech-enhanced natural abilities, and it’s for humanity to figure out where it stands. People like Andrew Ng, the co-founder of Google Brain and former Chief Scientist of Baidu, or Douglas Rushkoff, who wrote Team Human, or James Lovelock, who wrote Novacene. These, for me, are characters who publish a lot, who represent important organizations, who are pragmatic, acknowledging the need for safeguards, but pretty much in the middle of the road: basically, humanity will determine where things go.

And then you have the third category, which I find the most interesting, though it doesn’t mean that I adhere to that perspective. But it’s interesting because it fits nicely with Jim Dator’s Third Law: if you write something about the future, it should be surprising. It’s what I call UtopiA.I., Utopia but with AI, and that is the other extreme from DystopiA.I.: AI has strong benefits which clearly outweigh any imaginable risks, and they even point towards the Singularity. The most famous is Ray Kurzweil, who, again, coincidentally (or not), is head of Machine Learning and AI at Google. He’s one of the big proponents of Transhumanism, and you can see the link between Transhumanism and these topics. Other proponents are people like Max More, Natasha Vita-More, Martine Rothblatt. You then have Sam Altman, the former president of Y Combinator, who runs OpenAI. Peter Diamandis, Demis Hassabis, who’s a co-founder of DeepMind. Even Zuckerberg, to an extent. So these are: yeah, I get it, but really the benefits outweigh the risks so much that it’s okay, we’ll figure it out.

Peter Hayward: I’m going to close on this one. I’m going to ask you to answer your own questions. Do you think that humans are stupid enough to rely on machines? Do you think there are existential risks in this technology? And do you believe it could be governed by political and legal processes, or do we just have to trust market forces and their animal instincts?

Roger Spitz: Yeah, those are quite a few fundamental considerations, right? I’ll give my small, specific perspective for what it’s worth, but each of these is a very intricate and consequential aspect. Let me just share an anecdote before diving in quickly, which is around whether AI can be, or is, sentient, and which relates to that and to human decision making. A person I know called Blake Lemoine, who used to be an engineer at Google, is very well known because he was basically one of the engineers testing one of Google’s chatbots, called LaMDA, Language Model for Dialogue Applications, and he felt that the AI system was doing such a good job of mimicking human conversation that he actually believed it was sentient. He went public with this and got fired. And he is a very clever guy. Anecdotally, Google is Google, but as we also mentioned earlier, Kurzweil, who believes that AI will definitely develop and overtake human intelligence, supervises all of Google’s machine learning activities. So that’s quite interesting. But basically it’s a real question of what AI is, and then how humans react to it. So the way I like to address this, and maybe I might be accused of being a little bit reckless or narrow-minded, is that I actually consider that the questions around the Singularity per se, and whether and when computers will become more clever than humanity, are not necessarily the most important, or only, questions today. My real concern is: what are humans doing to upgrade their abilities and their capabilities for decision making in our complex world? How do they deal with complexity? How do humans understand that our educational systems, our governance systems, and the alignment (or lack thereof) with incentives are basically not resilient?

And that question is important because, instead of just focusing on AI only, we are thinking: wait a minute, AI is going to continue on its path. We can interfere, and we’ll talk about that in a second and how it looks, but ultimately it’s going to continue to evolve as technology does. We don’t necessarily control it as much as we think, and we don’t necessarily have as much understanding as we would like. What we do have an understanding of, to a degree, is what incentives determine what outcomes, and how educational systems are maybe not adapted to our complex world, and how we should have better tools and capabilities as humans for being comfortable with complexity and uncertainty. And that’s why, to the point of thriving on disruption versus being knocked over by disruption, philosophy and agency are so important. Because the AI question is that we are now in what we call at the Institute Techistentialism. It’s no longer the Existentialism of the 20th century, where you ask questions around Freedom, Agency, and Choice and humans have exclusivity on decision making. Today, existential questions involve sharing decision making with technology, and therefore what we call Techistentialism is that we can’t separate the nature of human beings’ existence from decision making in our technological world. And that means that we really, seriously need to upgrade our capabilities. That leads me to trying to answer more precisely whether humans can be stupid enough to rely on machines. I would say there are two or three different aspects to that.

The first one is that if humans are not upgrading their capabilities in terms of many of the things we’ve discussed, de facto we end up with data and information systems that are too complex for humans to understand, and we are therefore de facto delegating more and more decision making to machines. By doing so, we are actually relying more and more on machines and being stupid enough to rely on them. There’s a very important center called the Markkula Center for Applied Ethics. They describe this as moral de-skilling: you’re losing the skill of making moral decisions because of lack of experience or practice, because you are developing these AI technologies to make decisions for you. So at the first level of “are we stupid enough”, yes, we are stupid enough, because the proof is that we haven’t updated incentives to be more aligned. We haven’t updated educational systems and governance systems to be more aligned, to be able to do a better job with complex and uncertain environments, and therefore we are delegating more and more to machines. So that’s stupidity number one.

The second level of stupidity is that these systems we’re developing are becoming incomprehensible to a degree, and therefore they’re black boxes. And therefore, even if we upgrade our capabilities, and I’m not talking about Transhumanism, I’m just talking about better education at school and having the right incentives for governance, even if we do all that, there are still interconnections and outcomes and complex systems where robots interact with supercomputers, with advanced algorithms which control critical infrastructures, including power plants and nuclear and semi-autonomous lethal weapons and all kinds of things, which are incomprehensible. And the more reliable they are for a certain period of time, this is another big point of Nassim Taleb, the less humans are used to touching failure or errors, or knowing how to decide, and therefore that overreliance is again pushing us towards Super Stupidity. So the question for me is not so much Superintelligence; the biggest worry is Super Stupidity. In the sense of: are humans stupid enough to not upgrade their understanding of the world and their responses to different situations? To develop systems which they don’t necessarily comprehend, and to give those systems autonomy and decision making capability to the detriment of our own ability to be well versed in doing so? That is really the key question.

To wrap up, and I’ll try to do it succinctly, just on your question of whether it can be existential and what regulation should do, or what have you: yes. Clearly there’s a long-term trajectory which could see AI on a path to developing humanlike cognitive capabilities. There are recursive self-improvements; there are all kinds of neural network breakthroughs heading in that direction. Don’t get me wrong, I am not saying that machines can have consciousness, that machines can have feelings, that we understand how machines and the brain work, or that machines can replicate the brain. What I’m saying is that when machines process certain things and come out with outcomes which are effectively decisions, even if the machine doesn’t understand them, the outcome is the same whether or not there’s an understanding of the brain, and therefore that path can become an existential risk.

Whether it’s the cliché science fiction stereotype of the AI system wanting to destroy humans, or whether it’s simply that, through complexity and not understanding those black boxes, at some point communication stops, and therefore you don’t have heating, and therefore you don’t have energy supplies, and therefore hospitals can’t function, and therefore you don’t have supply chains, and therefore you can’t travel anymore, and therefore you can’t deliver anything, all those knock-on effects can basically contribute to big blackouts or all kinds of other things which have an existential nature. So there are many variations on a theme beyond just the science fiction stereotype trope of… Skynet… exactly. Then the remedies, I think, are not so obvious.

There’s a big debate, we don’t have time here and it’s not the objective here, but there’s a lot of very interesting work being done by the Cambridge Centre for the Study of Existential Risk around what they call Artificial Canaries. Do you have early warning systems for anticipatory governance of AI? What should they be looking out for? I think there’s an element whereby a lot of this is incomprehensible and unpredictable, so we shouldn’t believe that we can have full anticipatory governance, but that doesn’t mean that we shouldn’t attempt some, closely monitor, and ensure the incentives are aligned to go towards the sustainability of humanity versus its own destruction. And then you can have milestones. Are certain milestones being achieved which are closer to human-level intelligence? Those should be monitored carefully. Are certain developments in AI becoming so capable that they’re really threatening widespread labor automation at scale, beyond what’s already happening? Are certain systems capable of threatening the safety of critical infrastructure? So you could have certain aspects which require particular attention. But for sure, if you already align the incentives of leadership teams and tech companies, if you’re already monitoring it, if we already ask ourselves the right questions, without necessarily anticipating in advance every aspect of where technology might evolve, we’ll probably be in a better spot than we are today with a hundred percent laissez-faire.

Peter Hayward: That’s a great answer, Roger. That was an impossible question and you did a fantastic job in answering it. Congratulations on the book, to both you and Lidia and the team at the Disruptive Futures Institute. I wish you all the best with both the book and the further promulgation of this thinking. On behalf of the FuturePod community, thanks for taking some time out for our chat.

Roger Spitz: Thanks so much, Peter. It’s always such a delight, such thoughtful engagement, and I genuinely say it: when I look at all the podcasts and the people you have on and the topics that are covered, it is just so rich. So I really hope that we continue to spread the word, and make sure you reach as many people as you wish to.

Peter Hayward: Thanks, Roger.

My guest today was Roger Spitz. You’ll find more details about the things that Roger spoke about in the show notes on the website, including links to the Disruptive Futures Institute and the Guidebook, plus an extensive set of links to all the other people and groups that he mentioned as well. Certainly a great resource for anyone interested in these topics, and who wouldn’t be interested in learning more about them? I hope you enjoyed today’s stimulating and thought-provoking conversation. FuturePod is a not-for-profit venture. We exist through the generosity of our supporters. If you’d like to support the Pod, please check out our Patreon on the website. I’m Peter Hayward, saying goodbye for now.