EP 194 - Why Foresight Matters! - John Smart

John Smart, the CEO of Foresight U, joins us to chat about his book ‘An Introduction to Foresight’. Among many things he expands on his three mottos for investing in foresight and doing good work sustainably.

Interviewed by: Peter Hayward

John’s Links

Transcript

Peter Hayward: How exactly do you make all this work? How do you study the science, listen to your muse, do the scanning, have a vision, and then stick the landing?

John Smart: I have the privilege of working with a lot of mid-career leaders, and we boil all this down to the most actionable stuff. You're always in a protection mode, a creation mode, or an adaptation mode. Toffler figured this out in 1970, and then Roy Amara at IFTF expanded it.

This is what he called the Three Ps model. He said at the end of Future Shock that foresight is fundamentally about this: there's an art of the future, there's a science of the future, and there's a politics of the future. And the politics is the fusion that sits on top of art and science. And the art and science are the possible and the probable, fighting each other, with different people loving different corners of the bottom of that triangle.

And there's people at the top who love the preferable. They want to create the preferred future. They're going to, by force of will, make that beautiful vision happen. And yet nature uses all three of those corners.

Peter Hayward: That is my guest today on FuturePod, John Smart, the CEO of Foresight U and the author of An Introduction to Foresight.

Peter Hayward: Welcome to FuturePod, John.

John Smart: It is an honor to be here, Peter. I've always wanted to be on this pod and I have gone back through so many of the past podcasts and learned so much about our community, our wonderful community. Thank you so much for having me on.

Peter Hayward: An overdue invitation, John.

Let's start with the first question, the John Smart question. How did you get involved in the futures and foresight community?

John Smart: I guess my origin story is that since I was a kid, I've always been interested in thinking about what's next, what's coming. It was a fun thing. My parents were indulgent enough to let me play games with them, talking about where things might be going. I was bored one day when I was five or six, and I actively remember starting that habit at that point.

And it's always been a passion for me to have conversations about where things are going. And in high school, I started thinking about big-picture futures and this concept of accelerating change. I saw a version of Sagan's Cosmic Calendar, where you put all the interesting events on a calendar year: 13 billion years mapped onto 12 months.

And you see this crazy acceleration the closer you get to the present time. And back then everyone was arguing: is this an artifact, or is this real? And if so, what is it, and what protects it? Because it's so wonderfully smooth. Gerard Piel wrote this book in '72, The Acceleration of History.

And he talks about the emergence of human culture and civilization and science and technology, and you just see this smooth network-level acceleration. Even when you have these huge local catastrophes, like the fall of Rome, a thousand years of shrinking of large-scale engineering and technology in the West, the whole network was still accelerating. When you look closer, all those scrolls migrated to the Middle East. And that's when you saw the acceleration of arts and medicine and architecture and science in the Middle East, at the same time that city sizes were shrinking and everything was being repressed technologically and scientifically in Europe.

And so this network thing is always happening. There's a network effect that transcends individual people, individual cultures, individual organizations, they each can win or lose. But at the network level, everyone's always watching and learning from everybody else. And there's this redundancy, this beautiful redundancy, built into the system.

And so you get this kind of smoothness of, let's call it, the network rules, right? The network rules and the network adaptiveness. And later on I learned about the six great extinctions in evolutionary history. Virtually none of the genetic complexity disappeared: certainly none of the developmental genes, and most of the evolutionary genes were redundantly stored in these other creatures.

So you actually see, when these catastrophes happened, something called hormesis. Nick Taleb calls it antifragility in his book Antifragile. The catastrophe often accelerates new network complexity and adaptiveness. Mammals had been around forever, but they didn't jump into this incredible complexity until the KT meteorite hit 65 million years ago and the asteroid wiped out a large percentage of the terrestrial land animals.

They were just more resilient to that catastrophe. And then within 10 million years after that, you see this incredible adaptive radiation, this new complexity of form and function. And this is how networks keep accelerating change happening.

So there's some really interesting positive story here, a hidden positive story. Some people call this universal development. We all know universal evolution, right? This is the idea that Darwin's model of speciation and selection and random experimentation, trial and error, is going to create more diversity over time.

More creativity, more uniqueness, more adaptive niches. But at the same time, there's something happening at the network level, one level below these individuals and species, that's creating more resilience, more general adaptiveness in a subset of individuals, of the network actors. And this is where the acceleration to these higher levels of cosmic complexity, or Earth complexity in this case, happens, because there's these protective networks.

And in human culture, it's informational networks, it's technology networks, it's knowledge networks. A guy named Louis Dartnell wrote a book called The Knowledge recently arguing that if there was a major catastrophe that didn't wipe out all of us, that we would rapidly rebuild because the knowledge is so incredibly resiliently distributed across our species.

So there's these hidden protective effects, if you will. I've always been really curious about that. And I went off and started a business and then got a good exit from that. I did my first undergrad in business at Berkeley, and then I had to figure out what my next thing was going to be.

And, I had never taken really interesting biology courses. I took them in high school, but not in college. And in my senior year at Cal, I took this course on molecular biology for nonmajors, and it just blew me away because that was the first time that I realized that was really the central science, right?

In my father's era, they thought chemistry was the central science. Now we actually know, no, it's the point where molecules become alive. That's the central science. And the central mystery is how does a single fertilized egg make us? That is the most amazing thing.

How do chemicals organize to do this? And, the brain is the most complex developed structure in the known universe. But that pales in comparison, in magnificence, if you will, to this little thing. A fertilized egg, a network of specially tuned genes, that can make a body, a brain, an embodied ecosystem of these cooperative and competing actors. So I went back and I got a second undergrad in molecular biology, again, because my parents were indulgent.

You're not supposed to get two bachelor's degrees, right? So I went back to another university, this time it was UC San Diego. That's when I really got to meet all these interesting evolutionary theorists and systems theorists. Now, systems theory is a subset of philosophy that hopes to become validated science in the future.

It's not today. It's a subset of what used to be called natural philosophy, right? And I have a single book to recommend if anyone wants to jump into this journey of what science and systems theory say today about where complexity comes from. It came out last year, and it's so fantastic because it summarizes everything in one accessible book.

It's called The Romance of Reality, by Bobby Azarian, and the subtitle is How the Universe Organizes Itself to Create Life, Consciousness, and Cosmic Complexity. And this is really hard science and systems theory. It's this beautiful blend of speculation and what we know.

And his basic idea, which I came to independently, and which the people in my Evo Devo Universe research community, 120 scholars, also came to independently, is that an autopoietic system is the simplest way of describing where complexity comes from. Everything interesting in our universe is what's called autopoietic, which means it has developmental dynamics.

So it has to replicate in a life cycle to create an additional version of itself in the future. And that's a predictable process. Development's predictable, especially if you've seen one previous cycle. You can say what's going to come next, even if you can't model it mathematically, right? See an acorn, and you know many things about what that oak tree is going to look like.

But then there's all these evolutionary components. You can't predict where the leaves or the branches on that tree are going to be. You can't predict how the daughter and son offspring are going to be different. They're going to be different in all these evolutionary ways.

They're going to have their own choices, their own beliefs, their own behaviors. And yet they're going to have all these predictable features. So predictable that they can talk to each other using a common language. Think of the amazing predictability developmentally, psychologically, biologically, in a complex structure like us.

This is called evolutionary development, by the way: evolution and development are the two most fundamental ways to look at change and emergence. And I got to that just because I had the good fortune of being able to keep following my passions and trying to understand what these fundamental features of complex systems are. Then I went to med school, because my previous company had been a science tutoring company, so I aced the MCAT. I got into five schools. And then I realized when I was in it, no, this is not for me. I don't want to be a physician.

If you leave medicine without finishing the program, you get a master's equivalency in physiology and medicine. So I loved my first two years, where I learned all these things about the human body, but then, in the third year, doing the rotations, it was just too much.

And I got old enough at that point to think, life is getting short, what do you really want to do? And what I really wanted to do was wrestle with these questions and the applicability of this to our lives here and now, and that's what led me back to foresight.

I'd been reading The Futurist just because I'd loved all of the stories being traded. I learned about the Houston program and jumped into the master's program. I got a second master's in future studies.

Now it's called strategic foresight, and I brought my complexity and acceleration perspective to that. And it was fun having conversations and seeing all the different ways people look to and analyze the future, and having some wonderful mentors, mostly through WFS, going to the conferences at WFS.

I break up our field into five generations of futurists. I'm writing a little essay about that. For me, the first generation was 1940 to 1960, after World War II; that's when DARPA and the first think tanks were created. And then 1960 to 1980 was next, and then 1980 to 2000, then 2000 to 2020. And now we're four years into the fifth generation of futurists.

And some of us span that, but when I say a generation, I mean the two decades of their most vital work: where did they fit in that? It's fun to ask that question of yourself. For me, my most vital work really started in 2003, when I started this Acceleration Studies Foundation to try and understand accelerating change, from that Sagan thing that I told you about, the Cosmic Calendar.

We did a bunch of conferences at Stanford, and we brought a bunch of wonderful people together to really look at this thing and say, is this predictable, right? What's going to come next if this acceleration continues? So obviously we got into AI and quantum computing and nanotechnology and all these things that we call resource-independent, because every new generation of these special things uses less space, time, energy, and matter, unlike biological systems.

So it's a very special kind of replicator that's diving below the resource constraints of biology. And yet, it's getting more and more of these biological features to it, particularly the new versions of the AI, with the neural networks that they have. And then there's some network thing protecting that, otherwise it would all blow up and fall apart.

There's some kind of network ethics, network rule sets that are emerging, right? Like positive-sum games everyone wants to play. And so I did a lot of my work in that space. And then starting around 2015, I started thinking about legacy, 'cause I'm old enough now: what am I going to leave behind, and what do I want to do in the field that hasn't been done?

This Foresight University came out of FERN, which was the Foresight Education and Research Network, with my co-founder Susan Fant, another futurist, from Alabama. And we decided, let's do a Foresight Careers Conference, which we did in DC in 2013: get a bunch of people together to think about how to become professionals in the field.

This is when APF was just in its winter; it hadn't quite figured out what its footing was, so we felt we needed to do something independently. And then we decided, let's write this book, let's write a good careers book, which turned into what we call The Foresight Guide. That's now a two-part thing: methods and models, which is Introduction to Foresight, and then futures, which we call Big Picture Futures. That's the second book; 80 percent of it's written now, and it's coming out maybe a couple of years from now. In our field, we divide things between foresight and futures.

And so that's my journey, with one additional thing. In 2010, from my neuroscience background in my med school classes, I got really interested in this question: is AI really becoming like us? And what really is biological intelligence?

And we started an organization to research that, called the Brain Preservation Foundation, and Aspirational Neuroscience. I started it with a neuroscientist named Ken Hayworth. And we have a community there. This is the craziest idea, if you will, Peter, that I've come across as a futurist.

I did a talk on it at WFS in 2010, and some people came up to me and said, John, this is so out there, and I said, yes it is, but I can't dismiss it. And here's what it is. It's this idea that we already have the capacity to preserve all the critical features that we know are involved in learning and memory in complex brains. And we've had that capacity for small bits of neural tissue for 50 years. And just recently, neuroscience has got the capacity to do it for whole mouse brains and human brains. You remember the Human Genome Project.

Now there's this mouse connectome project. And what they're trying to do is upload the entire connectome of a mouse into the computer. They've already got a fly up there in the computer. And what everyone's trying to do now is to extract complex memories from those connectomes.

That's where neuroscience is today. And my co-founder and I got a person to put up a prize for a million dollars that we give out over 10 years. And we give it out in the form of four $25,000 prizes. This is for the research that best proves or disproves this question.

We give these awards out at something called the Society for Neuroscience. This is the largest conference for neuroscientists; it meets internationally every year, with about 30,000 neuroscientists.

And we get about 200 neuroscientists coming to this satellite meeting where we give out these awards. We're not yet at the stage where you could pull out, like, a 3D spatial memory.

But I can tell you we're on the path. Nobel Prizes were given out, I think in 2014, for recognizing how grid and place cells work in an area of the brain called the hippocampus. So these are basically where the 3D model primitives are, in the hippocampus, which stores the last two days of memory. It's called working memory, which is huge.

That's why if you cram and you walk carefully to the test, you can spit out all this amazing information, but that special part of your brain gets wiped every night to make room for the next day's experience. And you store a subset of that into what's called long term memory in the cortex, right? And you stitch that all down into these proteins that survive for your entire life.

And that's the amazingness of the human brain. We're on a journey to being able to show that, for a relatively small amount of money, people could preserve their memories and possibly come back; but at least all their memories could come back, creating something called a human experienceome, right?

And that's just amazing, that science and technology are at that level, and we all have to decide: is that a valuable thing? I happen to think it is, because I think it's going to make human culture, the diversity of what we know, so much richer, just like the invention of writing did, and any other thing that increases cultural memory.

I went into that digression just to say that if you follow your passion, or you find that one thing you can't dismiss, it'll lead you into this kind of weird little cul-de-sac.

So I've got my futures hat, I've got my complexity hat, and then I've got this kind of hat on that's basically saying, this simulation world and biology are getting more and more tightly connected and the AIs that are coming, they're going to be highly similar and interdependent with the structures and algorithms of the human brain.

That model's called natural alignment: the idea that AI and natural intelligence just increasingly align, because evolution found it out first, and we're nothing like as smart as nature. And as soon as we try to look at how nature does things, suddenly we get these huge insights. And nature, I would argue, is an incredibly self-balancing system.

It was Robert Wright, in his book Nonzero in 2000, who argued that morality is a positive-sum game, democracy is a non-zero-sum game; so many games we play are. Even if somebody has to lose in sports, the game wins, because everybody gets paid more, and there's more and more coverage of the losers and helping them.

Humans do not want to play zero-sum games. They want a game where the size of the pie grows every year. And the big insight for me from accelerating change is that the pie becomes more and more intangible. This is where we get to our sustainability futurists, who say, how do we get out of all the depredations of the Anthropocene, and the loss of all the species, and the fact that some 97 percent of the terrestrial mammalian biomass is humans and their domestic animals?

That is such an affront to nature, right? And 50 percent of all the species of terrestrial land animals are endangered or critically endangered or extinct. How do we get out of that incredible valley to the other side? And I think we have to do it through these processes that make positive-sum games increasingly virtual, or what's called intangible. And I try and cover that in the book: this idea that there are these protecting mechanisms that we're just beginning to give ourselves permission to see in nature, and we need to use them more and more ourselves.

The best example in humans is our entire consciousness and cognitive system. We do so much exploration of possible, probable, preferred, and preventable futures. We simulate it here first, in what's called fast space, rather than all these slow, expensive, messy, unsustainable experiments in physical space. How can we use AI to improve that, so we're not making all these huge mistakes that we're currently making? Because we're only just barely into this era where the intangibles are leading the economy, right?

The intangible value that's created comes in these models. And as we can imagine, foresight is at the center of all of those models. Like I say in my book, the predictive processing model of the human brain, the human mind, I think that's the most evidence-based model.

We're continually, at the subconscious level, making educated guesses using something called Bayesian inference, right? Or active inference. So foresight's at the center of what a living system does. How do we make that high quality and conscious? Bring it to our teams, bring it to our organizations?
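A minimal sketch of the Bayesian updating being described here, with made-up numbers for a single guess-and-revise cycle (the hypotheses, prior, and likelihoods are all hypothetical, not from the episode):

```python
# Toy Bayesian update: an "educated guess" revised by new evidence.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Return the posterior P(hypothesis | evidence) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief: is the faint shape ahead a dog or a shadow?
prior = {"dog": 0.3, "shadow": 0.7}
# Likelihood of the new evidence (a bark) under each hypothesis.
likelihood = {"dog": 0.8, "shadow": 0.1}

print(bayes_update(prior, likelihood))
# {'dog': 0.774..., 'shadow': 0.225...} -- the guess flips toward "dog"
```

The same predict-observe-update loop, run continually and below awareness, is what the predictive processing model attributes to the brain.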

And that's where I think we have to find these fundamental models. I've tried since 2013 to write those up the best I can, and that's really been my journey over the last 10 years.

Peter Hayward: Hell of a journey, John. I'm a developmentalist by nature, and that was my PhD. Part of what you developed is that you laid out the development of foresight through three distinct stages. Do you want to tell us how foresight steps through three distinct developmental phases, culminating in this evo-devo frame?

John Smart: Sure. The central model for me of what complex systems do comes from this autopoiesis, right? The auto is the development; poiesis is the creativity, right? Living systems are always doing three things. They're protecting themselves.

That's the developmental part, with your immune systems and all the predictable things that let a fertilized egg, in a complex stochastic environment, predictably create this beautiful thing. Then there's the evolutionary part, trying variety, because you never know what's going to work best. So it's fundamentally creative.

Protection, creation, and then adaptation, which is the fusion of those two. A subset of those things are going to be the most adaptive, and individuals are primary creators in that model. They're the evolutionary actors. Groups are the primary protectors.

The group, by definition, has a set of agreed upon rules and frames and norms. But then, the fusion of those two is ecosystems or networks. Networks are the primary adapters, not individuals and not even groups. As I was saying, with all those species that got wiped out, what happened to the complexity underlying the network?

Virtually nothing. It actually accelerated in most cases, right? So having that triple view, I think, is really deep. It's the most foundational way to look at systems. In college or high school, we learned individual versus the group as a fundamental ethical tension, but sometimes we forgot that it's the network sitting on top of individual and group. The network is a collection of individuals and groups, right? Cooperating and competing. And cooperating first, that's the really beautiful insight. We cooperate first, and then try and compete within a set of agreed-upon rules, which we continually update in this iterative fashion.

This is all in those great books on co-opetition. It's a very positive view of the future of humanity, because we say the network's always winning. Everything's winning, but the network wins the most, and as things are continually losing, the network learns from the loss.

That hormesis thing we talked about. So I think you've got these three fundamental mandates, if you will. I started doing leadership and foresight training with law enforcement and the military in 2010. And now the Navy is my biggest group; I'm a lecturer at the Naval Postgraduate School.

And I have the privilege of working with a lot of mid-career leaders, and we boil all this down to the most actionable stuff. You're always in a protection mode, a creation mode, or an adaptation mode. Toffler figured this out in 1970, and then Roy Amara at IFTF expanded it.

This is what he called the Three Ps model. He said at the end of Future Shock that foresight is fundamentally about this: there's an art of the future, there's a science of the future, and there's a politics of the future. And the politics is the fusion that sits on top of art and science. And the art and science are the possible and the probable, fighting each other, with different people loving different corners of the bottom of that triangle.

And there's people who love the preferable. They want, like Bertrand de Jouvenel, to create the preferred future. They're going to, by force of will, make that beautiful vision happen. And yet nature uses all three of those corners.

And this is where Art Shostak comes in, one of my mentors from WFS. I was very privileged to have several wonderful conversations with him. And he said, you have to split the preferable into the preferred and the preventable. And I learned later that's because pleasure and pain are the two most fundamental motivators in all complex systems.

In a very simple living system like a bacterium, it's attraction and avoidance behavior, and you have the two genetic systems. With a single-celled organism like a paramecium, you can mess with it with your little pen under a microscope, and it will decide how many times you get to mess with it before it runs away from you.

You can put some nice food in one location and it'll make a decision about when it wants to go to it, especially if there's some risk involved, right? Attraction, avoidance, pleasure, pain. And this in our psychology comes down to strategic optimism and defensive pessimism.

And it was the wonderful work of Gabriele Oettingen, the social psychologist at NYU, that first really explored that. And I go into this in my book: the two Ps of the preferred and the preventable futures, the protopias and the dystopias, which we love talking about. And different people are attracted to different sides of that, right?

That's just as much of a conflict as the probable and the possible, right? But on that strategic conflict at the top, the adaptive conflict, she showed that you want to start optimistic and then go pessimistic, in roughly the same ratio, before you make a plan.

And when you make a plan, you want to have a few if-then statements: if this problem that I've just anticipated comes up, then I'm going to do this thing. And she showed you get like 130 percent better prediction accuracy in all these randomized clinical trials. Actually, it's 50 to 130 percent depending on the experiment, and you get 50 to 150 percent more productivity in the same time.

That's if you've done this, versus, say, optimism and then a plan, or pessimism and then a plan. And if you do what's called negative contrasting, pessimistic first, then optimistic, and then plan, you get about half of what you want to see and half of what you want to get done. And that could be over any time horizon.

What a wonderful thing to see that this is baked into human psychology: our motivation to overcome obstacles, and to see what we really need to see, is improved so much if we have these two conflicts. The predictive contrasting conflict of the knowns and the unknowns is at the base.

And then there's the sentiment contrasting conflict in the network space, of strategic optimism and defensive pessimism. And the literature on leadership shows that strategic optimists can be great leaders, and defensive pessimists can be great leaders.

But the best ones are those that allow what's called psychological safety, for trusted conflict between all four of those. Patricia Lustig has this wonderful concept of the eagle, right? The eagle leader, who is not just a hedgehog. The hedgehog is a developmentalist thinker who has one idea, and they apply everything to it.

One model. A fox is the evolutionary side. That's: I have lots of different models, and boy, I'm tactically so agile, you don't really know where I'm going to go next. The eagle tries to combine strategic vision and tactical agility in this bigger view, in this ecosystem view.

And Fox, Hedgehog, and Eagle, all three of those are great leadership styles to aspire to. And that's because they're all seeing pieces of the puzzle.

And the most fundamental way to describe it, Peter, is there's two sets of physics in the universe. I learned this from my evo devo universe community, from physicists and information theorists who know way more about all this stuff than I ever will.

There's quantum physics and chaos, which are inherently unpredictable going forward. They're contingent, and the way things are observed collapses the wave function in different ways. And you can't say where the thing will go next; you just have to iterate it.

That's a subset of reality at the subatomic scale. And then at the macroscopic scale, there's special things that are called chaotic, right? Non-linear chaos. But then opposing that is this wonderful set of useful physics at the macroscopic scale, which is equilibrium thermodynamics and nuclear decay and, most importantly, classical mechanics and relativity, right?

Which, ever since Newton, we've seen there's this incredible predictability at the large scale. And then the people in a space called quantum gravity are trying to unify those two to say that the quantum physics and the classical physics, there's this theory of everything that will incorporate them.

And several people in our EDU community say, nope, you're never going to actually unify them. What you're going to find is something similar to biology: these developmental genes, which are predictable, and these evolutionary genes, which are unpredictable. And those two together somehow, magically, wave our hands, created preferences, created life, which is this blend of the two.

The so-called theory of everything may end up being a theory of special things: these fundamental predictable and unpredictable features that together, in a replication cycle, under selection, create this wonderful network that has preferences. And that is, as Azarian would say in The Romance of Reality, probably the best explanation of consciousness, intentionality, agency, morality, all of the higher, wonderful features of complexity that we care the most about.

And I think if you just back off from that one level, you get to Shostak's four Ps. Or what I call the KUVR model with my military clients, 'cause it's simple: are you covering your bases? And KUVR is just Knowns and Unknowns fighting each other, Visions and Risks fighting each other.

So you have to do those four assessments, you've got to be doing them all consistently, and then you get the privilege of doing strategy. Because what I learned from my mentor in the Houston program is this wonderful way of explaining what foresight is. It's not perfect.

It's not a hundred percent correct, but it has so much play in the boardroom, right? And it's basically this: foresight is anything you do before strategy. And so you ask this question: do you do anything? Oh, here's your strategic planning group. Oh, wonderful. How many people? Five people?

Okay. Did they do any foresight methods or processes? Do they survey their clients? Do they survey internally? Do they have internal prediction markets? Do they make models of trends or drivers? Do they do scenario work? Do they explore alternatives, wild cards? What, they don't do anything? They go right into what Richard Rumelt calls, in Good Strategy, Bad Strategy, his classic three-step model of strategy.

It starts with diagnosis of the competitive environment, opportunities and risks; then it goes to a set of guiding principles, step two; and then it goes to a set of coordinated actions, sub-goals, plans, resource allocations, that come off of the guiding vision in step two. And like he says, if everyone on your team can't tell you what the guiding vision is, you don't have a strategy yet.

It's beautiful because it just forces people to say, all right, I need a guiding vision. I need to communicate it well. You go to his book, one of the Bibles of strategy, and you're getting a little bit of this contrasting, with the opportunities and risks.

And what is the competitive environment? How widely do you have to look? And do you have to start first with this conflict over relevant Knowns and Unknowns? Because if you start with relevant Knowns in many areas of business today, back to my accelerating change stuff, I can predict, I can bet.

Some people don't like the word prediction. It gets their hackles up. But we all make bets. I can commit resources, because the AIs we're using today are going to be way better, in a predictable way. I can't tell you when they're going to get generally intelligent.

Everyone speculates on that. Some people say, Oh, next 10 years. I happen to be one of the people who think it's going to be 80 years at least, because they don't have any of these higher features of the brain that I've come to understand through my BPF and AN work.

I think they're going to have to get those features to be trustable. That's my natural alignment thesis. But I can tell you, predictably, they're going to be so much more useful. I can tell all the listeners here: if you get on perplexity.ai and you want to ask any research question on some complex thing that has to do with published literature, that LLM, large language model, is going to amaze you with how much better it is than, say, ChatGPT or Claude or Gemini, any of these other ones out there today, because it's been trained on just published scientific literature and scientific podcasts and scientific websites.

And to me, the AI future for the next 10 years is just so obviously taking this incredible first-generation neural mimicry that we have in these systems, these so-called transformers, which have a model of attention in them, of human attention, that's one of their big breakthroughs, from 2017, and they have a model of human dopamine.

It's called the TD learning algorithm. So these things reward themselves for accurate predictions, and they punish themselves for making poor predictions. And it was actually published in a famous Nature paper in 2012 that this TD learning algorithm is what dopamine does in the human nervous system.
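For anyone who wants to see the mechanism, here is a minimal sketch of temporal-difference (TD) learning, the reward-prediction-error idea being described; the two-state environment and the parameters are invented for illustration:

```python
import random

# Tabular TD(0): learn value estimates from reward prediction errors.
alpha, gamma = 0.1, 0.9          # learning rate, discount factor
values = {"A": 0.0, "B": 0.0}    # predicted long-run reward per state

def td_step(state, reward, next_state):
    # delta > 0: outcome better than predicted (the "dopamine burst");
    # delta < 0: worse than predicted (the dip).
    delta = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * delta   # nudge the prediction toward reality
    return delta

for _ in range(1000):
    td_step("A", 0.0, "B")                                    # A leads to B
    td_step("B", 1.0 if random.random() < 0.8 else 0.0, "A")  # B usually pays off

print(values)  # estimates settle near the true long-run values; errors shrink
```

The "reward themselves for accurate predictions" phrasing maps onto delta: the system keeps adjusting until its predictions stop being surprised.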

So they've stolen little pieces of how the brain works to date. Even though they're doing this very non-biological stuff called matrix multiplication at the very bottom level, at these higher emergent levels they're doing all these things that are so similar to our brain that they can do what's called dimensionality reduction.

They can take all this complex information, reduce the dimensions of features in latent space to the ones that are most useful to your query, and actually create a map in latent space, knowledge graph space, of how closely different features are related to each other. And now, just this last year, we can visualize that map.
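A minimal sketch of the kind of latent-space map being described: items become vectors, and closeness in that space stands for relatedness (the three-dimensional "embeddings" here are hand-made stand-ins for what a real model would produce):

```python
import math

# Hand-made toy embeddings standing in for a real model's latent space.
embeddings = {
    "foresight":  [0.9, 0.1, 0.2],
    "prediction": [0.8, 0.2, 0.1],
    "gardening":  [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means same direction, near 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# The "map": pairwise similarities between every two concepts.
for u, v in [(u, v) for u in embeddings for v in embeddings if u < v]:
    print(f"{u} ~ {v}: {cosine(embeddings[u], embeddings[v]):.2f}")
# foresight ~ prediction scores high; both sit far from gardening.
```

Comparing two people's maps, as comes up a moment later, would just mean running the same similarity measure over two different sets of vectors.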

So if I had a recording of everything you've ever said, from your future smartwatch, everything you've ever emailed, everything you've ever clicked; you know how the new Microsoft Windows has that Recall feature? It's creepy, right? Some people are going to turn it on. It's going to record every click they've ever done, and they can decide which sites they don't want to have recorded, but everything else is going to be recorded.

That thing is going to be a map, a model of what you care about. And I'm going to be able to actually compare others' models to mine and see where we overlap and where we don't, what things I don't know, what things I do know. These tools are going to be so amazingly useful. And I can totally predict, with very high confidence, that one of the simplest things you can do to really create value for yourself is to invest in these leading AI companies and just sit and hold them, right?

The so-called Magnificent Seven, if you've heard of them, and how they've outperformed the whole stock market over the last 12 years, since deep learning was invented. And a book like The Simple Path to Wealth by JL Collins will just basically argue: get in the market, don't get scared out by the so-called big, ugly events.

Just ignore all that stuff and say, no, I can predict. Just like the smartphone: once it was invented in 2008, it was totally predictable that all 8 billion humans would get it. It took about 15 years for the vast majority of them to get it. The same thing's going to happen with AI, right? It's going to take another 10, maybe 15 years for all that stuff to emerge.

And so this is all the probability corner. This is all Knowns, but that's a very powerful known because there's a lot of empowerment that will happen with that. Once you give yourself permission to say, no, this isn't just somebody's, fanciful future story.

This is a trend that's been happening since, say, Moore's law was invented, and it's getting more and more obvious, right? More and more data-backed. Now, what are the Unknowns? What are the possibles, what are the problems? What are the wild cards? What are the risks? What are the alternative ways that this could develop, and what cross-impact analysis do I want to do on that?

Then what are the key uncertainties? We need those before we can even set up a scenario, right? Scenarios in the classic model take two key uncertainties and create a 2x2 grid. Knowns to Unknowns: once we've had a good debate on both of those, then we can move to Visions.
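As a concrete illustration of that classic double-uncertainty matrix, here is a minimal sketch that crosses two key uncertainties into four scenario skeletons (the axis names are invented examples, not from the episode):

```python
from itertools import product

# Two key uncertainties, each with two poles (hypothetical examples).
axis_1 = ("AI regulation stays light-touch", "AI regulation turns strict")
axis_2 = ("sustained economic growth", "prolonged economic stagnation")

# Crossing them yields the classic 2x2 grid: four scenario quadrants,
# each of which would then be fleshed out into a full narrative.
for i, (a, b) in enumerate(product(axis_1, axis_2), start=1):
    print(f"Scenario {i}: {a} + {b}")
```

In the classic method, each of the four quadrants then gets named and written up as a story before moving on to Visions and Risks.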

Then once we have the vision, what are the Risks that will take that vision down? And so that's KUVR. It's simple enough that everyone can see the value of it. And everyone can see there's different personalities that like different corners of that. And in my book, I describe the Keirsey model, which is exactly this model, right?

Keirsey's personality matrix model of the four types. There's the Guardian, who fights with the Artisan. The Guardian is the person who loves uncovering knowns and forecasting and risk management and security and law and all of that, and who fights with the lover of unknowns: the Artisan, the design thinker, the entrepreneur, the experimenter.

So the Artisan fights the Guardian. And then on top, in strategy space, the Idealist fights the Rationalist. The Idealist is always thinking about what should be, and the Rationalist is thinking about what needs fixing. It's your defensive pessimists and your strategic optimists. And, readers may not know this, but defensive pessimists live 10 percent longer, because they don't burn the candle at both ends. And yet we need both, right?

Peter Hayward: I hope you're enjoying the podcast. FuturePod is a not-for-profit venture. We're able to do podcasts like this one because of our patrons, like Johannes Kleske, who has been a long-time patron. Thanks for the support, Johannes. If you would like to join Johannes as a patron of the pod, then please follow the Patreon link on our website. Now back to the podcast.

Peter Hayward: Yeah, I'm going to lean into the work of another one of our wonderful colleagues, Zia Sardar, and his work with the people at Post Normal Futures.

John Smart: Yeah.

Peter Hayward: And Zia said on his last podcast with me that there's the complexity of the world and the weirdness of the world that he's in, but he comes back to the virtues as the things that set the compass for the actions he takes, because he still thinks that no matter how weird it gets, I have to decide what I will do and what I won't, and for him, the virtues stand. Now, you talk about mottos, and I'm hearing the personal ethic of what I choose to do, the things that guide what I will and won't do. Do you want to just unpack your mottos?

John Smart: I would love to. This is a beautiful segue into normative foresight, which brings up the great Wendell Bell and his two-volume book Foundations of Futures Studies. The first volume was all about methods; the second volume was all about the Good Society. Those two books were titans in the field. We had to study them at the University of Houston. And the Good, as he says, is really hard to define, but it's so important to try to define it.

What are the virtues? What are the universal ones? And what are the unique ones that I choose? I think the best work that's been done in this space is moral foundations theory: Jonathan Haidt's work on moral foundations theory, and his book The Righteous Mind, from 2012. I think it's the best book that's been written on this topic in the last 10 or 15 years, because it really simplifies it in such an actionable way.

His argument is that there are these culturally universal liberal values and culturally universal conservative values, and there's a center. There's an adaptive center where those overlap. Liberal, conservative, centrist. And it's important to try and define these terms better, for normative research.

The liberal one is all about individual values, and the conservative one is about group values, and the centrist one is about network values. And what he describes on the liberal side is freedom and diversity as fundamental values. And then on the conservative side, it's loyalty and authority as these universal values.

And then in the middle, it's care for the liberals and sanctity for the conservatives. Sanctity is just ethics with hierarchy, higher and lower. You can get that sanctity from scripture. You can get it from the study of nature, from wherever. If you have a set of values that are higher or lower in your ethical system, you have the sanctity virtue, right?

And then one level down in the center, on the liberal side, it's equity: everyone should have a stake. And on the conservative side, it's merit. These are two kinds of fairness values, as Haidt describes them. The liberals care about social and distributional justice, equity. And the conservatives care about economic and criminal justice, merit, right?

He also calls it proportionality. If I work harder, I should get paid more. If someone does a bigger crime, they should have a bigger punishment. And so there are these values on the conservative side, which are advocating personal responsibility to the group through those four values, sanctity, merit, loyalty, and authority.

And on the liberal side, it's individuals creating a flourishing community: freedom, diversity, equity, and care, right? And I'm writing a paper for a complexity journal right now arguing that these values are on a Gaussian, that the centrist values are the most frequent, the ones we care about the most.

He's found these values independently in all cultures. And his big insight in the book is, you don't find them unless you study kids under the age of five.

You don't find all of them, because culture will stamp out certain ones of those. In the 70s and 80s, people argued there are no universal ones, because I can always point to a culture that doesn't have one of the ones you purport to be universal. But then you go down to the level of the pre-verbal kids.

And you see them in behaviors, and then as they get older, as he describes in his book, one kid, by experience and by genetic disposition, becomes more liberal.

The other becomes more conservative. They grow into their value specializations, and his big insight is that these are all valuable. You've got the individual creative ones, the liberal ones, that are hugely valuable, and then you've got the group-protective ones, which are hugely valuable.

And then you have a fusion of those, which are these network-adaptive values. And I got to it with a different set of labels in my book. On the liberal side, for me, it was innovation and intelligence. On the conservative side, it was strength and sustainability. Those are group values, right?

And then, in the middle, it was empathy and ethics. And to my way of thinking, empathy comes first. You can't get a set of ethics without a deep empathy in the network that you're surveying. That will give you a sense of the affordances, of what the ethics should be, which I strongly believe are based on the situation.

Situational ethics has a bad name in some circles, but I think it's true. I think, in different contexts, you will make different choices, but you make those choices based on what you think is best for the network as a whole. Another way of putting this is the Golden, Platinum, and Rhodium rules, which you may have heard. The Golden Rule is do unto others what you'd want done unto you; that's the individual perspective. Then there's the Platinum Rule, which is do unto others as you think they would like to be done unto; so you have to have empathy for them as an individual in a group, with commonality between the two of you. And the Rhodium Rule is do what you think is best for the system as a whole, for the network or ecosystem as a whole.

So you're not just sacrificing for the group; in the Rhodium Rule, you're sacrificing for the whole network, and that could involve doing something that is against what your country wants you to do.

So you're deciding not to participate in a war that you don't consider just, right? Sometimes the individual ethics are just the ones you need, the diversity of that individual community. Some of those people are rule breakers, but they see the future that's coming. They're going to survive so well in the next catastrophe.

But on the other side, you just need to protect sometimes. And then in the middle, you need to be focusing on adaptation. Now, I really need to know what's most generally adaptive. Not just adaptive in any niche, but that maddening word, general adaptiveness. You don't get to consciousness or synthetic thinking, systems thinking, without that word general.

There's some special feature of human intelligence that is unlike the intelligence of a species that goes into a cave and loses its eyes or, specializes, right? That process of generality, like the intelligence we have, gives us the ability to survive under all kinds of circumstances. Boy, don't we want that embedded in our whole culture? It's beyond robustness and resilience.

Robustness is surviving all storms. Resilience is bouncing back. Now, this includes hormesis, which means a stress that actually makes me stronger, because the network reorganizes, right? That useful word antifragility, or hormesis. And it includes this idea of the really nastiest word we don't like to use, which is progress.

There is some kind of progress in general adaptiveness that the whole system does. As futurists, this is one of those tough words, like prediction, right? We get our backs up. What is progress? How do you define it? I think that, you have to look to life, and you have to look to its increasing general adaptiveness at what we call the leading edge.

And this is a bit offensive, this idea of higher and lower forms of adaptiveness, right? Some people like to think, who's to say humans are more generally adaptive? But I think it's true when you talk about all the kinds of conditions, environmentally, that could hit us, and how we can respond to them.

Ray Kurzweil was the first to teach this to me, in his book The Age of Spiritual Machines. And he said, physics does not understand this yet: that certain special planets are going to bat away asteroids that come too close to them. When that system gets to a certain level of complexity, it's just going to bat them away. Why? Because it has consciousness. It has intentionality.

It's protecting itself with some kind of informational complexity, and physics and information theory are not yet able to give us the math behind that. And yet we know in our bones that's the trajectory of life, right? So there's something really amazing there in that general adaptiveness that we as a society are trying to create.

And maybe a big part of that gets back to the fight between sustainability and innovation. It's a maddening phrase, sustainable innovation. It's almost a paradox, but it's what life does. Life breaks things and tries to break through to new things, like Jeff Goldblum says in Jurassic Park, and yet life is also always protecting the critical things.

It's sustainable innovation. And how does that progress? It progresses through some kind of a conflict continually between what you want to change and what you want to keep. And so Protect, Create, Adapt. For me, that PCA mandate, it's central to our leadership discussions.

And so those KUVR assessments keep us where we need to be. In the appendix of my book, I have some assessments you can take where you say, Oh, I don't have enough Protectors. I don't have enough defensive pessimists. I need to get those. Or I'm not actually having the conflict, right?

Because to me it's that trusted conflict, which we're all doing in our heads, and now we want to do it in our teams and in our organizations. And I'm very heartened that some of the greats of the third generation, obviously Toffler and Shostak are both third generation, 1980 to 2000 is when they were doing their most prolific work of their whole oeuvre, if you can call it that.

They just handed that baton on to us. And now we're starting to see some of the systems theory of how that works. And I think it's going to be this fifth generation that's going to have all these wonderful tools to explore this stuff, often in statistical, survey-based ways, the way moral foundations theory works in social psychology.

Where we'll say, we see these in all cultures, and some of them get out of whack. And so that, I think, is how you get to the virtues. You really start getting that you need them all. And you can get liberals and conservatives to see that the center holds the most.

This is my speculation here for you, it's in my paper: that under conditions of scarcity and conflict, the conservative values are generally going to be most adaptive. Under conditions of peace and abundance, the liberal values are going to be most adaptive.

And then under conditions of uncertainty and complexity, the centrist values, balancing the two, are going to be the most adaptive. And I'm just totally speculating here, right?

Peter Hayward: I'm going to close this off, John. I think the book is certainly part of your legacy for the generation coming next.

John Smart: Doing my best.

Peter Hayward: I want to just talk about the book. So the book is available to people?

John Smart: Sure, yeah. Actually, we put the whole book up for free on ForesightU.com, which is our website, our Foresight University, our little foresight consultancy nonprofit. ForesightU.com/pubs, P-U-B-S. And if you go there, you can find some worksheets you can do to apply some of these models.

We wrote chapter one to be the summary for the executive. It's our view of what the most foundational models are; it includes these four Ps, the KUVR thing, and a few other things that I think are really helpful.

And then you can apply them to yourself with this personal and team Foresight Journal, which is in the appendix. And there's another appendix of leading consultancies and books that we recommend, and all of that.

But here's the exciting thing. At the end of that book, we proposed a vision for creating something called a Futurepedia, an encyclopedia of foresight methods and future stories that people are trading in various topical areas. As you may know, Wikipedia has an allergy to that.

If you put up a futures topic, they reject it as speculation. So since the beginning of Wikipedia, there's been a need for a Futurepedia. And that brings in Michelle Bowman, a famous futurist; myself; and Kevin Kelly, founder of Wired magazine, a famous author and friend of mine.

Independently, we all called for a Futurepedia. We all posted onto the web: there needs to be this thing called a Futurepedia. The first post was in 2008. I think it was Michelle, and then it was Kevin, and then me. And we did a little work on that, and we wrote up a structure for a topical page with the four P structure, so people can hang links to studies on the possible, the probable, the preferred, and the preventable, and schools of thought in each of those areas.

And now that we have LLMs, we can ask them to generate a stub for us, which the professionals and the amateurs can keep modifying. And, we can raise money for a little three minute video to be put on the top end, because as you all know, all the kids go to the videos first before they read.

And we can do all these wonderful things to create this resource that has methods and future stories, and then a set of published futures resources. And it turns out that David Jonker at SAP has started something called the Open Foresight Hub, a compendium of futures literature, futures reports in lots of areas.

So he has a team doing that. And the most exciting thing for me is I learned recently that APF has decided to do a futures wiki that's more like an encyclopedia. I had a wonderful chat with Zan Chandler, who's on the APF board. APF is now flowering, it's global, and it's such an amazing group and community. And now they've decided to do this. And so I offered the stuff that we did, including futurepedia.org, this URL I grabbed back in 2008 or '09. We've just been sitting on it, and we know we don't have the resources to do it. More importantly, we don't have the community of practitioners to run it as a nonprofit.

A democratic practitioner community running it, right? We've always felt that should be the vision, and it should be open GPL, just like Wikipedia. And Zan has said that she wants to do that. And she said, we're going to talk to our board and see if they want to use that brand.

They might use another brand, but regardless, now Susan and I, and others here at ForesightU, have just jumped on the APF bandwagon to help however we can to create this thing that, aspirationally, I put out at the end of my book. And now look, the universe manifested a whole group of people who are going to do this thing, and I really believe this thing's going to work.

And then it gives voice, right, to all the people in our community who have some particular view about the future and want to get it up for critique.

And I think that eventually all the AI features come onto that. I know you've had Mike Jackson and others in our community, wonderful futurists who've really taken AI and applied it to the foresight space. And I think what's going to happen is everyone's going to add vegetables to the pot, and we're going to have this amazing stew over this next 10 years, where the field will go into what I call a Futures Summer. I would say that 1940 to 1960 was a Futures Spring. And then the Summer was 1960 to 1980; that's when WFS and WFSF and all of the really big groups were created. Then we had a Futures Fall and Winter; that was 1980 to 2000. We lost the OTA.

And WFS shrank in size, everybody became focused on materialist things, and the neocons really took the agenda. And then we got a new Futures Spring from 2000 to 2020. And now we're in a new Futures Summer. We're just a few years into it, really since COVID, and that has accelerated so much.

I think that the future of our field is so bright for the next 20 years, with so many tailwinds pushing us in this community direction. And I would just advocate to everybody in our community: go join APF, go get a certification if you don't have one yet from one of these schools, OCAD or Stellenbosch or IFTF or the European ones.

Or the University of Houston. And just get involved, because you're going to see this whole field flower. It's really going to do some amazing things that are useful for the world.

Peter Hayward: I agree, John. On behalf of the listeners, thank you so much for taking some time out to have a chat with the community, and congratulations on everything you've done.

John Smart: Thank you, sir. It has been an honor, and I look forward to continuing to listen and learn from your amazing people.

Peter Hayward: A lot for you to digest in this one. John has a masterful grasp of both the breadth and depth of our field. His Introduction to Foresight is also a free download, and you really should give it a spin. FuturePod is a not-for-profit venture. We exist through the generosity of our supporters. If you would like to support the pod, then follow the Patreon link on our website. This has been Peter Hayward. Thanks for joining me, and I'll see you next time.