Richard Yonck is a Seattle-based futurist whose outlook on the future was informed by over 25 years' experience as a computer systems programmer-analyst, during which time he guided clients through the rapidly shifting technological landscape.
He founded Intelligent Future Consulting, where he consults to businesses and organizations, speaks to audiences, and writes about artificial intelligence and other emerging trends and technologies, with a focus on their impacts on business and society and on promoting our preferred futures.
Richard’s writing has appeared in numerous publications including The Futurist magazine, Scientific American, World Future Review, Fast Company, Wired, and Psychology Today. His books include Heart of the Machine (2017) and his new book, Future Minds.
Interviewed by: Peter Hayward
More about Richard
LinkedIn - Richard Yonck
Site: Intelligent Future Consulting intelligent-future.com
Twitter: twitter.com/ryonck
Selected Books and Articles from Richard
Yonck, Richard, Future Minds: The Rise of Intelligence, from the Big Bang to the End of the Universe. Arcade Publishing, 2020. https://www.amazon.com/Future-Minds-Rise-Intelligence-Universe/dp/1948924382
Yonck, Richard, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence. Arcade Publishing, 2017. (Second edition with new foreword by Rana el Kaliouby, 2020.) https://www.amazon.com/Heart-Machine-Artificial-Emotional-Intelligence/dp/195069111X
Yonck, Richard, “Toward a Standard Metric of Machine Intelligence”, World Future Review, Summer 2012, http://intelligent-future.com/wp-dev/articles/Standard-Metric-of-Machine-Intelligence.pdf
Yonck, Richard, “The Age of the Interface”, The Futurist, May 2010. http://intelligent-future.com/wp-dev/articles/Interface.pdf
References
Big History Project - https://BigHistoryProject.com
Chaisson, E. J., “Energy Rate Density as a Complexity Metric and Evolutionary Driver,” Complexity 16 (3) (2010), 27–40 https://www.cfa.harvard.edu/~ejchaisson/reprints/EnergyRateDensity_I_FINAL_2011.pdf
Pinker, Steven, The Better Angels of Our Nature. Penguin Books, 2012.
Reeves, Byron and Nass, Clifford, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, 1996.
Stout, Dietrich, “Tales of a Stone Age Neuroscientist”, Scientific American, Oct 2016. https://scholarblogs.emory.edu/stoutlab/files/2017/12/Stout-Sci-American-2015.pdf
Turkle, Sherry, Alone Together: Why We Expect More from Technology and Less from Each Other, Basic Books, 2011. (“Darwinian buttons”)
Audio Transcript
Peter Hayward
Hello, and welcome to Futurepod. I'm Peter Hayward. Futurepod gathers voices from the international field of futures and foresight. Through a series of interviews the founders of the field and the emerging leaders share their stories, tools and experiences. Please visit futurepod.org for further information about this podcast series. Today, our guest is Richard Yonck. Richard Yonck is a Seattle based futurist, whose outlook on the future was informed, in part, by over 25 years' experience as a computer systems programmer-analyst, during which time he guided clients through the rapidly shifting technological landscape. He founded Intelligent Future Consulting, where he consults to business and organizations, speaks to audiences and writes about artificial intelligence and other emerging trends and technologies, with a focus on the impacts on business and society, and to promote our preferred futures. Richard's writing has appeared in numerous publications, including the Futurist magazine, Scientific American, World Future Review, Fast Company, Wired, and Psychology Today. His books include "Heart of the Machine" in 2017, and his new book, "Future Minds". And Richard, importantly, is also a Futurepod patron. So it's a real pleasure to welcome you to Futurepod, Richard.
Richard Yonck
Thank you so much, Peter, for having me. It's awesome to be a part of your program your series. You've had some amazing people from the field on your show, and I'm honored to be a part of it.
Peter Hayward
Thanks, Richard. So, question one, everyone starts with this one, Richard. I'm sure as a patron you know it. So what is the Richard Yonck story? How did you become a member of the futures and foresight community?
Richard Yonck
Like everybody else, it's a story that I suppose I didn't foresee. So many of us come to futures in a kind of roundabout way. Like a lot of people, I approached it from a field that was only peripherally related. I have a background in computer science and media studies, and I think this gave me a unique perspective as the digital convergence unfolded during the 1980s and 1990s. Part of my very early background, coming from a background with a lot of science, was that I went into media production from a very young age, and as a result worked in the field and produced science education programs. Now, during that time, there was a lot of new technology. We were moving from film into video. The video world was transforming very, very rapidly. And it was really interesting to see, not just from a technological standpoint, all of the technology change and how we were doing different aspects of creating a program, but also the economics. Because rental was so expensive, it was traditional for companies to buy or lease equipment, and you had to commit for longer than the technology ended up being good for. It was becoming antiquated too quickly, and so businesses were going out of business. So that was an insight, and also a really interesting perspective on that convergence that was taking place between computing and different audio and video technologies at that time. So, eventually, I decided I was kind of tired of that and went into computing work. I worked primarily for myself, but for some other firms too, doing various computer analyst work and programming. I worked with a lot of interface development, and this ended up informing and influencing some of my perspective on interface design and its ongoing evolution, which I routinely reference and talk about in the course of my writing.
So, as I was increasingly drawn into the futures community, I found myself writing for lots of different publications. You mentioned a few of them: Scientific American and Fast Company and so forth. I also wrote several cover stories for The Futurist magazine and became a contributing editor there. So, all of that got me more and more involved with the different people in the field. My own work increasingly became futures oriented, helping organizations anticipate and prepare for inevitable change in their businesses and markets, as our world accelerated and became a more and more different place, at different rates for different industries. So today, I split my time pretty much between consulting, speaking to audiences around the world, and writing different books and articles, most of those around futures-related topics. You mentioned the two books. I will just make one small correction: the latest book is "Future Minds". And that book is about, well, the subtitle says it all: the rise of intelligence from the Big Bang to the end of the universe. So, kind of a Big History view of intelligence and how it has developed.
Peter Hayward
Technology has always changed us. History and anthropology tell us how our tools have become part of us, but also how they have changed us. It would seem, from your interests, that while the computer, and ultimately the way it actually operates, is just another tool, it's also turning into a remarkable relationship we have with the tool. Would you say?
Richard Yonck
Absolutely. In my work, in both books and some of the articles, I frequently talk about the coevolution of humanity and technology. This is a pattern that, as you say, anthropologists have talked about, and that I try to explore in my own way, that's been going on for easily three and a half million years, from the moment that we started forming stone tools. And by we, I mean Australopithecus, one of our early hominid ancestors. That was a remarkable act to perform at that time. This is a species with a braincase that probably put it on par for intelligence with a chimpanzee, possibly a little more. But creating a stone tool is more complex and more elaborate and requires more sequential steps than a lot of people recognize. I talked a little bit in the first book about the idea that it takes a modern person 100 hours of practice to become even moderately proficient at it. And this is something that provided an enormous benefit to that early species. But what was also essential in all of that is that this is a species that is really pre-language. They did not have anything like language as we know it or think of it today. And so this had to be passed along for over 150,000 generations to come to the modern era. Now, during just the next two million years, that braincase, across species, Homo habilis, Neanderthals, ourselves, increased by three times. We developed the forebrain that we have today. And all of that time creating those stone tools, we have shown in labs in modern times, actually alters our brain organization. It's been correlated strongly with Brodmann areas 44 and 45 in the brain, which are known these days as Broca's area. Now, this is a part of the brain that is closely tied to speech use.
What seems to have happened is that over millions of years, a good two to nearly three million years, that area of the brain was developed more and more for being able to deal with sequential information, certain types of self-control, and so forth. That got appropriated, or what in evolutionary terms is known as exapted, and became, on one side of our brains, this Broca's area that's so involved in and essential to language. So here you have tools actually transforming us into the species we are today.
Peter Hayward
So, has Richard always thought through long time, deep time? You mentioned Big History, which is not a common way of looking at history, joining up the Big Bang to the present day and the future. I mean, is this something that was always in you, or something that you picked up on your journey through life?
Richard Yonck
I can't say always. We all have to acquire a certain base level of knowledge and information. From a very young age, probably six or seven, I was very interested in inventors and inventions, how they developed. I read lots of biographies of people like Tesla, Einstein, Edison, Steinmetz, all kinds of people, back when I was very, very young. I was just fascinated with the whole pantheon and foundation of science. Now, that gave me, I guess, something of a systems view of things. But only later on, probably in my teens, I started reading people like Sagan, who definitely took a much larger, something like a Big History kind of view and approach, and that was something that I pretty much carried through. It definitely has informed the structure of my books, and I certainly reference it in my writing.
Peter Hayward
Yeah, yeah. Yes. Sagan's famous line for me, which stuck with me was "We're all star stuff."
Richard Yonck
Ah yes, I think I actually used that one recently. We are very much a part of the universe, and that's something I really find fascinating to explore in the course of these books. I know that they operate in and are relevant to the current day and the coming decades, and to business-use type perspectives in those timeframes. But I can't help but want to put it in a framework where I understand it within a much larger context. Because to my mind, that's not just interesting; that's who we are.
Peter Hayward
Yeah, another guest of ours, a colleague of mine I taught with, Joe Voros, was on the board of Big History. And he taught Big History to undergraduate students at our university. And what he said to me was that when you teach Big History from the Big Bang till now, and the kids have had, like, a multi-billion-year run-up, they don't stop there. They want to know what's next. It goes naturally from Big History to, well, what will the future be? What could the future be? And that was always the connection, Joe said: when you teach big time, people get more interested in the future, because they understand the future is just an extension of time.
Richard Yonck
I totally agree. I think it has a corollary in the nature of culture as well. When we understand our history as a people, we have a really different perspective on our place in all of that, and an appreciation and so forth. And when you extend that out even further, to explore and understand how we fit into this vast cosmos, it actually provides another, deeper layer of that.
Peter Hayward
Thanks, Richard. Second question, the one where I encourage guests to talk about a concept or framework that is core to their practice. So, what do you want to talk to listeners about?
Richard Yonck
Well, I'm probably going to ramble a little bit here, but I'll try to keep it pointed. Within my work, I definitely reference things like the Houston foresight framework as a general guide. And I use tools like horizon scanning. I read every day; I read papers and scan and do research, and in the course of that I'm constantly trying to make connections, put things together, connect the dots like all of us are. And this for me is one of the most natural and yet most special things that we do as futurists. Our evolution as a species turned us into these amazing pattern recognition systems. And that to my mind is incredibly powerful. It's really our superpower, when you think about it. You know, neural networks today can do all kinds of amazing, very, very narrow pattern recognition work and so forth. But when it comes to something much broader, when it comes to being able to connect the dots and make those associations across all this disparate, unlabeled, unstructured information, that's where we exceed, and I think are going to continue to exceed, machines for quite some time. So, to my mind, part of what I do always comes back to this idea that it's about finding the patterns and using the tools that we have. I love a range of different ideas: the three horizons, pace layers, CLA. These are great tools, and depending on what I'm doing, I'll apply them. But ultimately, for me, it's helping other people, and helping myself, actually see and understand and make new connections, make those insights that maybe weren't apparent before. So that's a big part of it. But I think if there's a single tool that I use more than others, it's the scenario. Because in the course of my writing, and a certain amount of speaking as well, I'll routinely use this as a means of drawing in the audience, drawing in the reader. If you read my books, these last two, every chapter opens with a scenario.
It was a structure that I just felt was very helpful for trying to tell a story that can be dry if you don't approach it the right way. And it's important to be able to make it relevant to the reader. So it's not unusual that I'll do that even with some of my articles. So I probably turn to scenarios as much as anything as a means of not just understanding my work, but trying to communicate it.
Peter Hayward
Yeah, I go back to the pattern recognition one. It's an interesting one for me, because there's a paradox there. I mean, clearly, pattern recognition is a huge evolutionary advantage for making sense of a changing environment, a dangerous environment, that kind of thing. But at the same time, we've also become aware of confirmation bias. In fact, the brain will take existing patterns and apply those to what possibly is not the same situation. So we've actually had to both become better at pattern recognition and also become consciously aware of it, so we can choose to use pattern recognition or, if you like, suppress it, make it work for us.
Richard Yonck
It's a really great point. You hit the nail on the head, Peter. I was going to expand in that direction. Most certainly one of the problems of pattern recognition is, it's not just confirmation bias, it's a range of cognitive biases, and one that we're all dealing with in the form of recent conspiracy theories. That kind of thing is known as apophenia. And what this is, is a tendency for us to make connections between unrelated things, connections that seem to suggest something very important, something that has relevance, when in fact that meaning doesn't exist at all. Evolving to be able to recognize patterns in the environment had great evolutionary benefit. But as anyone who's sat on the ground and just kind of stared up at the sky on a summer's day knows, you can see all kinds of famous figures or animals or what have you in clouds. And this is just part of that, our brain kind of switching into that mode. We see various mirages visually because of this, and we certainly put together, in terms of information, all kinds of strange beliefs, because we think we see patterns in the environment.
Peter Hayward
A few guests, particularly Ziauddin Sardar, talk about the rise of what he calls post-normal conditions. Zia talks about how, along with the rise of information, has come the rise of ignorance, and about the grand narratives that may have sustained us as communities or nations for hundreds of years. If we're moving into post-normal times, we see the failure of grand narratives to adequately explain what might happen next. And hence we're in this both scary and creative space where we need to create possibly new narratives, new scenarios, new images for what the future could be, at the very time when we want to hang on to things that have served us well for long periods of time.
Richard Yonck
I totally agree. It's something I've given a lot of thought to these days. I think that there's a need for a kind of new framework, a new, call it mythology, a way of seeing and thinking about not just our future, but I think our past as well, as we move into the current day, where we see our past in a very, very different light than our ancestors did. And that is leaving us, I think, a little unanchored in many respects. We're at a point where we look back at some of that past, and it's like, wow, those weren't very nice people. And that's not the whole truth; that's not the only way that we should be perceiving it. There's a tendency to throw out a little too much sometimes. There are a lot of really great aspects to our past. But yeah, we know we have slavery in our past, we have all kinds of aspects of misogyny, and we have been a very violent species at certain times. Certain people and certain groups at different stages have performed all kinds of atrocities. But we're also an amazing species with all kinds of potential as well. If your listeners have read Steven Pinker's The Better Angels of Our Nature, that is a really good book. I know some people feel that there's some cherry-picking in there. But if you actually go through that book, it is incredibly dense with some really good scholarship. And I think that gives a lot of hope. Yeah, we're kind of hitting a few speed bumps right now, but the overall, very long-term perspective is that we are becoming a much more peaceful and generally altruistic type of society.
Peter Hayward
Thanks, Richard. Question three: we've already sort of sketched a little bit of this out, but what are the patterns, what is the sense that Richard Yonck is making of the world around him and us? What are the things that are getting you excited? You've already suggested some of those. And possibly the things that give you cause for concern or careful attention. How is the future, or futures, looking to you?
Richard Yonck
Well, there's so many different directions to take this. It's one of those questions that I'm assuming other guests have probably had similar feelings about, since we're all exploring a lot of different aspects of the future, usually. For me, part of it has to do with the different timescales. When I'm talking to and working with clients, I'm really trying to dial things into a five-, ten-, twenty-year timeframe. And that is a particular perspective and a particular set of concerns. Then, obviously, as soon as I start zooming out and looking at a bigger-history view, whether that's Big History in terms of our species or Big History in terms of our universe, it really changes the perspective and how it informs what I'm looking at. So, I spoke a little bit about the idea of the coevolution of humanity and technology, and that's something that's gone on all of our existence, and it's going to pretty much be our relationship to technology going forward for a very, very long time. I can't see that ever abating. But in the course of our raising up technology and it raising us up, supporting each other going forward, one of the things that is taking place is the evolution of interfaces. And this is something that I talk about at some length. Ever since we've had technology, we've needed interfaces, and an interface is essentially a means by which we interact with or are able to control a given technology. The simpler the technology, the less elaborate the interface needs to be. But as technology becomes more and more complex, we need to have more and more elaborate interfaces, and often interfaces that abstract the process further and further away from us.
So, as we've developed computers, we've moved away from machine language, moving up through differing levels of computer languages, to the point where we got to the stage in the 1980s where we started having graphical user interfaces, and all of a sudden pretty much all of the populace could start operating and using a computer. We moved into natural user interfaces. This is things like voice recognition, gesture recognition, different kinds of touch screens, and so forth. And all of a sudden, you can hand an iPhone or iPad to a toddler, and they can start operating certain kinds of programs. This is something that would never have been thinkable 50 years ago. So this is a progression that is not stopping. It's going to continue, and part of our upcoming few decades involves a progression of interfaces that are going to really change our relationship to technology. If we think further out, we talk about things like brain-computer interfaces, and that's probably going to eventually come. People like Elon Musk and others are working on various versions of this. But anyway, there are many other steps and stages before this: the idea of moving into an era of a continually worn form of glasses. Apple Glass should be out sometime within the next couple of years. We're going to see eventually some form of smart contact lens and so forth. But what we're already moving into right now is the digital assistants that are able to be activated and interacted with through voice recognition, with speech being one of the most basic forms of interaction that we have. This is something we do very, very naturally from a very young age, and so it's really an excellent means of interfacing with our technology. Now, as AI allows this technology to become more and more sophisticated and more and more adept, these are going to essentially be our intermediaries, our fairly active emissaries, I guess, into the technological world.
And so this is something that I think is going to be an ongoing progression. So, to my mind, that's a big move that we're looking ahead to in the course of the coming decade, and certainly two.
Peter Hayward
Yeah, I mean, the thing I hear there, Richard, is that what you're describing to me is the gap between the technology and us getting closer and closer and closer, to the point where the technology may become part of us, physically part of us, in terms of becoming, you know, the human-technology hybrid, embedded technology. Or another way it becomes closer and closer is that our relationship is less through instrumental control and more through relational and emotional control. And that's a very different relationship that we have with technology.
Richard Yonck
I think that's very true. My first book, Heart of the Machine, is about emotion AI, or artificial emotional intelligence, and looks a lot at that, not just control, but how that eventually alters our relationship to technology, that it becomes a much more personal kind of interaction. And I can foresee that when I refer to something like a digital assistant, if this gets to enough sophistication, where there's the ability to recognize and incorporate our emotional states, to have an understanding of human relationships, and essentially a theory of mind, that gets this technology much more to a point where we interact with it as a companion, a friend, or what have you. Now, before I go too much further, this doesn't necessarily mean that the technology has to be conscious to do this. This is something that people kind of fall into thinking when I talk about this, and really, this could be achieved without there being any sense or form of experience or self-awareness or what have you, as we define it. It's possible to create a system, or I can anticipate a system, that will be able to interact with us with enough similitude, is that the right word for this?, to be able to essentially trick us. We may know, we may be told daily, this is not a living thing, this is not a conscious thing. But we have a tendency already to anthropomorphize, to fall into patterns with our technology where we start treating it as if it's another entity, another person, or what have you.
Peter Hayward
I remember a famous story about, I think it was called the Roomba? That was the little vacuum cleaner that used to run around your house. And people would ring up the Help Desk when the Roomba stopped, and the Help Desk would say, "Oh, it sounds like the motor's burned out. Send us your Roomba and we'll send you another one." And the person would say, "No, I want you to fix Doris." I wonder, though, Richard, there's a very, very strong cultural dimension here that I think plays out. In the West, I'm going to say, we have got, if you like, the sort of golem/Frankenstein story. But how different that is in a place like Japan, which seems to have a different cultural relationship with how it sees its interaction and relationship with technology. And I wonder whether that's also going to be going through a change process.
Richard Yonck
It's so hard to say. I totally agree that there are cultural differences, different ways that different cultures respond to, react to, and anticipate working with technology going forward. Japan has not only a cultural history, in terms of media and other sources, that puts it in a much more accepting position for things like robotics and AI, but there's also a significant need. The country is aging probably just about as fast as any country in the world and needs a lot of different kinds of help down the way in terms of being able to support its aging population. Now, one of the ways that they are doing that is by investing heavily in AI and robotics for exactly those purposes, for both hospital and assisted-living types of functions. So there's both, I think, need and probably a general trend in the society toward acceptance.
Peter Hayward
Yeah, and the same with, you know, the Tamagotchi phenomenon, which we saw as a fad. But we're also seeing that they are able to produce kinds of pets that they can give to people with dementia. And it does appear to have a kind of calming, settling effect to give people a lifelike object that they can't actually damage.
Richard Yonck
Absolutely. Paro is the little baby seal pup that is basically a robot, with all kinds of different sensors that respond to touch, sound, and so forth. So it can kind of give this impression that it is interacting with the person who is holding it. It's what I was referring to before, where there doesn't have to be a consciousness there for us to connect with and interact with these machines in a way that basically pushes certain of our evolutionary buttons.
Peter Hayward
Thanks, Richard. Fourth question, the communication one. How do you describe what it is you do to the people who don't necessarily understand what it is you do?
Richard Yonck
Yeah, that's always been an interesting conversation. I don't have any huge insights that others on your show haven't already kind of covered. I usually say something general, like: futures work is a systems-based exploration that draws on insights from our past, historical patterns of processes and behaviors, and combines these with trends and dynamics from our present to identify and prepare for a number of different possible and probable futures, ideally working toward a preferable future that we desire in the end. Now, if I'm in a more general conversation, like a cocktail party or something like that, and by the way, our listeners, I'm sure, remember when we all used to be able to stand closer than six feet together, or apart, or something like that, I might say something more like: futurists do what we all do naturally, finding patterns in the world and anticipating what comes next. And so while everyone there might be thinking and looking out ten or twenty minutes ahead, thinking about what is going to happen and modeling that, a futurist does exactly the same kinds of things, maybe in a little more systematized way, to look out five, ten, twenty years' time, to be able to help people anticipate what's going to happen and how to prepare for that. It's going to be fuzzier, of course, but you know, it's something we just generally do very, very naturally as a species anyway. It's a kind of continual process that we're all in the midst of, whether we're talking about just when we get up in the morning, or we're planning for our retirement.
Peter Hayward
Do you think the conditions that we've been through in 2020, and the experiences of the people we work with, have already started to change how you explain what it is you do and what it is that futurists do, given the immediate experience the people around us have had?
Richard Yonck
Sometimes. It's not necessarily something that I incorporate into that kind of conversation. I may reference it. If anything, the fact is that here in the US, we had basically a playbook at the Executive Office level, where we had planned for this kind of eventuality. Almost everybody in the futures world knows, we've been talking about it: some form of pandemic is coming, it always does. If we look back historically, we can see this occurring on a regular basis, back through the millennia. And we knew that eventually we were going to have to contend with this. Well, having a plan, having done some projecting forward and preparing, doesn't necessarily do any good if that's all just going to be thrown away and ignored. So if anything, I may reference it and talk about it in terms of actual follow-through and change management and so forth, going forward from the work that's already been done. You know, to my mind, it doesn't necessarily change how and what we do and need to do. It certainly has made this year really challenging to stay up on, to know just where we're going to be a week, a month, or even a year down the way. I think that things are definitely going to be different. But I will be surprised if in five years' time it's not a lot more, call it normal, than a lot of people are anticipating. I think we will see some pivots, some changes. It's definitely going to impact younger people who are in the midst of forming their ideas and their perspectives and their tribes and so forth. But it's also, unfortunately, going to have a significant impact on them economically as well. But in terms of how we interact with each other and operate and go through the world, I don't know that it's necessarily going to change a lot of that, or the stories, directly. I think we need to, but I am of the opinion that we may be a little more entrenched.
This may not be enough to make us pivot the way we need to.
Peter Hayward
I think you've picked up an important point. I was looking at a table of the relative fatality and morbidity effects, a kind of league ladder of all the countries in the world. And what's stark is that the first-world countries have generally performed much worse at managing this than countries like Thailand and significant parts of Africa. If you like, the so-called second and third world had a greater capability to handle this change, and the first world didn't. That's a fascinating issue to start a conversation on, I would think.
Richard Yonck
I agree. I think there are a lot of variables in there that have to be separated out in the course of that exploration and conversation. But yeah, I think it's really fascinating. I think people are probably going to be dissecting that one for many years or decades to come.
Peter Hayward
Let's go to the last question. I'd love you to talk about the book.
Richard Yonck
Certainly. So, Future Minds is an outgrowth of a range of the work, the writing, and the research that I've been doing over the years, and it's actually a book that I'd had in mind for almost two decades. This was a book that I wanted to write. When I was looking at it back then, there were a number of connections that still weren't quite fitting together. Then, earlier this decade, there was research, insights, and breakthroughs that seemed to tie things together in the ways I was really looking for. So the book "Future Minds", as we've talked about, starts from a really big view. In the course of wanting to explore the future of intelligence, I felt that to begin with I had to understand what intelligence is. This is something that we think we know, or we think we understand. But really, when you start trying to get people to define it, it covers the gamut. In the course of doing research, I ran across more than 200 definitions of intelligence. And whether we're talking about the intelligence of a single-celled organism, the intelligence of certain types of AI, the intelligence of a grandmaster chess player, or the intelligence of an octopus, these are all very, very different. And yet there has to be some kernel, something that's linking all of this. To my mind, it goes beyond just our use of language. So in exploring it, I took a very, very big view, and built out some ideas about how this is generated by the universe, again and again and again, in the course of its running down: even as thermodynamics drives the general trend of increasing entropy in the universe, it also creates these pockets of complexity. And it does this across all scales in the universe. Now, we had to move through some periods to get here. We had to have the development of elements through nucleosynthesis in the stars. 
We had to have the beginnings of life before we could begin to have animal intelligence, and so forth. So there had to be these prior structures that were built upon, again and again and again. And there's this increasing general trend of intelligence that occurs. Now, I don't have in front of me the exact definition of intelligence that I apply in the book, but it involves this idea that, in order to survive and replicate, those things that are best able to anticipate and perpetuate themselves into the future are those that are going to survive the best. Those that are able to alter, affect, and interact with their environment in a way that allows them to perpetuate: it only naturally makes sense that those are going to be the systems that continue on into the future. So, something I'm playing with is the idea that, in many respects, all of these different systems have climbed this wall of intelligence over the years, this increasing complexity that has developed across the 13 or 14 billion years of the cosmos, and that these systems are themselves becoming more and more capable of understanding and interacting with the future. And that's where we find ourselves today. We're in the unique position that we actually get to think about this and reflect on it. We're not only intelligent enough to know what we're doing, but to actually influence it. And because of that, I'm hopeful that this will allow us to survive some scary times and move into the future in a way that will allow us to perpetuate as a species, co-evolving with technology, well into the future of the universe.
Peter Hayward
There's the aspect of intelligence. And there's this notion, particularly this human notion, of what we call wisdom. Where does wisdom fit in the equation, so to speak?
Richard Yonck
It's a really interesting question, Peter, and one that one of your guests, Tom Lombardo, has brought up with me at length. This is an aspect of our intelligence that, to be fair, I haven't adequately explored in my books, partially because the concept is itself worthy of its own book. It's huge. But I would offer, off the cuff here, that wisdom is essentially an outgrowth or an extension of our intelligence. It involves and revolves around a lot of self-reflection, something that's essentially iterative: being able to look at a lot of different aspects of ourselves, internally and externally, to be able to understand what we can and should become. But I think it's a really, really big question and not one that I'm prepared to answer adequately, I'm afraid.
Peter Hayward
I mean, for me, the differentiation I have, Richard, is that, to me, wisdom is conveyed in hindsight. In other words, wisdom is granted to us by the future, because we cannot know in advance if something is wise. The consequences of what we choose, or fail to choose, produce the future conditions that are then used, in hindsight, to judge whether our acts were wise or not.
Richard Yonck
It's an interesting perspective. I can see how it applies. Personally, I would say that we then take, and when I refer to iterative, I think this ties in, we take that past-viewing perspective on wisdom and apply it to our future choices and decisions, in the hope that it leads to a wise choice, a wise decision, a wise future.
Peter Hayward
Wasn't that the core of what the Brundtland Commission meant when it talked about a just future? It talked about future generations having at least the same options that the present generation has.
Richard Yonck
Yeah, I think that's probably one of the biggest things that we have to work on, in terms of the story we were talking about earlier, going forward: to be able to see ourselves as stewards of the future, to be able to work toward a world that is adequately considerate of those who come after us. We have to exist, we have to be happy to a degree in our current world, but we can't do it at the expense of future generations. We have to consider our place and their place in this whole scheme as we build this future together.
Peter Hayward
Well, Richard, it has been a blast to talk. I've thoroughly enjoyed it.
Richard Yonck
Myself too.
Peter Hayward
It's always great to talk to a Futurepod patron and say thank you for your support. Because it does mean a lot to us at Futurepod. So, on behalf of the community, thanks for taking some time out. And good luck with your book.
Richard Yonck
Thank you, Peter. I appreciate it so much and really wish you luck with Futurepod. It's an awesome series and I can't wait to hear who comes next.
Peter Hayward
This has been another production from Futurepod. Futurepod is a not-for-profit venture. We exist through the generosity of our supporters. If you would like to support Futurepod, go to the Patreon link on our website. Thank you for listening. Remember to follow us on Instagram and Facebook. This is Peter Hayward saying goodbye for now.