In this episode, Amanda speaks with Rose Genele about how we can navigate the ethics of using Artificial Intelligence. Rose is a transformative leader and AI ethics advocate committed to creating ethical, resilient companies and technologies, and the host of the What Are We Going To Do With All This Future podcast.
Interviewed by: Amanda Reeves
References
What Are We Going To Do With All This Future podcast: https://rosegenele.com/wawgtdwatf-podcast/
Website: https://rosegenele.com/
Website: www.theopeningdoor.co
Contact Rose
Email: rose@rosegenele.com
Email: rose@theopeningdoor.co
LinkedIn: https://www.linkedin.com/in/rosegenele/
Transcript
Amanda Reeves: There's no doubt that AI is quickly working its way into the nooks and crannies of our daily lives. How do we make good use of this technology and the opportunities that it offers? How do we account for human bias informing AI? With concerns about the ethics of how models are being trained and the environmental impact, what does responsible AI use look like?
I'm Amanda Reeves, your FuturePod host today.
Rose Genele: I like to think of AI as sort of a mirror of us as humans, of humanity. The bias that we carry and that we hold and that we know that we have, it inherently is going to be present in our AI, in our AI systems, in our models, in our tools. And so what can we do to account for that? You know, how do we make sure that we align AI with human values? How do we decide what our collective human values are? If our intelligence stays at the same level, but AI is increasing, how then are we going to continue to manage AI and to oversee AI? There's so much to consider, and I love that about it. There are so many challenges. There are so many complex ideas to dive into and to explore.
Amanda Reeves: That's Rose Genele, a transformative leader and AI ethics advocate committed to creating ethical, resilient companies and technologies and my guest for today's episode.
Welcome to FuturePod Rose.
Rose Genele: Thank you so much for having me. I'm excited to be here.
Amanda Reeves: It's great to have you here. So for our listeners who might not have come across you or your work before, what's the Rose Genele story?
Rose Genele: That is such a cool question, and a great way for me to try to think about who I am and what my story is. So I consider myself an entrepreneur and a creative, and I am data inclined and futures focused. I have a consulting practice. I have a podcast. I had a nonprofit that I recently wound down. I'm somebody that likes to have projects. I'm kind of here, there and everywhere, but there are some consistent threads throughout it all. And so that's who I am and a bit of my story. My early life was very music heavy in terms of singing and performing. I continued that through my early years, into high school and into early undergraduate, where I played instruments, mainly woodwinds, and did more singing and that sort of thing. And then when I graduated from uni, I did a Bachelor of Commerce in law and then did a thesis in ethics. And so I thought maybe I want to go to law school, but let me work at a couple of law firms first to see if that's something that I want to do. I did that for a couple of years and decided absolutely not. There is no work-life balance, people seem so miserable here, and there seem to be such clear tiers of who is at the top of the food chain and who's at the bottom. It just felt like it wasn't something that I wanted to pursue. So I eventually pivoted into tech and then more recently decided to start my practice and go it on my own.
So that's a little bit about me and who I am.
Amanda Reeves: One of the things I love about working in the futures space is, sometimes I refer to myself as being an uber generalist. I love that it's such a big and expansive sort of space that you can take it in lots of different directions. It's great for people who have lots of different interests.
I'm really interested in that kind of move that you made from, music to then law to then tech. And I'm always interested in those sort of transitional periods. Can you tell us a bit about what was happening or what was calling you between those connections?
Rose Genele: I learned that I didn't like group projects because people were not pulling their weight. Folks weren't serious. Everything was a group project. And I'm thinking, I don't need to be doing work with others at all times. Like, I just want to do my own work.
So I actually shifted into law and I was really, really excited and I did super well. And I still love that analytical side, that research-based side. And so it's always at the back of my mind that I might go back to that at some time. But you know, doing that right out of school was something that was exciting for me and that I enjoyed. I think when I realized that I didn't want to go to law school, I kind of had a couple of years where I was fumbling around a little bit and trying different jobs and doing different things and not really finding anything that could hold my attention for long enough. I felt like I wasn't being challenged.
It was too easy. I was getting bored, and I even found myself in a situation where I was looking for other projects within the organization I worked for, and my manager did not like that at all. It was very much, this is your job, this is your role, this is what you need to do.
Just focus on that and stop trying to do other things. And I thought, oh, I've got to get out of here. And that was a large, very established organization. So that's what kind of turned me to, maybe I should try tech. Maybe I should try a startup, a smaller organization that's not as established, where there's a little bit more flexibility.
And so that's kind of how I got into tech, and really realized that I love data. I think it picked up on that analytical side for me. And again, that independent work, like everything didn't need to be in groups, you know, I could work async with others. And so that's how I kind of made that shift. And then I more recently have found myself in the futures world. And it's so funny, because whenever I talk to others who are in futures or strategic foresight and they ask me, how did you get into this space? Or I ask them, how did you get into the space? It's always like, oh, it was completely accidental. Which makes me feel so good, because I absolutely accidentally fell into this space, but I'm happy to be here. And like you said, it's so wide and so broad that there's really space for everybody. Your background doesn't have to be, you know, a certain prescribed background for you to be legitimate or, you know, have value.
And so I find that exciting and encouraging.
Amanda Reeves: I really relate to that sense of just unexpectedly falling into a space. Hearing you describe that path, it sounds like it's been less about the content, obviously there's something about the content of what you're studying and all those areas you're working in that attracts you, but it feels like those enabling conditions, the supporting environment, the what does my life look like in this space, seem to be a really important consideration.
Rose Genele: Yeah, I'm fairly confident that we only live once, at least in this specific body, in this specific, you know, context. And so I want to make sure I'm happy and I'm enjoying myself and I'm optimizing, and I realize that it can be a different approach from the way that others approach their life, but I have over time learned that, you know, that satisfaction, that happiness, that comfort, that ability to grow and really learn continuously is something that is essential for me, and so I have prioritized that over, yes, the specific content or, you know, area that I'm working on or in.
Amanda Reeves: I'm interested in how you approach your work, can you tell us a little about what maybe some of those core tools or frameworks or methods might be that you draw on in the way that you practice?
Rose Genele: Yeah, so my practice has two sides to it. I have a revenue side, and then I have a transformation side. On the revenue side, I really focus on using a wide variety of tools. You know, there are different systems that people have their organizations built around, and there is such a wide variety of tools that people can use, depending on their systems and depending on what type of, you know, workflows or automations they're looking for. And so on the revenue side, I usually start with the data center for people, which, if you're not in a very tech-heavy or IT-heavy industry, is usually a CRM. And so, you know, a wide variety, all of your Salesforces, your HubSpots and all that good stuff. And I make it a point to really try to stay on top of new tools that emerge, because they're popping up every single day. I work mainly in the B2B SaaS space. And so, especially with the rise of AI, there are so many, you know, marketing tech, sales tech, revenue tech tools that are popping up, like I said, literally every day. And so I try to be as agnostic as possible. I try to, you know, ensure that I'm able to kind of pick up any new tool as it comes. I find that there are sort of your basic building blocks with a lot of these tools, and so once you kind of have a bit of proficiency in one, it's really easy to translate it to another. And then on the transformation side, that is where the AI practice sits, where I focus on responsible AI and supporting companies that want to build, deploy and use AI in a responsible way. A component of that is governance, risk and compliance, and so on that side of the practice, there are different frameworks and different standards that we use. There's regulation that has been proposed or passed that we are keeping an eye on, that is eventually going to come into the space where you need to comply with it.
It's no longer voluntary. And so with such a fast-growing space as AI, you can imagine that's another area where I have to continuously be on top of the new frameworks that are coming out, the best practices, what people are suggesting that folks focus on. Some of the more popular frameworks that people might be familiar with are the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles.
And so those are some of the tools and the frameworks that I use across both sides of the practice.
Amanda Reeves: I'm really interested in this idea of responsible AI. And yeah, I'd love to hear a little bit more from you about it. What does responsible AI look like? What feels important for you when you're thinking about responsible AI?
Rose Genele: Yeah, so I love responsible AI because I get to pull on my ethics background, and responsible AI is really the practical implementation of these ethical concepts and principles. And so it's looking at how do we make sure AI systems are transparent? How do we make sure that they are fair? How do we make sure they are explainable? How do we ensure that they're not biased? And that they are not destroying the environment? And so looking at, you know, on an organizational level and then also on a systems level, the ways that we can take those principles and make them, you know, practical. And again, with the way that the AI industry moves so quickly and is continuously innovating, there are different methods and tools that you can use. There are these subsets of ethics and responsible AI, such as AI safety, technical AI safety, AI governance. And so I think the industry is trying its best to find ways to kind of guide us all in how we approach building, deploying and maintaining AI systems. And while that is a focus and is, you know, clearly important, I also want us to focus on the organizational level, making sure that we have governance in place so that we know how to manage these systems, we know how to make decisions around these systems. And so that's kind of where I spend time and focus within the responsible AI world.
Amanda Reeves: I'm interested in what you might be seeing as you're looking around this very fast moving landscape. What are you noticing that's shifting, or what's catching your attention at the moment?
Rose Genele: Yes. So because I was just speaking about AI, I'll start there.
I am so interested in the way that it's really changing our entire world, and I think I'm a bit biased because I work in AI, so I'm even more aware of it. And maybe others are not quite seeing the same changes or the speed of change. But I think when I even think about things like education, there is, one, this increased demand for digital literacy, it's absolutely essential at this stage. So there is a need for, you know, general digital literacy, but also specifically, I think, a need for upskilling and reskilling as it relates to AI proficiency. With that, what I'm starting to see is an expansion of ways to learn, a shift away from a reliance on traditional academia. And so there are online programs, online courses, micro-credentials, you know, there are open source courses that you can take from various prestigious universities.
And so I'm seeing how that is becoming more prevalent and more needed. I think what's also interesting, related to that, is that companies are starting to realize that they have a responsibility, and/or it's in their best interest, to help people upskill. So you have organizations like Walmart having a Walmart University, or, you know, others where they are taking responsibility for educating their workforce to ensure that they have the supply for the demand that we know is coming and will be our reality soon, where almost every single organization and/or every single tool or system has some component of AI, and people need to know how to manage it and maintain it.
So I think that's a really exciting and interesting trend. With that, and related to education, I feel like the younger generations are always, you know, a little more tech savvy than the ones before them, but especially as it relates to AI, I'm hearing about this growing over-reliance on AI that I find a little concerning, you know, the reliance on AI for very basic tasks or rudimentary decision making. And I think the concern for me there is that we are not exercising critical thinking and/or not building that muscle, which is so essential: being able to make, you know, objective assessments and evaluations and come to decisions or judgments or conclusions without the assistance of, or the need for, some sort of external tool or source telling you what to think or how to think.
And so, while there is so much value that we get from AI, and that young people are realizing from AI, I think there's also this flip side that we need to be aware of and perhaps put some attention to.
Amanda Reeves: I can hear that ties back into what you were talking about with thinking about, you know, how do we make sure our AI is rigorous as well? Like, how do we account for the bias? Because obviously if AI is based on things that we've written and things that we've said, you know, people are inherently biased.
So I'm so interested in how we sort of work with that. And you're absolutely right, that critical thinking piece is so essential. But if we put that aside and trust, you know, a hallucination or a compilation of what other people have said without that context... I find that such an interesting space.
Rose Genele: Yes. Yes. It is an area of growing concern, and I think rightly so. I like to think of AI as sort of a mirror of us as humans, of humanity. Like you said, you know, the bias that we carry and that we hold and that we know that we have, it inherently is going to be present in our AI, and so in our AI systems, in our models, in our tools. And so what can we do to account for that? You know, how do we make sure that we align AI with human values? How do we decide what our collective human values are? These are the questions that people in AI safety are, you know, trying to answer.
They're trying to guide us towards a clear definition. And I think that, you know, there is a responsibility for us to think about these things, especially as we think about this concept of human intelligence presumably, generally, staying at the same level, while artificial intelligence is increasingly and continually becoming more capable, smarter than humans, you know, scoring better on different tests, whether they're math based or chess and all these other things. If our intelligence stays at the same level, but AI is increasing, how then are we going to continue to manage AI and to oversee AI? And so we have to find ways to, and this is called, you know, AI alignment, where we find ways to basically encode in our AI the values that we have as humans, so that we can ensure that we are, you know, building these safe systems that are not going to, you know, infringe on any of our rights, so we're not worried about any existential risks. There's so much to consider, and there are so many areas to get into when it relates to AI. And I love that about it. There are so many challenges. There are so many complex ideas to dive into and to explore. I'm very much a challenges-based person. I like problems. I like to try to solve problems; once it's easy, you know, it gets very boring.
And so I think that's what really draws me to AI, because there's so much, you know, not problems necessarily, but so much opportunity and so many challenges. And so that's the excitement.
Amanda Reeves: I imagine the fast moving part is also important for you.
Rose Genele: Yes, the fast moving part. Yeah. And I think that it's, you know, it's important that there are kind of these early adopters and people who are making career shifts to really focus fully on AI because there are going to be those of us that maybe are not terribly, you know, technologically savvy or particularly digitally inclined.
And we want to make sure that nobody gets left behind. Right. And so there's a need for us all to be involved in this kind of group project. Back to the group project, this human group project that we're in. Collectively, we need to help each other, protect each other and you know, like I said, make sure no one gets left behind.
And I, especially when I think about developing nations, populations and countries there's also a consideration there that we are making sure that there is access to emerging tools like AI and that there's access to the ability to learn and to improve literacy around AI.
Amanda Reeves: If I can be a bit provocative, I'm thinking about this idea of making sure there's access and making sure no one's left behind in this large group project. I'm also thinking about how, you know, often when we see these big trends, we also see counter trends. So there'll be people who are very excited and moving towards AI, but then there are also going to be people, you know, sort of our modern-day Luddites, people who might have concerns or, you know, are trying to actually create a bit of distance and don't necessarily want to be engaging with AI.
Are you seeing anything interesting happening in that space?
Rose Genele: Yes, especially from the ethical perspective, there are folks who, you know, are aware and are not okay with it. And, you know, I'm one of those people, although I feel like I straddle the line. We know that a lot of the data, if not a majority of the data that some of these AI systems are trained on, is scraped from the web, and there's no consent and/or there's infringement on copyright. I know that artists have been one of the first groups to really be affected and to really start to raise the alarm about the unethical gathering and scraping of data. More recently, I think just last week, we saw, you know, LinkedIn had opted everyone into having their data used to train its models. And so there are people who are saying these are inherently unethical systems and tools, and I do not want to use them, I do not want to be a part of this. And as somebody who focuses on AI ethics, there is a fine line I feel that I am toeing as it relates to staying up to date and abreast of all the changes in the tools and, you know, the different happenings in AI, of course needing to be familiar and comfortable with using AI tools, and also keeping in mind those ethical considerations, where it's like, you know, at some point we have to hold these AI developers and companies accountable for the way that they have gathered this data, the way that they have scraped this data, you know, without the consent, without the compensation. It's unfair and it's wrong, you know. And so that's one area that I have heard very clearly.
Another area is looking at it from an environmental perspective, where the immense energy demands of the data centers that are required to support AI systems are fueling the resurgence of fossil fuels and dirty sources of energy. And it's like,
Amanda Reeves: Mm.
Rose Genele: this is not okay. You know, we cannot, and perhaps should not put. You know, the need for energy consumption over the quality of our environment, because we actually live here in the 3D where the environment is,
Amanda Reeves: Yes.
Rose Genele: So we should probably prioritize it so that we actually have a world to live in to use these tools in these systems. And so that's another perspective that I think is important in that I also hear from folks where they say, you know, I don't want to use AI and or, you know, I'm not interested in really getting too involved in this whole AI thing, so to speak.
Amanda Reeves: Mm. It's so interesting.
Rose Genele: Yes.
Amanda Reeves: I'm curious, because I can feel a tension happening here between, yes, there's a lot of possibility, and yes, we want to be equitable in our access and we want to, you know, make that available, but what does that mean for our environment? What does that mean for our critical thinking and the way we move through the world? What does that mean for the ethics of how this has been created, and the rights of people who have created the work that has trained these models, and how do we use that in an ethical way? Like, I'm really interested in how you're finding that tightrope walk between those tensions.
Rose Genele: Yes. It's definitely something that I am very intentional about. I try to think about when I'm going to use AI and why I'm using AI. I am somebody that defaults to using my own brain as much as possible and not becoming somebody who is over-reliant on AI, you know, where every single time I want to write something I'm running to ChatGPT, or every time I want to find an image for, you know, a post on LinkedIn, I'm running to Midjourney or I'm running to, you know, DALL-E or something of the sort. I try to be intentional and I try to be thoughtful about when and why I'm using AI. That's one step. I also try to consider the models that I'm using. In this industry, of course, there are practices that everybody largely uses, but I try to consider using open source, you know, more than I do closed source. I try to consider what model I'm using and who's created it; if I'm not happy with the actions of that company, I try to, you know, perhaps stay away from using it quite as much. And so there's no easy answer, and this is something that I try to implement in other areas of my life: just because I've made a decision about the way that I'm going to behave or live my life doesn't give me the right to judge others, and I try to keep that in mind. Like, I am a vegan. I don't eat meat or animal byproducts. But that doesn't mean that I need to then, you know, ostracize or judge other people for eating meat. And so, same way, it's challenging. It's tough. You know, everybody has different needs and has different levels of comfort, or,
Amanda Reeves: Hmm.
Rose Genele: And when it comes to AI and AI use, it's, I try to apply the same principles.
Amanda Reeves: And in your own practice, in your own use, when you're trying to be a bit intentional about when to use AI, what are some of the circumstances or context where you might be like, Oh, this is a really good opportunity for me to tap into that as a resource?
Rose Genele: I find that I use AI for transcribing my calls. I find it really helpful to have an AI bot in the call to, you know, not just transcribe, but provide a little bit of a summary. And so I do use it there. Sometimes when I'm doing research, I will use AI. I like to use tools such as Perplexity that are going to give me some of the sources that I can go in and click into and actually do some deeper research.
That way I know that this is not just a hallucination that it's provided me with. Another instance is if I am working on some sort of writing or some sort of text, and I have done my research, I've started to put things together, and I want it to kind of just refine or help me with, you know, creating maybe an introduction or conclusion based off what I've already written. That's another instance where I will sometimes use AI. And so I like to use it as a companion versus relying on it to provide or produce some sort of content or the full output for me.
And another example that I just thought of that I love, and one that I really do use consistently, is creating alt text for my images, because it's so important to have alt text. And so I do use AI for that as well.
Amanda Reeves: So Rose, we've made it to what's probably my favourite question, because I'm still figuring out how to explain what it is I do after all these years. How do you explain what you do to someone who doesn't necessarily understand what it is that you actually do?
Rose Genele: Yes, I absolutely love this question. So I consider the work that I do transformation work. And so the idea is that I am at the intersection of sustainability, ethics and technology. And so the way that I help or provide services on the organizational level, like I mentioned, is through revenue and AI. And then on the personal level, I provide support through, you know, coaching and advisory. And so everything that I do kind of works around those three pillars.
I provide services and support for organizations that want to grow and change. I do that through strategy, operations and technology. And then I support leaders and people through coaching and advisory services, and all that work that I do centers around, like I said, sustainability, ethics and technology. So even on the organizational side, I'm looking at how we make or implement changes that are going to be sustainable, that are going to last, that are not going to deplete resources, whether they're human resources or otherwise, and looking at making sure we're making decisions that are ethical and that are, you know, supportive of general principles, whether it's internal or external stakeholders or just human ethics, you know, that's a consideration.
And then always that technology piece that I feel like is such an enabler and is so valuable in every industry across every organization. And so that's the work that I do, and I just break it up into revenue operations, AI, and then some coaching and advising. So those are the three things that I do.
Amanda Reeves: Rose, I'd love to hear more about the What Are We Going To Do With All This Future podcast that you've been cooking up.
Rose Genele: Yes, I am so excited. This is my latest project. And this podcast is really the way that I accidentally found myself in the futures world, and so it is close to my heart for that reason as well. It really came to be from me thinking about what the future will look like and why we can't decide. Why does it feel like some of these large organizations or these invisible-hand powers get to choose what we do, when we do it, how we do it, what we eat, you know, all of these things? It just felt like, I want more of a say, I want more control or influence over what the future will look like, or, you know, what type of future my children, if I choose to have children, will have, so to speak, right? And so once I started to think about these things... Funny enough, the name of the podcast, I saw it on the side of a building many years ago,
Amanda Reeves: Mm.
Rose Genele: and it was written in kind of the scratchy black writing. What are we going to do with all this future? And I'm telling you, that was like 10 years ago. And so once I started to have these thoughts about the future, and what I'm trying to figure out, you know, how we can be more involved. That photo came back to my mind. And I immediately knew that that's the name of the podcast, that that was going to be the name and needed to be the name. And so since then, I've just been reaching out to folks who are futurists or not, you know, but just people who have taken time to think about the future, and have formulated some thoughts, ideas, or have identified signals and trends just to hear what they think about the different futures we can experience, or we can have and my hope with this podcast is that I can help people open their minds, stretch their brains, and get out of their echo chambers where perhaps they, you know, hear the same things, see the same things, read the same things, you know, talk to people with the same mindset.
I really wanted to kind of pull people out of their comfort zone and challenge them to think more about the future so that we can be more intentional about creating the future. And in that way, we can be more sure that we're going to be happy with the future.
Amanda Reeves: Mm. I love that real focus on agency, it's such a core part of the work.
And for our listeners interested in listening to the podcast, when is it available? Where can they find it?
Rose Genele: Yes. So it's available now, and it is on Apple Podcasts. It is also on Spotify. And so those are the two places right now where you can go to listen to the podcast. There are two episodes up, and we are releasing more each week.
Amanda Reeves: We do have quite a few practitioners who listen to this podcast. If anyone was interested in maybe speaking with you for a future season, what should they do?
Rose Genele: Absolutely, that would be fabulous. I'm so happy that you said that. The best way would be to get in touch with me at rosegenele.com. Alternatively, you can find me on LinkedIn, and if you reach out and let me know that you heard this conversation with Amanda, that would be the best way to get in touch. And I'd be so thrilled to chat with you.
Amanda Reeves: Fantastic. And we'll put links for how to contact Rose and to listen to the podcast in the show notes. Well, thank you so much for making the time to chat with me today, Rose. It's been such a pleasure. On behalf of the FuturePod community, I really appreciate everything you've shared and your really wonderful insights into the fast-emerging world of AI.
Rose Genele: Thank you so, so much for having me. I've had such a good time. This has been an amazing experience. So thanks again.
Amanda Reeves: FuturePod is a not for profit venture. We exist through the generosity of our supporters. If you'd like to support the pod, please check out our Patreon link on futurepod.org. I'm Amanda Reeves. Thanks for joining us today.