Can You Hear Me?

Why Your Human Voice Matters in an AI World

Episode Summary

The use of artificial intelligence has skyrocketed for businesses as they try to figure out how to incorporate this revolutionary tool into their day-to-day operations. AI now drives content creation, research, and on a broader scale, technology. However, we need to remember it lacks a human component, a component that is now seen as vital for many company leaders. Join “Can You Hear Me?” co-hosts Rob Johnson and Eileen Rochford as they explain “Why Your Human Voice Matters in an AI World.”

Episode Notes


Recommended Reading

Episode Transcription

Eileen Rochford: [00:00:18] Hello everyone, and welcome to another episode of Can You Hear Me? I'm Eileen Rochford, CEO of the marketing strategy firm The Harbinger Group. [00:00:25][7.7]

Rob Johnson: [00:00:26] And I'm Rob Johnson, president of Rob Johnson Communications. Today we're diving into a topic at the forefront of technology and humanity: why your human voice matters in an AI world. As regular listeners know, we are fascinated, Eileen, by the advent of generative AI, and we're interested in discussing how it can be used most effectively. In fact, as you know, this is now our fourth podcast devoted to the subject. With the rise of artificial intelligence, there's a growing concern about losing our human touch, our authenticity. AI is undoubtedly a powerful tool for efficiency and content creation. However, it is essential to remember that it is just that: a tool. [00:01:05][39.1]

Eileen Rochford: [00:01:05] More than that, right? I think we can't say it enough, honestly. The rapid pace of adoption, and the consumption of energy involved in the generative aspect of AI, remains scary, and everyone needs to be reminded of the point: it's to be used by people like us, by humans, period, as a tool, not a replacement, because of all the challenges and risks wrapped up inside artificial intelligence. That's what we want to talk about today. [00:01:42][36.9]

Rob Johnson: [00:01:42] Yeah, and we're going to get into that a little bit as well. Because as people sit there and think this potential is unlimited, which is true, if you think you're going to turn over the keys to your kingdom to a machine, to machine learning, while amazing, the point you were just making a second ago is that you really need to have human eyes and ears on it, making sure that it all makes sense and that it adheres to the policies and standards you have as an organization. [00:02:16][33.7]

Eileen Rochford: [00:02:17] Yeah. Getting too wrapped up in the excitement of this and the possibilities is where the danger really lies. Anything with the potential power that artificial intelligence represents, should it go unchecked and untethered, the use of it can go in some very dark directions really fast. I think many of us know that. But that's why you started off by saying we can't say this often enough: it needs to be used only as a tool and not a replacement, and the presence of our humanity, which we'll talk about how to do in a moment, is so required. And I think people might be getting a little numbed to all the headlines; every single day you read about another debacle or scandal in the use of AI, things that may appear to be not truly dangerous. Like, I think it was Sports Illustrated's publication of AI-generated articles under made-up bylines, ghostwriters in the truest sense, right? Those authors didn't exist. That was just a few months ago. I think it was in November when that broke. [00:03:26][68.5]

Rob Johnson: [00:03:26] And that brand was hurting anyway, and that almost drove it into the ground, because, you know, now they're basically doing away with it. [00:03:35][8.3]

Eileen Rochford: [00:03:35] Yeah. It's so sad. Yeah. [00:03:36][1.5]

Rob Johnson: [00:03:37] But that's where the cynicism of people looking to cut corners can really harm you, and you bring up a great example with Sports Illustrated, Eileen. Another example I want to give is somebody I spoke to in the last six months, a marketer, who said, I just got downsized, I just got let go; they're using AI to do my job. Like, a chief marketing officer job. And I'm like, excuse me? Well, if you're that shortsighted, good luck with that. Can it help with a lot of your tasks as that CMO? Absolutely. [00:04:13][35.9]

Eileen Rochford: [00:04:14] Sure. [00:04:14][0.0]

Rob Johnson: [00:04:14] Can it take over for you and be effective? If you're willing to have somebody be in charge of it all the time and put that human context into it, it might have some useful moments. But you can't just turn it over and say, go do that project, go do this, go do that. To me, that was this whole idea of AI run amok. [00:04:36][21.8]

Eileen Rochford: [00:04:36] Yeah, absolutely right. Rampant, unchecked work, no doubt about it. That's surprising. I haven't heard too many examples of senior positions being eliminated. [00:04:45][9.1]

Rob Johnson: [00:04:46] I heard that one story and I was stunned. I was absolutely stunned, and that's the truth. I sat there and asked a lot of questions. I'm like, are you kidding me? I dove in with a bunch of questions, because it just seems, you know, ill fated. [00:05:02][15.8]

Eileen Rochford: [00:05:02] Yeah, absolutely ill fated. Just a perfect description. I would not think the future is very bright for any organization making decisions like that in these nascent stages of adoption of the tools that AI represents, even in marketing. So that's very surprising to me, but a great cautionary tale of what you should not do as a leader of any organization. [00:05:25][23.4]

Rob Johnson: [00:05:26] And you know what I said to this woman I was speaking to? I was like, do you really want to be part of an organization that does that? Even though she was reeling from, hey, they laid me off. [00:05:37][10.5]

Eileen Rochford: [00:05:37] Sure. [00:05:37][0.0]

Rob Johnson: [00:05:37] I'm like, but if they're heading in that direction, it's not going to end well. [00:05:40][3.0]

Eileen Rochford: [00:05:41] So it's a great point. Yeah. [00:05:42][1.3]

Rob Johnson: [00:05:42] At least your fingerprints aren't going to be on that potential disaster. [00:05:46][3.3]

Eileen Rochford: [00:05:47] Yeah. And that's exactly what we want to talk about today: raising the point, particularly for organizational leaders, that AI is not your answer. It's not going to solve all your problems. It's not the way you're going to cut all of your costs. So stop that thinking right away. We have a lot to share on why, the things to look out for, and how to avoid making those mistakes. All right. We recently came across a very insightful article in the Harvard Business Review, and I believe it was just in the last couple of weeks that it was published. [00:06:21][34.7]

Rob Johnson: [00:06:22] I think it's the most recent issue. It was... [00:06:24][2.4]

Eileen Rochford: [00:06:24] February, March. [00:06:25][0.3]

Rob Johnson: [00:06:25] March, yeah, February, March. [00:06:26][1.1]

Eileen Rochford: [00:06:27] And the title of the article, we'll put this in the show notes, of course, but just in case you're a subscriber and you want to look for it right away, is "Bring Human Values to AI." And how coincidental that we had planned this topic and that broke. [00:06:42][15.5]

Rob Johnson: [00:06:42] We were looking around for a resource, you know, resource information, and I was like, oh, God, there's one right there. [00:06:48][5.4]

Eileen Rochford: [00:06:48] Well, to me, that just reinforces the need for this discussion today, right now. So that article, "Bring Human Values to AI," emphasizes the importance of maintaining our human voice in this age of AI. The article highlights how AI can enhance decision-making processes and streamline operations. But it also says, and I quote, it "seeks to identify the challenges that entrepreneurs and executives will face in bringing to market offerings that are safe and values-aligned. Companies that move early to address those challenges will gain an important competitive advantage." End quote. [00:07:28][39.3]

Rob Johnson: [00:07:29] Values-aligned. There we go. We're talking about your organization. We're talking about your voice, the authenticity of the way you do business. And it can't be just checking a box or one size fits all. And I think that speaks a little bit to the quote you just read. [00:07:45][15.9]

Eileen Rochford: [00:07:45] Absolutely. Values alignment is a key component of all the challenges discussed in this article. Would you like to discuss what values alignment means now, or shall we hold that until after we go through the elements? [00:07:57][12.6]

Rob Johnson: [00:07:58] I think we should tee it up a little bit here, but I think it'll make more sense to everybody once you hear a little bit more about this. As we dive a little deeper into this Harvard Business Review article, you're going to see, oh, gosh, I should think about that, or I should think about this. And we're not going to get into every nuance of the article; you can read it if you really want to dive in. But it also leads us, and I think this is interesting, I don't want to get too far here into how you as an organization should be running your AI protocol, for lack of a better term. Do you agree with that? [00:08:32][34.2]

Eileen Rochford: [00:08:32] Yeah, absolutely. [00:08:33][0.4]

Rob Johnson: [00:08:34] So here's the question: how can we ensure that we don't lose our human voice in this AI-driven world? The article lists six challenges for those wanting to bring that human component to the discussion, and they can be pretty technical as well. But for purposes of this discussion, let's boil it down to defining and writing values into your AI program. So you have this program, but how is it going to reflect your values? This is aligning your partners' values, ensuring human feedback, and preparing for surprises. The partners can be people that are helping you create this AI product. Partners can also mean people that you do business with, because you're perhaps going to be creating some content or doing research for them as a client. How is that going to align with the values that they have, and are you all on the same page? I didn't want to go deep, point by point by point, because there are some really important things we want to get to. But I just wanted to make sure you understand that the human component needs to be built into what you're doing. Whenever you're creating your plan, your program, you need to make sure that that's evident. [00:09:50][75.6]

Eileen Rochford: [00:09:50] So it's kind of like when we talk about baking values into whatever tool you're utilizing that's artificial intelligence based. You also need to have, or think of it maybe as, ethical guardrails. And I believe the article even used that phrase. Ethical guardrails is a thing I've heard in a lot of places. Going back to how your company makes decisions, ethically based decisions, you have to determine how to translate that into any artificial intelligence tool that you're using, so that it helps you avoid any pitfalls that could be created. There are so many, but ethical guardrails is a really good way to think about it. And getting the tool or tools that you're utilizing to understand what values your organization espouses in all operations and functions, and how those values need to be reflected, as well as what the values of your customers are. Bringing both of those into, I guess, the programming, for lack of a better word right now, of the artificial intelligence tool that you're utilizing can be really helpful. It might be able to save you from probably 80% of potential problems, but it's never going to replace the human gut check, the human impact. But this is moving at a really, really fast pace in the marketplace right now. For example, one of the big dangers that everyone talked about from the beginning is how artificial intelligence, particularly generative AI, has the power to create dangerous content, threatening content, things like hate speech, things like production of malware that can take down a company or a government. And those guardrails, not just ethical but also technical, really are being introduced pretty quickly. Like GPT-4: last I read, it was a really high number, something like 82% less likely to produce threatening content of the type I just described than its predecessor, which is good.
I'd like to get to 100%. Yeah, that would be pretty cool. [00:12:11][140.7]

Rob Johnson: [00:12:11] It's like, hey, we're, we're almost there. It's like, you know, for something like that, let's get all the way. [00:12:15][4.1]

Eileen Rochford: [00:12:16] Yeah. But that's a great example. Just think about ChatGPT and that organization and how far along they are. They've been doing this for so long, and they're still only able to get the latest iteration of their tool to be 82% less likely to do that. So remember that the producers, the creators of these tools themselves can't get their arms around this fast enough. So if your organization is relying on those tools in how you're leveraging AI across your operations, you're unprotected, at least as far as GPT-4 goes, by about 18%. That's a margin of error that I'm not comfortable with. [00:12:59][43.6]

Rob Johnson: [00:13:00] That's a very high mark. Yeah. It's a. [00:13:01][1.3]

Eileen Rochford: [00:13:01] Very yeah. That's very high. Yeah. So again, I think that, you know, as we said earlier, this this show, this the point here is to keep saying it is a reminder to everybody. Don't get numbed to the excitement. You have to stay really, really vigilant and, carefully select the ways in which you're going to utilize anything. In terms of generative AI in particular, just be really, really careful and constantly monitoring and having a lot of human beings inside of your organization closely monitoring how it's being used and what's being produced when it is used, and preventing a lot of that to go to market until you're 100% protected, not just 82%. [00:13:45][44.3]

Rob Johnson: [00:13:46] You are 100% right about that. And basically what I've been saying is check your work, recheck your work. Because the thing we learned early on, when we started doing this about a year ago, is that if you don't put in any specifics and just say, hey, write something about the Can You Hear Me podcast, and I remember when you did this just to check it out, it was spitting out just absolute garbage, because it had no guidance. And so what we've learned, just as people that talk about this a lot and have this podcast, and I think a lot of people understand this, is that whatever tool you're using, the more information you put in there, the better product you're going to get. Better product, not best product; 82% less, not 100% less. That's the point. So I think what you're saying is you've got to check your work. I've typed in a bunch of information that was very specific for what I wanted, and you can get the worst-case scenario, one of the scenarios you were talking about, but you can also get kind of mealy-mouthed nothingness even when you think you're being specific. And I know this to be the case, because I've done it, where I thought I was very specific, and I would see it come back as I'm creating content from time to time using it, and I would say, well, that's not right, that's really general, that doesn't say anything. I guess that's why you can't assume, even when you are specific. [00:15:17][91.0]

Eileen Rochford: [00:15:19] Yeah. [00:15:19][0.0]

Rob Johnson: [00:15:19] You can't assume it's going to have the voice, the standards that you want. And that's why, to Eileen's point, whether it's you or your team, eyeballs need to be looking at this, making sure that it all makes sense and that it is values-aligned, as we said earlier. [00:15:38][19.0]

Eileen Rochford: [00:15:39] Yeah. Things are accelerating in terms of capability, and you're reading about that every day too, which might lead you to think, oh, okay, well then we should be using this, and it's got to be close to 100% accurate. Do not make that assumption, and do not make that mistake. Put in place the guardrails in your organization that will protect you proactively. So on that note, we want to make this next point. All these issues being raised in the HBR article beg the question, particularly for those of you who are leading organizations or companies: what is your documented policy for AI usage and utilization? In other words, does your company have a formal AI policy? You need specific guidelines for how you and your teams and departments will use it. And if you don't have one, what can go wrong? There's a ton that can go wrong. So let's sit here for a minute and talk about that. [00:16:37][58.6]

Rob Johnson: [00:16:37] Yeah. When we started discussing this as we were putting this episode together, you were very clear on having a documented AI policy and how important it was. And I was like, well, I think this is a perfect opportunity for you to share what you're doing at The Harbinger Group with our listeners, because you all are utilizing it so much. And we had some of the folks that work with you on the show almost a year ago, talking about what works and what doesn't work. So I know it's been front of mind for you, top of mind for you. What is your policy, and what are you hammering home to the people on your team? [00:17:12][34.0]

Eileen Rochford: [00:17:12] So it's really simple. And frankly, our team came to this conclusion together, and it became really obvious after even just a few months of, I'll call it dabbling with these various tools, dedicating lots of time to learning all that we could about them, and following all the most significant influencers in the space, because they're way ahead of us on this: what are they learning? So we took a cue from many of those folks and then made our own conclusion, which is this, and it's very simple. We decided that we are a human-being-first organization. That's it. We start real work, whether that's writing or recommendations or strategy, from our own hearts and minds first. And the way we bring in the power of artificial intelligence right now on this team is in two aspects of our business. First, research: help us do research better and more efficiently, get us more resources, get us a synthesis of information, perhaps faster than we've been able to do in the past. Because in our business, research is truly the foundation of every marketing house that we build. You start there, gather all the information, synthesize it, then analyze it so that you know where to take your recommendations and your strategy. So we decided we use it to research. And second, to organize. After we've done some extensive research on any given topic, we feed a lot of that information, even facts summarized from articles from Harvard Business Review, for example, into our tool of choice, whichever one that may be, and it varies by team member, I've got to be honest, and then ask for structure or organization of that immense amount of material. That gets us a massive running start, way down the field, farther and faster than if we had done it manually
and everyone on the team had had to consume all that information and draw those conclusions. So literally, we use it to find all the good stuff, and then we check against it. We go and see: if I were doing this research myself, what would I have come up with? And then we compare against it to see: is it flawed? What did it miss? Is there a critical element here that this summary is based on that was completely left out, because the tool was told to do something only partially right? Because we make errors in the inputs, the prompts that we give to the different artificial intelligence tools, all the time. That's why people are being trained right now as AI prompt writers, and those jobs are in high demand, right? [00:20:10][177.8]

Rob Johnson: [00:20:10] So basically you're saying, if only you had AI to tell you what to do. [00:20:13][3.5]

Eileen Rochford: [00:20:14] There you go. Now that's a whole other plane, and I don't even want to go there; my head just can't wrap around it. [00:20:19][5.1]

Rob Johnson: [00:20:19] I know, me neither. [00:20:20][0.8]

Eileen Rochford: [00:20:21] So essentially, we decided that the best way for us to do what we do is to gather information, to organize the information and create structure with it, and from then on, the human beings take over here at The Harbinger Group. And there's one other thing that I'll add. So that's basically our documented approach, if you will, to the use of artificial intelligence altogether here at our company. The other thing I want to mention is that we never, ever, ever create anything that doesn't then have a human being check it. And we've said that a lot, whether subtly or overtly, in the now four episodes we've done on this topic. But I think that is another critical piece of a documented policy. You have to say it, because, interestingly, I'd say the danger resides at any level, whether you're talking about an entry-level person, a mid-level person, or a really, really experienced person, in our industry and probably in any industry. We are all new to this. This is all exciting. It's kind of like a new toy, and we rush in without the deep thoughtfulness that's required in incorporating any of these tools into our operations and business, departments, divisions, whatever you want to call it. So you've got to write it down. You've got to make sure everyone understands: this is our policy, our framework for decision making in the usage of anything related to artificial intelligence, so that you're not making assumptions, because assumptions are going to leave you really vulnerable to the risks that artificial intelligence represents to any company, any company at all. And right now there is a real demand, I'll say, in PR, advertising, and digital marketing for agencies to divulge how they're using artificial intelligence.
And the transparency associated with divulging that information is having an impact on credibility and reputation. So if you're choosing not to reveal that, ask yourself why, right? Because maybe at this point you should be revealing it; you don't want to be overusing it without your customers knowing it. [00:22:44][143.2]

Rob Johnson: [00:22:44] As you sit here and explain your policy, you talked about research and organize, and then you said the last piece is ask for structure, and then the humans take over. You touched on it a little bit, but what does that mean, asking the generative AI for structure? Is that just an outline in general? [00:23:04][20.3]

Eileen Rochford: [00:23:05] To organize information, okay? That's really what it means. Hierarchically, whatever way we want it organized, we give it that request. Sometimes it's time based: tell us what happened in the last year and a half related to this issue; give us the timeline, the breakdown. Or organize these thoughts in order of importance based on x, y, or z input. [00:23:30][25.2]

Rob Johnson: [00:23:31] Is there anything else we're missing? It's pretty simple. You've outlined very clearly what your expectations are as a leader. [00:23:39][8.2]

Eileen Rochford: [00:23:39] Right. And for now, that's the best guidance we can give. And, you know, I'm just doing it the way I think is good for us and for our clients. But any leader of any organization right now needs to keep it simple, stupid, for real, because if you get too mired in this, you won't be able to cut through it and the chaos will just continue. You need to boil it down to something really simple, maybe along the lines of what we did, which is: we can use it in these ways, and anything that falls into these categories is okay. Have a human check it, especially any major deliverable, to make sure it isn't B.S., because it's still producing a lot of B.S., and you might miss it because you're believing all these headlines about the pace of acceleration of accuracy. Don't do that. So make sure you boil it down into: here are the ways we use it, and these are all okay as long as we have these double checks in place. But at what point do human beings take over in your organization and really deliver the product you're creating? You need to have that documented as well. And don't forget that whole thing about whether you are divulging how it's being utilized. I heard somebody say something that was pretty interesting. It was like, well, do we divulge when we use spellcheck? Because that's basically what this is, right? And I'm like, no, it's not basically what this is. But it's kind of like when spellcheck was first introduced; maybe it was a little bit like that. The repercussions of the usage of generative AI in particular are so huge that you really do need to sit back and be honest and forthright about the usage of it. And gut check yourself: how is this impacting things for us or our clients in a way that they should know about? And share that. [00:25:31][112.0]

Rob Johnson: [00:25:31] Yeah, no, it makes perfect sense. And you outlined it in great detail. And I'm glad that when I asked you that question, you were like, here's how it works for us, and, I don't know if it's the right way to do it, it's just the way I'm doing it. That's why you're the CEO. So I don't want to get too simplistic here toward the end of the podcast, but AI, as it's currently configured, again, needs a human touch. If you use AI to create content, you must ensure the human voice is evident in the editing process. Get rid of what I was talking about before, the generalities. Even when you put structure around it, even when you put detail around it, sometimes there are generalities, and it sounds like, well, anybody could have done that. So add your voice to give it the humanity that some AI content lacks. And that leads me to a point I've made time and time again, and I think it's very important to make here with you. When people would say, oh gosh, AI is developing week by week, our heads were spinning, right? And everybody would say, well, gosh, Rob, you're a consultant, a content creator, what are you going to do? And I'm like, and I know you feel the same way: we're getting paid for our ideas. There's a machine out there that can try to emulate something, but the journey I went on for 30 years, that started in journalism and now is in communications consulting, is unique to me. The decisions I offer up for my clients are based on the experiences I've had. You can try to trick that or game that, but that's really why I'm bullish on our futures, and I'm bullish on the fact that humans are always going to matter in the realm we're in, Eileen. Yeah, we could just go type in some things and AI could spit something back out.
But even though it's improving, and even though they say someday it's going to have the experiential thing that I have as a professional, or you have, I'm like, we'll be the judge of that; given how it's gone so far, maybe or maybe not. We're getting paid for our ideas, and I think that's a really important point to drive home when we talk about this human touch. [00:27:36][124.1]

Eileen Rochford: [00:27:36] Absolutely. That's spot on. Without the human touch, the injection of humanity and all the things that make us human into content, it's going to fail. People are going to get so tired of the repetition, the boringness of the content that AI creates, which frankly we're already seeing. You've probably seen posts on LinkedIn, which I certainly have very recently, where anything with certain words that are completely overused by generative AI tools, any post that has that, I just flick right on by. Forget it, not interested, not even the next one, because it's not real. This isn't a "birds aren't real" moment; this is a "words aren't real" moment. So don't believe it, just scroll on by, as I say. [00:28:27][50.5]

Rob Johnson: [00:28:27] Even if the bird is the word, don't believe anything. Hey, I want to talk real quickly, and this is related in some form or fashion. You and I were recently at the Communication Leaders of Chicago AI event at DePaul University, and I thought it was a wonderful event. It speaks to how academia and PR professionals are all trying to figure out how this fits in our world, and part of that discussion is how we keep the human component in it. Didn't you think that was eye opening, and just a great, robust discussion? [00:29:00][33.3]

Eileen Rochford: [00:29:01] Yes, I did. The research in particular that was conducted before the event, amongst CLC members, attendees, etc., was really interesting. And I know there is a link to that research on their recently launched website, which is very nice. Ginger Porter talked about the upcoming website when we were there, and now it's real. So we'll put the link to that research in our show notes, because I think it's very worthwhile for anyone, particularly anyone in marketing, grappling with the usage of AI right now. You'll get a feel for how lots of others in leadership positions in this industry are thinking, feeling, and doing when it comes to AI. So you'd probably benefit from giving that a gander, and we'll link to it. I was particularly heartened by the repetition of many themes, such as this injection of human voice, and the spirit of creativity, the essence of creativity, being the thing that will win out above all in creative industries like ours. Very heartening, and just good to know that we're all thinking along those lines. And frankly, in no way do I think we're just drinking the Kool-Aid and wanting to see that that's the case. I concur with you, Rob, that it's absolutely true. But that's why we're having this discussion right now. Because in order for that to be true, the presence of humanity, whether in communications or the creation of products, has got to remain. And it's the job of leaders of organizations and companies to make sure that's what happens. [00:30:40][99.0]

Rob Johnson: [00:30:41] Can I get an amen? Okay. Perfect. That was terrific. [00:30:45][3.9]

Eileen Rochford: [00:30:45] Let's get an amen. I love those, and. [00:30:47][1.7]

Rob Johnson: [00:30:47] We have some great show notes. We've got the Harvard Business Review, we've got the Communication Leaders of Chicago. I'm glad we were able to talk about this here at the end and throw that in there, because I thought it was worthy of talking about, and I'm glad we'll be sharing it with our listeners. [00:31:00][12.5]

Eileen Rochford: [00:31:00] Yeah. I'm just going to close by saying, don't stop thinking about this. Don't get complacent. We all have to remain super vigilant, and it's particularly incumbent upon those leading organizations and companies to do so. [00:31:12][12.0]

Rob Johnson: [00:31:13] It's obviously on our minds. As we mentioned, this is our fourth episode on the subject, and when it's time for a fifth one, whenever that is in the coming months, we'll do one as well. [00:31:23][10.2]

Eileen Rochford: [00:31:23] Yes we will. [00:31:23][0.3]

Rob Johnson: [00:31:24] This isn't going away, and we're wide open to this. And I'm really happy to know that about us. [00:31:28][4.2]

Eileen Rochford: [00:31:28] Yeah. We're going to stay on the big issues; don't you worry about that for a second. So that's going to do it for another episode of Can You Hear Me? I'm Eileen Rochford. If you would like to weigh in on this podcast or give us an idea for a topic for a future show, please reach out to us at our new Can You Hear Me podcast page on LinkedIn. [00:31:47][18.5]

Rob Johnson: [00:31:48] Yes, and it's growing by the day, which I'm very happy to report. And I'm Rob Johnson. We do thank you for listening. If you like this show, please consider giving us a review on any of the platforms where you find Can You Hear Me? That's Apple, Spotify, and much more. Your reviews help other potential listeners find the show. Thank you so much. [00:31:48][0.0]
