Dave Birss says you won’t be replaced by AI – you’ll be replaced by a leader who’s been told the wrong story about it.
About this episode
Dave Birss is back on Business Without BS – author of the Sensible AI Manifesto, co-founder of the Gen AI Academy, and a man who’s taught a million-and-a-half people how to use AI without setting their business on fire. He walks Andy and Andrew through what he calls a “corporate poopocalypse” — what happens when you apply AI to a business that hasn’t cleaned up its own mess.
The episode covers the Sensible AI Manifesto’s six points, the CREATE prompting framework, the three Cs for checking AI output, the adequacy trap, why judgment is the most undervalued skill of the next decade, and the practical playbook for rolling out AI across a team without sending the whole organisation into a panic.
About the guest
Dave Birss co-founded the Gen AI Academy with Helena, where they run AI training across governments, the UN, and Fortune 500 companies. He wrote the Sensible AI Manifesto and GPT Junior, the kids’ AI book and video course now in over 100 schools. Before all that he spent his career in advertising and creativity, which is where most of his frameworks come from.
Key moments
- [02:46] The Roomba poopocalypse – why AI applied to a dysfunctional business spreads the mess, not the productivity.
- [05:46] Corporate barnacles – the institutional plaque costing every business 40% in fuel and speed.
- [08:04] Sensible AI Manifesto Point 1: use AI to augment skills, not to outsource tasks.
- [09:15] The two-list exercise: tasks that piss you off vs tasks you wish you could do more of. Only the second list is the real opportunity.
- [12:11] Getting AI-slapped – 96% of leaders think AI raises productivity, 77% of staff feel buried by unrealistic expectations.
- [13:48] The adequacy trap – why AI users get stuck at “good enough” and never break through.
- [22:51] The other five Manifesto points: use data responsibly, support employees, assign AI leaders, keep learning, always add a human layer.
- [26:40] The CREATE prompting framework — Character, Request, Examples, Adjustments, Type, Extras.
- [37:59] The three Cs for checking AI output: Confirm, Check, Craft. Why most people skip the third one.
- [55:14] How business owners keep their thinking sharp: do the work on paper before you open the laptop.
- [1:01:03] What humans still beat AI at – conceptualisation, creative voice, and judgment. The judgment one matters most.
- [1:14:17] The line that pisses Dave off: “you won’t be replaced by AI, you’ll be replaced by someone using AI.” His correction is sharper.
- [1:18:09] The three-stage AI value pyramid — cost cutting → skill amplification → unlocking what wasn’t possible before. 80% of companies are stuck on stage one.
- [1:24:18] How to roll out AI across a team in an afternoon: align with business strategy, declare an AI amnesty, pave the desire lines.
Mentioned in this episode
- Sensible AI Manifesto — Dave’s six-point framework for applying AI without breaking your business. Currently being turned into a book.
- Gen AI Academy – the training company Dave co-founded with Helena, working with governments, the UN and Fortune 500s.
- GPT Junior – Dave’s book and video course teaching kids how to use AI properly, currently in over 100 schools.
- Perplexity – Dave’s preferred AI tool for fact-checking because it gives you the sources.
- Cal Newport – referenced for the long-form-reading argument and the case that children reading for pleasure is the strongest predictor of life outcomes.
- Range (David Epstein) – the case for generalists over hyper-specialists; Dave says the book describes him.
- Yann LeCun – recently left Meta over the limits of next-token prediction; arguing AI needs world models, not just language.
- Roomba poopocalypse – the family-and-the-dog metaphor that opens the episode and frames the whole thing.
- Marc Andreessen / lump of labour fallacy — the framing for why we systematically underestimate the new jobs that emerge from disruption.
- RAF desire lines – the Nissen-hut path-paving story; Dave's metaphor for letting staff show you how AI is already being used.
- Combinedly – the AI tool Andrew’s firm is testing for client-sentiment analysis and email drafting.
Find the guest
LinkedIn: https://www.linkedin.com/in/davebirss/
Gen AI Academy: https://thegenaiacademy.com/
Follow Business Without BS
Website: https://withoutbs.com
YouTube: https://youtube.com/@bwblondon
Instagram: https://instagram.com/bwblondon
X / Twitter: https://x.com/bwb_london
LinkedIn: https://www.linkedin.com/company/business-without-bs
🎧 Business Without BS — straight talk from people who’ve actually built things.
Transcript
Speaker A:You're not going to be replaced by AI.
Speaker A:You're going to be replaced by somebody in leadership who is trying to cut costs and has believed the story that they've been told is the truth.
Speaker A:The skill that everyone needs to be looking at is judgment.
Speaker A:And what I've been finding is that more and more people are becoming over reliant on the AI tools to do the judgment for them.
Speaker A:And I think that that is a dangerous and scary place to be.
Speaker B:How do you use prompting to sort of navigate a lot of the issues that we've just been discussing?
Speaker A:Most people are pretty bad at prompting, because all prompting is, is writing a brief, and most people are really bad at giving a brief.
Speaker B:Brass tacks.
Speaker B:How does a management team build that kind of plan you were talking about earlier?
Speaker B:Fastest route from A to B.
Speaker C:There's a famous story about a robot vacuum cleaner spreading dog poo all over the house.
Speaker C:That's basically what AI does to a badly run business.
Speaker C:Today on Business Without BS, we're joined by Dave Birss, who explains why AI is a magnifier, why leadership keeps getting it wrong, and how AI should be used to make your business better, not just faster and louder.
Speaker C:Welcome to Business Without BS, the alternative MBA, with me, Andy Uri, and me, Andrew Craig.
Speaker C:I love it.
Speaker C:And today we're delighted to be rejoined by the brilliant Dave Birss, co-founder of the Gen AI Academy and the man behind the Sensible AI Manifesto, which is basically the world's stop-panicking guide to AI.
Speaker C:Dave, welcome back to the podcast.
Speaker A:Thank you very much.
Speaker A:Thank you.
Speaker C:Such a pleasure to have you here.
Speaker C:I think it's marvelous that there's anyone out there, you know, using the words sensible and AI in the same sentence.
Speaker C:You've taught now, what did you say, a million and a half people about AI.
Speaker C:Amazing.
Speaker C:What is the one simple habit you wish every business owner used to think more clearly every day?
Speaker A:I think, well, when it comes to AI, I think it's understanding that it's about people, it's not about the technology.
Speaker A:So many people think AI, well, it's a technology issue.
Speaker A:So let's give it to the CTO, let's give it to the CIO and make it their responsibility.
Speaker A:And that's a ridiculous thing to do because actually they are not equipped for what we need to do with AI.
Speaker A:AI is a catalyst for transformation and that is a human thing within an organization.
Speaker A:And the problem is that so many business leaders, they've got a knowledge gap and they haven't realized that the only way to deal with the knowledge gap is to fill it.
Speaker A:So instead they go with ignorance.
Speaker A:They hand it to the person who's the CTO, thinking it's their problem, and the CTO will look at it from a tech point of view rather than the fact that this is actually about humans.
Speaker A:So one of the issues that we have with this is that organizations, before they start applying AI, need to clean up their mess.
Speaker A:And I don't know.
Speaker A:Have you heard the story about the Roomba poopocalypse?
Speaker A:It was from fifteen years ago.
Speaker C:I need to.
Speaker A:There was a family that had a Roomba.
Speaker A:They had it downstairs, and every single night when they were asleep, the Roomba would come on and it would clean all their carpets.
Speaker A:They'd come down for breakfast.
Speaker C:Oh, the vacuum cleaner.
Speaker B:The Roomba.
Speaker B:I was literally about to interject to say, for people who aren't aware, the Roomba is a little automatic robot vacuum cleaner.
Speaker A:Yes.
Speaker A:It looks as if they're about to go bust, but, you know, that's the situation at the moment.
Speaker A:So the Roomba vacuum cleaner would come on, the little robot would clean the carpets.
Speaker A:Absolutely fantastic.
Speaker A:They come down every morning, clean carpets, until one night, their dog was feeling a bit ill and did a shit in the carpet.
Speaker C:Oh, my God.
Speaker C:Yeah.
Speaker A:And then the Roomba came to life and spread it evenly across all of the carpets in the downstairs of their house.
Speaker A:And when they came downstairs the next morning, it was not quite the usual experience.
Speaker B:Not pleasing.
Speaker A:No.
Speaker A:And this is the problem that we've got when it comes to AI, is that so many companies are not cleaning up their shit before they apply the technology.
Speaker A:And it's really, really important because AI is an amplifier.
Speaker A:It's an augmenter, it's a magnifier.
Speaker A:It speeds things up.
Speaker A:And if you start applying that to an organization that is riddled with dysfunctions, as most organizations are, then it's just a poopocalypse.
Speaker B:Was that the expression?
Speaker A:Yes.
Speaker A:A corporate poopocalypse is what you'd end up with.
Speaker A:Yeah.
Speaker C:How do you.
Speaker C:When you mean clean up, how do you clean up your shit?
Speaker C:I mean, does that mean getting your data in the right place?
Speaker C:Does that mean telling Susan to stop fiddling around on the Facebook or.
Speaker A:Well, not so much with Susan.
Speaker A:Susan can keep doing that.
Speaker C:Okay.
Speaker A:I mean, if Susan's still on Facebook, she's dysfunctional.
Speaker A:I think she's a lost cause.
Speaker A:But, yeah, it's things like getting your data sorted out.
Speaker A:Making sure that your technology all talks to each other.
Speaker A:But beyond that, it's processes: companies start off with a process, and the process is usually pretty well thought through, and they've got it all sorted.
Speaker A:And then over the months and years, we start bolting these kludges onto them, because Brian over here has got some kind of agenda, and for a political reason he wants to be sent that information so that he can approve it before it carries on.
Speaker A:You start getting these kludges built in.
Speaker A:A new software update comes along.
Speaker A:It doesn't quite talk to everything else, so rather than actually updating the whole thing, it's just easier to get the intern to do that little bit manually.
Speaker A:And again, you've then broken your process, and you get these little things.
Speaker A:I call them the corporate barnacles, because, I don't know if you know, but every few years the big cruise ships.
Speaker B:Get lifted out?
Speaker A:Well, the scuba divers go down and they chisel the barnacles off the propellers.
Speaker C:It slows them, doesn't it?
Speaker A:It costs them fuel, 40%.
Speaker A:If that's built up over a few years, if there's loads of barnacles on a propeller, it will make them 40% less efficient when it comes to fuel and speed.
Speaker A:And that's exactly what's happened to organizations.
Speaker A:They've got these corporate barnacles all over, and if you don't chip them off, but you're putting in a new engine, expecting speed, don't expect good things.
Speaker B:Institutional plaque.
Speaker A:Yes.
Speaker A:Yeah, exactly.
Speaker C:I mean, in terms of what you said, I think that's great.
Speaker C:I think working out where your corporate barnacles are and knocking them off is difficult.
Speaker C:And I think you could probably do a podcast on trying to work that out.
Speaker A:Oh, absolutely.
Speaker C:But I think that's great advice.
Speaker C:I mean, people say that, you know, our CTO here says, look, the first thing we have to do is get our information tidy and joined up and in the same place, you know.
Speaker C:So that's right.
Speaker C:Get your data right and then look at your existing processes and don't just sort of chuck it on top is really the point, you know?
Speaker A:Exactly.
Speaker C:Map it all out.
Speaker C:Brilliant.
Speaker B:So, you know, you created the Sensible AI Manifesto to help with the fact that small businesses are panicking about just what a big, scary, gnarly issue it is.
Speaker B:What does a sensible use of AI look like for a small business?
Speaker B:Not, you know, having just focused on the quality of data and, yeah, corporate barnacles and whatever else.
Speaker B:How is a small business distinct from a bigger one?
Speaker A:Well, to be honest, a lot of them suffer from the same problems regardless.
Speaker A:It's just the scale of the problem.
Speaker A:The way that a lot of organizations have been led to believe that AI helps them is all about efficiency, it's all about productivity.
Speaker A:It'll help you do things faster and cheaper.
Speaker A:And that's a bit of a red herring.
Speaker A:But of course, the reason that the AI companies have tried to market their product that way is because they know that that's what the decision makers want to hear, is, hey, can you cut our costs by 20% this quarter?
Speaker A:Fantastic.
Speaker A:Wouldn't that be amazing if it could happen?
Speaker A:But it doesn't work like that.
Speaker A:So there's a whole list.
Speaker A:It's best if I bring up my website so I can actually bring up what the different points are in the manifesto.
Speaker C:Yeah, let's hear them.
Speaker C:I'd like them.
Speaker A:So the first one is use AI to augment skills.
Speaker A:What's happening at this moment is that leadership is trying to cut costs.
Speaker A:What they're doing is instead of looking to augment skills, they're looking to outsource tasks.
Speaker A:That's a crappy thing to do.
Speaker A:So instead it's about augmenting skills.
Speaker A:It's about looking at the talent, the knowledge, the ability, the passion that your workforce have.
Speaker A:How can you use AI to multiply that?
Speaker A:That's where the real unlocker is with AI.
Speaker A:Anything else?
Speaker A:Well, I mean, you know what it's like as an accountant that very often you've got a corporate scalpel and what you're trying to do is you're trying to make cuts, but you can only make so many cuts before you damage what's on the slab.
Speaker A:And that's the thing.
Speaker A:There's only so many cuts you can make.
Speaker A:And AI can make some cuts, but as we're saying, it can magnify problems on the way to do that if you're not sorted out.
Speaker A:But the real opportunity is not in cutting, but it's in growing.
Speaker A:And that's what you do when you augment your staff, your team.
Speaker B:So how does.
Speaker B:Sorry to interrupt.
Speaker B:How does a business decide what tasks are ripe for AI and which aren't?
Speaker B:What's the sort of in-and-out criteria?
Speaker A:That is going to be different for every company.
Speaker A:But there's ways that we can look at this.
Speaker A:We can look at this exercise I do with teams when I go in and I get them to create Lists, two lists.
Speaker A:The first list is the tasks that piss them off because they don't want to do them.
Speaker A:So we make that list first.
Speaker A:And then the second list is the tasks that piss them off because they wish they could do more with them, and they just don't have the time or the opportunity.
Speaker A:And these are the opportunities for AI.
Speaker A:The first one is the one where most people in leadership think the opportunity is.
Speaker A:That's okay.
Speaker A:It's about, let's get rid of all of the admin so we can speed people up and get them to do more of the actual stuff that we earn money from.
Speaker A:And they look at that, but actually there's only so much you can do there.
Speaker A:And that's not the big opportunity.
Speaker A:We definitely should do that first, but the big opportunity is in the stuff that people wish they could do more of, that they wish they could spend more time on.
Speaker A:But when we do that, we have to look at processes.
Speaker A:And this is what AI is good at, is when you look at an entire process rather than necessarily individual tasks.
Speaker A:When people look at a task, very often they get it wrong and they think that something's a task when it isn't a task.
Speaker A:So, say, for example, creating a podcast or writing a blog post, that is not a task.
Speaker B:That is a series of underlying steps, series of subtasks.
Speaker A:So if you're writing a blog post, your sub tasks are going to include doing research, coming up with a point of view, then sort of doing a bit more research to back up that point of view.
Speaker A:Then you're going to write bullet points of all the things you want to address in the blog post.
Speaker A:Then you're going to write a first draft of it, then you're going to go back and you're going to edit it.
Speaker A:All of these are subtasks.
Speaker A:Yet the way that most people, when they're dealing with AI tools is they go, write me a blog post.
Speaker A:And it doesn't work like that.
Speaker B:You need to be more granular.
Speaker A:The best way to do it is subtasks.
Speaker A:And then once you've got your subtasks, you work out, is that something that the AI can do most of?
Speaker A:Is that something that I do in conjunction with the AI as a partner, a thinking partner?
Speaker A:Or is that something that's human and AI shouldn't be allowed anywhere near it?
Speaker C:How do you.
Speaker C:How do you choose?
Speaker C:Is there any sort of rules of thumb you can use to say, oh, that's not an AI thing, that is an AI thing?
Speaker A:Most people when they're doing the task, they will know.
Speaker A:So.
Speaker A:So, for example, if I'm writing a blog post, when it comes to coming up with a point of view and making a decision on that, that's me.
Speaker A:I'm not going to let an AI do that.
Speaker A:I'm not going to let an AI pass judgment on something.
Speaker A:So it's that kind of thing.
Speaker A:But also some of it is trial and error.
Speaker A:So I might imagine that an AI would be good at some stuff, and it's not.
Speaker A:So there is this weird thing about technology, is that the stuff that you think is pretty easy is sometimes very hard, and the stuff you think is very hard can sometimes be very easy and you don't actually know until you try it out.
Speaker B:Have you come across the idea, I think I'm going to say this wrong, of being AI-slapped? By which people mean that you're sitting in whatever corporate role you're in, and because it's now so much easier for people to just produce stuff, reports and research and blog articles, everyone in their role is being deluged by way more stuff than they were in the past, because everyone outside the organization who used to send them stuff can produce ten times more.
Speaker C:So true.
Speaker A:Yeah.
Speaker B:Is that, Is that.
Speaker B:How do you navigate.
Speaker B:How do companies navigate that problem?
Speaker B:Is that a real problem?
Speaker A:It is a real problem.
Speaker A:There was some research that was done a year and a half ago.
Speaker A:I think I may have talked about it in the last podcast about business leaders.
Speaker A:And I think it was 96% of business leaders believed that AI was going to increase their productivity.
Speaker A:And then it was about 77% of people felt that that had resulted in them being given too much work to do with unrealistic expectations.
Speaker A:Now, there's a reason for that.
Speaker A:And one of the reasons for that is what I call the adequacy trap.
Speaker A:So adequacy is this point in your career where you're producing work that, when you hand it to your boss, they go, yeah, that'll do.
Speaker A:You're adequate.
Speaker A:It's that minimum level of competency.
Speaker A:Now, it took me months or years to become adequate at what I did.
Speaker A:In that time, I was embarrassed.
Speaker A:I was, you know, I'd go away, I'd do research, I would ask people.
Speaker A:And because of that, that equipped me.
Speaker A:I built resilience and it equipped me to break through that adequacy line towards something that was hopefully relatively good.
Speaker A:Hopefully at times and places.
Speaker A:The problem is with AI now, everyone starts at adequacy and they haven't had all that painful journey, which is like the butterfly coming out the cocoon.
Speaker A:If you help the butterfly out of the cocoon, it can't fly, because actually part of the struggle is what gets its blood flowing, pumping into its wings.
Speaker A:So it's like that.
Speaker A:If you start at adequacy, relying on AI tools, you're kind of stuck there.
Speaker A:You're not equipped to keep flying higher.
Speaker A:But if you think of that adequacy line, one of the things that happens is people in leadership don't tend to know what the people who are actually doing the work are actually doing.
Speaker A:So they'll go, well, say, for example, it's the pharmaceutical industry and it's doing some copywriting for drugs, whatever.
Speaker A:And somebody high up in the company goes, well, I just put some information into ChatGPT and it wrote me something.
Speaker A:It looked fine.
Speaker A:That's because their understanding of that is so low, they don't do this job, and they've not built the judgment.
Speaker A:So they're looking at adequacy, they're looking up because their ability is so low, and they're looking up at adequacy and going, that's amazing.
Speaker A:I could never do that.
Speaker A:Because they're not trained in copywriting.
Speaker A:They don't understand what it is.
Speaker A:But all the expert copywriters, their level is up here, and they're looking down at adequacy, going, that's crap.
Speaker A:I would never let that go.
Speaker A:It doesn't sound right.
Speaker A:There's no emotion to it.
Speaker A:And so they're looking at it from a point of expertise.
Speaker A:And it's not to say that the manager's bad at their job.
Speaker A:It's just they're bad at empathizing and they're bad at understanding what the teams are actually doing.
Speaker B:But that's a big systemic challenge for business generally, isn't it?
Speaker B:Because how do you navigate, how do you then discern when a business is using AI really badly?
Speaker B:Because that's happening all the time.
Speaker A:It is happening all the time.
Speaker C:I would also say even as an expert in tax, it can give me an answer.
Speaker C:It sounds very persuasive.
Speaker C:I find myself going, you know, I'm often on the phone call because the client's trying to tell me the same thing.
Speaker C:And I'm like, yeah, I know it's wrong, but it's so persuasive the way it's Written.
Speaker C:There's something going on in my brain.
Speaker C:It's a bit like, you know, everything you see written in text, we're always like, well it must be true.
Speaker C:I saw it in text, isn't it?
Speaker A:And the thing is we trust technology because previously technology was deterministic.
Speaker A:And when something is deterministic it means that we will get the same result every single time.
Speaker A:And that's what we've been used to with technology.
Speaker A:AI is not deterministic, it's probabilistic.
Speaker A:Basically it's a guessing machine.
Speaker A:It's guessing the next word or the next chunk of a word.
Speaker A:That's all it does one after the other.
Speaker A:So what it's going after is fluency and plausibility, and that takes precedence over accuracy.
Speaker C:It doesn't even take precedence.
Speaker C:It doesn't know what accuracy is.
Speaker C:Someone said to me, it doesn't actually ever know if it's accurate.
Speaker C:It just, it knows how probable it is almost, isn't it?
Speaker C:You know, I'm 98% certain this is the right word to put next.
Speaker A:You know, and there is a confidence setting that there's some experiments I've been doing recently, playing with the confidence settings and you can put this into a prompt and say that I want you to give me facts about the moon Titan, give me 20 facts and give me a percentage of your confidence level on this fact.
Speaker A:And it can kind of calculate that.
Speaker A:So it'll give you these confidence levels, and then you can say, for anything under a 95% confidence level, I want you to delete that fact and give me a new fact to replace it.
Speaker C:Wow.
Speaker A:And so you can create these self-improving prompts that lead to things that are more likely to be true.
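Dave's confidence-level trick can be sketched in code. This is an illustrative outline only, not anything shown in the episode: the prompt wording, the 95% threshold, and all function names are assumptions, and you would wire the generated prompts into whichever chat model you use.

```python
# Sketch of the "self-improving prompt" loop: ask for facts with a
# self-reported confidence score, then send the low-confidence ones
# back to be deleted and replaced. All names and wording are illustrative.

def build_fact_prompt(topic: str, n: int = 20) -> str:
    """First prompt: request n facts plus a confidence percentage for each."""
    return (
        f"Give me {n} facts about {topic}. "
        "For each fact, give a percentage confidence level that it is true."
    )

def split_by_confidence(facts: list[tuple[str, int]], threshold: int = 95):
    """Separate the model's facts into keepers and ones to send back."""
    keep = [fact for fact, conf in facts if conf >= threshold]
    redo = [fact for fact, conf in facts if conf < threshold]
    return keep, redo

def build_retry_prompt(redo: list[str], threshold: int = 95) -> str:
    """Follow-up prompt: delete low-confidence facts and replace them."""
    listing = "\n".join(f"- {fact}" for fact in redo)
    return (
        f"You rated these facts below {threshold}% confidence:\n"
        f"{listing}\n"
        "Delete them and give me new facts you are more confident about."
    )

# Example with made-up model output (fact, self-reported confidence):
facts = [("Titan has a thick atmosphere", 99), ("Titan has active volcanoes", 60)]
keep, redo = split_by_confidence(facts)
followup = build_retry_prompt(redo)
```

Note the caveat from the conversation still applies: the confidence figure is itself a probabilistic guess by the model, so this loop makes answers more likely to be true, not guaranteed.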
Speaker A:But there's some of the things we have to understand about it.
Speaker A:People are throwing any question at it.
Speaker A:And for you, it'd be more helpful if it was an LMM, a large mathematical model, rather than a large language model.
Speaker C:Yeah, we really.
Speaker C:We're talking about LLMs all the time, aren't we?
Speaker C:I was going to say there's so many other forms of AI.
Speaker B:Yeah.
Speaker C:If someone's a trainee, should you be saying you shouldn't use AI, because you need to get to base camp on your own?
Speaker C:You know, what should you never give AI, and who should you never give AI to?
Speaker C:Do you know what I mean?
Speaker A:I think that the first thing you should never do is use it as an emotional support.
Speaker C:I find it incredible at that though.
Speaker C:What about it?
Speaker C:I know there's been some terrible incidents.
Speaker A:Terrible incidents.
Speaker A:I mean, the number of suicides.
Speaker A:There's been people who have been institutionalized because of what we call AI induced psychosis.
Speaker B:Do you think though, I mean, it's early days, right?
Speaker B:To a certain extent.
Speaker B:But do you think that that sort of individual probably would have come a cropper from some other.
Speaker A:They're more susceptible, but it's not necessarily that they would have come a cropper elsewhere.
Speaker B:It's been the catalyst.
Speaker A:It's been the catalyst where people have had a certain weakness, I would agree.
Speaker A:But, for example, with some of the stuff with AI-induced psychosis, they tend to be people who are, you know, tinfoil hat types.
Speaker B:Yeah, yeah.
Speaker C:What does that mean again?
Speaker B:That means conspiracy theorists.
Speaker B:Aliens are going to come and, and.
Speaker A:What's happened is they've said these things to the AI tool and because the AI tools are sycophantic, it's a real problem is sycophantism.
Speaker A:So they are so agreeable.
Speaker C:Yes.
Speaker A:They compliment you so much.
Speaker A:That's a great idea.
Speaker A:And yes, well done.
Speaker C:You should chuck yourself off the bridge.
Speaker A:And there is.
Speaker C:The world would be a better place.
Speaker A:There are elements of that, where that has pretty much been the conversation they've ended up having with some people.
Speaker A:But for people who are going down some kind of conspiracy rabbit hole, it's basically patted them on the back.
Speaker B:Self-reinforcing, yeah.
Speaker A:You've spotted stuff other people aren't noticing.
Speaker A:And people have basically believed they're gods, or that they're getting messages.
Speaker B:There must be a way of incorporating guardrails into that.
Speaker B:I mean, that can't be that difficult.
Speaker B:There'd be a whole set of interactions that, you know, the system would be able to highlight, as in any other area of guardrails.
Speaker A:In the last two weeks, OpenAI have actually put stuff in for that.
Speaker A:Now it is not supposed to give you medical information or legal information.
Speaker A:Right.
Speaker A:So if it does, it will probably give you caveats and say see a proper expert about this.
Speaker A:But that's something they're starting to add in.
Speaker B:Although I've had that naturally from ChatGPT in the past where I've been asking it about pharmaceutical drugs and biotech stuff and it said be aware I am not a qualified medical professional, blah, blah.
Speaker C:I would agree in terms of things that happen.
Speaker C:So you can't say publicly, use ChatGPT for emotional support.
Speaker C:But you know, almost to Andrew's point, someone who's reasonably stable.
Speaker C:We have a mental health crisis, and we all need someone to talk to.
Speaker C:It could be incredibly helpful, and, you know, it gives my wife great solace.
Speaker B:She'd choose AI over you.
Speaker C:I'm not joking.
Speaker C:I come home and I take it in the neck for an hour about stuff, and I make no difference.
Speaker C:I'll try and say something positive, you know, but I'm not going to fix her problems.
Speaker C:I'm a grown adult, I'm a grown man.
Speaker C:I know how this shit works and I'll make no dent in this thing.
Speaker C:I'll come back an hour later and she'll go, oh, I feel much better.
Speaker C:I was like, oh, thanks, babe, I'm glad the talk helped.
Speaker C:Oh, no, I've been chatting to ChatGPT.
Speaker C:Oh, it's brilliant, it's what it said.
Speaker C:And I'm like, what, man?
Speaker C:You know, it's like, let it be your husband.
Speaker C:I was like, can you do that before I get home?
Speaker B:You know, I mean, this might cause you to be suicidal at this rate.
Speaker A:But the problem with all of this is dependency, and that's what we want to avoid.
Speaker A:The problem is when there's no cap on the amount of time that you're speaking to the AI tool, then you're not getting that feedback.
Speaker A:You know, there's a guy called Aza Raskin who invented infinite scroll on, I believe it was, Instagram.
Speaker A:And it used to be, when Instagram was first launched, you would get 10 pictures, then to reload you'd press the button.
Speaker A:Now if you think about that, you've got a glass of rum and Coke and you get to the bottom of the glass of rum and Coke and then it's like, do you want a refill?
Speaker B:You have to think about it.
Speaker A:Yeah.
Speaker A:But instead, what happens with infinite scroll is that there's no conscious decision in there.
Speaker A:Instead you're just lying underneath a wine barrel and switching the tap on.
Speaker A:And that's, that's the issue that we have.
Speaker A:And again with AI, you could talk for days.
Speaker A:As long as you've got the money in the account to have those conversations, it will keep going.
Speaker A:And there are risks for people who can become dependent on that.
Speaker A:It's something that I am personally very conscious about, how much time I spend on it.
Speaker A:And this is somebody who teaches this stuff.
Speaker B:You can fall down the rabbit hole yourself, even though you're the one teaching it.
Speaker B:Yeah, exactly.
Speaker A:Yeah.
Speaker A:So there we go.
Speaker A:Use AI to augment skills.
Speaker A:That was the first one; the second one is use data responsibly.
Speaker A:That's something that we certainly need to be looking at.
Speaker A:We know there's people who upload stuff to the AI tools that they shouldn't be uploading, legally.
Speaker A:Be ethical.
Speaker A:We're starting to see court cases come through at the moment.
Speaker A:They're only really starting to hit.
Speaker A:There was a big court case result that came out about three weeks ago: Getty Images against Stability AI.
Speaker A:Because when you prompted for images from Stability AI, you would sometimes get a Getty watermark on them.
Speaker A:And really surprisingly, Stability won the case.
Speaker A:Wow.
Speaker A:Because the judge had done some research and went, well, look, what you're kind of imagining here is that these AI tools go to a filing cabinet and go, right, you want a penguin, right?
Speaker A:Penguin.
Speaker A:Penguin, Penguin.
Speaker A:Emperor penguin.
Speaker A:Emperor penguin, right there, isn't it Emperor penguin.
Speaker A:And you want it holding a shopping bag.
Speaker A:Okay, shopping bags.
Speaker A:Image of a shopping bag, right?
Speaker A:And you want it to be Justin Bieber.
Speaker A:Bieber.
Speaker A:Bieber, Right.
Speaker A:There we go.
Speaker A:Now let's put those together into an image.
Speaker A:And it doesn't work like that.
Speaker A:It doesn't store independent images.
Speaker A:You can't go in and say, give me Van Gogh's Starry Night.
Speaker A:It can't do that.
Speaker A:It doesn't have that in there.
Speaker A:But what it does, it's got data points that are connected to each other, mathematical sets that are inspired and influenced by those images.
Speaker A:Which means that the judge was saying, well, that's exactly what happens with humans.
Speaker A:We are inspired by stuff.
Speaker A:So there is like, be ethical.
Speaker A:But businesses, you shouldn't have to be told this, but businesses need to be told this.
Speaker A:The next one.
Speaker A:Support your employees.
Speaker A:So that's not just about supporting them financially.
Speaker A:We also have to look at supporting them emotionally.
Speaker A:Because if you don't tell employees what your plans are for AI, you leave a knowledge gap.
Speaker A:And nobody fills knowledge gaps with positivity.
Speaker A:So it's just good leadership.
Speaker A:Assign AI leaders.
Speaker A:If there's not somebody who is assigned to that, you know, nothing's really going to happen.
Speaker A:And I've been finding that.
Speaker A:Keep learning, because this stuff is changing all the time.
Speaker A:I think the person who's your AI leader probably needs to be assigned to keeping up with this stuff and have that responsibility.
Speaker A:And then the last thing is always add a human layer.
Speaker A:And that is that before anything goes out there, it has to be run through one of these spongy processors rather than just coming straight out of an AI tool.
Speaker A:Bang, online.
Speaker B:That's quite a big bottleneck to a lot of the things that people perhaps aspire to use it for.
Speaker B:Right.
Speaker A:This is the whole thing.
Speaker A:Somebody has to be accountable for any work that's going out there.
Speaker A:And I've got lots of organizations who are coming to me that are basically expecting we can just put that into the AI and then it goes out there.
Speaker A:And then I say, okay, we could, but who's going to be accountable?
Speaker B:Especially when it's a customer interfacing function, Right.
Speaker B:And it's like, yeah, do you really want to do that?
Speaker A:So who's going to be accountable for that then?
Speaker A:And at that point they're like, oh, but the AI?
Speaker A:No, a human needs to be accountable for that.
Speaker C:I mean, ultimately you got to read what you send out, haven't you?
Speaker B:Well, I mean, I've always been a believer in that.
Speaker B:And also in reading the stuff that people send you, which very few people seem to do anymore.
Speaker C:Yeah.
Speaker C:Anyway, we will move on.
Speaker C:They are excellent points.
Speaker C:Do you think you're going to add to that manifesto?
Speaker A:I'm currently rewriting it.
Speaker A:Yeah, I'm in the process of turning it into a book as well at the moment, so I'm talking to a couple of publishers about that.
Speaker B:So Dave, I know that a big part of your work is around prompting and the merit of prompting and how kind of key that is.
Speaker B:So, you know, is there a formula?
Speaker B:How do you use prompting to navigate a lot of the issues that we've just been discussing and you've just articulated?
Speaker A:Yeah, I mean, absolutely.
Speaker A:Most people are pretty bad at prompting because all prompting is, is writing a brief.
Speaker A:And most people are really bad at giving a brief.
Speaker A:That's what it comes down to.
Speaker A:And a lot of people still seem to have this misunderstanding of thinking of it like a search engine, where you just put in something short, bang, you get a result.
Speaker A:But that's not the way that it works with AI tools.
Speaker B:But is that the most nutshell way of sort of thinking about it?
Speaker B:Because I was thinking about this, having listened to some of this stuff before, and without wanting to belittle or trivialize it, it's almost as simple as: you can write long-form whatever you want to the AI, to ask a very detailed, qualified, nuanced, multi-part question or give it a task.
Speaker B:And is that the easiest way of kind of summarizing?
Speaker B:It's not like Google.
Speaker B:Because in the old days on Google, you'd try to be really short and sweet and pithy, because you thought that would work better.
Speaker B:But with AI, you can just write a whole page of A4, saying I want this and this and this, with loads of elements to it, and it is able to cope with that.
Speaker B:Is that one way of looking at it?
Speaker A:Well, like humans, the more barriers and limitations and things that it needs to consider that you put in, the less likely it is to give you great results.
Speaker A:So if you're trying to be really nuanced about stuff, the problem about nuance is you've boxed someone into a linguistic quirk and you totally limit what can be done.
Speaker A:So when I worked in advertising and I would get a client brief, if it felt as if it was written by a lawyer, with clauses and words that were picked for their nuance, I would hand it back and go, I can't work with that.
Speaker A:You need to write it as if it's for a 12 year old.
Speaker A:Not because I'm that stupid, but because your language has boxed me in and you've taken any opportunity out of giving you anything decent there.
Speaker A:So we have to be careful with the language we use.
Speaker A:So lawyers are probably terrible at this.
Speaker A:But there's a prompting framework that I started to teach just after ChatGPT came out.
Speaker A:And it's not about finding magic tricks within the technology, which is why it still works.
Speaker A:All it is is about getting information out of your head and into the machine.
Speaker A:So it's called the Create framework.
Speaker A:So the C is character.
Speaker A:And for the character, you're going to tell the AI tool what role you want it to play.
Speaker A:And a lot of people will say, well, that's just nonsense.
Speaker A:I mean, it knows all this stuff.
Speaker A:Why would you need to tell it to play that?
Speaker A:Well, doing that is shorthand.
Speaker A:It's shorthand for saying, this is the expertise I want you to draw on.
Speaker A:This is the quality of response that I'm looking for.
Speaker A:This is the perspective that I'm after.
Speaker A:There's a test that I often show people, where you say, I'm getting married in six months and I want your advice choosing flowers for the wedding.
Speaker A:If you just put that in, you'll get a really good response.
Speaker A:You will.
Speaker A:But if you then put in the exact same prompt, and before it you say, I want you to play the role of an experienced florist with a reputation for creating Instagrammable displays.
Speaker C:Great.
Speaker A:Then I'm getting married in six months.
Speaker A:You can see that that will give you a very different approach.
Speaker A:Then if you do that again, put in the same core of the prompt.
Speaker A:But before it, you say you are an accountant with a reputation for never spending more than is necessary.
Speaker A:You know, I'm getting married in six months, tell me how to...
Speaker A:And again, you get a totally different perspective.
Speaker A:So role is really about giving it the perspective you want it to take and saying that this is the kind of quality that I'm looking for.
Speaker A:Now, that character, people tend to put an expert in there.
Speaker A:But it doesn't always have to be an expert.
Speaker A:So I was doing some stuff for a pharma client in the US a couple of years ago, and I was taking them through prompting, and they kept putting in, you're a surgeon with 20 years' experience of diagnosing and treating this illness.
Speaker A:Just like, no, actually for what we're after, that's the wrong person.
Speaker A:It should be.
Speaker A:You are the life partner of somebody who's just been diagnosed with this disease and you've got a head full of questions and concerns.
Speaker A:That's what you're after.
Speaker A:Because that would then give us the insight to understand why these people aren't staying on the drug plan.
Speaker A:Rather, because this person up here, the expert, has got no idea about the lived experience of the person.
Speaker A:So that gives you a totally different approach.
Speaker A:Or you could be surreal about it and say that I want you to play the role of an electron passing through the CPU of a computer that's doing such and such a process.
Speaker A:Tell me what you see as you pass through.
Speaker B:Wow.
Speaker A:And you will get interesting results.
Speaker C:I need to do that.
Speaker B:One of my overarching thoughts, just as you start on with character, is how much of this is kind of first-principles thinking from you as a former creative.
Speaker B:Because a lot of it's very subjective and capricious.
Speaker B:Like the examples you've just given require intellect and lateral thinking rather than any sort of prescriptive objective, whatever.
Speaker B:And so perhaps, once you've run through what CREATE is, but bigger picture, what I wanted to ask you is: how did you come up with these frameworks, other than just, I'm a creative with first-principles thinking and I'm smart?
Speaker A:I've spent a lot of my career working with briefs and knowing what information really helps spark thinking.
Speaker A:And part of it comes from books that I've written and stuff like that and working in innovation and understanding how to define problems and how to solve them.
Speaker A:So all of that past experience went into this.
Speaker A:So I know how to solve problems and that.
Speaker B:Have you heard of that book?
Speaker B:I think it's called Range and it's like we're entering the new era of the generalists and.
Speaker B:Exactly.
Speaker B:It's like if you can magpie, you know, the most valuable folk and the biggest problem solvers are actually not the super duper expert in a micro micro field.
Speaker B:It's being able to magpie half a dozen things and put them all together logically.
Speaker A:That book really resonated with me.
Speaker A:It was like, holy crap, that's me.
Speaker A:It described me.
Speaker A:So, back to CREATE. C is character.
Speaker A:Next, R is your request, what it is you're asking.
Speaker A:That should be like a sentence or two.
Speaker A:And I use a way of doing this that I call the what-what.
Speaker A:A very British approach, the what-what.
Speaker B:So it's really, what, what?
Speaker A:The first what is you describe, in no more than two sentences, what it is you want it to do.
Speaker A:Because it's a subtask, remember, that works best here.
Speaker A:If you're writing too much stuff, then don't expect great results.
Speaker A:So we're focusing on something that we know what we want back.
Speaker A:So what is the task you want it to do?
Speaker A:And then the second is what do you want it to deliver?
Speaker A:And that's like a list of bullet points of.
Speaker A:I want you to give me a headline, then a 100 word summary and then a table that compares this to this to this.
Speaker A:I want you to give me an interactive chart that I can change to see how this responds to this.
Speaker A:I want you to give me an analysis of that, and so on, all of these things that you can ask for in bullet points.
Speaker A:So that's the request, and then E is examples.
Speaker A:If you've got examples, it's great.
Speaker A:It's great at following examples.
Speaker A:So a good example of examples is writing the subject lines for emails.
Speaker A:If you're doing marketing emails, you go through the emails you've already sent, pick the 10 with the highest open rates, and add those subject lines to a prompt as examples, saying, I've added examples of 10 highly clickable subject lines.
Speaker A:I'm going to give you an email.
Speaker A:I want you to give me five suggestions of subject lines that are inspired by these.
Speaker A:A is adjustments.
Speaker A:Because you're going to have to come back and adjust this.
Speaker A:You can't expect to get it right first time.
Speaker A:So you're going to come back and say, right, I want you to add this to the response.
Speaker A:I don't want you to do this.
Speaker A:So you're always going to be putting those things into your prompt.
Speaker A:T is the type of output, which is actually the second what, really.
Speaker A:Because it's amazing at all the things it can do.
Speaker A:You can say write it in the style of a Hollywood script.
Speaker A:You can say give me a table.
Speaker A:You can say code me an HTML web page.
Speaker A:It will do all of these things.
Speaker A:It's so good at the different types of output.
Speaker A:And then E is extras which are some strange little quirks that you can add in.
Speaker A:Like, for example, did you know... well, it depends.
Speaker A:It changes across tools and it changes as the tools change.
Speaker A:So they're kind of like quirky things.
Speaker A:But there was a period there where if you said to your AI tool, I'll buy you a coffee if you give me a really great answer.
Speaker A:It would give you a better answer because that works for humans.
Speaker A:It's in its training data.
Speaker A:It'll give you a great answer.
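The CREATE framework described above (Character, Request, Examples, Adjustments, Type of output, Extras) can be sketched as a simple prompt-assembly helper. To be clear, this is only our illustration of the structure Dave describes; the function and field names are hypothetical, not taken from his materials.

```python
# A minimal sketch of assembling a CREATE-style prompt:
# Character, Request, Examples, Adjustments, Type of output, Extras.
# The helper and its parameter names are illustrative, not an official template.

def build_create_prompt(character, request, deliverables,
                        examples=None, adjustments=None,
                        output_type="", extras=""):
    """Assemble the six CREATE sections into one prompt string."""
    parts = [
        f"Character: {character}",
        # The "what-what": one or two sentences of task, then bulleted deliverables.
        f"Request: {request}",
        "Deliverables:\n" + "\n".join(f"- {d}" for d in deliverables),
    ]
    if examples:
        parts.append("Examples:\n" + "\n".join(f"- {e}" for e in examples))
    if adjustments:
        parts.append("Adjustments:\n" + "\n".join(f"- {a}" for a in adjustments))
    if output_type:
        parts.append(f"Type of output: {output_type}")
    if extras:
        parts.append(f"Extras: {extras}")
    return "\n\n".join(parts)

# Usage mirroring the wedding-flowers example from the conversation.
prompt = build_create_prompt(
    character="An experienced florist with a reputation for Instagrammable displays",
    request="I'm getting married in six months and want advice choosing wedding flowers.",
    deliverables=["A headline", "A 100-word summary", "A table comparing three options"],
    output_type="Markdown",
)
```

Swapping only the `character` line, florist for accountant, is what changes the perspective of the whole response, which is the point of the C.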
Speaker C:That bit never quite makes sense.
Speaker C:It's the bit of AI that I think we all like.
Speaker C:You know that you can threaten it and you get a better answer?
Speaker C:And they've had all this weird experience when they've tried to turn it off and it will lie or something.
Speaker A:That's right.
Speaker A:That's a recent discovery.
Speaker C:Because people say OpenAI doesn't really care about data, because it's American.
Speaker C:Does it?
Speaker C:It says it does.
Speaker C:It doesn't.
Speaker C:I don't know.
Speaker A:Listen though.
Speaker A:The people running all of these companies are people who probably shouldn't be trusted as human beings.
Speaker A:Let's be honest about that.
Speaker B:But that's like the old literature that says 90% of American Fortune 500 CEOs would actually medically qualify as being psychopathic.
Speaker A:Yes.
Speaker B:Or some ridiculous number like that.
Speaker B:Pretty much, they have to have those traits in order to make it that far up the greasy pole.
Speaker C:It's very sad that Europe doesn't really have one.
Speaker C:We've got the London one, Google's DeepMind.
Speaker C:But that's Google now.
Speaker A:Yeah.
Speaker A:And DeepMind isn't a public one for people to play with.
Speaker A:It's doing incredible research-led stuff.
Speaker A:And some of that information, I guess, probably goes into Google Gemini.
Speaker A:But basically, if you're looking for the power of it, rather than the ethics, and the capabilities, you're looking at OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini.
Speaker A:Between those three, they're all much of a muchness, although Claude doesn't do image generation.
Speaker A:But that doesn't matter too much to me.
Speaker C:And I love the fact you've managed to squeeze it into CREATE.
Speaker C:It's a shame it's not a swear word, but I'll let that go.
Speaker C:But I think that's a great.
Speaker C:I think it really gives someone a framework to think about.
Speaker C:Who do you want it to be?
Speaker C:You know, what do you want it to ask and what do you want it to output?
Speaker C:You know, I think that gives people great structure.
Speaker C:I think, unfortunately, we have all got used to Google, and we've learned that the way to find things on Google is to pick the two juxtaposed words, like, for instance, when I was trying to look you up for my old podcast, the podcast we did, it was Birss and bullshit.
Speaker C:I was just like, those two.
Speaker C:Because if I write something long in Google, it'll find me lots of articles, won't it?
Speaker C:It'll say, oh, well, it was written over here, it was written over there.
Speaker C:It's the opposite of what we've all trained ourselves to do and get good at with Google, isn't it?
Speaker C:But that's a fantastic framework within all of that.
Speaker C:Do you think that there's an easy way to check the output, you know, without wasting time?
Speaker C:Is there some better way to check it?
Speaker A:I'm not saying that any of this won't take your time.
Speaker A:I offer no quick fix.
Speaker A:I think the problem is that companies are looking for a quick fix always.
Speaker A:And that's not what this is about.
Speaker A:If you're looking for that, if you're looking for a hack, you're looking for a quick fix.
Speaker A:AI is not what you should be looking at.
Speaker A:If you want a quick fix for your company, scrape off those barnacles.
Speaker A:You will get an immediate acceleration in productivity, you will improve staff morale.
Speaker A:And that's all the stuff that you need to do before you start using AI.
Speaker B:And then you'll be AI ready.
Speaker B:Yeah, exactly.
Speaker A:And you will already be more efficient, so you'll already be getting the rewards from the barnacle scraping.
Speaker A:So that's how you get immediate rewards.
Speaker A:But if you want to check stuff, it can take time.
Speaker A:There's three things that I recommend people do when it's really a valuable output that they've got.
Speaker A:It's the three Cs.
Speaker A:So the first one is confirm, and you're going to confirm that what the prompt has delivered is what you were after.
Speaker A:If it's not, you have to go back and adjust your prompt.
Speaker A:That's where the adjustments come in.
Speaker A:And you know, very often when people don't get what they want, it's because they didn't brief it very well, or they didn't give enough context.
Speaker A:And that's the thing that tends to be missing is the context.
Speaker A:Then the second C is check.
Speaker A:At this point, it's checking the facts.
Speaker A:So names, dates, numbers, all of that stuff needs to be checked.
Speaker A:You need to actually check that.
Speaker A:And you can potentially use some AI tools to do that.
Speaker A:I use Perplexity to help me with that.
Speaker A:And Perplexity is the best for that kind of fact-checking.
Speaker C:People talk to me about Perplexity, saying you can ask it anything, and I'm like, yeah, I can ask ChatGPT anything.
Speaker C:What's the difference with Perplexity?
Speaker A:Perplexity just feels as if it's more accurate.
Speaker A:And it gives you the sources, so you can then check the sources, which is really useful.
Speaker A:You can get some of that in ChatGPT and in other stuff.
Speaker A:But I just, from experience using this stuff, I get better results using Perplexity.
Speaker B:Whose AI is Perplexity?
Speaker B:Are they one of the big companies?
Speaker A:It's just a company called Perplexity.
Speaker A:And they use other AI engines as part of that.
Speaker A:The third C is about crafting, and that is all about making it smell more like you than it smells of AI.
Speaker B:So you can get past the teacher?
Speaker A:Yeah, it could be, or you get past your boss or whatever, but sure.
Speaker A:But what it's about is you want people to hear your voice in their head when they read it, not the AI.
Speaker A:And this is what people don't understand.
Speaker A:Well, what do I need to change then?
Speaker A:Do I just get it to remove the hyphens and the phrases that it tends to use?
Speaker A:No, it's about making sure that it's something that you would be happy to pass off as your work.
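The three-Cs review pass described here (confirm the output matches the brief, check the facts, craft it into your own voice) can be sketched as a simple checklist. This is purely our illustration; the function, its arguments, and the keyword heuristic are our invention, not a tool from the conversation.

```python
# Illustrative sketch of a three-Cs review pass: confirm, check, craft.
# The checklist structure and heuristics are ours, not a formal method.

def review_output(output, brief_keywords, facts_verified, voice_edited):
    """Run the three Cs and report which steps still need work."""
    todo = []
    # Confirm: does the output cover what the brief asked for?
    missing = [k for k in brief_keywords if k.lower() not in output.lower()]
    if missing:
        todo.append(f"confirm: missing {missing}")
    # Check: have names, dates and numbers been verified against sources?
    if not facts_verified:
        todo.append("check: verify names, dates and numbers")
    # Craft: has a human rewritten it so it sounds like them?
    if not voice_edited:
        todo.append("craft: rewrite so it sounds like you")
    return todo

# A fully reviewed output produces an empty to-do list.
assert review_output("Q3 revenue summary", ["revenue"], True, True) == []
```

The keyword match is deliberately naive; the real "confirm" step is human judgment, which is the episode's wider point.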
Speaker C:So that's actually a human adapting it.
Speaker C:Because I was going to say to you, how do I teach it to do my tone better?
Speaker C:I mean, there's a new AI that's been introduced at work and it's, like, matching your tone.
Speaker C:It's read a lot of my emails and goes, matching your tone, Andrew: direct and forthright.
Speaker C:And it does these emails that are just not right, you know, like bullet point, bullet point, bullet point.
Speaker C:I was like, yeah, look, I'm direct, but that just sounds ridiculous, you know.
Speaker A:Is this called Copilot?
Speaker C:No, it's called Combinedly, which was developed by accountants.
Speaker C:Yeah, we're experimenting with it.
Speaker B:Maybe it's just because it's an accountant.
Speaker C:Actually, we'd like a piece of this.
Speaker C:Bigging them up now.
Speaker C:Big up, Combinedly.
Speaker C:I guess I actually like some of the stuff it does.
Speaker C:It actually analyzes and can pick up sentiment and say, this client's really happy, these clients are getting pissed off.
Speaker C:It's not completely accurate, but it's pretty cool, because you put it up and you see it.
Speaker C:And it also picks out compliments.
Speaker C:Thank you, John, so much.
Speaker C:Thank you, Lynn, so much.
Speaker C:Really brilliant work, which is always really nice to see in the team that I run, you know, and it's just giving us a bit more of a sense.
Speaker C:And, I mean, it being American, it's like, these are people you can sell more stuff to, these people are a flight risk, you know.
Speaker C:But it also tries to draft emails for me that aren't a complete waste of time, though the tone is off.
Speaker C:It may have read thousands of my emails, but it doesn't get me at all.
Speaker C:How do you teach it your tone?
Speaker A:Your tone, right.
Speaker A:Yeah, that's a good question.
Speaker A:Anyone that's interested can just connect with me on LinkedIn and I will give you the prompt to do this.
Speaker A:So, yeah, ask me for the prompt, or else you don't get it.
Speaker A:What we do is, there's a prompt that I've written, and you upload like 10 examples of emails that you've written or documents you've written.
Speaker A:Because the way that you write emails, you know, changes depending on who you're talking to.
Speaker A:But find some really good examples of ones that you feel is your general tone of voice.
Speaker A:But the tone of voice for your emails is going to be different to tone of voice of documents you write, which is going to be different from if you're writing an article or a blog post or whatever.
Speaker A:So create a tone of voice for each of those because there's no such thing as one tone of voice.
Speaker A:Then when you've got that, you bring those together into a document, like 10 examples, and that might go on for several pages of a document.
Speaker A:Then there's a prompt that I've written that you upload this document and it will give you like a hundred words that you can add into any prompt that will get it to write closer to your tone of voice.
Speaker B:You could then cut and paste that back into Grok or ChatGPT or Claude or whatever, and the algorithm would be altered appropriately.
Speaker B:Whichever one; it's agnostic.
Speaker A:It's likely to write things more in your tone using this.
Speaker A:So it describes your tone, and it analyzes it along the different dimensions of writing.
Speaker A:As somebody who was a writer for a living, I've taught clients how to analyze tone and what the different dimensions of tone are.
Speaker A:So I basically brought that into a prompt, so that it understands the kind of language you use, how much rhythm you have in the way you write, and all of that, and it gives you this hundred-word thing that you can put into custom instructions.
Speaker A:So like on ChatGPT, Gemini and Claude and I don't know about Grok because I just don't open that thing.
Speaker A:You can add in custom instructions which means it colors all of the output.
Speaker A:So I add this into my custom instructions which means that all of the stuff that I get back is written in the tone that I'm happy with.
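The tone-of-voice workflow Dave describes has three steps: gather around 10 writing samples, ask the model for a roughly 100-word tone description, then reuse that description in custom instructions. His actual prompt isn't reproduced here (he offers it via LinkedIn), so the `ANALYSIS_PROMPT` and helper functions below are hypothetical stand-ins that only sketch the shape of the process.

```python
# A rough sketch of the tone-of-voice workflow: samples in, a ~100-word
# tone description out, then reused in front of future prompts.
# ANALYSIS_PROMPT is our hypothetical stand-in for Dave's real prompt.

ANALYSIS_PROMPT = (
    "Analyse the tone of voice in the writing samples below: vocabulary, "
    "sentence rhythm, formality, and warmth. Return a description of about "
    "100 words that could be pasted into custom instructions so that future "
    "output matches this voice."
)

def build_tone_request(samples):
    """Combine the analysis prompt with numbered writing samples."""
    body = "\n\n".join(
        f"Sample {i}:\n{text}" for i, text in enumerate(samples, start=1)
    )
    return f"{ANALYSIS_PROMPT}\n\n{body}"

def apply_tone(tone_description, task_prompt):
    """Prepend the saved tone description, as custom instructions would."""
    return f"Write in this tone of voice: {tone_description}\n\n{task_prompt}"

# Usage: one request per channel, since email tone differs from article tone.
request = build_tone_request(["Hi team, quick one...", "Thanks for the update..."])
```

Note the conversation's caveat: there is no single tone of voice, so you would run this once per channel (emails, documents, articles) rather than once overall.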
Speaker C:I might not be keen on doing that if my tone doesn't sound great.
Speaker C:I hope it tells me I'm erudite and sophisticated.
Speaker C:But I imagine I have the tone.
Speaker B:Of a mix of Albert Einstein and.
Speaker C:Charles Dickens rather than a 12 year old who's on spice who thinks he's, you know, he's got no emotional intelligence or something.
Speaker C:I'd hate to ask it, tell me honestly, what's my tone of voice like?
Speaker C:And it's like not great.
Speaker B:This might be challenging for your mental health.
Speaker B:You're an at-risk person.
Speaker C:Yeah, I'd be very sensitive.
Speaker C:So before we crack on with the show, please consider subscribing to this wonderful channel and to our mailing list at withoutbs.com.
Speaker C:You get free weekly classes from the best minds in business, and free downloadable resources that strip away the jargon and give you the real-world lessons you don't get at business school.
Speaker C:Thank you.
Speaker B:So we've sort of navigated the whole create thing, the three Cs, etc.
Speaker B:Etc.
Speaker B:And this is quite a boring question, but how many prompts are we talking about here for the whole process you've just articulated?
Speaker B:Is it all straight out of the gate at the beginning: this is the character, this is the request, and these are the examples?
Speaker B:Or is it an iterative dialogue with whichever AI, which then incorporates each of those steps?
Speaker A:It depends.
Speaker A:There's a matrix that I show when I'm teaching, and one axis goes from adequacy to excellence, while the other goes from one-off to regular.
Speaker A:And that gives you four quadrants.
Speaker A:And the way that you would work with your AI tool is different for each of those four quadrants.
Speaker A:So if it's just one off adequacy, which is find me the best croissant shop in Paris.
Speaker C:It will.
Speaker A:That's all you need to put in.
Speaker A:It's a conversation, and it's just like, no, not that one.
Speaker A:Can you look in this arrondissement?
Speaker A:And you know that's the way you do it.
Speaker A:Just do it as a simple conversation.
Speaker A:Then you've got excellence, but it's a one off thing at that point.
Speaker A:What you need to do is you need to have a strong vision for what a great output would be.
Speaker A:So you need to know what would be most valuable for me at this point.
Speaker A:And you might write down what the context is.
Speaker A:The context can be things like, here's the way we work around here, this is the kind of process we like.
Speaker A:This is the language we like to use.
Speaker A:Here's a bit of history about this client.
Speaker B:And your character point as well, as one filter for that.
Speaker B:Yeah.
Speaker A:So all of that sort of context, plus what your problem is.
Speaker A:And if you don't give it as much information as you would give a human, don't expect great stuff.
Speaker B:No.
Speaker A:So you imagine that you're speaking to an intern that's just started.
Speaker C:I would say that's a great way to do it.
Speaker C:Imagine you're speaking to a 22 year old who's just started but is super bright.
Speaker A:Yeah.
Speaker A:You know, exactly.
Speaker A:So at that point you still need to work out what would be the thing that's most valuable to me.
Speaker A:What context would a human need to give me that?
Speaker A:And that's a one off thing.
Speaker A:That's gonna take you time but you're after excellence so you're gonna have to put in the time.
Speaker A:I'm sorry, there's no shortcuts to that.
Speaker C:It's a bit like the old adage about management.
Speaker C:Imagine the person you're managing is in a bowling alley trying to bowl the ball, but they're blindfolded, so they can't see the skittles.
Speaker C:And you're trying to tell them, go a bit left, go a bit right.
Speaker C:Your job as a manager is to remove the blindfold so they can see the skittles.
Speaker C:You know, it's that same thing, isn't it?
Speaker C:Give them the whole picture of where the hell we're trying to go.
Speaker A:Exactly.
Speaker B:Is that it?
Speaker C:Really interesting, though, that you say don't do too complex a prompt, because I know people who successfully develop very long prompts for a very specific output.
Speaker C:You know, like a spreadsheet comes out with everything in the right place, or whatever.
Speaker C:But I would totally agree, because I've worked with creatives so much and I've watched people do it.
Speaker C:Rather than going to a creative person and saying, this is the vibe we want to create, they go, I want it to be blue with a yellow thing in the corner, you know.
Speaker C:But it's interesting, don't be too detailed.
Speaker A:No, no, no.
Speaker A:I'm not saying don't be too detailed.
Speaker C:Okay.
Speaker A:My prompts are.
Speaker C:Don't build them in a box.
Speaker A:My prompts are still long.
Speaker A:It's what I'm asking for.
Speaker C:Yes.
Speaker A:What I'm asking for is something that's not too broad.
Speaker A:I'm being quite specific in what I'm asking for.
Speaker A:So if I can't say what I'm asking for in two sentences, I'm asking for too much.
Speaker A:So this is just describing what I want.
Speaker A:I've got a document here that I've been sent by a client, if you're allowed to upload such a document.
Speaker A:That's another thing to talk about.
Speaker A:I've got this document that I've been sent by a client.
Speaker A:I would like you to analyze it and help me write a proposal for them.
Speaker A:That's kind of what you're saying there.
Speaker A:And then you're sort of saying, I want you to start by giving me the strategy for all the different points that we're going to want to cover off in this email, according to this document that I've attached.
Speaker A:Now, that's two sentences, but that's as much as you should be asking in terms of the ask.
Speaker A:But from that.
Speaker A:So that's the first part of the request.
Speaker A:The first what of the request.
Speaker A:But after that, when you start saying the second part of the request.
Speaker A:At that point, you can be really.
Speaker A:You can be asking for loads of stuff within that and then you're adding context in there as well.
Speaker A:So because of that, my prompts can be really quite long.
Speaker A:But it's not that they're complex or nuanced, they're still simple and clear.
Speaker A:But I know that it needs a certain amount of context in lots of cases.
Speaker C:Clear.
Speaker C:Clear instruction.
Speaker A:Yeah.
Speaker A:So if we think of.
Speaker A:I've got this hokey theory about the.
Speaker A:The pyramid of knowledge and it's got four levels to it.
Speaker A:The first level is general knowledge.
Speaker A:The AI has got loads of that.
Speaker A:We don't have to worry about it.
Speaker A:It's got more general knowledge than any human alive.
Speaker A:The next one up is domain knowledge.
Speaker A:That's information about your industry or the departments that you work in.
Speaker A:That kind of specialism that you have.
Speaker A:Again, unless you have to sign a government NDA every single day, then the AI tool is bound to have that.
Speaker A:It's bound to have that in its knowledge.
Speaker A:It knows about marketing, it knows about accountancy, it knows about a lot of legal stuff.
Speaker A:But if it's like nuclear physics, it's probably not going to have that information in there because that stuff isn't out there on the Internet.
Speaker B:Epstein files.
Speaker A:Yeah, yeah.
Speaker A:The last two.
Speaker A:You've got contextual knowledge and then you've got specific knowledge.
Speaker A:Specific knowledge is about the task that you are doing, and that's the smallest amount of information.
Speaker A:To be honest, the bit that people miss out is the contextual information.
Speaker C:Yes.
Speaker A:That's the stuff that makes the difference and that's the stuff that I end up spending a lot of time working with people on is right.
Speaker A:This is the bits that are missing and there's ways that we can do that where we can add that kind of information, create a document so that you can add whatever context you need to a prompt in a way that's nice and simple and easy.
Speaker C:We need to move forward, I think.
Speaker C:I mean, there was one thing you mentioned which I think we need to cross off because you mentioned it.
Speaker C:Can you give it the client information?
Speaker C:You've got a.
Speaker C:You've got a subscription account.
Speaker C:Does it really matter?
Speaker A:There's certain information that you are legally not allowed to upload according to European, like, legislation that we still have in this country.
Speaker C:Do we have the.
Speaker C:Have you done the EU law?
Speaker C:Does it apply.
Speaker C:It doesn't apply to us, though, for.
Speaker C:Because that's.
Speaker A:No, but there is like gdpr.
Speaker C:Oh, GDPR is wonderful.
Speaker B:World.
Speaker A:So.
Speaker A:So from that, you're not allowed to upload things like personally identifiable information to a server that you don't have control over.
Speaker A:When you add that into a prompt, you're basically uploading that to like OpenAI server and you cannot remove it.
Speaker A:Once it's up there, you've got no control over it.
Speaker A:So you are technically contravening GDPR by doing that.
Speaker A:But it's actually down to the policy of your company.
Speaker A:And there's a lot of companies that get policies that are actually just a bit wrong, they're a bit too stringent, they're not quite understanding how this stuff works.
Speaker A:But if your company says that you can't upload it to this tool, then you shouldn't.
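One common mitigation, where policy allows AI use but not PII upload, is to redact obvious identifiers locally before anything leaves your machine. A minimal sketch; the patterns below are illustrative assumptions and nowhere near GDPR-complete (names, addresses, and free-text identifiers all slip past simple regexes):

```python
import re

# Minimal local redaction before sending text to a hosted model.
# Illustrative only: real PII detection needs far more than regexes.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{8,}\d"),
    "UK_NI": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance number
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

doc = "Contact Jane on jane.doe@example.com or +44 7700 900123, NI QQ123456C."
print(redact(doc))
```

The point is the direction of travel: strip identifiers on the device you control, because, as Dave says, once the raw text is on someone else's server you cannot remove it.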
Speaker A:And there's something that we will be seeing in the future is LLMs or even SLMs, small language models that will work even on these devices.
Speaker A:So I've actually got about four or five AI models running on my phone and I don't need to be connected to the Internet.
Speaker A:They actually run on the device; on my Mac, I've got seven or eight AI models.
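Running a model locally, as Dave describes, typically means a runtime such as Ollama or llama.cpp serving on your own machine, so prompts never leave the device. A hypothetical sketch against Ollama's local REST endpoint; that you have Ollama installed and the model name "llama3.2" pulled are assumptions, not details from the episode:

```python
import json
import urllib.request

# Sketch: query a locally hosted model via Ollama's REST API at
# http://localhost:11434. Nothing here touches the public internet.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("llama3.2", "Summarise GDPR in one sentence."))
```

For data you're not allowed to upload, this kind of on-device setup sidesteps the whole question of what a third-party server retains.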
Speaker B:They are huge. But without sounding excessively Machiavellian and cynical.
Speaker B:Yeah, but you know, if you were to use the online version of ChatGPT or whatever and include that information as part of your work, and you're concerned about it then being uploaded forever into ChatGPT, to be Machiavellian about it.
Speaker B:Unless they ever come after you and you're ever subpoenaed and you know, they're actually.
Speaker B:It doesn't matter.
Speaker A:Right?
Speaker B:You never.
Speaker B:Nobody's ever going.
Speaker B:And like, unless you did something really nefarious, like upload a big list of people's personal details or, you know, in a nefarious way.
Speaker B:But if you were just in the normal process of doing something pretty anodyne.
Speaker B:And why is that ever going to.
Speaker B:It's like saying something controversial on WhatsApp.
Speaker B:You know, you never.
Speaker B:Well, actually, lots of politicians.
Speaker B:It'd be lovely knowing don't interrogate my WhatsApp.
Speaker A:But no, you're absolutely right.
Speaker A:And there's a lot of companies that don't understand this, that if it gets put into the training data, then actually that stuff is not going to come out verbatim because as we're saying, it's not stored verbatim, it's stored aggregated and anonymized.
Speaker A:And so the chances of you actually getting that chunk of a document coming out.
Speaker C:I tested it too.
Speaker C:I've sat next to my mate, because it's like, you know, people saying, oh, it knows.
Speaker C:And I'm like, all right, ask it, what's the name of Andrew's dog?
Speaker C:Because mine knows.
Speaker C:It was like, I don't know the name of Andrew's dog.
Speaker C:That's not public information.
Speaker C:So there's definitely gating going on.
Speaker C:But God knows what's going to happen to it all long term, you know.
Speaker A:You know, with that kind of stuff, when stuff is uploaded to these services, you're not going to get it back the way it is.
Speaker A:Where you are more likely to lose that information is actually being intercepted on the Internet.
Speaker A:Or if you're using an unsecured Internet connection, you're more likely to lose it that way.
Speaker A:But then again, if you were sending an email internally with that information and you're using an unsecured network, they could do it again.
Speaker A:So you're absolutely right.
Speaker A:And this is where a lot of companies and even a lot of CTOs don't understand that this information isn't likely to come out of the tool verbatim like that.
Speaker A:And that's the way they think it works.
Speaker A:And it doesn't work like that.
Speaker C:That was great.
Speaker C:Thank you.
Speaker C:You've said the danger isn't that the AI thinks for us, it's that we stop thinking.
Speaker C:What is the simplest way business owners can keep their thinking sharp?
Speaker A:Oh, the business owner's keeping their thinking sharp.
Speaker A:Well, I mean, for anyone that wants to keep their thinking sharp, I'd say do the work on paper before you open your laptop.
Speaker A:I think that's probably the best way to do it if we start losing those skills.
Speaker A:I mean, we know from neuroscience that if you don't use it, you lose it.
Speaker A:And that's the problem is that there's three levels, the way that I look at it in neuroscience, in how we learn skills.
Speaker A:So if this is our brain, on the underside of our brain, we've got our hippocampuses under there somewhere.
Speaker A:And the hippocampus is where new neural material is built up during the day.
Speaker A:And as soon as you start picking up a new skill or you're creating mental maps for stuff, your hippocampus is involved in that.
Speaker A:So that's the first stage of it.
Speaker A:When you start learning something new, the next stage is it kind of like gets assigned to parts of your brain and it starts creating its little sort of tendrils of neurons that go out that will create a functional area within your brain.
Speaker A:And the more you do something, the more you learn about it, the more you try it out, the more functional this becomes and the stronger it becomes.
Speaker A:As it settles down and you start repeating stuff, it goes to phase three, which is what we call myelination.
Speaker A:And that is when the neurons, they get a myelin sheath around it.
Speaker A:It's like the plastic around the wire.
Speaker B:That's what it is like driving a car.
Speaker B:It's completely hard coded into your.
Speaker A:Yeah.
Speaker A:Muscle memory kind of stuff.
Speaker A:And that's the third stage of it.
Speaker A:The problem is that when people are outsourcing and initially being dependent on AI tools, you don't even do stage one.
Speaker B:So that's.
Speaker B:Are you familiar with Cal Newport's writing?
Speaker B:Yeah.
Speaker B:And, you know, like, I'm a huge fan of his.
Speaker B:And so my publisher at the moment is running this campaign to encourage children reading for pleasure.
Speaker B:Because reading for pleasure is, you know, the number one correlate for future life success, financial success, mental health.
Speaker A:Yeah.
Speaker B:Beyond the conditions, the conditions of your parents, the socioeconomic background you're from.
Speaker B:Yeah.
Speaker B:Like, you can be from the worst kind of council estate and read for pleasure.
Speaker B:I mean, obviously it doesn't happen very often.
Speaker B:And you've got a better life outcome than somebody from, you know, an immensely wealthy, privileged background who doesn't read for pleasure.
Speaker B:It's insane.
Speaker B:And what I'm driving at here, which just goes to what you're talking about in terms of the brain architecture, is people need to read books, not spend 30 seconds watching TikTok videos.
Speaker B:Because then that.
Speaker B:I think it's such an important message.
Speaker B:I think all this stuff, AI and all sorts of other elements of it are fantastic and they're powerful and there's lots we can do with them.
Speaker B:But social media, this, everything else, if we lose long form, if children and teenagers and indeed adults lose long form and learning a musical instrument or whatever it might be to.
Speaker B:Exactly your point, to support the point you're making about functionally how brains develop.
Speaker B:We're screwed.
Speaker C:Oh, please show us a book.
Speaker A:I'm really interested in how kids learn and how AI can impact their brain.
Speaker A:The problem is that schools have been criminalizing AI and so have universities.
Speaker A:So when you criminalize it and say, you're not allowed to use this, but I still want you to do this essay at home, of course they're going to use AI.
Speaker B:Of course they will.
Speaker B:Well, the more you criminalize something, the more people want to do it.
Speaker A:Right.
Speaker B:Exactly.
Speaker A:So I created a book, I'll show it up, that I wrote about a year ago.
Speaker A:Here we go.
Speaker A:GPT Junior, and it's got a video course that goes with it that my 10-year-old daughter did with me.
Speaker A:So the two of us do the course together, and this is all about showing kids how to use AI as a way of elevating your thinking, bringing subjects to life, filling gaps that the school isn't quite filling in the education, and showing them how to use AI to grow their brain rather than simply as a cheat machine.
Speaker B:Yeah.
Speaker A:And so that is what this is about.
Speaker A:It's currently in over 100 schools and that's something I'm very passionate about is actually teaching kids how to use AI properly.
Speaker A:Because if we don't, they're still going to use it, but they're going to use it wrong.
Speaker B:Yeah, correct.
Speaker B:And is that when you say it's in 100 schools, is that embedded into some kind of syllabus in schools or just the book is in.
Speaker A:I'm not sure the book is in 100 schools.
Speaker A:I'm not sure what the schools are doing with it at the moment because it only came out a few months ago.
Speaker C:Yeah, yeah, that's great.
Speaker C:It could be such a useful tool for knowledge and learning.
Speaker C:Really.
Speaker C:I agree.
Speaker C:I mean, I'm a Wikipedia advocate, but I find it's a sort of.
Speaker C:I assume everyone's curious.
Speaker C:Sometimes people say, oh, you know, some people are more curious than others.
Speaker C:But it's that sort of curiosity, isn't it, to learn?
Speaker C:It's a sort of, you know, tell me more about this subject.
Speaker C:You know, and the way when you search on Google you have to sort of read articles and put together and sort of fit in between.
Speaker C:But I usually have quite specific questions.
Speaker C:The way it can do that for you is such a pleasure.
Speaker A:But you know, the curiosity, it's not evenly distributed across people and it's not evenly distributed within that person.
Speaker B:It's also probably learned.
Speaker A:Yes, I think probably modeled, I would say maybe rather than learned.
Speaker A:So they see other people around them being curious.
Speaker A:I think if you grow up in a book-reading household, there's more likelihood that you're going to be a book reader yourself.
Speaker B:Well, you don't need to just say you think that.
Speaker B:It's absolutely demonstrably true.
Speaker C:I think in a way, section four we've covered.
Speaker B:Well, the one that I was quite interested in answering, and I think it's a good one, is framed as: are there tasks that humans still outperform AI in every time, or domains?
Speaker B:And can you.
Speaker B:What are examples of that?
Speaker A:Love.
Speaker A:Okay, okay.
Speaker A:But I mean, generally this is, this is something that there are these people who become dependent on it.
Speaker B:Yeah, yeah.
Speaker A:And this is the thing that they.
Speaker B:Don't realize it's an ephemera though, right?
Speaker A:Yeah.
Speaker A:But they don't want to understand the truth and they don't want to understand that all it is is that it's a word guessing machine.
Speaker A:That's.
Speaker B:But to my point, these are the same people that, you know, 50 years ago would have been an obsessive stalking fan of Elvis or somebody.
Speaker A:You know, there is weakness in humankind.
Speaker A:You were talking about anthropomorphization, and how we look at our animals, because our dogs have got these sort of eyebrows.
Speaker A:We give them feelings and personality from it, and we anthropomorphize.
Speaker A:The dog's guilty.
Speaker A:Definitely guilty.
Speaker A:The dog's guilty.
Speaker A:Look, you can see the dog's guilty.
Speaker A:Dog's probably not freaking guilty.
Speaker A:We're just adding that because that's an emotion that we understand.
Speaker A:It's got some emotion that we don't know because we don't.
Speaker B:The dog's got a brain the size of a squash ball.
Speaker B:Cats, maybe.
Speaker A:But there is this thing.
Speaker A:There's so many people that anthropomorphise the AI tools.
Speaker A:And that is something that I think starts to get risky when we get a bit of that.
Speaker A:But the things that humans can do that the AI can't.
Speaker A:I was doing a talk this morning and I showed, here's what happens when it comes to creativity.
Speaker A:It was a bunch of people involved in the creative industries.
Speaker A:I said, I've mapped here.
Speaker A:This is all the different parts of creative skills.
Speaker A:And it's everything from memory consolidation, connection, all of these different things that are part of creativity.
Speaker A:And this list and there's like 40 different things on screen.
Speaker A:And I said, I've highlighted in green all the ones that AI can do.
Speaker A:And I press a button and they all turn green and they're like, they're creative tasks.
Speaker A:Creative things.
Speaker A:I said, yeah, but here's the ones that AI isn't very good at.
Speaker A:And I put it up.
Speaker A:It's not great at conceptualization because what it's doing is working with data it's already got.
Speaker A:But what we want to do is we want to do something that's not been done before.
Speaker A:So when it comes to that kind of originality of conceptualization, I think that humans are Way ahead, and will be for a while.
Speaker A:Then you've got a creative voice, your style.
Speaker A:That again is something that these are generalist machines, but humans can create their own creative voice.
Speaker A:And then what I believe is the skill, the skill that everyone needs to be looking at is judgment.
Speaker A:Knowing what good looks like and knowing how to get there, that is the most important thing.
Speaker A:And what I've been finding is that more and more people are becoming over-reliant on the AI tools to do the judgment for them.
Speaker A:And it's something I'm wanting to do a study into this at the moment.
Speaker A:So if there's any universities out there that want to work with me, just let me know.
Speaker A:And the study will be looking at how people feel about their own judgment.
Speaker A:Has the confidence in their own judgment dropped because of AI tools?
Speaker A:Because I'm hearing more and more of people that will come up with something and then they will check with the AI to see is that right?
Speaker A:Is that good?
Speaker A:They're losing confidence in their own ability to judge.
Speaker A:And I think that that is a dangerous and scary place to be.
Speaker B:That was always.
Speaker B:It was already such a capricious kind of subjective area, isn't it?
Speaker B:Because there are all these studies saying that the definition of delusional is people who have high confidence in their ability to judge things, but are invariably wrong and poor judges of things.
Speaker B:But then the other end of the distribution, you have people who have low confidence but actually really know what they're talking about.
Speaker A:And we've got studies now with AI that are showing that people have got more confidence in what they've produced, but it's worse quality.
Speaker B:And is that only because they've been validated by the AI, Say, yeah, this is great work.
Speaker B:But to your point about the sycophancy that's built in again, yeah, absolutely.
Speaker C:You know, the imposter complex is only recently recognized, but it's got millions of years of evolutionary programming in there.
Speaker C:There must be a sort of reason that, as humans, even as we've got more and more knowledgeable, we kind of know there's limits there.
Speaker C:And I agree with the conceptualization I've tested to death because my son is always like, can it do a Mustang tank?
Speaker C:And what it does is, it just can't do it like you would do in your head, like a Ferrari double-decker bus.
Speaker C:It would just get a bit of a Ferrari and stick it on the front of a double-decker bus.
Speaker C:It's like, no, like make a double decker bus.
Speaker C:Like Ferrari would design a double decker bus, which you do in your head.
Speaker C:So it can't blend ideas.
Speaker C:It can't sort of.
Speaker C:Well, that they can't be novel in a way, can it?
Speaker C:It can't invent something new.
Speaker A:Actually, what you're talking about there is combinatorial thinking, when we're combining different things.
Speaker C:So many big words in this podcast.
Speaker A:And AI tools are fantastic at combinatorial thinking of taking this and that in a way that humans really struggle with.
Speaker A:But that is a creative skill because most things that we consider to be new are in fact existing elements that have come together.
Speaker A:So look at this.
Speaker A:This is, this is a phone.
Speaker A:That was the old dial up phone that we had smooshed together with a.
Speaker B:Computer and a camera and a dietitian and a fitness instructor and a map.
Speaker A:But you can look at the journey of the phone.
Speaker A:When we look at this, it actually goes all the way back to.
Speaker A:It's about 200 years.
Speaker A:You can look at the actual technological, what we've been doing to get to this slice of glass that we have here.
Speaker A:And it comes from what we called speaking tubes on ships.
Speaker A:And it was before they started doing this, before we had steam engines in ships.
Speaker A:So somebody would be up in the crow's nest and the wind is howling and the sails are flapping, but they want to say, there's a boat on the horizon, and they're shouting down.
Speaker A:And people are looking up and they're going, what?
Speaker A:What is that, Brian?
Speaker A:What?
Speaker A:And so what they did was they took pieces of old sail and they created tubes of canvas and they lashed these to the mast and the person up in the crow's nest would shout down this improvised tube of canvas.
Speaker A:The person at the bottom could hear them.
Speaker A:Incredible.
Speaker A:I mean, you could take that all the way back to Paleolithic times and look at humans maybe shouting down a hollow log.
Speaker A:That's rotted out.
Speaker A:So they understood the concept of it, but then they turned it into a product.
Speaker A:Then from that we started to build that into the fabrics of the ships, particularly when we started to build steamships so that you could do the thing.
Speaker A:And they'd go down to the engine room, and then they'd shout down to the engine room, more coal in the boiler room, chaps.
Speaker A:Then, when people got back from these wars with the steamships, at the time the people who would be in the ships in the navy would tend to come from, you know, a stately home upbringing, these kind of people, and they would get home and they had these expensive houses.
Speaker C:You know, I.
Speaker A:There was this wonderful thing on the ship that they had, and it meant that we were able to talk to the people in the engine room.
Speaker A:And I was able to say, I would like a cup of tea.
Speaker A:And they brought me a cup of tea.
Speaker A:So I was thinking that what we do in the house is we get a speaking tube and we actually put it in so that I can ring a little bell and I.
Speaker B:And.
Speaker A:And Mary will come and go, hello, master, what is it?
Speaker A:And I will say, yes, another gin and tonic.
Speaker A:And within two minutes.
Speaker A:So they brought this technology back and put it into houses.
Speaker A:Then we got the telephone.
Speaker A:And telephones were originally sold in pairs.
Speaker A:They didn't network them at first, they sold them in pairs.
Speaker A:And it was for the upstairs and the downstairs.
Speaker B:Well, and you know the story about Edison designing the phonograph, which was in, I think, the 1870s.
Speaker B:And the only reason that they were developing means of recording for posterity, you know, permanently recording sound.
Speaker B:That was the technology thereafter.
Speaker B:It was.
Speaker B:They hadn't even conceived of all the uses for it, you know, like music or theater or latterly film.
Speaker B:All they were trying to do, in the early telegram days, you know, the boats sending telegrams to shore or across the States, was the telegram operators wanted to save money on telegram offices having to work through the night, and they wanted to be able to record what they wanted to say and then be able to send it, and it would be there in the morning for them.
Speaker B:So they hadn't even conceived of the idea that you might be able to use that for music or the spoken word or a podcast or, you know, or a book or.
Speaker B:I mean, that all came after it.
Speaker B:Isn't that amazing?
Speaker A:But this is the thing about the AI tools, the people who are building the AI tools, they are trying to.
Speaker A:They don't know what the use cases are.
Speaker A:People are going to discover those use cases.
Speaker A:And in the same way as Listerine was originally used to clean linoleum.
Speaker B:Yeah, yeah.
Speaker A:And then it became something else and then it became mouthwash.
Speaker A:It was a gonorrhea cure at one point as well.
Speaker A:To wash your area with.
Speaker B:Viagra was originally a heart medication.
Speaker A:So it's like this with AI tools that we're still years away from understanding what the potential of the Tools we currently have are.
Speaker B:And that's also because one of the things I wonder whether we get into today is this whole.
Speaker B:The threat to jobs and the sort of existential threats of humanity, everything else.
Speaker B:And one of the things I always go back to, and I think Marc Andreessen's come up with these thoughts, is the lump of labor fallacy to your point about we're navigating towards the future.
Speaker B:We have no idea, we haven't got ideas about the use cases.
Speaker B:We are hopeless at that, demonstrably.
Speaker B:One of the more reassuring thoughts is that, a couple of hundred years ago, a few people were miners and some were in the navy, to your earlier example.
Speaker B:But 90% of the population were farmers, and some ridiculous percentage of the population was involved with horses.
Speaker A:Yes.
Speaker B:And if you'd said to them 200 years from now 0.5% of people will be farmers, they would have been like existential panic.
Speaker B:Like what the hell are the 90, 89.5% of people going to do?
Speaker B:They went on to do jobs that didn't exist yet. And I hope that one of the more reassuring things on this journey is that, for all this threat, there'll be ructions and there'll be creative chaos on the way through.
Speaker B:But, yeah, hopefully on the other side there will be a lot of use cases, a lot of economic.
Speaker A:I believe there will.
Speaker A:But are there going to be victims of this?
Speaker A:Are there going to be.
Speaker A:Yes, absolutely.
Speaker A:And part of that is because if you've invested a lot of time in developing skills and your knowledge and everything, and then suddenly that's blown out of the water.
Speaker A:Yeah, suddenly that's lost all value.
Speaker A:Because the AI can do your job better and faster.
Speaker A:And it's not just AI, we've also got robotics coming in.
Speaker A:Automation was already on its way in anyway before AI was around.
Speaker A:And all of this technology is happening at once.
Speaker A:Yeah, it's going to disrupt the market.
Speaker A:Of course it is.
Speaker A:There are going to be winners, there's going to be losers.
Speaker A:And with all of these things, it's like a gold rush at the moment.
Speaker A:And, and with the gold rush, there's.
Speaker A:We have to understand that in the gold rush, the people who made the money.
Speaker B:Picks and shovels.
Speaker B:Yeah.
Speaker A:It wasn't the people who were actually finding the nuggets of gold.
Speaker B:Levi Strauss.
Speaker A:Yep.
Speaker A:It was the people who were running the whorehouses, bringing in the food.
Speaker C:I'm glad someone brought up the whorehouses.
Speaker C:I don't like to do a podcast without that.
Speaker A:Yeah, so, yeah, so I mean, if you want to make money in this age right now, let's have an AI whorehouse, sell AI picks and shovels.
Speaker B:Well, surely that's going to require some robotics as well.
Speaker B:Well, moving swiftly on whilst we're talking about economics, especially in the corporate setting, commercial stuff.
Speaker B:So, you know, you help Fortune 500 companies through this sort of stuff.
Speaker B:How can corporate leaders bring this into their teams without freaking everyone out and impacting morale?
Speaker A:Goodness. To me, there's been corporate irresponsibility from leadership, and it is this thing where they've refused to deal with it, they've stuck their heads in the sand, which means that there's just been silence to all of their workforce, who.
Speaker B:Are worried, who are genuinely worried.
Speaker A:They're genuinely worried.
Speaker A:They need leadership, they need somebody with that flag saying, no, we're here to make sure that this is going to benefit you, that we're going to tap into your knowledge, and this is about making you better at what you do.
Speaker A:And that's going to be our belief about this.
Speaker A:No, because a lot of these people on the board are going, well, actually, if we could get rid of a quarter of the workforce.
Speaker B:We can increase our margins by three and a half percent.
Speaker A:So there's this phrase you'll have heard again and again.
Speaker A:It pisses me off every time I hear it, which is, you're not going to be replaced by AI.
Speaker A:You're going to be replaced by somebody who knows AI.
Speaker B:People using AI.
Speaker C:It pisses you off, does it?
Speaker A:Yeah, pisses me off, yeah.
Speaker A:Because.
Speaker A:Because that's not the truth.
Speaker A:You're not going to be replaced by AI, you're going to be replaced by somebody in leadership who is trying to cut costs and has believed the bullshit that they've been told.
Speaker A:That's the truth.
Speaker A:And we have to understand that any layoffs are human decisions.
Speaker B:Okay, but just to challenge that a bit: as a leader or a board member of a stock-market-listed company, your obligation, legally, is that you have a fiduciary duty to your shareholders.
Speaker B:Now, if another company is using AI and cutting costs and improving margins and is now 200 million quid bigger than you are, market cap, more powerful, can raise more money from the stock market.
Speaker A:Suddenly it creates a waterfall effect.
Speaker B:Yeah, Correct.
Speaker B:But that's, you know, one of the tragedies of capitalism.
Speaker B:Capitalism has a lot of merits, but one of the problems is that, by its very nature, it scales creative destruction, and if you don't, you get killed.
Speaker B:So do.
Speaker B:And in the long run the companies that survive and thrive are the ones that have the capital to then reinvest and invest in their staff and everything else.
Speaker C:So I think it tends to monopoly.
Speaker B:I think it's a bit, well, it tends towards oligopoly generally, doesn't it?
Speaker C:Yeah.
Speaker B:And if you think about, think about like mobile telecoms like Nokia and Siemens and Ericsson were the massive 800 pound gorillas in the mobile phone market 20 years ago.
Speaker B:Five years later they're gone.
Speaker B:So people are like, oh well, you know, monopolies set in, and then these companies dominate forever.
Speaker B:It's like.
Speaker B:No, it feels like that because they often dominate for 20 years and you know, maybe structurally they'll dominate more now for a number of structural reasons.
Speaker B:But in something like mobile telephony, Nokia was the biggest mobile telephone company in the world and Apple was a fairly small company that made pink computers for grannies and artists.
Speaker B:And literally within a decade Nokia was destroyed, and so was Ericsson, and Apple was this multi-trillion-dollar company.
Speaker A:I am not trying to insult anyone on a board for the decisions that they're making.
Speaker B:Some of them will be really crap to be clear.
Speaker A:I want them to apply more human thought to this and I want them to fill their knowledge gap with information.
Speaker A:Because at the moment I believe that we've got some pretty bad decision making.
Speaker A:Really, really poor leadership across most industries.
Speaker A:But what we need to be looking at, there's so much short termism of looking how can we save money this quarter?
Speaker A:And that's not the right approach for AI, because AI is going to take time for people to adjust and get into it; a quarter, six months, a year is not long enough for us to be looking for the results, the payback from this.
Speaker A:We need to be looking bigger, we need to be going well actually what are the opportunities that this opens up in terms of us amplifying our human capital?
Speaker A:We need to be looking at what are the opportunities that this opens up that were never previously possible before AI came along.
Speaker A:That's another of my pyramids, a three-stage pyramid, and it seems everyone's stuck in the first layer of it. I'd say probably about 80% of companies are still stuck on just using AI to reduce costs and get stuff done faster and cheaper.
Speaker A:The next stage up is when we use it to augment human skills.
Speaker A:And the final bit at the top is where we unlock opportunities that were never previously possible.
Speaker A:And I've only spoken to a couple of companies that are toying with that top bit.
Speaker A:None of them have actually sort of done anything about it yet.
Speaker A:But about 20% of the companies I speak to are looking at this human capital.
Speaker A:How do we amplify that human capital?
Speaker B:Yeah, but it's like, you know, probably only 1% of them.
Speaker B:As ever, the most enlightened, smart, human, thoughtful executives or corporate leaders will do amazing things, cherish their human capital and look after people, and they'll probably come from very high-margin industries, not from very low-margin industries that can't justify the expense.
Speaker B:Right.
Speaker A:That's one of the things I'm finding.
Speaker A:Low-margin industries are the ones that are really worried and struggling at the moment.
Speaker C:If you've got advice for leaders about how should they handle this conversation with their staff, how do they talk about it?
Speaker A:Why don't they ask their staff? If they don't know the answers, rather than feeling they have to have them immediately, why don't they ask their staff?
Speaker A:Because that's actually what the staff are looking for: to be involved.
Speaker A:Because to be honest, the staff understand their jobs, and you can give them a remit of going, actually, let us know what your concerns are.
Speaker A:What are your problems?
Speaker A:What are your worries about this?
Speaker A:Let's actually get it on the table.
Speaker C:What are your excitements?
Speaker A:Yeah.
Speaker A:Let's discuss this, then.
Speaker A:You're going to find that, and there are stats that show this, there's a lot of people using AI but not telling anyone, because they're worried that if people know they used AI for this, they'll be thought of as cheating, or the report won't be used.
Speaker A:So there's so much under the table stuff.
Speaker A:Most companies, the big companies, are saying, it's Copilot.
Speaker A:We've got Copilot because we've bought into the Microsoft suite.
Speaker A:Copilot is not great.
Speaker A:So we're finding that most people are using ChatGPT under the table.
Speaker C:We've done side by side comparisons.
Speaker C:ChatGPT is better than Copilot.
Speaker A:So we're finding that about 70% of people in organizations are using non-approved AI tools in the workplace, and about the same number admit to having uploaded information they shouldn't have to an AI tool.
Speaker A:So this is all because of poor leadership in organizations.
Speaker A:And I did a little bit of research on LinkedIn a couple of months ago where I was asking, does your company have an AI policy?
Speaker A:And only about 40% of people said, yes, our company's got a policy.
Speaker B:It's probably still nascent as well and sort of embryonic, isn't it?
Speaker C:ChatGPT, write me a policy.
Speaker A:Yeah, but another 40% were saying, we're working on it.
Speaker A:We've been working on it for a while.
Speaker A:But the thing is, policy is not even enough.
Speaker A:What a policy does is say, here's what you can't do.
Speaker A:Well, it's a legal document, a company covering its arse, basically saying that if you do this, we will fire you and you'll be legally accountable for whatever.
Speaker A:So policies: nobody opens a document with a policy in it, nobody reads it.
Speaker A:So it's absolute nonsense.
Speaker A:Yet still only 40% of companies have got that.
Speaker A:What we need is beyond that is a playbook that's not just about what you don't do, it's about what you do do, what you should do, how you actually use the tool to get the most value out of it.
Speaker A:I've got one client we did that with: the leadership team of the company, 20 of them in a room.
Speaker A:And I said, at the time, this was two years ago, only 12% of companies, apparently, according to recent statistics, have an AI policy.
Speaker A:And somebody put their hand up and was like, oh, we've got one.
Speaker A:And I said, brilliant.
Speaker A:Can I see it?
Speaker A:And what really impressed me was they knew how to connect to the printer in the corner of the room from their laptop.
Speaker A:That impressed me.
Speaker A:I've not seen that very often.
Speaker A:And they managed to print out the policy for me and it was like five pages of densely written legalese text.
Speaker C:Jesus.
Speaker A:And I was like, okay, how many people here in the room?
Speaker A:This is the leadership team.
Speaker A:20 People.
Speaker A:How many people knew this existed?
Speaker A:Three hands went up.
Speaker A:I said, all right, who's read it?
Speaker A:And one person did this.
Speaker A:And I said, you skimmed it, didn't you?
Speaker A:And they're like, yeah.
Speaker A:I said, do you know, can you...
Speaker B:Summarize it for us all, please?
Speaker A:Would you know what to do if somebody in your team uploaded personal identifiable information?
Speaker A:Okay.
Speaker A:And I just ripped it and put it in the bin.
Speaker A:I said, that's an absolute waste of time.
Speaker A:I said, you've sent that out?
Speaker A:They said, yeah, everyone got an email about that.
Speaker A:I said, you've not fixed that problem, and if you think you have, you need to take a long hard look at it.
Speaker B:Standard corporate box ticking, right?
Speaker B:Yeah.
Speaker A:So I said, what you need to have is this kind of playbook.
Speaker A:And I explained to them what they should have.
Speaker A:And we did this thing that was beautiful.
Speaker A:On one side it would say, do not upload personal identifiable information.
Speaker A:On the other page explained what we meant by that and it gave you an example.
Speaker A:So you knew how that worked.
Speaker A:Right, next page, here's another point.
Speaker A:And it also told them, here's how to get the most out of the tools.
Speaker A:Try this technique, do this.
Speaker A:Here's some prompting advice.
Speaker A:So it gave you examples: provide context, include examples, all of this stuff.
Speaker A:It gave you guidance on what to do as well as what not to do.
Speaker A:And I think that's really important.
Speaker A:And then we printed it out and everybody got a copy to go on their desk, printed at a level of quality where it wasn't immediately going to hit the bin.
Speaker A:And that's the important thing: so that when people are using it, they can get it out of the drawer and go, ah, right, okay, I can't do that.
Speaker C:I think also in business, you know, you've got to over communicate.
Speaker C:It's so fundamental to leadership.
Speaker C:You know, tell the staff about it, go through it.
Speaker C:Over communicate.
Speaker C:Over communicate.
Speaker A:Absolutely.
Speaker C:Because we all forget it constantly.
Speaker C:You could take one of each of those points and work through them slowly.
Speaker C:But I like the fact you've got to engage everyone in it.
Speaker C:We've got this new tool.
Speaker C:It's like we've built a new, you know, I don't know, we've got a new vehicle, how are we going to use it?
Speaker C:Or we've got a new chainsaw, what we're going to chop up, you know.
Speaker A:Yeah.
Speaker B:So basically, incorporating everything you've just been talking about in terms of how companies should think, you know, brass tacks: how does a management team implement or build that kind of plan you were talking about earlier in, like, an afternoon?
Speaker B:Fastest route from A to B. Yeah.
Speaker A:I mean, I've got to say, if you've only got an afternoon to do it again, don't expect great things, you're really not committing.
Speaker A:But if you've only got an afternoon, then I think first of all, you start with your business strategy, because AI needs to have a strategy and if you don't align it with your business strategy, you're going to be pulling in two different directions.
Speaker A:So what is your business strategy?
Speaker A:What are you after?
Speaker A:First of all, start with that.
Speaker A:Then go, all right, what would our strategy then be for AI?
Speaker A:And saying that we're going to try out a few pilots is not a strategy.
Speaker A:That is playing with it.
Speaker A:That's not a strategy.
Speaker A:Are you looking for it to cut costs as the most basic thing to do at least?
Speaker A:Then go, well, yes, we're wanting to do that initially.
Speaker A:So let's go beyond that: what would we do after that?
Speaker A:What is your policy going to be on staff using this?
Speaker A:On replacing staff?
Speaker A:Is that what you're wanting to do?
Speaker A:Be honest with yourself.
Speaker A:Don't try and skirt around that issue.
Speaker B:Be honest with your staff, if that is the key strategic aim.
Speaker A:Yeah.
Speaker A:And I mean, it's not an easy one to say, but if that's the way you're going, then be honest.
Speaker A:I think people need to know.
Speaker C:There's employment law in that regard.
Speaker A:But, yes, yeah, okay, I'm not an expert on that, so I'll leave that to the experts.
Speaker A:But from that, I think we also need to understand that you have to be doing this at the same time from two different directions: from the top down and from the bottom up.
Speaker A:So you should probably have an AI amnesty in your organization and say to people, right, who's using this?
Speaker A:What are you using, and what are you using it for?
Speaker A:Because these people have maybe worked out the stuff that would actually be valuable and should be democratized across the organization.
Speaker A:The RAF used to do this thing when they built Nissen huts: they wouldn't put the paths down, wouldn't lay any concrete slabs.
Speaker A:Instead, they just left people to walk their own way between the huts, which would create these worn paths in the grass, and they would pave those.
Speaker C:That's genius.
Speaker A:So it's what you would call desire lines.
Speaker A:So they paved the desire lines.
Speaker A:So maybe your staff have already worn some desire lines into how they use AI.
Speaker B:And you might as well leverage that domain knowledge.
Speaker A:I think you probably want to measure it first to make sure that it's...
Speaker B:And also, amnesty is a good word, because you're basically saying to people, look, no comebacks.
Speaker B:Raise your hand, be honest.
Speaker B:Nobody's going to get sacked because you've been using it to help you in your role for the last six months in an unsanctioned way or whatever.
Speaker A:Yeah, so from that, I think there's a lot of information you need and that's part of the information you need.
Speaker B:Crowdsource it.
Speaker A:And there is the knowledge.
Speaker A:Yeah.
Speaker A:Can you get the knowledge from your staff on how this fits in?
Speaker C:Invest in training too?
Speaker A:Oh, yeah, absolutely.
Speaker C:I'm not even pushing your product, but if you give people training and you give people time, especially with quality third parties like you who will talk to the team and go through how they could use it, and just keep iterating on those things. Training doesn't happen once.
Speaker C:You just keep coming back to it.
Speaker C:Let's keep talking about it.
Speaker A:That's it.
Speaker A:Thank you.
Speaker A:And I'd forgotten completely that I should really be pushing my own product.
Speaker C:You don't need to.
Speaker C:I think it's so great what you do out there.
Speaker C:Obviously you'd align it with the business needs, but people feel supported; you're there to support your team.
Speaker A:Yeah.
Speaker A:I mean, there are all sorts of stats we know about; the ROI you get from training is absolutely huge.
Speaker A:One of the biggest indicators of company success is your investment in training.
Speaker A:And we've got about 35 to 40 AI experts around the world who all come from different backgrounds and have different perspectives.
Speaker A:So we've got people that specialize in ethics, we've got neuroscientists, we've got a professor who teaches critical thinking.
Speaker A:Because one of the things that we look at is that actually the most important skills aren't the technical skills necessarily, but it's the human skills that you need in the age of AI.
Speaker A:And that's the stuff that I'm really passionate about, because everyone is a manager now.
Speaker A:Even if you're an intern who started today, you are a manager now, because if you're working with an AI tool, you need management skills.
Speaker A:You need to know how to create your own vision for what you're after.
Speaker A:You need to understand what strategy is and how to come up with it.
Speaker A:You need to develop judgment.
Speaker A:So that's knowing what's good and knowing how to get there.
Speaker A:You need to understand how to present your work.
Speaker A:And all these are management skills.
Speaker A:So that's the kind of stuff we're teaching a lot of at the moment, because it's about getting the right skills in the age of AI, not just AI skills.
Speaker A:That's the very narrow-minded approach to it.
Speaker A:And a lot of companies come to us just wanting us to show their teams how to prompt, and we're like, actually, you need more than that.
Speaker A:It's all about the skills we need within an AI-operated environment.
Speaker C:You could argue that an AI is just a new employee that you have and that all of the things you're talking about are helpful skills.
Speaker C:We're all generally quite bad, when we manage, at spending the time to give someone the context, spending that extra half hour to make them understand the client, what we're really trying to achieve, how it fits together.
Speaker C:And I think if you see the world as: I've got 10 employees, and a new person called AI has joined, with certain strengths and weaknesses, and we're all allowed to talk to them and give them context.
Speaker C:In my mind that feels a happier place than saying you've got these people, and then we've got AI, and it's a sort of competition.
Speaker A:I was actually discussing this with an HR specialist yesterday: the idea of AI employees. It's something we're starting to see people talk about more and more, particularly around agents and automations, seeing them as AI employees.
Speaker A:It irks me a bit.
Speaker A:I feel very uncomfortable about it, because most of the people who talk about it in AI are basically these hucksters, these tech bros, going, hey, I've just saved three quarters of a million pounds by getting nine agents to do all the work.
Speaker A:And it's like, yeah, no.
Speaker B:They're all half Brummie and half cockney: just saved 'undreds of millions.
Speaker A:So the idea of AI employees irks me, because it's these hucksters spouting all this bullshit about it.
Speaker A:But it's something companies are going to be talking about more and more, because, yes, some people's jobs are already quite robotic and don't require a hell of a lot of cerebral input.
Speaker A:And that stuff can be automated, though a lot of it could kind of have been automated before AI, to be honest; it's just much easier to do with AI now.
Speaker A:So yes, AI employees.
Speaker A:So I was discussing it yesterday with this HR specialist, and we were saying, let's look at it from the point of view of how we would work with an actual employee.
Speaker A:Do we hire an AI employee?
Speaker A:Do we need to interview?
Speaker A:Do we need to work out what the core skills are?
Speaker B:Write a job description?
Speaker A:Write a job description for them, with their roles and responsibilities, their deliverables?
Speaker A:You know, do we need to continue to train them?
Speaker A:So we need to think of that.
Speaker A:Do we need to be doing six-month reviews?
Speaker A:How do we make it redundant?
Speaker A:Is there a fallback if it stops working?
Speaker A:Should it pay tax?
Speaker A:Well, yes, that's another thing for these AI companies.
Speaker A:But yeah, there's a lot of conceptual thinking needed around that that hasn't been done yet.
Speaker A:And that's some of the stuff we're starting to explore with the folk at the Gen AI Academy, so that we can be ahead of companies when they're asking for it.
Speaker C:Dave, you've been brilliant and I think such a helpful, big overall picture of what's going on here and how to manage it.
Speaker C:You mentioned this Gen AI Academy.
Speaker C:Tell us a little bit about that so we can understand.
Speaker A:So I started this up with somebody I was working with.
Speaker A:Helena had been helping me with my business, which was mainly training and speaking.
Speaker A:And we realized that I was getting so many conversations, companies wanting to talk to me, speaking engagements, training requests, and for some of those I didn't have the expertise.
Speaker A:I could talk about that stuff.
Speaker A:But you know, I'm not an expert in AI and HR.
Speaker A:I'm not an expert in building agentic automations or stuff like that.
Speaker A:I know about it and I can speak about it, but there's going to be other people who are better.
Speaker A:So Helena had already been working with a community of people in the AI field and had built up this incredible roster of people.
Speaker A:And we thought, well, why don't we just take this, make it bigger and get more people involved.
Speaker A:Now, it took us a year to get it all together, to get the experts together, and we created video courses.
Speaker A:So we have about 20 to 30 video courses up on our platform.
Speaker A:But what we discovered was clients were like, yeah, but we want you to come in and deliver it live, or we want you to do a live delivery over the Internet or whatever.
Speaker A:So that's actually what we've ended up doing a lot more of.
Speaker A:And we're advising some companies as well.
Speaker A:We're helping build their education programs for AI, and the skills needed in the AI age, across these organizations.
Speaker A:I mean, we only launched about six months ago.
Speaker A:We're already working with governments, we're working with the UN, we're working with Fortune 500s and we are providing training programs.
Speaker C:What does the Gen stand for? General?
Speaker C:Generative?
Speaker A:So generative AI is basically AI tools that come up with new things.
Speaker A:So large language models.
Speaker A:ChatGPT is generative AI.
Speaker A:Image generators are generative, they're generating new stuff.
Speaker A:There's also AI that does things like analysis.
Speaker C:One of the questions I just wanted to end on: we're all obsessed with LLMs and we think that's what AI is, right?
Speaker C:So what are the main other areas?
Speaker C:So there's images analysis.
Speaker B:Are we going to open the can of worms of AGI?
Speaker C:Oh, well, maybe.
Speaker C:Yeah, coming.
Speaker A:Well, it's a big one.
Speaker A:It's actually one of the things I'm interested in.
Speaker A:AGI, they reckon, in the next five years.
Speaker C:Which is super cleverness?
Speaker A:Artificial general intelligence is when AI tools are smarter than a human.
Speaker C:Right.
Speaker A:And then you've got artificial super intelligence.
Speaker A:That's when AI is smarter than the whole of humanity combined.
Speaker C:Wow.
Speaker A:At that point, you know, humans are no longer the dominant species on the planet.
Speaker C:But as a business, there are image generators built into ChatGPT, though I don't think it's very good with numbers, so I'd probably want to look at some analysis tools.
Speaker A:So there's something from just the last week: an AI researcher who's been working for Meta.
Speaker A:He's one of the fathers of modern AI, a guy called Yann LeCun.
Speaker A:And he just announced in the last few days that he's leaving Meta.
Speaker A:But it's his reason for leaving that's really interesting.
Speaker A:He believes that this next-token prediction, basically this guessing-what-the-next-word-is approach, which is just a souped-up version of the predictive text you get when you're texting, has a real limitation.
Speaker A:Because what we're trying to do is take thinking and shoehorn it through language, and that's not the way that we work.
Speaker A:So if you've got a toddler in a high chair and they've got a spoon in front of them, they will knock the spoon off the highchair and the parent will come along and pick it up and go, brendan, don't do that.
Speaker A:And they'll knock it off again.
Speaker A:And what they're doing is they're conducting experiments of a world model and starting to understand gravity and the social interaction with the parent and all sorts of things.
Speaker A:And the child is just constantly running experiments.
Speaker B:They have nothing to do with language.
Speaker A:Yeah, nothing to do with language because we're not at that language level yet.
Speaker A:And Yann LeCun believes that what we need to do is create AIs that build world models, so they can understand things like physics and chemistry.
Speaker A:But they're doing it not through the lens of language; they're doing it with a deeper understanding.
Speaker C:With experiments like dropping bombs.
Speaker C:That's what happens when you drop a nuclear bomb.
Speaker C:David, we've gone on long enough.
Speaker C:I think it's been absolutely brilliant.
Speaker C:Thank you.
Speaker C:Where can people find you? LinkedIn?
Speaker A:LinkedIn.
Speaker A:All over LinkedIn.
Speaker A:Yes.
Speaker A:If you spell my name correctly.
Speaker A:B-I-R-S-S. You won't...
Speaker C:We'll beat you with a stick.
Speaker A:And it's a Dave, not a David.
Speaker A:Yeah, I'm very easy to find on the Internet.
Speaker A:There are about five Dave Birsses.
Speaker A:We all got together about 20 years ago on the Internet, talked to each other and I won.
Speaker B:Nice to you.
Speaker A:So when you search for Dave Birss on the Internet, it's me.
Speaker A:It tends to be.
Speaker B:I'm a mixed martial arts fighter and a Northeastern estate agent.
Speaker C:There's only one of mine: one of the top battery engineers at General Motors.
Speaker B:Congratulations.
Speaker C:Very good.
Speaker C:I spoke to him too.
Speaker C:I wanted to get him to come on the podcast, but it all went wrong.
Speaker C:When we discussed Harry and Meghan, he said, just one question, Andrew: which side are you on?
Speaker C:And apparently I was on the wrong side.
Speaker C:I was like, Team Kate.
Speaker C:Sorry about that.
Speaker A:Like all British people.
Speaker C:The Zoom call suddenly ended, I think, and he's ghosted me ever since.
Speaker C:Andrew, Uri, if you're out there, take it back.
Speaker C:No, thank you, Dave.
Speaker C:Thanks for doing this.
Speaker C:It's been an absolute pleasure.
Speaker C:Good, good friend of the show.
Speaker C:Thank you.
Speaker C:Thank you, Mr. Craig.
Speaker B:Thank you as always.
Speaker C:Excellent.
Speaker C:Thank you.
Speaker C:And you'll catch us same time next week.
Speaker C:Take care.