Galen Low is joined by Ronald Schmelzer and Kathleen Walch—the managing partners and lead analysts behind Cognilytica, and the hosts of AI Today—to lift the lid on why AI projects fail and why responsible project management is so important to the success of artificial intelligence.
Interview Highlights
- Cognilytica has been around since 2017. It started because Ron & Kathleen were working together at the same company (TechBreakfast). [2:44]
- Ron & Kathleen found that people weren’t running AI projects correctly, and there was a high project failure rate. So they looked into why this was happening. [4:12]
- Another big reason why Ron & Kathleen started Cognilytica was that people over promise and under deliver on AI technology. [4:36]
- Ron & Kathleen spend a lot of time explaining what AI technology is on their podcast, “AI Today”. [5:09]
- Cognilytica came up with the 7 Patterns of AI. Every AI project falls into one or more of these seven patterns. [9:57]
- 1. Conversational systems: computers talking to humans, humans talking to computers, and also human to human (e.g., machine translation).
- 2. Autonomous systems: taking the human out of the system, whether that’s taking the human out of the vehicle or out of the software.
- 3. Recognition: making sense of unstructured data, which is the majority of data that we have these days.
- 4. Goal-driven systems: using reinforcement learning.
- 5. Patterns and anomalies: looking at data to find patterns and outliers in that data.
- 6. Hyper-personalization patterns: treating people as individuals, ad targeting, hyper-personalized health care.
- 7. Predictive analytics: taking past or current data to help humans make better predictions.
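For readers who like to think in code, the seven patterns can double as a scoping checklist. Here's a minimal, illustrative sketch (the identifiers and descriptions are our own paraphrase, not Cognilytica's materials) of tagging a project with the patterns it touches:

```python
# Illustrative only: the seven patterns of AI as a simple lookup,
# useful for tagging a project with one or more patterns.
SEVEN_PATTERNS = {
    "conversational": "computers and humans talking to each other (chatbots, machine translation)",
    "autonomous": "taking the human out of the system (vehicles, autonomous software)",
    "recognition": "making sense of unstructured data (computer vision, facial recognition)",
    "goal_driven": "reinforcement learning toward an objective (game playing)",
    "patterns_and_anomalies": "finding patterns and outliers in data",
    "hyper_personalization": "treating people as individuals (ad targeting, personalized medicine)",
    "predictive_analytics": "using past or current data to help humans make better predictions",
}

def describe(patterns):
    """Summarize which of the seven patterns a project touches."""
    unknown = set(patterns) - SEVEN_PATTERNS.keys()
    if unknown:
        raise ValueError(f"not one of the seven patterns: {sorted(unknown)}")
    return {p: SEVEN_PATTERNS[p] for p in patterns}

# A shelf-scanning robot, per the episode, combines recognition with autonomy;
# separating the two patterns is what lets you scope a smaller first iteration.
shelf_scanner = describe(["recognition", "autonomous"])
```

Knowing which patterns a project combines hints at what data, algorithms, and team members it will need, and, as discussed later in the episode, which patterns you might defer to a later iteration.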
- There are great people trying to solve big problems, but they're failing. 70–80% of AI projects fail. [15:35]
- A lot of AI project failure comes down to fundamental PM issues, two of them being:
- 1. AI professionals don’t know basic project management. [15:58]
- 2. PMs who are brought in treat AI projects like other projects, but there are some key differences. The biggest one is that AI projects are entirely dependent on data. [16:08]
- You have to run AI projects from a data-centric perspective. [17:13]
If you want to run a data-centric project, then you need to have that data mindset. You also need to have data specific methodologies and practices that you’re bringing to the project.
Kathleen Walch
- Cognilytica is an advocate of the CPMAI methodology, which stands for Cognitive Project Management for AI. [17:51]
- You need to make sure you have access to the data you need. [20:28]
- Cognilytica’s mantra is think big, but start small and iterate often. [22:40]
- A lot of organizations are agile, or at least they want to be. Ron & Kathleen use the term “wagile” quite often because it's a waterfall-and-agile combination approach: organizations want to be agile, but they're not. [23:48]
- Agile can be visualized as a spiral. When you’re building an AI project, the AI isn’t actually the end goal, it’s a means to some other ends. [24:29]
- From a project management perspective, you wouldn't think of CPMAI as a methodology; it's really more of a process. It's a step-by-step approach: business understanding, data understanding, data prep, model development, model evaluation, and model operationalization. [26:05]
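The step-by-step phases above can be pictured as a small iteration loop. This is a hypothetical sketch, not official CPMAI tooling; the phase names come from the episode, but the function shape and names are our own invention for illustration:

```python
# Hypothetical sketch of one CPMAI-style iteration: every phase runs,
# in order, on each small, scoped pass through the project.
CPMAI_PHASES = [
    "business_understanding",    # what real problem are we solving, and is there ROI?
    "data_understanding",        # do we have access to the data, and is it any good?
    "data_preparation",          # cleaning, labeling, and enhancing the data
    "model_development",         # training against the prepared data
    "model_evaluation",          # does the model meet the (small, scoped) goal?
    "model_operationalization",  # putting the model into real-world use
]

def run_iteration(project, phases=CPMAI_PHASES):
    """One small pass through every phase: think big, start small, iterate often."""
    results = {}
    for phase in phases:
        step = project[phase]           # each phase is a callable supplied by the team
        results[phase] = step(results)  # later phases can see earlier phases' outputs
    return results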
- Start simple in your first iteration. It doesn’t need to be the end goal. [30:01]
When you’re following a step-by-step approach, it helps you understand what’s needed so that everybody is on the same page as well.
Kathleen Walch
- Understand what AI can and cannot do. It’s not a one-size-fits-all technology. [33:30]
- Sometimes you should just program your way to a solution. AI might not be the solution to your problem. Some people try to make AI work when they shouldn’t. [33:47]
- Over promising and under delivering on what AI can do causes major issues in the industry. [34:29]
- AI winter: a period of decline in AI investment, research, and funding. [34:36]
- A lot of the problems that we try to solve really are dependent on some of our human abilities. And if we can get machines to do them, then we can unlock the so-called digital transformation dream. [35:45]
- Augmented intelligence: keeping the human in the loop but reducing their workload. [37:02]
- You need leadership buy-in. There’s going to be a lot of resistance if people have fears and concerns with AI or if they’re worried that it will replace their jobs. [38:50]
- Feedback is incredibly important. Make sure that you are getting rid of the tasks that people don’t like and not the tasks that they do. [40:10]
- AI is not a job killer, but it can be a job category killer. [44:18]
- AI has touched every industry, so PM is no different. It’ll improve their work, and it might replace some of the job, but will it replace it entirely? That depends. [45:18]
The more AI projects we have, the more project managers we need.
Ronald Schmelzer
Meet Our Guests
Ron is the managing partner and founder of the artificial intelligence-focused analyst and advisory firm Cognilytica, and is also the host of the AI Today podcast, an SXSW Innovation Awards judge, founder and operator of TechBreakfast demo format events, and an expert in AI, machine learning, enterprise architecture, venture capital, startup and entrepreneurial ecosystems, and more. Prior to founding Cognilytica, Ron founded and ran ZapThink, an industry analyst firm focused on Service-Oriented Architecture (SOA), cloud computing, web services, XML, and enterprise architecture, which was acquired by Dovel Technologies in August 2011.

It’s okay to think big, but start small and iterate often.
Ronald Schmelzer
Kathleen Walch is a serial entrepreneur, savvy marketer, AI and machine learning expert, and tech industry connector. She is a managing partner and founder of Cognilytica, and co-host of the popular AI Today podcast.

Set realistic expectations. Part of that is scoping your project correctly and understanding what it is you’re trying to solve.
Kathleen Walch
Resources from this episode:
- Join DPM Membership
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Kathleen on LinkedIn
- Connect with Ron on LinkedIn
- Learn more about Cognilytica
- Check out Galen’s interview with the AI Today Podcast
Related articles and podcasts:
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Galen Low: You've been having those dreams again. You know, the one where every other PM on your team is a cyborg? Or the one where the AI product your team is building becomes self-aware and enslaves humanity?
Relax.
Sure, artificial intelligence is a hot topic these days, but it's been something that has existed in some form for decades now. And it's all around you: it's suggesting phrasing for your email, it's recommending videos for you, it's figuring out the best route to get to work on time.
But yes, delivering projects that involve AI is a big responsibility, and frankly, an inevitability. And as with any rapidly advancing technology, not every project is destined to succeed. So... that pressure you feel is valid and real.
If you're someone who is trying to wrap their head around artificial intelligence and the way it will impact digital project management, today's episode is for you. We're gonna be sinking our teeth into the reasons why AI projects fail and what we as project managers can do to give our AI-related projects the best chance at success.
Hey folks, thanks for tuning in. My name is Galen Low with the Digital Project Manager. We are a community of digital professionals on a mission to help each other get skilled, get confident, and get connected so that we can amplify the value of project management in a digital world. If you want to hear more about that, head over to thedigitalprojectmanager.com.
Okay. Today, we are talking about projects that involve artificial intelligence and how agile practices can help navigate some of the complexities of creating an AI-based product.
With me today are Kathleen Walch and Ron Schmelzer, the managing partners and lead analysts behind Cognilytica and also the hosts of AI Today—a podcast devoted to practical, real-world insights about what's happening in the world of artificial intelligence.
Kathleen, Ronald, welcome!
Ronald Schmelzer: Thank you for having us.
Kathleen Walch: Yeah, we're so excited to be here.
Galen Low: It's so great to have you on the show. For folks listening, we were just nerding out about podcasts because Ron and Kathleen have been podcasting about AI for, what, six years now?
Kathleen Walch: Just about, yeah.
Galen Low: So I've been gleaning all of their tips and secrets, and I'm feeling a little bit under pressure.
I'm feeling self-conscious, but I thought we'd dive right in. AI is such an interesting topic right now in the world of project management, maybe around the world. Your podcast gets heaps of listeners. Everyone's really trying to keep the finger on the pulse of this thing that's moving very quickly and is generally somewhat misunderstood.
But before we get into it, I'm just wondering, could you tell us a little bit about Cognilytica? Like what prompted you to create your organization and how do you help people navigate the world of artificial intelligence?
Kathleen Walch: Sure. So Cognilytica has been around for about as long as our podcast, since 2017. And it started because Ron and I had been working together previously at a company called Tech Breakfast, which was really a morning demo-style event.
Very entrepreneurial, not necessarily focused on AI, but just tech in general. To help showcase and highlight what's going on in different regions across the United States. So we were in about 12 locations, all the way from Boston to New York, the DC area, and Northern Virginia, Maryland. We had some in Texas, North Carolina and Silicon Valley, of course.
So from Tech Breakfast actually, we started to see a lot going on with voice assistants in particular and voice technology. And we said, all right, there might be something here. And that's kind of the beginnings of Cognilytica. Cognilytica started as an AI-focused research, advisory, and education firm. And what we quickly realized was about the people who were running AI. Now, the term artificial intelligence was coined in 1956.
So the concept is not new. It's been around for decades at this point, yet it still feels new. And I think for many folks it is, maybe, you know. They've never really been introduced to it before. It's really just in the past decade or so that it started to become mainstream. People are interacting with this on a daily basis through voice assistants, through their phone, different, you know, facial recognition technology.
So what we found was that people were not running AI projects correctly, and there was a high failure rate. And we said, okay, let's take a step back and understand what's going on here and why is this happening. So on our podcast, we actually have an AI failure series that goes into common reasons why AI projects fail.
And this can be around data quality, data quantity issues. Maybe your return on investment isn't necessarily there. People wanna do AI just to do AI. And another big reason was that, for whatever reason, people over promise and underdeliver, so they over promise on what AI can do. They get so excited about the technology and then of course they underdeliver on it because maybe the project was out of scope.
You know, they really bit off more than they could chew. So we said, all right, we need to focus on fundamental education here and really show and educate people on how to do AI right, which includes following best practices methodologies. And I'll let Ron introduce himself and pick up on that.
Ronald Schmelzer: Fantastic introduction.
So, I'm Ron Schmelzer, also a managing partner here at Cognilytica. And you know, one of the things we, we found, and part of the reason, I think part of the reason why the podcast also took off so much is that we spend so much of our time on concepts and explanations, even terminology.
Because some of it is confusing. As we say, some of it's almost honestly, almost intentionally confusing. We use the same words to mean different things. We use different words to mean the same things, because artificial intelligence is actually a collection of a bunch of related but different communities. We have sort of the traditional robotics folks and folks who have been involved in control systems.
So we have the whole group of statisticians and statistics folks who bring in their analytics background. We have linguists and the natural language processing folks. And we have, you know, people who have worked in cybernetics and control systems, and they're like, we use this word to describe this, and they're like, that's interesting.
We use this word to describe that. But nobody tends to fix the problem. They just throw it all together. And meanwhile, you know, it's a lot of researchers who all of a sudden people get really excited about AI and they throw a ton of money into it, you know, billions of dollars sometimes.
And all of a sudden these researchers now find themselves at big companies like Microsoft and Amazon and Google and Facebook and all that sort of stuff, and they run these projects. So, as Kathleen mentioned, you know, really a lot of our time spent in education, we released some training very early on in the Cognilytica history, fundamentals of artificial intelligence and applications of AI and that sort of stuff.
And that took off very quickly, and we ended up doing a lot of our training for governments, major government institutions, federal, state, local, international, Australia, UK, Singapore, so many countries that we've been working with, as well as major organizations and enterprises. And you'd think after all these years, they'd kind of know what's going on.
But of course you go in and you're just surprised. You're like, how can you be doing all this and you don't have a fundamental grounding in it? And that of course brought us to this methodology, which we'll spend some time talking about on how to run these AI projects because what a lot of these organizations had in common was sort of making the same mistakes over and over again.
And it's not because they don't have smart people, they have tons of smart people. Some of the smartest, most well-known researchers in the space are working for these companies and they have great technology. So, something else is to blame, and as you all know, the magic sauce that makes a lot of this work is project management, and that's why we are here.
Galen Low: You know, it's funny you're mentioning about artificial intelligence having its origin and being coined in the 50's, and I'm picturing some of this like sci-fi technology, right? That comes through fiction and then kind of becomes a reality. And in a way just this sort of notion of, you know, AI and why do AI projects fail?
We're gonna get into that. But I'm thinking through this like notion of like, okay, teleportation, right? Something that we have coined in fiction and somebody's out there trying to make it happen and the fixation is trying to make it happen, versus trying to connect it to why teleportation? What is the business impact that they can have?
And how we connect that to, how we deliver that technology into usage?
Ronald Schmelzer: I mean, like, and we're actually seeing this now, actually in the news today and this week, about how one of the autonomous vehicle companies has just gone bust. Billion-dollar Argo just shut down, laid off 2,000 people, which is not so good, and is selling off their team and their technology to Ford and VW.
But there's a big article, it's like, I think they're like, we think the bloom is off the rose and autonomous vehicles may not be happening. But it goes to your point, which is like, I know Kathleen in particular loves the idea of the autonomous vehicle, but it kind of doesn't have, like, the driver, you know, no pun intended there.
Because it truly doesn't have the driver, but like, you know, in terms of the business, it doesn't have a business driver either. There's no driver in the front seat and there's no business driving it either. And so, is it just the quest, right? Is that, is it the quest that matters? Or do we need to have some return to make it work?
Galen Low: I really like that. And actually just to build on that as well, I mean, I think a lot of our listeners, actually just people in general, we hear, Oh, AI projects. And we think of that, right? We think of, Oh, okay, autonomous vehicles. Oh, okay, yeah, you know, conversational design, chat bots, things like that.
But it is a big sort of area, right? Artificial intelligence is not just that. So I thought maybe we could just level set for our listeners, like when we're talking about how to deliver projects that involve artificial intelligence, like what other kinds of projects are we talking about? What other examples do we have?
Kathleen Walch: Yeah. You know, that's something that we were struggling with at the beginning too, because when people talk about AI, you may be talking about two different things. So you may be talking about AI-enabled chat bots, or I may be talking about autonomous vehicles, which, as Ron said, I love. That was something that I was looking forward to.
And I may be talking about something else, you know, predictive maintenance, for example. So they all fall under this general category of AI, but they're not the same thing. Which can be difficult when we're trying to talk about artificial intelligence. So we came up with the seven patterns of AI, because all of the projects that we've seen fall into one or more of these seven patterns.
And at a high level, that's conversational systems. So that's, you know, computers talking to humans, humans talking to computers, and then also human to human. So like with machine translation. We have autonomous systems, so the idea there is really to take the human out of the system. So think about autonomous vehicles, for example, but you can also have autonomous software as well.
So, you know, trying to automatically route things through the most optimal path, you know, in a machine, maybe trying to eliminate bottlenecks in your process flows, things like that. Then we have recognition. So these are things, it's making sense of unstructured data mostly, and the majority of the data we have is unstructured these days.
So think about computer vision, you know, like facial recognition, that kind of stuff. Then we have goal-driven systems. So this really is using reinforcement learning, trying to figure out most optimal path through a maze, things like that. So game playing is a big example there.
We also have the patterns and anomalies patterns, so that's really where we're trying to, you know, look at data and find patterns in that data, find outliers in that data. We also have the hyper-personalization pattern. So this is where we're no longer bucketing people into groups and categories, but treating them as an individual. Obviously we think about advertising targeting with ads here, but then maybe take it one step further and think about maybe hyper-personalized medicine or hyper-personalized healthcare.
You know, how can we move beyond that? And then the last one is predictive analytics. So this is taking past or current data to help humans make better predictions.
When you think about it this way, it really helps make sense and say, okay, this, you know, my natural language processing application is part of the conversational pattern. So it helps shortcut you to AI project success as well with what data you need, maybe with what algorithms you're going to select, who on your team needs to be involved. So, when we break it down that way we find at least we can level set our conversation.
Galen Low: I like that word patterns because I mean, originally I was thinking patterns, like okay, yeah, computers, patterns, trends, but actually, if I'm understanding it correctly, almost like more like sewing, right?
You have a pattern for a dress versus a pattern for a pair of pants, and that's gonna help guide you in a certain direction because it gives you a bit of a starting point rather than just having a whatever rectangular piece of fabric. Am I making that up?
Ronald Schmelzer: No I mean, that word pattern, it's one of those English words.
We use it in both special and general ways, but yeah, it's actually not a bad mental image, you know. Sometimes I think of a cookie cutter also as like a pattern, because it comes up when we're trying to get machine learning systems to learn from data, and in effect, all of these things are really just forms of pattern recognition and doing things with patterns.
But the pattern matters because if I'm trying to learn, say a language pattern, I can't really apply that model, to your point, like the dress model, I can't use that for image recognition. There's this goal, this overall goal of artificial intelligence called "artificial general intelligence", which is the idea that we do like, cause if you think of how your brain works, it's not like, we do have specialized regions of the brain that are good at visual and all that sort of stuff.
But basically you have one brain. You don't have 20 different brains, and it's not like you need to train one part of your brain for one thing and another part of your brain for something else. So we're trying this idea of the AGI, artificial general intelligence is this generally intelligence system that can learn anything, that can learn recognition and conversation and autonomous stuff.
But we can't figure that out, like we don't really know exactly how the brain works. So we have these like different ideas. And what we're doing now is actually something called Narrow AI, where we're trying to solve these individual patterns.
And that's why it's called patterns. Because of that, we can't really apply, if something's really good at, say, playing chess, or a poker bot, you know, great AI, it beats the best humans, right? You can't make that do image recognition. It's still that huge leap. So yeah, it's like if I say, what are you making? I make clothing. Like, okay, I know generally what you're talking about, but like, what do you mean by clothing?
Or like, oh, I make dresses. Well, it's very different than being in the, you know, outer garments, you know, parka industry. It's like you're really making different things. You can't be like, all of a sudden I'm gonna decide one day to make socks or something.
That's kinda where we are. It's a good analogy.
Galen Low: It makes it even more fascinating because, you know, like what you said, we don't understand everything about how a brain works, and yet we are trying to build it piece by piece. We're almost figuring out that quest for AGI is actually a quest to learn how to build a brain. And we're starting in these like localized regions and localized functions, but eventually, it will ladder up into something more general, that will make better metaphors than I try to make on this podcast.
One thing we touched on earlier was just, you know, delivering projects and where some of the disconnects and misconceptions and problem areas are around delivering a project that involves AI. And just to lift the lid on this, I thought I'd ask you just how does delivering an AI project differ from delivering other projects or even other digital projects?
Like what are some of the key characteristics of these projects that project managers need to be cognizant of to be successful?
Ronald Schmelzer: Well, great. Obviously, this is the key question to answer and part of why we're here. And as we were chatting, sort of pre-show, you know, one of the reasons why we're even here is that there's these great people who are trying to solve these big problems, and they're failing. And they're failing in very notable ways. You can look online at something like 70% to 80% of all AI projects fail. And by fail I mean, like, they get canceled, or sometimes companies get shut down and lay off a lot of people.
That's a pretty big failure. But there are other smaller failures, you know, projects getting stopped, people getting moved off. And what we found is that a lot of it comes down to these fundamental project management issues. And there's two issues. One, AI and data folks don't know basic project management.
They should know that, but also project managers who are brought in, treat an AI project like any other project in general or like any digital project. And there's some key differences. The biggest one is that AI systems are entirely dependent on data. Now you might say, well, all digital systems are, but that's actually not true.
You can build a website or you can build a mobile app, or you can build, you know, some cloud thing that's moving things around and it's like maybe it doesn't even have any data or very little data on the first day, and maybe there's later data, but the later data doesn't really change anything about how the system works.
A machine learning system learns everything from data. So even if I, even if you and I build the exact same chat bot, let's say with the exact same technology, I use one set of training data, you use a different set of training data, they can have completely different success and failure rates. In the case of Microsoft, with their notable Tay bot, they let the internet train the chat bot.
Wow. What do you think happened in that case? Yeah, within 24 hours, this bot had to be taken down. Racist, you know, all the terrible stuff, because why would you ever let anything be open to the internet? Like that's a crazy idea, right?
So what we learned is that you have to run AI projects from a data-centric perspective, which means very specific things. Maybe I'll segue to Kathleen on that: what does it mean to run a data-centric project right?
Kathleen Walch: Right. And if you wanna run a data-centric project, then you need to have, you know, that data mindset. You also need to have data specific methodologies and practices that you're bringing to the project.
And what we found is that a lot of people weren't doing that. So they were running it with this, you know, traditional mindset of like software application projects and that it was failing. So we're, you know, advocates of doing AI right.
We don't want to have a lot of these failures out there. And so we're advocates of the CPMAI methodology, which is Cognitive Project Management for AI. What we've also found is that some people, especially a few years ago at least, were getting confused with, do we start with the business understanding or do we start with the data understanding, right?
Cause we talk about how data is the heart of AI and they're like, okay, well then the data's most important and we're gonna start with that first. And we're like, actually, business understanding, because if you're not solving a real problem, why are you doing it at all? And we found that people just get so caught up in the hype of AI and they're like, well, I'm being told by upper management, I wanna do it. Or it's in the news, it's this new hot thing, so let's go ahead and do it.
And we're like, yeah, but if you're not solving a real problem, you're just gonna spend a lot of time and money and resources to build something that doesn't have an actual useful application. So if it's not going to have that return that you're looking for on your investment, you're probably never going to run another AI project because upper management's gonna look at this and be like, you spent $5 million to do what?
So we found that's another reason.
Galen Low: You know, it's funny just thinking about just context in general. Like, I think in some ways all crafts, all disciplines, all specializations, you know, in some ways we've come from this world of, you know, almost with the blinders on, right? Like the quest to do that thing without understanding the broader context.
And I'd say project management is somewhat guilty of that as well. And you see it, and I'm not criticizing it, it's a way to do it, but you have like your enterprise PMO, right? And they're like, okay, guns for hire. You lead this project to success. And it doesn't necessarily appreciate some of the understanding required of what does managing a data project look like compared to managing a website build or a CRM rollout.
Like what are the different pieces? And we've treated it as this blanket craft of project management without recognizing just some of that curiosity and specialization that has to happen. And that context, the, the understanding the context in order to do it well.
And then both those groups, right, whether you are a data scientist or a project manager, you might be missing the forest for the trees if you're not also in tune with the business strategy, the goals, not just of that project, but also of the organization. Because, you know, we were talking about autonomous vehicles, like that could shift.
That could shift in an instant, and especially when we're talking about AI, things are changing all the time. The competition is fierce. Not everybody knows where they're going or if they're even competing for the same prize, but it's ferocious competition out there. And things are changing really rapidly.
And without understanding that context, you know, you're just kind of doing a thing and then suddenly you might not be asked to do that thing anymore, and you're like, what happened?
Ronald Schmelzer: And, you know, this issue rears itself, even if you're not doing an AI project, you might be doing sort of just a big data project, right?
And AI projects are actually a kind of big data project. And this issue about not having this data centricity rears its head in some very obvious ways if you've seen it and you're like, oh, what could be the issue? Well, first of all, you might say, well, all I need is this data, and I can build a model for some predictive thing or a recommendation system or some chat bot.
And you're like, have you actually tried to get that data? Do you have access to the, have you seen that data? And then the next thing you know, you're like, oh man, the project starts, we didn't even look at the data yet.
We're already kind of gearing up. And then we're like, we're missing half of the data. Or it's bad, or I can't get access to it. Or, oh my goodness, this is private information, now I have some data privacy policy, now my security. And then you're like, now your project, which you were all gung ho on, hinges upon this ability to access the data and clean the data, and that determines the success of your project.
You might have, as I said, the best product in mind, but if you're doing, let's say, medical image data and you can't get to it, it's PHI, Protected Health Information, it's protected by that, or it's bad quality, or it's of limited availability, then you're gonna be saying, oh wait, we need to hold our project. And now, to solve that problem, you might have to have another project to clean the data, collect new data, or do what's called data labeling. Enhance the data, get more data in there. And all of a sudden you're like, well, we didn't budget for that in terms of time or resources. Like, okay, now you see how your little project on shaky little wheels, the wheels are coming off really quickly, and then you go, we're into this thing now.
12 months, 18 months in, you haven't produced anything. You're like, yeah, we're still trying to clean the data. We're still trying to get the data. And then maybe the whole need for the project has gone away. Maybe the market has changed. All of a sudden there's a pandemic, supply chain issues, labor issues. Whenever we do this in our podcast or in training, we just bring in the real world; we don't have to make this stuff up. Walmart had their inventory shelf-scanning robot. They spent millions of dollars and years on it, and they just canceled it because they couldn't make it work.
And I'm like, well, if it's gonna fail for Walmart, it might fail for you. So the question is, how could we have done this in a more agile and iterative way? Our mantra is: it's okay to think big, but start small and iterate often. And maybe we could have done something with a smaller data set.
Maybe we didn't have to start with this whole autonomous thing. Maybe you could have just had a push cart with a camera on it that did the shelf scanning. You didn't need to solve all of these hard problems of robotics, you know. This is it, like that's when we start taking apart the patterns and say, well, let's separate the recognition pattern from the autonomous pattern.
That's why we bring that up.
Galen Low: The bar is so high for AI, right? It's like, go for broke. If we're gonna do an AI project, it better be a robot that, you know, cooks my breakfast. Meanwhile, like you said, the first iteration of that might just be, you know, whatever, personalized suggestion of a recipe, you make it yourself.
Like, let's start really minuscule. You raise a really good point about agile, and I know I opened this promising that we talk about agile so I think we should. But in some ways, you know, you've identified and called out that agile is a good approach for this because things are changing rapidly because we need to value iterations.
Would you say it's the perfect methodology for it? Or where can Agile fall down in the world of AI?
Kathleen Walch: Yeah, so, I mean, a lot of organizations are agile organizations, or at least they want to be, they try to be. But we've found that some still don't do that.
We use this term "wagile" quite often, because it's this kind of waterfall-and-agile combination approach, where they want to be agile, but they're not. As Ron talked about, sometimes these projects just take way too long, you know, 12, 18 months; they shouldn't be taking that long. So agile, there's an aspect of it, but we say it just needs to be enhanced for these AI projects, so that we make sure we understand these are data projects and we manage them like data projects, but take that agile mindset and bring it in.
Ronald Schmelzer: Yeah, and it's kind of a weird visualization, but think of Agile as this sort of continuously rotating spiral; I've seen Agile visualized as a spiral, right? And you have your main project, which maybe has as a component to it building this machine learning model or the AI system.
The AI is not the end goal. It's a means to some other end. If you're trying to do some diagnosis thing, well, the diagnosis thing is the application.
But of course, one of the most important parts is the little black box that does the diagnosis, or whatever you want. So you could think of it as two intertwined corkscrews, right? There's one iteration cycle for your main project, where you're dealing with all this stuff.
And then there's the black box part, which is your machine learning model. And that's got its own sort of perpendicular, or orthogonal if you want to use mathematical speak, process. And they have different timeframes, right? They don't have to iterate on the same schedule, or they can iterate with the same frequency, but the outcomes don't have to be the same.
One may be more functionality-driven, and then what you can do is say, hey, AI team, right now, give me something basic. Don't make it too smart. Don't make it too intelligent. Maybe make it a chatbot that literally just says the same thing every single time. We call that the heuristic, which is the simple alternative, the off-the-cuff way of doing it, right?
And then later you can make that more and more intelligent. That way they can proceed on their iterations, and in the meantime you say, okay, your job is to make the dumb way of doing it better. So you have these simultaneous iterations, the data life cycle iterations, which would follow, say, the CPMAI methodology, which honestly, from a project management perspective, you wouldn't think of as a methodology.
It's really more of a process. People say, okay, it's a step-by-step approach: step one, figure out the business needs and do what's called the AI go/no-go, which is these nine traffic lights. Then you do data understanding, then data prep, then model development, then model evaluation, then what's called model operationalization.
Then you just repeat that whole thing, and you do it in the context of an agile sprint or something like that. It doesn't replace agile. It's not an alternative. It just tells you what to do in each iteration.
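The repeating step-by-step cycle Ron describes can be sketched as a loop. This is a rough illustration only; the phase names are paraphrased from the conversation, and the CPMAI methodology itself defines far more detail than this:

```python
# A rough sketch of running CPMAI's six phases as repeating iterations.
# Phase names are paraphrased from the conversation; this is an
# illustration of the idea, not an official CPMAI artifact.

CPMAI_PHASES = [
    "business understanding",      # includes the AI go/no-go check
    "data understanding",
    "data preparation",
    "model development",
    "model evaluation",
    "model operationalization",
]

def run_iteration(project: str, iteration: int) -> dict:
    """One full pass through all six phases, e.g. inside an agile sprint."""
    results = {}
    for phase in CPMAI_PHASES:
        # A real project produces artifacts here (datasets, models, metrics);
        # we just record that the phase ran in this iteration.
        results[phase] = f"{project}: {phase} (iteration {iteration})"
    return results

# The whole cycle repeats each iteration, rather than running once
# waterfall-style; later iterations can revisit earlier phases.
for i in (1, 2):
    run_iteration("shelf-scanning pilot", i)
```

The point, as Ron says, is that the process tells you what to do inside each iteration; it sits within agile rather than replacing it.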
Galen Low: Well, I love that, cuz frameworks like this are there to guide our thinking. And I know in my world, in project management, there are a lot of people who might consider themselves purists, right? They're going to follow this to the letter because that's how it's going to work. But that doesn't free them to actually adopt the actual principle of agile, which is to be adaptable, to go with the flow and change when you need to change.
And so we kind of need these guiding standards and best practices and guidelines. Not to follow them rigidly necessarily, but just as considerations within them. And Kathleen, you mentioned wagile, and my instinct goes there, right? It's like, Ron just said you can start a project, but if you haven't done that data thing up front, then your project might get blown out of the water in terms of timeline and costs.
And then I'm like, okay, is that a waterfall component of data gathering? But it's not necessarily, right? It can also still be iterations. It's just a different part of the project, and we're so accustomed to thinking in terms of linearity that when we start trying to think in terms of, you know, intertwined corkscrews...
It breaks our brain, right? We're like, okay, well, what are the steps within that? Tell me how to be linear in this thing that's non-linear. And we keep gravitating back toward that, and in some ways that is our downfall. Whereas actually we can still iterate at any stage.
It might not be iterating on the black box that does the diagnosis. It might be iterating on the business case. They might be intertwined because the business is changing and we need to start building this black box, but they can still be iterative. And what I really like about the visual model there is that they are intertwined.
That's the other thing I see where people fall over is they're like, cool, we'll do this thing and then separately we'll do this thing and at some point they'll plug together and we'll hope for the best. You know this famous story of, I can't remember what jet they were building, but one team was building the front half of the jet.
The other team was building the back half of the jet, and no one thought, Hey, let's just make sure that the cable that connects these two is long enough and it wasn't, they had to scrap it. Right? Like things like that. The intertwined bit is the important bit because things can happen at the same time.
Because nothing is ever gonna get locked. And this is something, the whole point is that things are changing all the time. We just need to be talking to one another.
Ronald Schmelzer: Exactly. And to add to that, I'm sure Kathleen will chime in, cuz we have so many customer examples of this happening. But when you think about it, what gets people tripped up is usually the assumptions.
People will say, oh, in order to do this project, I've got petabytes of information, I need to use all of it, and I'm going to use deep learning neural nets, which are sophisticated. And yeah, they're great, they're state of the art, but the amount of effort that has to go into cleaning the data, transforming the data, and then training those models takes a long time and is very expensive.
And so people treat that in a waterfall way. They're like, okay, well first we need to do this. Let me do that. Like, whoa. There's a lot of assumptions there. What if I told you, putting this out there, that maybe for your first iteration, you don't need all the petabytes of information. And you don't maybe need the most sophisticated network because we're just trying to prove for this first iteration, something more basic, you know.
Like, does a prediction work, will people use it? That kind of thing. So let's start with something more basic. I'll be a little terminology centric here, but like a decision tree, something very basic. Maybe we'll use, you know, megabytes, not petabytes, and let's just check first iteration. Then what it does is it doesn't hold up the schedule because you only work with megabytes instead of petabytes.
It costs less and takes a lot less time to clean, that sort of stuff. And because we're not using deep learning, we could probably use my laptop or something simple to train it on. And then this goes to those intertwining cycles. I didn't say this was the end goal.
We just said this was iteration one, so let's have these iterations check in. So with the cables: okay, let's just check this in. What are the integration points? You're doing the front half, you're doing the back half. Great. I'm not gonna stop you.
What are the points at which these two systems need to connect? And how do we make sure they connect before we get too far? And it could have been, well, let's just run a rope from the front to the back. It's not the cable, it's the first iteration, and it'll tell us how long the plane is.
Later, when I have my cables, I'll replace the rope, right? That would've turned the problem up pretty quickly. It's a very similar idea: use the rope before the cable. There are lots of examples like that.
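Ron's rope-before-the-cable idea, shipping a hand-written heuristic in iteration one and swapping in a trained model later behind the same interface, can be sketched like this (the order-review rule, threshold, and labels are all invented for illustration):

```python
# Iteration one: a hand-written heuristic instead of a trained model.
# The feature, threshold, and labels below are made up for illustration.

def heuristic_predict(order_value: float) -> str:
    """Dumb first-pass 'model': flag any order over $500 for review."""
    return "review" if order_value > 500 else "approve"

# A later iteration can swap in a trained model behind the same interface,
# so the surrounding project keeps iterating on its own schedule.
def predict(order_value: float, model=heuristic_predict) -> str:
    return model(order_value)

predict(120)   # "approve"
predict(900)   # "review"
```

The surrounding project can ship and test against `predict` from day one, and the AI team replaces the rope with the cable whenever their own iteration is ready.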
Kathleen Walch: And kind of following up with that, I think when you're following a step by step approach, it helps you understand what's needed so that everybody is on the same page as well. And it's all laid out, we're not making it up as we go along running projects ad hoc. We found a lot of projects fail that way as well.
And you'd be surprised at how many people are actually doing that, whether or not they want to admit it. So when you're following this step by step approach, then it says, okay, in business understanding, let's do this. In data understanding, let's do this. So that we don't run into some of those gotchas. And five months into the project we're like, why haven't you delivered anything?
Why has this cost $5 million? And I mean, these are not uncommon things, unfortunately. Or even just to get to the data understanding, right? Where, you know, we're not even prepping the data yet. We're not even doing anything, we're just trying to access that data. It can be months.
Sometimes this is an organizational, cultural change that needs to happen, which we understand is something people have to tackle, which is why it's always nice to get leadership buy-in on these types of projects. So we've seen issues related to that, but as Ron mentioned, well then, let's just start with a small amount of data.
Maybe data that we already have access to, so we don't need to worry about how we're going to break through some of these data silos and get access to data that isn't ours, in an organization where people can be very protective.
Galen Low: I think that's also a really good way to break into like the tougher conversation, which is that something you and I had been talking about was just this notion that, sometimes expectations are just not aligned.
Or in some ways, you know, not just from the leadership level, where they expected a robot but you just gave me a shopping cart that scans some UPC codes, but also from the team upwards, right? It's like, we're gonna do this, it's gonna be amazing, and $5 million later, everyone's like, okay, but where is it and what is it?
And you know, you've kind of underdelivered on what you promised. And like how can folks have that conversation? What do they need to arm themselves with in terms of knowledge? Who do they need to talk to and when to just kind of find that balance of expectations, and not just like kind of going, it's gonna be great. And then doing this ad hoc thing with no guidance and then, you know, missing the mark completely. What do those conversations look like and how can people prepare to have them proactively?
Kathleen Walch: So first they need to understand what AI can and cannot do. It's not a one-size-fits-all technology, and that's important to understand. If you need something done the same exact way every single time, that kind of deterministic system, sometimes you should just program your way to a solution; do some simple automation, right? AI is not going to give you the exact same answer every single time. It's probabilistic, not deterministic.
So understand that that's the nature of AI, and again, it's not a one-size-fits-all approach. It might not be the best solution to your problem, and if it's not, that's okay. Just be honest. Sometimes people try to make AI work when it shouldn't, and that can cause a lot of problems.
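Kathleen's deterministic-versus-probabilistic distinction can be made concrete with a toy contrast (both functions here are invented for illustration; a real model's scores would come from training data):

```python
# Deterministic: a programmed rule returns the exact same answer every time.
def sales_tax(amount: float, rate: float = 0.07) -> float:
    return round(amount * rate, 2)

# Probabilistic: a model-style scorer returns a confidence, not a guarantee.
# This toy "model" is hand-written for illustration only.
def spam_score(message: str) -> float:
    signals = ["free", "winner", "urgent"]
    hits = sum(word in message.lower() for word in signals)
    return min(0.3 * hits + 0.1, 1.0)   # a likelihood between 0.1 and 1.0

sales_tax(100) == sales_tax(100)              # True: identical on every call
spam_score("You are a WINNER, act urgent")    # a score, not a yes/no answer
```

If the answer has to be identical every single time, as Kathleen says, program it; reach for a model only when a probabilistic answer is acceptable.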
And we talked about one of the main reasons why AI fails. The term was coined in 1956, so why haven't we gotten as far with the technology as you'd expect? Part of it is that over-promising and under-delivering on what AI can do causes major issues in the industry. There's this term called an AI winter, which, if your listeners aren't familiar with it, is this idea where we go into a decline in investment and a decline in research.
I mean, yes, some of it still happens, but there's also a decline in funding, and organizations will choose a different approach. This is a major problem in the AI industry. We've had two previous winters, and now we're in what we call an AI spring. But if we continue to over-promise and under-deliver on what we can do, we're really concerned we'll head into another AI winter.
So we always say set realistic expectations. Part of that is scoping your project correctly. Part of that is understanding what it is you're trying to solve. And then if you are, if you decide that AI is the right, you know, technology, the right thing that we should be doing for this solution, then make sure that we're following best practices so that we can actually achieve AI success.
Galen Low: I love that sense of responsibility from the team that's actually delivering the work. It's kind of like investor confidence, right? Over-promising sends people into a more frightened state, they're on the back foot about AI, and as a result it's not gonna move forward, because people aren't investing in it.
Ronald Schmelzer: Yeah, and more to this point, I think we understand why people get caught up in this promise, because a lot of the harder problems we try to solve really do depend on some of our human abilities. And if we can get machines to do them, then we can unlock the digital transformation dream.
You know, we call this the digital transformation logjam: as we've automated some of the easier tasks, we're left with the harder ones, where we have people or paper-bound processes. And the only way to get that next level of productivity is smarter machines, because I can't just use a rules-based system; all of a sudden there's an exception and the whole system is hanging, waiting for something, or there's an approval that really does require a human with some judgment. So that's the thing. There's this desire to automate; we get it.
And a desire to make intelligent systems that are more than just automation, right? But then you can see the downfall of that, like when major organizations use algorithmic decision making and put all of their trust in it, and now the algorithm cancels your YouTube account, shuts down your PayPal, whatever.
It's very frustrating because there's no recourse, right? And you're like, well, we want to use algorithms, but they're not perfect. So there's this idea of something called augmented intelligence, which is keeping the human in the loop while reducing their workload. And that's a gentler way of doing this, because people feel like they can still have some control over their lives.
But that's the reason people want to introduce these things: to move further down this digital transformation journey.
Galen Low: I wonder if we can go there in terms of just resistance. We're talking about AI winter, and we're talking about this notion of, you know, there being no recourse when AI kind of goes sideways or misaligns with their expectations.
It kind of builds this sort of resistance to AI. It takes a step backwards. Some people are like, okay, I don't know if we can trust this yet. We have to find ways to get people comfortable with it. But, you know, I'm thinking of just like culturally too, right? Not just at the decision maker or at the investment level, but also the teams working on this.
You know, where people culturally are like, you know what, AI, we should just scrap it. It's scary, it's dangerous, like we shouldn't pursue this. Not just AI winter, like AI ice age, maybe even, you know, dead on the table. How do you sort of have those conversations with folks, you know, teams who are working on an AI project who might be like, this is really creepy.
Like, I don't really, like, I'm scared of this. I don't really want to do my best work on this because I can see that, you know, I could have this Oppenheimer moment, right? And you know, at the same time, how do you navigate that conversation of someone who's really gung ho about AI, but actually it's not the right solution.
Kathleen Walch: So there's a lot to unpack there. First of all, what we say is you need leadership buy-in, because there are a lot of fears and concerns related to AI. Some are emotional and some are more rational, and we need to make sure we're addressing both, because that is how people feel. If you feel at all uncomfortable with the technology, there's gonna be a lot of resistance, right?
And people also don't wanna feel like technology is going to take their job. Despite that, there are a number of studies out there on the automation paradox: organizations like Amazon are bringing in more automation, and you'd think the more automation, the fewer people they'd need, but they're actually hiring more people.
Humans are good at certain things and machines are good at certain things. So when machines take on the repetitive tasks, maybe the unsafe tasks, think about things in a warehouse, then we're able to take the human and apply them to things that are much harder for machines to do, like that last mile of delivery, for example.
Or things that require emotion. Maybe we want somebody behind the phone when we're talking. So we can have an AI-enabled chatbot handle some of the more routine things: what are your store hours, what's your address, basic things I need help with. But sometimes I want to actually talk to a human.
So we have that. But then also, with these fears and concerns related to AI, when you have leadership buy-in and the leadership says, we promise we're not doing this to replace your jobs, we're doing it to make you better at your job, we're doing it to take away some of the things you don't like, that can help allay some of these fears.
Also, what we've found is that feedback is incredibly important. Because you wanna make sure that you are getting rid of the tasks that they don't like and not the tasks that they do. Which I know sounds so silly, but a lot of people don't talk to like, you know, the end user and say, what are the painful points of your job? And then let's work to eliminate that. Let's work to either bring in automation, which we say automation is not intelligence. It's really just automating repetitive tasks. Or maybe we'll bring in some more intelligence systems or this idea of augmented intelligence that Ron mentioned.
So we're not fully replacing, but we're just helping you do your job better. What are those areas that really bother you? And then let us fix that. Let us bring the technology in to do your jobs better. You'd be surprised where people were like, wow, I really feel listened to. All right. Thank you for helping me do my job better.
Thank you for letting me do the parts that I enjoy, or the parts that I was hired for. Not necessarily all this data entry or whatever it is that my job has evolved to, that I wasn't actually hired to do.
Ronald Schmelzer: Yeah, and there's a corollary here, because I think we're at a moment in our general global economy where there's a healthy level of distrust of management, which I think is okay.
I think honestly a lot of employers have done a bad job, and with the pandemic and the whole work-from-home situation, people have lost trust in each other, between employees and management. And I think a lot of it is founded, honestly, because management has done a lot of things that truly do not engender the trust of employees.
But one of the ideas we talk about all the time, and I think this is where even management is stuck, is that there's a difference between the job and what you're actually trying to do. If you're in customer service, your job is to improve customer service and the customer relationship.
Your job is not data entry, right? So we say: separate the idea of work from your job. The work you do may not correlate directly to your job. It may correlate to things you have to do as part of it. But if data entry is taking six out of your eight hours a day and you're not actually doing your job, that doesn't benefit anybody.
It doesn't benefit the employer and it doesn't benefit the employee, but it's necessary, because maybe you need to keep track of the customer relationship. And that's a good place to start; that provides a pretty solid ROI, honestly, for implementing either basic automation, rules-based systems, or more intelligent systems that may need to do a little bit of document processing.
The funny thing is, I encounter this all the time because you interact with say, a government system or a big company system and you're like, why am I filling out this information and why does it feel like the person I'm dealing with is just like an information taker? They're not actually helping me.
You call the IRS on the phone, you're waiting on hold for 45 minutes, an hour if you're lucky, and then you ask someone a basic data question: what's the status of my return, or, I got this letter. That's 45 minutes of wasted time, with the other person just doing a lookup.
Clearly there's a technology answer there. If you can recover that IRS person's time, they can work down some of the huge backlog they've got of basic return information. I think it's a little bit of that mindset, where maybe we've gotten a little used to our processes. Again, it's more about change management than it is about AI and technology.
And I think changing the processes and introducing AI or any automation or machine learning into some of these less sexy, more boring areas, ironically, that's where the returns might really be for everybody.
Galen Low: You know, it's actually probably a good segue into the elephant in the room that all of our listeners are thinking. You know, you've both touched on it, about the jobs we do and the scope of those jobs, and I think everyone listening is wondering, will AI replace project managers?
Kathleen Walch: You know, that's a great question. And I think, at the end of the day, what we've found is that roles that require that human touch, that human element, won't necessarily fully go away, but they may change.
So you know, this I think has been brought up forever, right? Is AI a job killer? And what we say is AI is not a job killer, but it can be a job category killer. And so you need to make sure that you're understanding that. Technology in general, any type of transformative technology has changed the way that we work and live fundamentally.
So think about the desktop computer: what has that done? It's taken away rooms of secretaries, right? We no longer need one secretary for one executive, or whatever the ratio was back in the 1960s. But that doesn't mean we suddenly had mass unemployment, right? New jobs are created.
Back in the 1960s, there was no such thing as a social media marketer. Now we have social media marketers, tons of them. So jobs and roles can shift and change. And for folks in the PM world: artificial intelligence has touched every single industry, from banking, insurance, healthcare, and finance to automotive, consumer packaged goods, and retail.
I mean, every single industry, AI is doing something, right? And so project management's going to be no different, and it's going to help them do their job better, help augment their roles. Maybe it'll replace some of the things that they do, and as we mentioned, you know, might replace some of the work, but will it necessarily replace the role?
You know, that depends. It really comes down to how much of that human element is needed from project managers. And maybe it will change the job, for sure.
Ronald Schmelzer: Yeah, exactly. We like to think of it as one of these transformative moments. And as I said, a lot of AI projects are suffering from a lack of basic project management.
So ironically, the more AI projects we have, the more project managers we need, right? That's where organizations need to invest: in more project managers. But the difference is, take a look at how you spend your day and the things you spend your day on.
Maybe a lot of the time is in meetings, maybe some of it's in project tracking and in scoping and analysis and documentation. And you might say, well, that's important because these are tools for communication. You know, I need to have the meeting for communication. I need to have these tools to communicate and document, which is important.
You can't not communicate. So the question is, if you can recover some of that time and maybe even improve on it, with tools that give you better predictive capability, better visibility; it's easy now for AI systems to transcribe meeting minutes, for example.
So maybe you have an AI assistant sitting with you on the call, and it can even do things like respond when you want to ask a question on the call. Does that make the job go away? No, because at the end of the day, the job of project management is to shepherd a project to successful completion.
And really maintain that sense of ownership. The project manager is not necessarily the subject matter expert; they're not the ones implementing the tasks. And they're not the executive leadership either. They're the connective tissue that makes sure these things happen.
And the more efficient PMs can be, the more efficient the organization can be. So when I look at the future PM, maybe the P in PM will change, maybe the M in PM will change, but the role will still be there, of course evolved. And I think that's one of the things we're keeping an eye on.
We like this overlap of AI and project management, and we've been toying with things like, yeah, maybe we should do more AI for project managers kind of stuff. So stay tuned. We might do more of that.
Galen Low: I love the sound of that. And I think everyone listening is comforted to know that their roles are probably safe; their jobs might change, but their roles are safe. Just to round things out, one of the things we keep coming back to is this understanding of AI. A lot of folks are intimidated by it.
But the more they know, the better they'll be able to navigate it. And yet I don't think anyone, other than the data analysts and the people who are really into the AI side of things, is ready to just take that deep dive. So how can somebody like a project manager learn enough about AI to ask the right questions? To be able to say, what if we didn't have the cable, but we had a rope?
What if we did a simple version of this thing? Like, how can somebody develop that skill?
Kathleen Walch: So, that for us is a big reason why we put together the CPMAI training and certification, because it's meant for folks who maybe don't have deep backgrounds in artificial intelligence, machine learning, or heavy math. It helps you learn CPMAI at a fundamental level and how to run AI projects.
If folks aren't quite ready for the training and certification, we also have a free intro to CPMAI course; you can go to AItoday.live/cpmai and sign up. It takes about three hours to complete and gives you a really good, comprehensive overview of CPMAI and how to apply it to your projects for project success.
Because like we said, it's a step-by-step approach where you really just need to understand the steps and how to do them in the right order. Since it is iterative, you can go back to steps as needed, right? It's not like we have to get through the whole thing before we can go back.
No, don't worry. You can go back. And it provides that overview with examples of how people are doing it.
Ronald Schmelzer: Yeah, and to add. It's like, you know, we've really been spending a lot of time on our podcast just doing, you know, it's kind of funny. We've been doing our podcast five, six years now. Running into our six season now, and we just said, Hey, you know, we should do a glossary.
We should just, you know, basic terms. Cause even on this podcast, I think we, I discussed like probably a dozen or two dozen terms that maybe new to your listeners. AGI and AI winters and predictive analytics and seven patterns and all that sort of good stuff. And I think we started with like, I don't remember what the glossary started with.
It was like, maybe like a hundred terms and then next thing you know are like 300 terms. Like, oh my gosh. And, so we're like, you know, just, Hey, same thing subscribe. Go to our AI Today and do this glossary series. We'll just go over one term or a group of related terms. And say, here's what it means.
And maybe really basic, we even go over what data is, and you might think everybody knows what data is. We're like, do you? What is big data then? Is it just a lot of data? So, you know, this is very useful, and maybe something to consider for podcast listeners as well. Just like, you know, sometimes people just need that grounding and understanding, and staying connected is great.
The other thing is, I know that we ourselves would love to be more connected to the PM community. This is sort of a new thing for us. Maybe participate more in different, you know, project management events and really sort of cross-pollinate with the AI community, and say, Hey, the PM folks may wanna come to a couple AI events. That may seem daunting, you know, but trust me, once you understand the terminology, you don't even need to go too deep. You'll be fine.
Galen Low: I really do love that notion of cross-pollination, and we're all about it. You've got my gears cranking here. Maybe a collab event to follow, where we can kind of, you know, mix our audiences and have a good dialogue, because that's how we all learn from one another and get better at what we do and set ourselves up for success in the future.
Kathleen Walch: Yeah, definitely.
Galen Low: Awesome. Ron, Kathleen, thank you so much for coming on the show. This was a lot of fun, super insightful. I'm sure our listeners got a lot of value from it. I'm gonna link your podcast, and all the AI Today stuff below, as well as something about Cognilytica for anyone who's interested in diving deeper.
But I just wanna say thanks again for sharing your thoughts and your time.
Kathleen Walch: Yeah. Thank you. This has been super fun.
Galen Low: So what do you think?
Does leading an artificial intelligence project require a specific approach to be successful? Or is a project just a project with the same challenges as any other?
Tell us a story: when has an AI-driven tool or technology thrown a wrench into the gears of your project? And how did you solve it?
And if you want to hone your skills as a strategic project leader, come and join our collective!
Head over to thedigitalprojectmanager.com/membership to get access to a supportive community that shares knowledge, solves complex challenges, and shapes the future of our craft — together.
From robust templates and monthly training sessions that save you time and energy, to the peer support offered through our Slack discussions, live events, and mastermind groups, being a member of our community means having over a thousand people in your corner as you navigate your career in digital project delivery.
And if you like what you heard today, please subscribe and stay in touch on thedigitalprojectmanager.com.
Until next time, thanks for listening.