Key Takeaways

AI Limitations: AI tools can confidently generate incorrect answers without indicating their lack of knowledge, posing real risks.

Human Oversight: AI cannot replace the need for human judgment when creating project plans, emphasizing critical thinking.

Emotional Intelligence: AI lacks the ability to understand human emotions, making it ineffective in stakeholder management.

Process Issues: Implementing AI on flawed processes can worsen dysfunction rather than improve efficiency and effectiveness.

Skeptical Adoption: Successful AI integration in project management requires experienced oversight to navigate its inherent shortcomings.

Artificial intelligence has made serious inroads into project management. From auto-generated status reports to predictive scheduling tools, PM software vendors are racing to embed AI features into their platforms — and many project managers are genuinely excited about the possibilities. But beneath the enthusiasm, a quieter conversation is happening among practitioners. Experienced PMs, trainers, and product thinkers are running into the same walls, repeatedly. AI is useful, yes — but it is also overconfident, emotionally blind, and only as good as the messy systems it feeds on. Before organizations go all-in, it's worth taking a hard look at what AI still gets fundamentally wrong about project management work.

It Confidently Makes Things Up

One of the most dangerous qualities of AI in a project management context isn't that it fails — it's that it fails without telling you. AI tools are built to produce an answer, full stop, regardless of whether they actually have the knowledge to back it up. Dr. Mike Clayton puts it bluntly, describing AI as something "designed to make mistakes," explaining that "hallucinations aren't a bug, they're a feature," because "it will give you an answer whether it knows what the correct answer is or not."

Hallucinations aren’t a bug, they’re a feature – it will give you an answer whether it knows what the correct answer is or not.


Mike Clayton

CEO & Founder, OnlinePMCourses.com

This isn't a theoretical risk. Megan Cotterman found this out firsthand: "I actually found myself in a bit of a pickle because I was using AI to help me with this instructional design project," she explains, noting that "the AI actually did get its wires crossed and I had the wrong information." The downstream consequences of that kind of error in a live project environment can be significant.


The problem has even reshaped hiring in some industries. As Emmanuels Magaya observes, "developers now are in demand because when they replace developers with AI or automation, somebody has to check the work because AI hallucinates." The need for human oversight hasn't gone away — it's simply shifted.

Developers now are in demand because when they replace developers with AI or automation, somebody has to check the work because AI hallucinates.


Emmanuels Magaya

Founder, Project Managers Africa


It Can't Replace Human Judgment in Planning

There's a tempting pitch that AI can take a statement of work and spit out a ready-to-use project plan. Pam Butkowski has heard it — and isn't buying it. "Even if you do have a tool where you input an SOW and ask it to create a project plan for you, I promise it's not right," she says. "It's just not going to be. It's a great starting point. But then we need to use our critical thinking."

AI simply doesn't have the context to understand complex dependencies, client relationships, or the kind of organizational knowledge that shapes a realistic plan. That's why human review in AI adoption isn't optional — it's the entire point. Varun Anand frames it this way: "You cannot just blindly trust the result given by AI, you have to work on it." He adds that "AI is still evolving, it's still learning, you have to review the results which AI is giving you otherwise either you will lose the human touch out of it or you will get the wrong results."

You cannot just blindly trust the result given by AI, you have to work on it.


Varun Anand

CEO and Co-founder of EduHubSpot

AI Cannot Navigate Human Emotion or Stakeholder Dynamics

Project management is, at its core, a people discipline. Schedules and budgets matter, but so do fear, politics, ego, and history — and AI has no framework for any of it. Roman Pichler is direct about the limits of using AI for stakeholder management: "I'm not a big fan of approaches where people say like, hey, I found a really cool new way to do stakeholder management. We now use this AI tool." His concern is fundamental: "Humans are humans. And as humans, we want to be heard. I want to feel that somebody understands my concerns." He worries about "an element of thinking that the tech will solve our problems" when the real work is irreducibly human.

Humans are humans. And as humans, we want to be heard. I want to feel that somebody understands my concerns.


Roman Pichler

Founder, Pichler Consulting

Michael Gold echoes this when it comes to team leadership and communication: "If you don't know how to talk to people, and you think that trusting AI to talk to people is the answer... AI can't replace that one-to-one human interaction." The skills that make a PM effective with people — persuasiveness, presence, genuine listening — are not things that can be delegated to a tool.

Applying AI to a Broken Process Just Breaks It Faster

Perhaps the most underappreciated risk of AI adoption in project management is what happens when organizations skip the foundational work. AI doesn't fix dysfunction — it scales it. Markus Kopko is clear on this point: "Throwing AI solutions on a bad process doesn't make the process better. It's even worse."

This is a critical message for organizations chasing efficiency gains through AI without first auditing whether their underlying processes are sound. Automation amplifies what's already there — good or bad.

Throwing AI solutions on a bad process doesn’t make the process better. It’s even worse.


Markus Kopko

CPMAI Lead Coach

Final Thoughts

None of this means AI has no place in project management. It can accelerate drafting, surface patterns in data, and reduce administrative load in meaningful ways. But the practitioners closest to this work are sending a consistent message: AI is a tool that requires experienced, skeptical human hands on it at all times. It cannot empathize with a nervous stakeholder, produce a trustworthy plan from scratch, or fix a process that was broken before it arrived. The project managers who will get the most from AI are not the ones who trust it most — they're the ones who understand exactly where it falls short.

Kristen Kerr

Kristen is an editor at the Digital Project Manager and a Certified ScrumMaster (CSM). She draws on more than six years of experience, primarily in tech startups, to help guide other professionals managing strategic projects.