Critical Thinking: Automated project plans look complete but lack essential context, leading to potential project issues.
Review Necessity: Relying on AI outputs without human review can lead to errors and significant consequences.
Reality Check: AI struggles with factual tasks like timesheets, risking data accuracy and the integrity of project records.
Human Engagement: Using AI for stakeholder management can undermine the genuine engagement needed for productive communication.
Process Foundation: AI should not replace establishing human process baselines, as it cannot define effective workflows.
AI adoption in project management has accelerated rapidly, and with it, a wave of enthusiasm that doesn't always hold up under scrutiny. Tools that promise to save time, reduce admin burden, and surface insights have become fixtures in PM workflows — but not every application of those tools is working out the way practitioners hoped. In fact, some use cases are quietly causing more problems than they solve. The experts who work in this space every day have seen the patterns, and they're frank about where AI is getting project management into trouble.
Generating Project Plans Without Critical Thinking
Few applications of AI in project management are as tempting — or as fraught — as the automated project plan. Feed in a statement of work, get back a schedule. It sounds like a productivity win, but experienced practitioners say the outputs tend to look better than they actually are, and that PMs who accept them without scrutiny are setting their projects up for problems.
Pam Butkowski, SVP of Horizontal Digital, puts it directly: "even if you do have a tool where you input an SOW and ask it to create a project plan for you, I promise it's not right. It's just not going to be. It's a great starting point. But then we need to use our critical thinking." The plan may be structured, it may look complete — but it doesn't have access to the real dependencies, the team's actual capacity, or the organizational context that makes a schedule realistic.
Even if you do have a tool where you input an SOW and ask it to create a project plan for you, I promise it’s not right. It’s just not going to be.
Jeff Chamberlain, Manager of Broadband Services and PMO at Fredrick County Government, has reached a similar conclusion from experience: "Some people approach it where they want it to create project plans. I have not had great success creating great project plans with it. And usually you end up redoing them anyway." When the rework required to make an AI-generated plan usable approaches the time it would have taken to build it from scratch, the efficiency argument disappears.
Some people approach it where they want it to create project plans. I have not had great success creating great project plans with it. And usually you end up redoing them anyway.
Copy-Pasting AI Outputs Without Review
If there's a single habit that practitioners flag as the most immediately dangerous, it's this one: accepting AI output as finished work. The appeal is understandable. The output looks polished, the language is confident, and the temptation to move on is real. But that polish can mask serious errors — and when no one reviews the work before it goes out the door, the consequences can be significant.
Mike Clayton, CEO and Founder of OnlinePMCourses.com, has watched this play out in professional services contexts with real financial stakes: "There are people who just go, okay, I've been asked a question. I'll put a question to ChatGPT and cut and paste the answer, and my work is done. And nobody had even reviewed it and corrected it." The problem isn't that the AI was used — it's that the human step of reviewing and correcting the output was skipped entirely.
There are people who just go, okay, I’ve been asked a question. I’ll put a question to ChatGPT and cut and paste the answer, and my work is done.
Megan Cotterman, Fractional Project Manager and Operations Consultant, experienced this firsthand: "I was using AI to help me with this instructional design project... the AI actually did get its wires crossed and I had like the wrong information. So I think just making sure teams understand that it's not the end all be all... and not just, you know, copy paste, send a client good to go." Even when AI is used thoughtfully, it can still produce incorrect information — which means the human review step isn't optional; it's the work.
Using AI for Timesheets and Factual Tracking
There's a category of project documentation that is entirely dependent on reality: what actually happened, who actually did what, and when. Timesheets sit squarely in that category. And it's precisely here, where accuracy is non-negotiable, that AI is the least equipped to help.
Oliver F. Lehmann, Project Business Trainer at Oliver F. Lehmann Project Business Training, frames the problem with useful sharpness: "AI cannot write a time sheet. It can fantasize a time sheet, but it cannot write a time sheet. So you bring fantasy into documentation that should be real." AI can generate something that looks like a completed timesheet — plausible-looking entries, hours that add up — but none of it reflects what actually occurred. Introducing that kind of fabricated data into a project record doesn't save time; it corrupts it.
AI cannot write a time sheet. It can fantasize a time sheet, but it cannot write a time sheet!
Automating Stakeholder Management and Communication
Stakeholder management is one of the most human-intensive parts of project work, and for good reason. The conversations that move projects forward — the difficult ones, the ones that require trust and careful listening — depend on genuine human engagement. Attempts to automate or outsource that engagement to AI tend to undermine exactly what makes those conversations productive. Stakeholders navigating complex or contested decisions don't want an optimized communication process — they want to feel genuinely heard by another person.
The problem extends to everyday written communication. Lehmann observes that "I see very often that project managers just use AI to write emails for them. It's often too soft in the language when it's time to be a bit more direct in language. It smoothes things quite a lot." When a project requires a direct, firm message to a stakeholder, AI-generated language tends to sand down the edges — often leaving the actual issue unaddressed.
Yonelly Gutierrez, Senior Program Manager at Palo Alto Networks, notices this quality in AI-written communication as well: "sometimes with the wording, I'm thinking, 'you sound like such an AI robot. Like, just talk like normal, please.'" The telltale stiffness of AI-generated language isn't just an aesthetic problem — it signals to recipients that the message wasn't written with them specifically in mind.
Deploying AI Before Establishing a Human Baseline
One of the more structurally flawed approaches to AI adoption is implementing it before anyone has figured out how the underlying task should actually work. When organizations skip the step of establishing a functional, human-led process and go straight to automating it, they're not accelerating progress — they're accelerating confusion.
Derek Fredrickson, Founder & CEO of The COO Solution, sees this pattern frequently: "oftentimes they try to initiate the AI as the solution before a human has actually done it. I always believe a human should be doing what it is that you want AI to automate first, as opposed to just putting in AI for the sake of AI." AI is good at scaling and systematizing processes that are already understood. It has no capacity to define what good looks like in a process that has never been run by a human first.
I always believe a human should be doing what it is that you want AI to automate first, as opposed to just putting in AI for the sake of AI.
AI-Powered Reporting With Dirty Data or No Clear Objective
Automated reporting is one of the most commonly cited benefits of AI in project management. The promise is real — but it comes with conditions that organizations frequently overlook. If the underlying data is unreliable, or if the reporting objective isn't clearly defined, AI doesn't produce better reports. It produces confidently formatted reports that don't tell you what you need to know.
Emmanuels Magaya, Founder of Project Managers Africa, identifies both failure modes: "If you want AI to automate your reports, what you will often find is if the data is not right, your report will not give you what you're looking for. And also you need to know what you're looking for in the report." Clean data and a clear, stakeholder-specific objective are prerequisites — not nice-to-haves — for AI-assisted reporting to deliver value.
If you want AI to automate your reports, what you’ll often find is if the data is not right, your report will not give what you’re looking for.
Layering AI Onto Broken Processes
AI is a force multiplier. That's precisely what makes applying it to a dysfunctional workflow so counterproductive — it multiplies the dysfunction. Organizations that believe AI will fix a broken process are, in almost every case, making that process more broken and harder to diagnose.
Markus Kopko, CPMAI Lead Coach, puts the dynamic plainly: "Throwing AI and AI solutions on bad processes doesn't make the process better. The results are even worse than if you didn't change your processes." Process improvement has to come first. AI applied on top of an unexamined or inefficient workflow doesn't surface the underlying problems — it buries them under faster, more voluminous output.
Replacing Human Judgment in Team Dynamics and Conflict
Data can tell a project manager a lot. It can't tell them that two team members have stopped trusting each other, that psychological safety on the team has broken down, or that what looks like a scheduling problem is actually a deeper conflict. Those things require presence, observation, and human judgment — none of which AI can provide.
Jeremiah Hammon, Leadership and Project Manager Trainer at Project Revolution, draws a clear boundary: "What it won't do is see the real problems. It doesn't tell us that we have three team members that have personal issues going on or when team issues need conflict resolution. It won't do that." The interpersonal texture of a project team — who's struggling, who's disengaged, what's going unsaid — is invisible to AI. And in many projects, that texture is exactly what determines whether the work gets done.
What it [AI] won’t do is see the real problems. It doesn’t tell us that we have three team members that have personal issues going on or when team issues need conflict resolution. It won’t do that.
The Pattern Behind the Problem
Across all of these use cases, a common thread runs through the failures: AI being asked to perform tasks it was never designed for, or being deployed in ways that remove the human judgment that would have caught the problem. The issue isn't AI itself — it's the assumption that because AI can produce an output, that output is reliable, appropriate, or sufficient. The practitioners who are using AI most effectively aren't the ones with the longest list of applications. They're the ones who are clearest about where AI stops and human judgment begins — and who treat that boundary not as a limitation, but as a design principle.
Want more insights like these? Sign up for a free DPM account to hear from more experts.
