Project Planning: Project planning often fails because we are too emotionally attached to the plan itself.
AI's Role: Artificial intelligence rapidly adapts plans but has zero stake in the execution of those plans.
Flexible Approaches: Shorter planning horizons can provide meaningful agility but must be synchronized to the business.
Decision Gates, Not Progress Milestones: Tracking progress alone can mask opportunities to pivot towards greater value.
An Outcomes-First Culture: Emphasizing commitments to outcomes fosters accountability and encourages proactive problem-solving in teams.
I used to think detailed project plans meant control.
I'd spend weeks locked in rooms with leadership teams, building detailed Gantt charts and slick roadmaps that fit on a single slide — albeit in a comically small font — convinced that if we just planned thoroughly enough, we could predict the future.
Then we’d start moving, and the plan would begin to crumble as soon as it made first contact with the real world.
Each time that happened, we’d typically pause, regroup, have a few meetings to get re-aligned, and find our footing again.
But what happens when we’re moving at the speed of AI, and hitting the undo button is a lot more difficult?
Three Leaders And A Project Plan Walk Into A Bar
Stop me if you’ve heard this one.
For a product R&D project I led, we needed to track progress, understand when we’d need resources, and forecast when revenue would hit. Pretty standard. A project plan with milestones would do the job. It took two weeks of conversations and produced something detailed, but still easy to communicate and build expectations around.
But as soon as we started moving forward, everything shifted: market forces changed our priorities; staffing changes required new resourcing; technology trends invalidated our approach — and most importantly, our prospective users told us they needed something different than what we'd envisioned.
Our plan changed, for the better. Or at least that was how I saw it.
But instead of fielding excitement, I found myself defending why the plan was slipping: no amount of explaining the rationale for adjusting our approach seemed to get me off the back foot. Even if we’d get more value from the new approach, it felt like everyone was disappointed we weren't following the plan. It felt like they'd rather follow the plan even if it led them off a cliff than wrap their heads around something more dynamic.
The problem wasn't planning. It was the inability to reconcile continuous change. It was a fear of uncertainty.
All Project Plans Are Fairy Tales
Fast-forward a few years, and I’ve come to the realization that every project plan is inherently a work of fiction. Saying you’re good at making a plan is like saying you know a good fairy tale.
So why do we do it?
Planning is actually less about determining the future and more about communicating an ideal version of it.
The reasons go back way further than project-based knowledge work: humans have evolved as social creatures whose advantage has been our ability to collaborate to achieve shared goals — from working together to take down a woolly mammoth to developing an industrialized society built on diverse specializations.
Plans are just our evolutionary mechanism to agree on a path that has some likelihood of leading to the outcome we want. We think of them as sacred, and in some ways they are because they are a social contract that connects our destinies in some way, shape, or form.
AI Doesn’t Care About Your Plan
But you know who doesn’t care about your plan? Yeah, artificial intelligence.
AI isn’t inconvenienced by a change of plans. By the time you’ve told your chatbot that the plan has to change, it has already re-planned 10 variations of the timeline.
It's also not too fussed about which plan works out and which plan doesn't. In fact, assuming that your project isn't to decommission your LLM, AI has zero stake in your project. While your team is wrapping their head around the plan and making peace with the risk and effort involved, AI is happily drinking from the internet firehose and turning people's photos into action figures and caricatures.
For AI, it doesn’t really matter, either way.
That’s both good and bad: on the one hand, AI probably should NOT be trusted implicitly just like we shouldn’t trust the person offering to stay behind and not get into the underwater death trap that may or may not lead to freedom.
On the other hand, with AI in the picture, we need to get faster at evaluating a plan, understanding the risks, and getting aligned around it so we can keep moving. And not just once in a while — multiple times during any given project.
You Still Need A Plan
So, don’t get me wrong: I’m not saying we should get rid of project plans altogether.
What I’m saying is that we need to get better at being less attached to parts of the plan that we leave behind.
We should still have the right conversations to drive alignment and create consensus. We should still communicate ideas to one another. We should still inspire one another and innovate together.
But maybe we can do all that without the meetings to moan about changes, point fingers, accuse teams of underperforming, and tighten our grip on a plan that we knew would change from day one.
So what’s the balance? How can we harness the power of AI and our own human superpowers to develop a better relationship with planning that actually speeds things up instead of slowing us down?
Here’s what I think.
How We Can Get “Better” At Planning
Shorter, More Realistic Planning Horizons
Humans generally try to plan too far ahead. In fact, most businesses try to look 6, 12, or 18 months ahead. That’s logical when you need to orchestrate resources and other things that take time to mobilize, but for most projects in the digital space, it’s just a comfort blanket… and it’s almost never accurate.
Instead, my team’s detailed plans are now aligned with shorter business cycles. For example, we've seen better results with 7–8-week project iterations that align with our QBR cycle. Not sprints, not full projects, but something in between.
We still have a longer-term, lower-fidelity roadmap to plan hiring, manage resourcing, and align with other programs. But trying to make specific commitments for dates far in the future almost always led to disappointment, so we’ve been removing that expectation from our internal culture.
Taking The Shortest Path Towards Certainty
That being said, there is usually a stream of work that will help shed light on some of the shadowy unknowns up ahead. And while that might not create absolute certainty, many projects I’ve been involved with are guilty of following a logical sequence of tasks based on best practice and, in the process, completely overlooking the opportunity to clear away the fog.
For our recent projects, a portion of the team’s job is to run ahead and tackle uncertainty — like a recon team. That could be a proof-of-concept, guerrilla user research, or even validating a concept with a group of opinionated decision-makers. They have specific goals that clear specific blind spots standing in the way of our project’s success. And once we have more clarity, we add to the plan or adjust it.
Decision Gates Over Progress Milestones
Milestones can be useful when it comes to monitoring project progress, but they can also change the work from “delivering outcomes” to simply “getting it done.” In other words, progress can quickly become a vanity metric that masks high-value opportunities.
Instead of measuring progress, we’ve shifted our emphasis to be on decision gates — key moments in the project that require humans to come together to decide how to proceed.
Progress might be a part of getting to that decision point, but the focus is no longer about whether all the tasks got done as anticipated. It’s more about staying on course if the goal is still valid and pivoting if the goalposts have moved.
Checkpoints Throughout
But communicating less isn’t the answer, either.
We do still need to track our progress towards project outcomes. It’s how we know if resources will be freed up for the next project, and it’s how we know whether we are on target for a broader product launch date.
Not to mention that with AI and other emerging tech making leaps forward every day, it’s easy to have a plan that becomes moot in an instant.
For our team, although we’ve stripped back meetings, status reports, milestones, and long-term commitments, we’ve been adding more checkpoints to the process. These take the form of team-level evaluations to determine whether we are still chasing the right goal or running 200 mph in the wrong direction.
And instead of being "a human in the loop", we treat it as "a team of humans in the loop". In other words, it's still a creative team exercise that builds bonds, creates reassurances, shares knowledge, and discusses risk.
Importantly, our decisions are oriented around the desired outcomes of the project, not necessarily how well we’re executing the plan. In fact, the plan is rarely on stage at all.
Then the informal updates from our conversations are surfaced to stakeholders through AI-drafted status reports that get reviewed and tailored by our project leads before being published. That keeps the right people informed, and it keeps the right people steering the ship.
Commitments To Outcomes, Not Activities
The glue that holds this all together is a culture of accountability and ownership. Truly self-managing, high-performing teams don’t leave their areas of responsibility to chance. They don’t just tick off the boxes on their task list and escalate things if they’re blocked. They proactively seek out solutions and make it their problem to drive the necessary outcome that keeps the project moving towards its goals.
That’s been the more difficult transition for our teams. But I see it as the counterweight to the prospect of skill atrophy in a world where we could just prompt an AI and then try to blame the technology for any shortcoming.
We keep our most important tool sharp: our ability to work creatively together to achieve a goal that we couldn’t achieve on our own, even with AI’s help.
Where This Applies, And Where It Doesn’t
I recognize that this approach might not work for everyone — especially in larger, regulated organizations with complex stakeholder ecosystems and heavier governance. But for lightweight tech-focused teams trying to keep things human in an age where AI app factories are releasing finished products in hours instead of months, I think there’s a lot of value to be gained by unlearning some of our old habits around planning.
Certainty is a myth. And maybe that’s okay, too.
What tension are you holding right now between the plan you've committed to and the reality you're seeing unfold?
