
Key Takeaways

Rapid iteration beats rigid planning in the AI era: Michael argues that traditional project management breaks in fast-moving environments. AI-enabled delivery requires short, repeatable habits — weekly horizon scans, monthly reviews, and daily micro-experiments — instead of long roadmaps. The teams that succeed build a rhythm of continuous testing, adjusting, and learning.

Culture change matters more than tools: AI adoption fails when organizations treat AI as a “bolt-on.” Michael emphasizes human-centered design, habits, and rituals: peer-led training, shared wins, prompt libraries, and small workflow experiments. The real transformation comes from reshaping behavior, not deploying new software.

AI is becoming a collaborator, not a tool: The future of delivery is human–AI partnership. AI handles research, synthesis, analysis, drafting, and repetitive workflows, while humans focus on strategy, judgment, creativity, and alignment. Project managers move away from admin tasks toward orchestration and strategic leadership — with AI collapsing the distance between data and insight.

We caught up with Michael to understand what it takes to successfully modernize delivery systems with AI. He told us it's all about cultivating rapid iteration — which is easier said than done.

Helping teams make AI a reliable partner

I lead AI-ccelerator, where my team and I help organizations modernize their delivery systems by weaving AI directly into their workflows.

My background spans data science, behavioral economics, and large-scale AI implementation, but most of my work today focuses on helping companies rethink how work gets done in a world where intelligence itself is becoming scalable. Instead of treating AI like a bolt-on tool, I help teams redesign processes so humans can focus on creativity, judgment, and strategy — while machines handle the repetitive and time-consuming project analytics work.

In practical terms, that means I spend much of my time guiding teams through AI roadmapping, experimentation frameworks, adoption strategy, and large-scale capability building. Whether I’m helping a marketing team launch campaigns faster or training analysts to make better, faster decisions, my role in project delivery is to make AI a reliable teammate rather than an intimidating unknown.

Why AI adoption fails without culture change and human-centered design

AI has shifted my role from “solution builder” to “capability architect.” I spend less time on manual analysis, research, or content creation because AI can now handle much of the heavy lifting at speed and scale. Instead, my focus is on designing systems, setting guardrails, and helping teams build the habits needed to integrate AI in ways that meaningfully change outcomes.

The work that has grown the most is orchestration: choosing where AI fits, ensuring humans stay in the loop where it matters, and continuously tuning workflows. AI collapses the time between insight and decision, so my role is increasingly about sequencing, alignment, and helping teams “surf the wave” rather than drown in it.

And ultimately, technology is the easy part; people are hard. So a huge portion of my attention goes toward coaching leaders, cultivating adoption, and redesigning rituals so teams can work in new ways.

How to create the conditions for rapid iteration

In an exponential environment, delivery leadership becomes less about controlling the plan and more about creating the conditions where rapid iteration is not only possible, but expected. And creating these conditions means creating a repeatable operating rhythm.

With clients, we start by mapping a single workflow end to end and running what I call an “AI census”: where are people doing manual decision-support work that AI could realistically help with today? From there, we define one or two low-risk, high-impact experiments — e.g., speeding up research synthesis, campaign generation, or analysis — and give them clear owners, a simple success metric, and a tight feedback loop. Then, we layer on lightweight rituals: weekly horizon scans to see what’s changed, monthly reviews to decide which experiments to double down on, and “daily paddling.” In other words, small, real uses of AI inside live work so it becomes muscle memory rather than a side project.

Here's a concrete example: I worked with a team of skeptical analysts at a large life-sciences firm who were responsible for high-stakes market and pipeline decisions. Instead of mandating “use AI,” we started by mapping how they produced one specific deliverable, then used Gemini to automate just the ugliest parts — scanning market data, summarizing competitive intelligence, and stress-testing scenarios.

We trained them with a “see-do-teach” model: first live demos on their own workflows, then hands-on practice in low-risk scenarios, then having them teach peers. And we institutionalized tiny rituals that rewarded progress — weekly show-and-tells, prompt libraries, shared wins.

Over four months, daily AI usage among trained analysts jumped from 16% to 83%, and department-wide adoption went from 31% to 100%. The tech mattered, but the real unlock was that cadence of small, scoped experiments plus recurring rituals and peer teaching that made rapid iteration feel safe, normal, and expected.

In an exponential environment, delivery leadership becomes less about controlling the plan and more about creating the conditions where rapid iteration is not only possible, but expected.


Michael Housman

Founder of AI-ccelerator


How to shift to lightweight AI-enabled delivery

Traditional project management relies on heavy upfront planning and rigid structures that break in fast-moving environments.

I argue that AI requires a shift from five-year roadmaps to short, repeatable habits: weekly horizon scans, monthly positioning reviews, and daily micro-experiments. AI changes too quickly for Waterfall. Instead of “plan > build > launch,” modern delivery looks more like “explore > test > adapt.”

Lightweight systems emphasize real-time alignment through AI-generated meeting summaries, instant progress snapshots, and automated documentation. Teams spend less time updating artifacts and more time iterating on outcomes. We prioritize working prototypes over theoretical plans and use AI to compress cycle times so we can learn faster than competitors. The result is a delivery rhythm that feels more like surfing: constant motion, constant adjustment, and constant opportunity.

When we moved away from traditional project management toward more lightweight, AI-enabled systems, we did it by intentionally layering these tools into the existing workflows rather than replacing everything at once. ChatGPT became the multi-purpose hub — drafting scopes, generating briefs, clarifying requirements, and producing project snapshots. Gemini handled the deeper research tasks like competitive scans and technical synthesis.

For creative teams, Flux Replicate powered rapid image generation, RunwayML handled image-to-video transformations, and Suno/ElevenLabs provided instant audio assets. On the automation side, Atlas and Lindy were used to prototype agentic workflows, helping us automate steps like gathering reference material, rewriting outputs for different stakeholders, or generating task lists from project documents. Finally, tools like Claude and Manus helped accelerate code development and prototyping.

The transition followed the simple, repeatable pattern I touched on above:

  1. Map a single workflow end-to-end.
  2. Identify which steps can be offloaded to which tools.
  3. Give teams a library of starter prompts.
  4. Run a live practice session using a real piece of work, where team members complete the entire workflow using the new toolchain.
  5. Institutionalize tiny rituals that reward progress.
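Step 2 of that pattern can be sketched as a tiny triage pass. Everything below — the step names, the minute counts, and the `ai_ready` flag — is a hypothetical illustration of the “AI census” idea, not Michael’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    manual_minutes: int  # time a human currently spends on this step
    ai_ready: bool       # judgment call: could today's AI realistically help?

def ai_census(steps):
    """Surface the best first experiments: AI-ready steps, biggest time sinks first."""
    return sorted((s for s in steps if s.ai_ready),
                  key=lambda s: s.manual_minutes, reverse=True)

# Hypothetical mapping of one workflow end to end
workflow = [
    Step("research synthesis", 90, True),
    Step("drafting the brief", 45, True),
    Step("stakeholder sign-off", 30, False),  # stays human: judgment and alignment
    Step("formatting the deck", 20, True),
]

candidates = ai_census(workflow)
# candidates[0] is the highest-impact experiment to pilot first
```

The sort order matters: starting with the single biggest time sink gives the team a visible early win, which is what makes the weekly show-and-tells worth showing.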

How to navigate organizational complexity and drive AI adoption at scale

Here's another example. I worked with a fast-growing pet supplement company whose marketing team was struggling to keep up with demand. They were launching about 500 campaigns a month, and every asset required human designers, writers, and manual iteration. When generative AI hit the mainstream in late 2022, we introduced tools like Midjourney and trained the team to integrate them directly into their campaign pipeline.

With only minor process changes and hands-on practice, their production time dropped from 45 minutes per ad to 5 minutes, a nine-fold speedup. That unlocked exponential scale: 2,300 monthly campaigns, higher conversion rates, and ultimately 50% year-over-year revenue growth.

The process changes were intentionally small but highly leveraged, following the same pattern I described above.

We began by isolating a single workflow — the production of paid social ads — and breaking it into its component steps: concepting, copywriting, visual generation, asset formatting, QA, and upload. Instead of redesigning the entire pipeline, we replaced only two steps: initial concepting and first-pass visual creation. We introduced Midjourney prompts tailored to each product line, created template-based copy prompts inside ChatGPT, and designed a short decision tree so creators knew exactly when to use AI versus when to refine manually.

The team then practiced this in a controlled environment: one ad, one asset, one prompt at a time. Once they saw the time compression — from 45 minutes to about 5 — adoption followed naturally. Real-world outputs build confidence far faster than training in the abstract.
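The “short decision tree” mentioned above might look something like this minimal sketch. The step names and routing rules are assumptions for illustration, not the team’s actual criteria:

```python
def route(step, brand_sensitive=False):
    """Decide whether an asset step goes to AI first or stays manual.

    Hypothetical rules: only the two replaced steps (concepting and
    first-pass visuals) are AI-eligible, and brand-sensitive work
    always stays with a human.
    """
    if step not in ("concepting", "visual_first_pass"):
        return "manual"        # the rest of the pipeline was left untouched
    if brand_sensitive:
        return "manual"        # brand-critical assets get human craft
    return "ai_first_pass"     # AI drafts, a human refines

# Usage
route("concepting")                               # AI-eligible step
route("qa")                                       # always manual
route("visual_first_pass", brand_sensitive=True)  # escalated to a human
```

The point of writing the tree down is that creators never have to relitigate the AI-versus-manual question per asset; the rule answers it in seconds.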

How AI changes the starting point for every delivery ritual

AI changes the starting point for every ritual.

For example, scope is no longer a question of “what can we realistically do with our available hours?” but “what becomes possible when our capacity expands?” When defining scope, we now co-create with AI: generate multiple versions, challenge assumptions, and explore alternative paths before committing. This broadens the solution space and reduces blind spots early.

Alignment also shifts because AI provides a single source of truth. It produces shared briefs, summaries, risks, and options, removing interpretation gaps that often slow collaboration.

Validation becomes continuous rather than episodic: AI tests assumptions, reviews outputs, and surfaces inconsistencies in real time.

And execution becomes less about managing tasks and more about orchestrating human-machine workflows, ensuring the right work goes to the right “team member,” whether human or AI.

Why human–AI partnership is the winning formula

Overall, repetitive, manual, analytical, and high-volume work is the lowest-hanging fruit. Tasks like research synthesis, requirements gathering, competitive analysis, early drafts, technical validation, QA, and backlog refinement are ideal for AI because they rely on pattern recognition and scale well with automation.

AI is to knowledge workers what the steam engine was to manual labor: a force multiplier that handles the heavy lifting so humans can focus on what only humans can do.

The work that still demands a human touch includes ambiguity resolution, cross-functional alignment, conflict management, storytelling, emotional intelligence, and strategic tradeoffs: areas where context, judgment, and relational skill matter more than computation. Machines can surface options, but humans still need to choose which mountain to climb.

The winning formula is partnership: Let AI handle the precision tasks, and let humans lead with creativity, strategy, and connection.


How AI agents reduce cognitive load

Agentic workflows are quickly becoming foundational. We focus on three areas:

  • Reducing cognitive load
  • Automating repetitive workflows end-to-end
  • Creating “closed-loop” systems where agents can reason, act, and verify outcomes

Early experiments include agents that generate campaign variants, review datasets, refine prompts, build dashboards, or manage complex research workflows without requiring manual supervision between steps.

When you combine talented people with the right AI systems, output scales exponentially. In some cases, what used to take days now takes hours — or minutes. The key is thoughtful orchestration: ensuring agents have constraints, evaluation criteria, and escalation paths so humans stay in control of judgment and strategy. In other words, agents should operate inside a clearly defined “box,” and humans should always be the ones making interpretive or strategic decisions.

For example, with a pharma analyst, we identified exactly where AI agents could reliably help, and those places became the agent’s constraints: It could gather, organize, and draft, but it could not interpret, recommend, or prioritize. Those higher-order decisions remained squarely with the analysts.

We also established evaluation criteria before AI touched a single task. For instance, every AI-generated summary had to accurately reflect its source documents, every scenario outline had to cite underlying assumptions, and every comparative table had to match the analysts’ preferred formats. When the AI encountered ambiguous data, conflicting signals, or significant gaps — which happened often in early iterations — it automatically escalated the issue to a human reviewer who would clarify direction and add nuance.

The orchestration worked because the AI had constraints, the outputs had criteria, and analysts had clear escalation triggers to stay in command of the work. When that balance is right, the productivity gains feel almost unfair.
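The constraint-plus-escalation pattern described here can be sketched as a simple gate. The action names and criteria below are illustrative assumptions drawn from the pharma example, not the firm’s actual implementation:

```python
ALLOWED = {"gather", "organize", "draft"}              # inside the agent's "box"
HUMAN_ONLY = {"interpret", "recommend", "prioritize"}  # analysts keep these

def review(action, cites_sources, has_conflicting_signals):
    """Gate one agent step: run it, escalate to a human reviewer, or reject it."""
    if action in HUMAN_ONLY:
        return "escalate"   # higher-order decisions stay with analysts
    if action not in ALLOWED:
        return "reject"     # outside the defined constraints
    if has_conflicting_signals:
        return "escalate"   # ambiguity is an automatic escalation trigger
    if not cites_sources:
        return "escalate"   # fails the evaluation criterion
    return "run"
```

Notice that the default for anything ambiguous is escalation, not silent execution; that asymmetry is what keeps humans in command of judgment while the agent handles volume.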

How AI collapses the distance between data and insight

But the most underrated capability of AI is not a single tool or agent; it’s AI-assisted decision-making.

When every team member has an intelligent research assistant, strategist, and analyst in their pocket, meetings get shorter, decisions get clearer, and execution accelerates dramatically. Teams stop spending hours gathering information and instead spend minutes choosing among AI-generated options.

For most of my decision-making, I rely on a rotation of the major frontier models—ChatGPT, Gemini, Claude, and Grok—because each one has different strengths, and the quality of an answer often depends on matching the question to the model. In practice, I’m constantly “pinging” all of them when I’m evaluating tradeoffs, pressure-testing assumptions, or exploring strategic options. That cross-model triangulation is what gives me confidence: if multiple systems converge on the same reasoning, I treat it as a stronger signal.
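That cross-model triangulation habit can be sketched as a simple vote-counting pass. The model names and answers below are hypothetical; a real version would call each model’s API and normalize the responses first:

```python
from collections import Counter

def triangulate(answers):
    """Treat convergence across models as a stronger signal.

    answers maps a model name to its (normalized) conclusion; returns the
    majority conclusion and the share of models that agreed with it.
    """
    tally = Counter(answers.values())
    conclusion, votes = tally.most_common(1)[0]
    return conclusion, votes / len(answers)

# Hypothetical responses to the same tradeoff question
answers = {
    "chatgpt": "ship now",
    "gemini":  "ship now",
    "claude":  "ship now",
    "grok":    "wait a sprint",
}
signal, agreement = triangulate(answers)  # three of four models converge
```

A 75% agreement rate would be treated as a stronger signal than any single model’s answer; a 50/50 split would be the cue to dig into the disagreement rather than pick a side.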

That said, ChatGPT is usually my first stop. The reason is simple: The memory feature makes it uniquely good at decisions that require continuity. It remembers my business context, the types of clients I serve, my tone, my preferences, and the patterns in my work. That continuity compounds over time—so when I’m thinking through positioning, strategy, messaging, or anything that benefits from long-term context, ChatGPT becomes my “home base.”

The other models act like specialized advisors; ChatGPT acts like the one that knows me best.

How orchestration layers and custom GPTs unify AI tools

My core stack now revolves around:

  • Multimodal models like ChatGPT, Gemini, and Claude
  • Agentic tools for workflow automation
  • Domain-specific copilots for design, coding, analytics, and documentation

Overall, the biggest evolution is not the tools themselves, but the orchestration layer — using automation frameworks and custom GPTs to connect everything. In the past, my stack included separate tools for research, writing, diagramming, and project management. Today, many of those workflows converge into a single AI layer that sits on top of existing tools.

Essentially, I'm talking about a collection of interoperable AI systems that teams interface with to streamline their workflows. In practice, this looks like using general-purpose models (ChatGPT, Claude, Gemini) as the cognitive engine for tasks like research, synthesis, drafting, or analysis, while leveraging domain-specific copilots or plug-ins inside existing tools like Notion, Google Workspace, Figma, or Jira. Rather than switching between a dozen fragmented applications, teams increasingly rely on a small set of core AI interfaces that connect to the rest of their stack.

The value comes from reducing cognitive switching costs.

Michael's Tip


Overall, the biggest evolution is not the tools themselves, but the orchestration layer — using automation frameworks and custom GPTs to connect everything.

The AI tech stack that accelerates iteration and delivery

Over the past year, I’ve doubled down on tools that enable rapid iteration (visual generators, code interpreters), orchestration (Zapier, Make), and governance (prompt libraries, audit logs). At the same time, I’ve replaced many legacy tools because AI-native systems are now more flexible, faster, and easier to adapt.

The goal is to remove annoying steps that used to eat hours of collective team time: routing assets, summarizing threads, generating status updates, converting meeting notes into tasks, or repackaging outputs for different stakeholders. All of these can run autonomously now. The cumulative time savings are enormous, not because any one task was huge but because delivery work is made of thousands of small ones.

My stack evolves monthly, but the principle stays constant: use AI to collapse steps, reduce friction, and scale output. I use literally dozens of tools at any given time, but here's what I use most right now:

  • ChatGPT: multi-purpose large language model
  • Google Gemini: deep research
  • Lindy: designing agentic workflows
  • Manus: AI-enabled prototyping
  • Claude: code development tool
  • Atlas: agentic browser
  • NotebookLM: audio summaries
  • Flux Replicate: image generation
  • Suno: text to music
  • ElevenLabs: voiceover
  • RunwayML: image to video
  • AKool: AI avatar/digital twin

Why AI will transform project management into strategy-first leadership

Soon, AI will become a full-fledged collaborator in delivery, not a tool. We will have systems that can autonomously manage projects end-to-end: generate scopes, sequence tasks, manage dependencies, flag risks, and even run standups or retros. Delivery managers will shift from task orchestration to strategy, culture, and cross-functional alignment. The “administrative” side of project management will be 80-90% automated.

More importantly, we’ll see AI-native competitors in every industry. Teams with near-zero overhead, infinite creative capacity, and delivery systems that operate at speeds traditional organizations cannot match.

The gap between adopters and laggards is widening exponentially, not linearly. Delivery leaders who build AI-enhanced operating systems now will own the future. Those who wait will find themselves trying to compete with organizations that can deliver in days what used to take quarters.


Why waiting for clarity slows AI adoption

So here's my advice: Don’t wait for perfect clarity.

The single biggest mistake delivery leaders make is trying to predict the one perfect wave. You don’t need a five-year plan; you need a posture of continuous experimentation. Start small, move fast, test often, and build habits that keep you learning. Exponential change rewards motion and punishes hesitation. The teams that win are the ones that paddle early and often.

And invest in people before platforms. AI adoption collapses if your team feels threatened, uninformed, or overwhelmed. Create a culture of curiosity, run hands-on training, celebrate early wins, and make adoption visible. When AI becomes joyful rather than intimidating, it becomes sustainable.

Ultimately, this era belongs not to the teams with the best tools, but to the teams that learn how to partner with them.

Follow along

You can follow Michael's work in modernizing delivery systems on his personal website, LinkedIn, and YouTube. And check out AI-ccelerator!

More expert interviews to come on The Digital Project Manager!

Faye Wai

Faye Wai is a Content Operations Manager and Producer with a focus on audience acquisition and workflow innovation. She specializes in unblocking production pipelines, aligning stakeholders, and scaling content delivery through systematic processes and AI-driven experimentation.