Planning to Decision Velocity: Traditional project management focuses heavily on planning, documentation, and coordination. Amos shows that in an AI-first business, execution becomes cheap and fast, so the real bottleneck is how quickly you recognize patterns and make decisions.
Autonomous Workflows: Swan AI removed status meetings, long project plans, approval chains, and quarterly reviews — replacing them with async updates, outcome definitions, clear decision rights, and automated feedback signals. This shift reduced coordination time from 60% to 10%, allowing the team to spend most of their time actually doing work, not talking about work.
Human Roles Must Be Defined: AI implementations fail not because of AI limitations but because teams never clarify what humans should own. Amos explains that AI should fully handle low-impact, low-complexity tasks, assist where complexity or risk is higher, and leave high-impact, high-complexity decisions to humans.
We sat down with Amos to understand what this means in practice. Here's what he had to say.
How an autonomous business model transforms project delivery
I'm the cofounder and CEO at Swan AI, a GTM automation platform that integrates across communication channels (email, Slack, etc.), CRMs, and B2B data sources to handle GTM workflows. We use it internally for project delivery and customer management.
We're building the first autonomous business with a mission to get to $10M ARR per employee with a small team and an army of AI agents. Our autonomous business is designed for human+AI collaboration, not human-to-human coordination, and can scale with intelligence — not headcount.
This constraint turned into an accidental laboratory for project delivery. When you can't hire your way out of problems, every project decision becomes existential: Will this 10x our output, or is it just activity that feels like progress?
That brutal filter revealed patterns I couldn't see when I had the luxury of throwing people at problems at my previous two startups, which I founded and scaled on the old growth-at-all-costs playbook. Now, after watching 200+ companies implement AI through Swan's product, I can spot the failure patterns instantly.
Why AI is shifting project delivery from planning to pattern recognition and decision velocity
The most dramatic shift is how I allocate my time — and it's almost inverted from traditional project management. I spend very little time on the tasks that used to take my focus, like upfront planning, status meetings, and documentation that no one reads. More on that in a moment.
Here's what's getting my attention now:
- 60% goes to pattern recognition and decision-making: I'm analyzing what's working across our 200+ implementations in real time. Not monthly reviews — daily pattern spotting. Which implementations are compounding results versus plateauing? Where are bottlenecks actually showing up versus where we predicted they'd be? The game is making fast calls on what to kill versus double down on, and you can't do that from a quarterly business review. To do this, we use a Retool dashboard for customer health and surface alerts in Slack. I also use Metabase, where we keep more detailed product analytics per customer.
- 30% is designing feedback loops: How do we know within 48 hours if an AI implementation is working? What signals tell us to pivot versus persist? I'm building "breakage detection systems" — ways to catch when AI goes off-script or a workflow isn't delivering before it becomes a crisis (a minimal sketch follows this list). The key is starting with small steps that are easy to measure. Traditional PMs planned for success; I'm designing systems that surface failures immediately so we can fix them while they're small.
- 10% is the actual "project launch": This used to be the climax of months of planning. Now it's just Tuesday. We're shipping v1s constantly and treating launch as the beginning of learning, not the end of planning. Launch isn't success anymore — it's permission to start getting real feedback.
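To make the "breakage detection" idea concrete, here is a minimal sketch of a daily check that compares a 48-hour window against a baseline and raises an alert when a workflow degrades. The metric names, threshold, and `fetch_workflow_metrics` helper are hypothetical stand-ins; Amos's actual pipeline runs through Retool, Metabase, and Slack.

```python
def fetch_workflow_metrics(workflow_id: str) -> dict:
    """Placeholder for pulling numbers from wherever you store them
    (Retool/Metabase in Swan's case). Values here are illustrative."""
    return {"baseline_resolution_rate": 0.70, "last_48h_resolution_rate": 0.52}

def post_alert(text: str) -> None:
    # In production this would POST {"text": text} to a Slack incoming webhook;
    # printed here so the sketch runs without credentials.
    print(f"[slack] {text}")

def check_for_breakage(workflow_id: str, tolerance: float = 0.15) -> None:
    m = fetch_workflow_metrics(workflow_id)
    drop = m["baseline_resolution_rate"] - m["last_48h_resolution_rate"]
    if drop > tolerance:
        post_alert(f"Workflow {workflow_id} degraded: resolution rate fell "
                   f"{drop:.0%} below baseline in the last 48 hours.")

check_for_breakage("support-triage")
```

Run on every metrics refresh, a check like this is what turns quarterly-review surprises into day-2 fixes.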
In an AI-first world, execution is cheap and fast. The bottleneck is decision velocity. My role shifted from "make the perfect plan" to "make fast decisions based on real feedback." That's why I spend 90% of my time on learning systems, not execution systems.
In an AI-first world, execution is cheap and fast. The bottleneck is decision velocity.
How to eliminate coordination tax in project delivery
Our shift from traditional to lightweight project management wasn't a methodology change; it was killing specific rituals that slowed us down. Here's what died and what replaced it:
Status meetings (40% of calendar) → Async updates (now 5% of time)
- Killed: Weekly project status meetings with slide decks
- Replaced with: Brief Slack updates when something changes, not on a schedule
- Why it works: Information flows when it matters, not when the calendar says
- Result: Recovered 15+ hours/week per founder
Detailed project plans (2-4 weeks up front) → Outcome definitions (2-4 hours)
- Killed: Comprehensive requirement docs, workflow maps, risk matrices
- Replaced with: Clear outcome we want and first experiment to run
- Why it works: Plans become obsolete immediately; outcomes stay relevant
- Result: Shipping first versions in days, not months
Approval chains → Clear decision rights
- Killed: "Run this by leadership" for small decisions
- Replaced with: Defined thresholds — under $X or Y impact, just ship it (sketched below)
- Why it works: Speed matters more than preventing small mistakes
- Result: Decision-to-action time dropped from days to hours
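The "$X or Y impact" thresholds are deliberately left abstract; they're whatever fits your business. But the rule itself is simple enough to write down. A minimal sketch, with hypothetical numbers standing in for the real limits:

```python
def needs_leadership_approval(cost_usd: float, customers_affected: int) -> bool:
    """Decision-rights check: below both thresholds, just ship it.
    Both limits are illustrative stand-ins, not Swan's actual numbers."""
    COST_LIMIT = 1_000    # hypothetical "$X"
    IMPACT_LIMIT = 10     # hypothetical "Y impact", here: customers affected
    return cost_usd >= COST_LIMIT or customers_affected >= IMPACT_LIMIT

assert not needs_leadership_approval(cost_usd=200, customers_affected=3)  # just ship it
assert needs_leadership_approval(cost_usd=5_000, customers_affected=3)    # escalate
```

The point isn't the code; it's that once the thresholds are explicit, no one has to ask "should I run this by leadership?" again.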
Quarterly reviews → Daily feedback signals
- Killed: Scheduled "let's see how this project is going" meetings
- Replaced with: Automated metrics that alert us when something's wrong
- Why it works: Catch problems at day 2, not month 3
- Result: Issues fixed before they become crises
We went from spending 60% of time coordinating work to spending 90% of time doing work. Not because we're more disciplined, but because we removed the coordination tax that traditional PM creates.
How AI changes delivery rituals by requiring richer context
Traditional thinking assumes AI means less human communication. Our reality is the opposite: AI on the team means we communicate more explicitly, because AI needs context to be useful.
This is particularly relevant to us because we're using our own product to remove "context tax" — meaning that it integrates across our communication channels to gather and retrieve context. It listens.
This has a big impact on our rituals. Customer success projects are a good example. When a customer reports an issue or requests something, our founders intentionally communicate in more depth than we'd need for each other — because we know AI is picking it up.
Here's what that looks like practically:
Instead of: "Customer X wants feature Y, let's discuss tomorrow"
We write: "Customer X requested feature Y. Context: they're hitting scale issues at 200 leads/day, their team of 3 SDRs is overwhelmed, and this feature would let them handle 500/day without hiring. This aligns with our SMB scaling ICP. Urgency: high — they're evaluating competitors this week."
Later, when anyone on the team — including AI — interacts with that customer, we have access to relevant context (one way to picture that context as a structured record is sketched after the list below).
Yes, this takes 2-3 extra minutes per communication. But it saves hours in:
- Alignment meetings — AI keeps everyone synced
- Context-gathering — AI already knows the situation
- Handoffs — AI maintains context across interactions
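One way to see why this pays off: each richer message is effectively a structured record that any teammate, human or AI, can retrieve later. Here is a minimal sketch of what such a record might look like; the field names are my illustration, not Swan's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerContext:
    """The information packed into the richer message from the example above."""
    customer: str
    request: str
    why_it_matters: str   # the scale/pain context
    icp_fit: str          # alignment with the ideal customer profile
    urgency: str
    tags: list[str] = field(default_factory=list)

note = CustomerContext(
    customer="Customer X",
    request="Feature Y",
    why_it_matters="Scale issues at 200 leads/day; 3 SDRs overwhelmed; "
                   "feature would allow 500/day without hiring",
    icp_fit="SMB scaling ICP",
    urgency="high; evaluating competitors this week",
    tags=["feature-request", "expansion"],
)
print(note.urgency)
```

Whether the record lives in a CRM field, a Slack thread, or an AI's retrieval index, the habit is the same: write the context down once, explicitly, where everyone and everything can find it.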
When you treat AI as a team member who needs to stay in the loop, your communication and documentation rituals fundamentally change — and that's when AI becomes truly useful, not just a feature.
Why defining human roles first is essential for successful AI implementation
AI implementation failures almost never happen because the AI isn't capable enough. They fail because teams never answered a more fundamental question: "What should humans actually own in this workflow?"
Here's the pattern I see constantly across our 200+ implementations:
Teams spend weeks asking "Can AI handle this task?" They map the current process, identify what AI can automate, build the system, and launch. The AI works. It can do 70-80% of the task reliably.
Then it falls apart. Not because the AI breaks, but because no one defined what humans should do with the remaining 20-30%. So one of two things happens:
- Humans micromanage everything. They don't trust the AI, so they review every decision, effectively turning a $20K AI implementation into an expensive suggestion tool. The team burns out checking AI's homework.
- Humans abandon ownership entirely. They assume "AI is handling it" and stop paying attention. Then something breaks or goes off-script, and no one notices until it's a customer crisis.
This happens because these teams approached AI implementation backward. They asked "What can we automate?" instead of "Where is human judgment irreplaceable, and how do we build AI that amplifies that judgment?"
Instead, start every AI project by defining the human role first. What decisions require intuition? Where is context critical? What failure modes need human oversight? Then, build AI to handle everything else and surface the right signals at the right time for human decision-making.
It sounds obvious in hindsight. But when you're excited about AI capabilities, it's incredibly easy to skip this step and focus entirely on what AI can do — rather than designing the human-AI partnership up front.
A practical framework for deciding what AI should automate
I think about automation decisions across two dimensions: impact of error and complexity of judgment required. This creates four clear categories, which I'll cover in a moment.
As you move up in complexity or impact, AI's role shifts from "owner" to "assistant." But you must define which role AI plays for each workflow up front — not after launch when humans don't know if they should trust or override AI decisions.
The framework isn't about what AI can do technically. It's about what AI should own given the risk profile and judgment required.
Low impact + low complexity = full automation
Example 1: Data enrichment and research
- AI pulls company information, tech stack, social profiles
- If it gets something wrong, humans catch it during review
- No judgment required — just data gathering
- Implementation: API integrations, AI structures output, light human spot-checking
Example 2: Initial lead scoring and routing
- AI scores against defined ICP criteria, routes to appropriate workflow
- Misroutes are obvious and easily corrected
- Logic is codifiable (company size, industry, behavior signals)
- Implementation: Rules-based + AI scoring, human reviews edge cases
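As a sketch of what "rules-based + AI scoring, human reviews edge cases" can look like, here's a minimal version; the criteria, weights, and routing thresholds are illustrative, and real ICP rules would come from your own data:

```python
def score_lead(lead: dict) -> int:
    """Rules-based ICP scoring: codifiable signals only, no judgment calls."""
    score = 0
    if 10 <= lead.get("company_size", 0) <= 200:     # SMB sweet spot (illustrative)
        score += 40
    if lead.get("industry") in {"saas", "fintech"}:  # target industries (illustrative)
        score += 30
    score += min(lead.get("site_visits_30d", 0), 10) * 3  # behavior signal, capped
    return score

def route_lead(lead: dict) -> str:
    score = score_lead(lead)
    if score >= 70:
        return "fast-track-outbound"
    if score <= 30:
        return "nurture"
    return "human-review"  # ambiguous middle band goes to a person

print(route_lead({"company_size": 50, "industry": "saas", "site_visits_30d": 4}))
```

Misroutes here are cheap and visible, which is exactly why this quadrant can be fully automated.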
Low impact + high complexity = AI prep, human decision
Example: Customer support responses
- AI drafts responses based on similar past cases
- Complex because tone, context, edge cases matter
- But low impact if we review before sending
- Implementation: AI drafts, human reviews/edits, we learn from edits
High impact + low complexity = automated with guardrails
Example: Outbound message sending
- High impact because it can burn your brand/market if done badly
- But relatively low complexity — and quality can be measured
- Implementation: AI generates, strict approval workflows, automatic kill-switches on performance drops
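The automatic kill-switch is the guardrail that makes this quadrant safe to automate. A minimal sketch of the idea, with illustrative thresholds rather than Swan's actual limits:

```python
def should_pause_campaign(stats: dict) -> bool:
    """Kill-switch: halt outbound sending when measurable quality drops."""
    if stats["bounce_rate"] > 0.05:        # deliverability is degrading
        return True
    if stats["spam_reports"] >= 3:         # brand-damage risk
        return True
    if stats["sent"] > 500 and stats["reply_rate"] < 0.01:  # messaging isn't landing
        return True
    return False

stats = {"sent": 800, "bounce_rate": 0.08, "spam_reports": 0, "reply_rate": 0.02}
if should_pause_campaign(stats):
    # In production: disable the send queue and alert a human.
    print("Campaign paused pending human review")
```

High impact plus low complexity means the failure modes are known in advance, so they can be encoded as hard stops.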
High impact + high complexity = human owned
Example: Strategic project pivots
- Deciding to kill vs. double down on an implementation
- Requires pattern recognition across multiple contexts
- Wrong call is expensive due to wasted resources or missed opportunity
- Implementation: AI surfaces signals (usage drops, feedback patterns), human makes the call
Example: Complex customer negotiations
- Pricing discussions, expansion conversations, renewal risks
- Requires reading relationship dynamics, political context, unstated concerns
- Getting it wrong damages trust that's hard to rebuild
- Implementation: AI does research and prep, human owns the conversation
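Put together, the whole framework compresses into a single decision rule. A minimal sketch, using my encoding of the four quadrants described above:

```python
def ai_role(impact: str, complexity: str) -> str:
    """Map impact-of-error and complexity-of-judgment to AI's role."""
    quadrants = {
        ("low", "low"):   "full automation (light human spot-checking)",
        ("low", "high"):  "AI prep, human decision",
        ("high", "low"):  "automated with guardrails and kill-switches",
        ("high", "high"): "human owned; AI surfaces signals",
    }
    return quadrants[(impact, complexity)]

print(ai_role("high", "high"))  # -> human owned; AI surfaces signals
```

Classifying every workflow this way before launch is what tells humans, in advance, when to trust the AI and when to override it.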
How agentic workflows enabled a self-learning support system in 13 hours
Here's an example of complexity we've navigated using agentic workflows in the pursuit of building an autonomous business.
The constraint: Building toward $10M ARR per employee with just 3 founders meant we couldn't hire our way out of problems. When support tickets hit 200+/week, the traditional answer — hire 2 CSMs — wasn't available to us. Every ticket required a founder's full attention — researching the answer, crafting the response, documenting it somewhere. We were drowning.
V1 (week 1): Answer known questions
We started with the absolute minimum: an AI agent in Slack that could answer ~20 questions we'd already documented in Notion.
- Tools: Swan AI, Slack, Notion
- Setup: 6 hours
- Result: 15% of tickets handled
- New problem: Customers were frustrated getting "I don't know" responses. We were still answering 85% manually.
V2 (week 2): Escalate unknown questions
We added an escalation path: AI stays in the customer thread, but pings us internally when it doesn't know something. We answer, AI delivers our response.
- Setup: 3 hours
- Result: 35% autonomous resolution
- New problem: We noticed we were answering the same new questions over and over. Each escalation taught the AI nothing — it was just a routing system.
V3 (week 4): Learn from escalations
We built the learning loop. Now, when we answer an escalated question, AI automatically captures the Q&A, structures it, and adds it to the knowledge base.
- Setup: 4 hours
- Result: 70% autonomous resolution within 2 weeks
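Here's a minimal sketch of the V3 loop using nothing but the Python standard library. Swan's real system runs through their own product, Slack, and Notion, but the shape is the same: look up the question, escalate misses to a human, and fold every human answer back into the knowledge base so no question is answered manually twice.

```python
import difflib

knowledge_base: dict[str, str] = {
    "how do i connect my crm": "Go to Settings > Integrations and pick your CRM.",
}

def escalate_to_founder(question: str) -> str:
    # Placeholder for the internal Slack ping; a human writes the real answer.
    return "Here's how you do that: ..."

def handle_ticket(question: str) -> str:
    key = question.lower().strip("?! .")
    match = difflib.get_close_matches(key, knowledge_base, n=1, cutoff=0.75)
    if match:
        return knowledge_base[match[0]]        # V1: answer known questions
    answer = escalate_to_founder(question)     # V2: escalate unknowns to a human
    knowledge_base[key] = answer               # V3: learn from the escalation
    return answer

print(handle_ticket("How do I connect my CRM?"))    # answered from the knowledge base
print(handle_ticket("Can I export leads to CSV?"))  # escalated once...
print(handle_ticket("Can I export leads to CSV?"))  # ...then answered autonomously
```

A production version would use embeddings rather than string similarity, plus a review step before new answers enter the knowledge base, but the loop itself is this small.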
Why this worked: Each version took 6 hours or less to build because we weren't planning for perfection — we were responding to specific feedback. The "self-learning" capability wasn't in the original plan. It emerged naturally when we asked: "Why are we answering the same questions twice?"
Now we're handling 200+ tickets/week, 70% of them autonomously, and our knowledge base is growing organically from real customer questions.
So here's my advice: Ship the minimum that addresses the immediate pain. Watch where it breaks. Add the minimum feature to fix that break. Repeat. We got to a sophisticated self-learning system without ever planning for it up front.
Ship the minimum that addresses the immediate pain. Watch where it breaks. Add the minimum feature to fix that break. Repeat.
Other places ripe for agentic workflows
We're also using agentic workflows in:
- Onboarding: New customer → Swan guides setup conversationally, asks ICP questions, configures workflows based on responses
- Success: Monitors usage patterns → flags expansion opportunities or risks → surfaces to founders with recommendations
Why feedback loops matter more than your AI tool choice, and how to build them
Everyone asks, "Which AI tool should we use?" Wrong question.
The most valuable thing saving us time isn't a tool — it's that we know within 48 hours when anything breaks.
We build feedback loops into every implementation. We track:
- Escalation patterns: When AI escalates to humans, it shows where it needs help.
- Override rates: When humans change AI outputs, it shows where the AI's judgment is weak.
- Outcome metrics: Meetings booked, tickets resolved, etc. — not just activity.
- Failure triggers: Automatic alerts when performance drops.
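A minimal sketch of how those signals can be computed from an interaction log; the field names and alert thresholds are my illustration, not Swan's:

```python
def feedback_signals(log: list[dict]) -> dict:
    """Compute feedback-loop signals from a list of AI interaction records."""
    total = len(log)
    return {
        "escalation_rate": sum(r["escalated"] for r in log) / total,
        "override_rate":   sum(r["human_edited"] for r in log) / total,
        "outcome_rate":    sum(r["goal_met"] for r in log) / total,  # e.g., ticket resolved
    }

def failure_triggers(signals: dict) -> list[str]:
    alerts = []
    if signals["escalation_rate"] > 0.40:
        alerts.append("AI needs help too often: expand the knowledge base")
    if signals["override_rate"] > 0.25:
        alerts.append("Humans are rewriting outputs: the AI's judgment is weak")
    if signals["outcome_rate"] < 0.50:
        alerts.append("Activity without outcomes: rethink the workflow")
    return alerts

log = [
    {"escalated": False, "human_edited": True,  "goal_met": True},
    {"escalated": True,  "human_edited": False, "goal_met": False},
    {"escalated": False, "human_edited": False, "goal_met": True},
]
print(failure_triggers(feedback_signals(log)))
```

Wire the output of `failure_triggers` into a Slack alert and you have the 48-hour detection window described above.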
Without feedback loops, you launch AI and hope it's working. By the time you realize it's not — usually through complaints or quarterly reviews — you've wasted months.
With feedback loops, you catch problems in days and fix them while they're small.
We've seen real impact:
- Support AI: 35% → 70% autonomous in 2 weeks by seeing which questions needed better answers
- Qualification AI: 40% accuracy improvement in 30 days by tracking human overrides
Your AI tool choice matters way less than your ability to detect what's broken and iterate fast. The "tool" that saves the most time is the system that tells you what needs fixing before it becomes a crisis.
An AI-native tech stack powering autonomous project delivery
That said, I do have some staples:
- Claude Projects (AI productivity tool): I create and iterate on all the project-related docs in a Claude Project. This way, the AI's context compounds and it gets smarter with every conversation.
- Shortwave (AI communication tool): It's a new AI-native email client with an AI agent inside that helps me easily retrieve the right context from all of my inbox clutter. It also helps me write great emails and keep stakeholders synced without context switching or drowning in searches for previous threads.
- n8n (AI workflow automation software): I use this to build AI agents connected to our Retool and Metabase so we can get project updates directly in Slack.
Why project management is evolving into velocity management
AI in project management will evolve from discrete project delivery to continuous-adaptation velocity management.
I believe that AI will make building and changing things so fast that projects with defined beginnings/endings become obsolete. By the time a 3-month project finishes, the requirements have changed, AI capabilities have evolved, and the "complete" solution is already outdated.
What will replace it are continuous improvement cycles with rapid feedback loops. Instead of "launch the customer portal in Q3," it becomes "continuously evolve customer interactions, ship changes daily based on real feedback."
"Velocity managers" will optimize for:
- Feedback-to-fix speed — not delivery dates
- Iteration frequency — not project completion
- Learning velocity — not scope management
Success will be defined by how fast you detect problems and ship solutions, not hitting predefined milestones.
The PMs who survive are already building this muscle — shipping imperfect v1s, iterating based on real usage, optimizing for speed over perfection. Those still perfecting their Gantt charts won't recognize the job in five years.
Why iteration speed beats planning — and how leaders can ship faster with AI
Here's my advice: Build for iteration speed, not implementation perfection.
Stop: Multi-week planning, perfect launches, avoiding all failures.
Start: Ship minimum useful versions in days, build feedback loops first, learn from fast iterations.
And make it practical: Cut your next project's planning phase by 75%. Ship something imperfect this week that teaches you what actually matters. Measure success by how fast you go from "that broke" to "here's v2."
Speed of learning beats depth of planning every time.
Speed of learning beats depth of planning every time.
Follow along
You can follow Amos' work automating GTM workflows on LinkedIn and Swan's newsletter. You can also check out Swan and Autonamos, Amos' digital clone.
More expert interviews to come on The Digital Project Manager!
