In today’s rapidly evolving digital landscape, the use of data and AI in resource management is becoming increasingly essential for organizations that want to stay competitive. This is especially true in the agency world, where resourcing plays a pivotal role in delivering key programs and initiatives.
Galen Low is joined by Grant Hultgren (Agency Operations Consultant at Parallax), Marcel Petitpas (Co-Founder & CEO at Parakeeto), and Ann Campea (VP of Project Management Operations at TrueSense Marketing) to explore the promises and challenges of integrating data and AI in resource management.
Interview Highlights
- Data Cleanliness and Human Factors [02:17]
- Data is inherently messy, and people are unpredictable, complicating resource management.
- Gen AI shows promise in interpreting data and providing insights.
- Current Gen AI demos assume perfect, clean, consistently structured data, which rarely exists in real-world scenarios.
- The way we interact with data will evolve significantly, making current methods like table filtering feel outdated in a decade.
- Clean and accurate data is essential for AI to deliver meaningful results, but human-created data will always have imperfections.
- Product and operations managers must account for these imperfections when designing processes and data systems.
- Capturing good data is challenging due to human unpredictability.
- Data reliability is affected by varying workloads, time off, and individual working styles.
- Building effective AI tools requires accounting for human factors in data.
- Organizations face hurdles integrating old tech with new AI systems.
- Ensuring harmony between technology and the human element is essential.
- Key Data Points for Accurate Resourcing [05:35]
- Tools alone aren’t a magic solution; trust in systems is key.
- Defining data consistently across teams is crucial despite differing terminologies.
- Core KPIs include revenue per employee, project margins, and P&L analysis.
- Balancing old and new tools remains a challenge in data ecosystems.
- AI is not yet reliable for making key decisions; human accountability is still critical.
- Common themes across metrics help align understanding and decision-making.
- Start by understanding how an organization defines capacity (skills, tasks, roles).
- Group contributors into 5–8 simple buckets to balance precision and maintainability.
- Avoid overly precise systems that create messy, unmanageable data.
- Use consistent naming conventions across planning, time tracking, and actual work data.
- Add metadata (e.g., work type, client type, phase) to enhance data analysis.
- Structured data simplifies forecasting and highlights predictable vs. unpredictable areas.
- Process inconsistencies can hinder accurate predictions, regardless of data quality.
- Systems must adapt to constant changes like client demands or external factors (e.g., COVID).
- Align frontline contributors to provide accurate, real-time data updates.
- Success requires consistent cadence and communication across all organizational levels.
- Codifying data processes helps maintain alignment, precision, and accuracy.
- Collaboration with experts or consultants can aid in implementing effective solutions.
It doesn’t take a large data set to generate an interesting line of best fit, which gives us a good sense of accuracy and helps identify what is predictable and what is highly unpredictable.
Marcel Petitpas
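The bucket-and-metadata approach Marcel describes can be sketched in a few lines. This is a hypothetical illustration, not Parakeeto's actual tooling: the entry structure, field names, and numbers are all invented. It tags each time entry with one of a handful of capacity buckets plus metadata, then fits an ordinary least-squares line to answer Marcel's example question — how much design time to plan per dollar of website budget.

```python
# Hypothetical sketch: structured time entries with capacity buckets and
# metadata, then a least-squares line of best fit. All names and numbers
# are invented for illustration.

# Each entry maps to one of a small set of capacity buckets and carries
# metadata (work type) so it can be sliced later.
entries = [
    {"bucket": "design", "work_type": "website", "hours": 40,  "budget": 10_000},
    {"bucket": "design", "work_type": "website", "hours": 95,  "budget": 25_000},
    {"bucket": "design", "work_type": "website", "hours": 160, "budget": 40_000},
    {"bucket": "dev",    "work_type": "website", "hours": 200, "budget": 40_000},
]

def fit_line(points):
    """Ordinary least squares: returns (slope, intercept) for y = m*x + b."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    m = num / den
    return m, mean_y - m * mean_x

# Filter to the design bucket on website work, then fit hours vs. budget.
design = [(e["budget"], e["hours"]) for e in entries
          if e["bucket"] == "design" and e["work_type"] == "website"]
slope, intercept = fit_line(design)

# The slope is design hours needed per extra dollar of website budget.
print(f"~{slope * 10_000:.0f} design hours per $10k of budget")
```

With consistent naming from planning through time tracking, the filter step is a dictionary lookup rather than a translation exercise across three data structures — which is the friction Marcel warns about.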
- Gaining Stakeholder Buy-In for Data Tracking [13:30]
- Align stakeholders with a clear structure for measuring resources and capacity.
- Stakeholder buy-in is crucial; lack of it is a major blocker.
- Acknowledge the human element in data collection, including its inherent subjectivity.
- Present data transparently, emphasizing it’s not 100% accurate but close enough for decisions.
- Build trust by framing data as a tool to improve processes, not a burden on employees.
- Gamification and automated tools help improve compliance but don’t solve the core issue of buy-in.
- Closing the feedback loop with employees fosters trust; involve them in seeing and discussing how data is used.
- Teams often resist or misunderstand data requests when they aren’t shown its impact on planning and decisions.
- Presenting data, even if imperfect, can spark valuable discussions and move teams toward accuracy over time.
- Simplify data tracking—large, meaningful time buckets are better than excessive detail for compliance and insights.
- Match resource planning and tracking methods to the specific questions being addressed to avoid overcomplication.
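The "large, meaningful time buckets" idea above can be made concrete with a tiny mapping layer. This is a hypothetical sketch (the labels and bucket names are invented): fine-grained task labels collapse into a handful of coarse buckets, trading precision for data people will actually maintain.

```python
# Hypothetical sketch: collapse fine-grained task labels into a few
# coarse time buckets. Labels and bucket names are invented examples.
BUCKET_MAP = {
    "wireframes": "design", "mockups": "design", "branding": "design",
    "frontend": "dev", "backend": "dev", "qa": "dev",
    "standup": "pm", "client_call": "pm",
}

def bucket(task_label: str) -> str:
    # Unknown labels fall into "other" so the data stays usable instead
    # of fragmenting into one-off categories.
    return BUCKET_MAP.get(task_label, "other")

print(bucket("mockups"))      # design
print(bucket("qa"))           # dev
print(bucket("misc_errand"))  # other
```

Keeping the bucket list short (the 5–8 range suggested earlier) means compliance is easier and every report rolls up to the same few rows.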
- Success Stories and Lessons Learned [21:07]
- Grant shared a success story from his time at a digital agency.
- He learned to trust and educate his team on project margin calculations, explaining the importance of resource management.
- The focus was on helping the team understand cost per role and its impact on achieving goals.
- The biggest success wasn’t KPIs, but the personal milestones of team members (e.g., starting families, buying homes).
- The CEO described it as the most fulfilling year, achieving team confidence and stability.
- Grant emphasized the importance of leadership stewardship and long-term growth over immediate concerns like future work availability.
- Despite the ongoing challenges with data and AI, he reflected on the value of team trust and collaboration.
- The goal was to balance lean operations in 2024 with expansion plans for 2025.
- He stressed the importance of supporting both senior and junior team members and providing professional growth opportunities.
- Grant acknowledged both successes and mistakes but cherished the positive team impact during that time.
- Success is not linear; the market determines what is successful.
- Grant highlighted the importance of being adaptive and iterative in data implementation.
- He stressed the need for constant reflection and adjustment in daily tasks, such as timesheets and project tasks.
- The goal is to improve efficiency and avoid burnout during the process.

When you focus on a single data point, you’re myopic. But when you aggregate the data, you can become strategic and inform a business roadmap that benefits individuals. This allows you to create growth paths for them, providing professional development and fulfillment in their lives.
Grant Hultgren
- Ann shared that her role often involves using data to tell a story and gain buy-in from the team.
- She focuses on preventing burnout by discussing additional resourcing and staffing.
- Her goal is to help the organization understand the complexity of the work and its impact on staff.
- Ann emphasized the importance of using “good enough” data to inform leadership about work and employee well-being.
- She highlighted that everyone is working toward the same goals: profitability, client service, and quality work.
- The data helps tell the story of resource management and its effects on the team.
- Marcel shared a turnaround story of a long-time client with a strong external reputation but struggling internally.
- The agency had a culture of burnout, with staff overworked during busy periods and facing layoffs during slow periods.
- There was misalignment between project management, sales, and creatives, with issues around time, money, and scoping.
- The agency had problems with pricing and scoping, where budget and time were misrepresented to meet client expectations.
- Marcel’s team helped by separating pricing from scoping and restructuring how data was tracked.
- They identified high-risk and low-risk work, finding that video production, a low-margin service, caused significant stress and risk.
- More profitable services, like brand and website design, were less risky and more consistent.
- They introduced a linear pricing system for the sales team, eliminating subjectivity and setting realistic expectations.
- As a result, the agency grew 60–90% year over year, increasing profit margins by over 500%, while staff worked fewer evenings and weekends.
- Marcel emphasized the importance of project managers and operations managers in removing subjectivity and making data-driven decisions.
- He highlighted the value of sound assumptions and effective projections for future planning, benefiting all stakeholders.
We want to be profitable as a business, serve our clients, and deliver quality work. But the data will tell you the story of what’s happening with your resources.
Ann Campea
- Ethics and Security in Generative AI [31:45]
- Companies must ensure data security when using AI tools, possibly through company-approved platforms.
- Ethical concerns arise over collecting and using personal work data, such as preferences, skill levels, and past performance.
- Transparency is key when using generative AI, especially in marketing or customer-facing applications.
- Internal data use is safer with compliance measures (e.g., GDPR, SOC 2), but customer data requires caution.
- Data anonymization can help ensure compliance and avoid privacy risks.
- Agencies should be cautious with customer data and avoid using it without explicit consent.
- Generative AI should inform processes, not replace human expertise in customer-facing decisions.
- The legal and ethical landscape around AI usage and data handling is still developing.
- Generative AI carries inherent biases, requiring caution in its use.
- PMI has introduced its own AI platform with vetted project management sources.
- Ethical concerns arise when using data from AI sources, particularly in how it’s applied.
- Be cautious when presenting data to stakeholders to avoid manipulating it for specific purposes.
- Data should inform decision-making, but ethical lines must be respected in how it’s used.
- The Future of AI in Project Management [37:48]
- Full automation of project complexity and human judgment is unlikely in the near future.
- Current tools may reach 98% efficiency, but human intervention is still needed for judgment calls.
- The final 2% of automation, like in self-driving cars, is very challenging.
- Even with automated data pipelines, high observability is necessary to ensure accuracy.
- AI tools struggle with making nuanced judgment calls, like verifying time entries or categorizing tasks.
- Humans will still be needed for final checks, especially for edge cases and complex situations.
- AI should focus on automating tasks and improving efficiencies, not making high-level decisions.
- Tools can prompt users to consider adjustments, like accounting for more time on tasks.
- AI is not yet reliable for running a business or predicting major events like COVID.
- Human judgment is crucial for decisions involving financial agreements or project adjustments.
- AI may help with project-level decisions, but moral and relational factors require human input.
- Executives must make decisions based on values, standards, and a structured thought process.
- Using tools like data and Gen AI can aid decision-making if implemented thoughtfully.
- Objectivity in decision-making helps reduce emotional bias.
- Deliberate guardrails and system maintenance are essential for effective tool implementation.
- Tools can guide leaders towards rational, beneficial decisions for the organization.
- Employees quit managers, not jobs.
- Managers should not profit off employees but support growth and fulfillment.
- Technology can improve efficiency but should align with values.
- Recognizing and nurturing employee potential is key to growth.
- Balancing company struggles with fair and value-driven judgments helps retain talent.
- Gen AI is a tool to inform, not direct humans.
- PMI advocates for humans in the loop when using Gen AI.
- Gen AI can handle repetitive tasks, allowing focus on creative work.
- Leadership has shifted from managing to guiding with empathy.
- Machines can’t replace human empathy or personal connection.
- Data Precision vs. Accuracy Discussion [45:37]
- Presenting data as a range, not absolute, helps manage human factors.
- Precision vs. accuracy: ranges offer more realistic answers than precise figures.
- Higher levels of organization deal with more uncertainty and probability-based decisions.
- Ranges in data are more reflective of reality and aid decision-making.
- Parakeeto embraces using ranges for better discussions and decision-making.
- Maintaining a precise forecast for 4–6 weeks is difficult and often inaccurate.
- Using a range helps understand the potential highs and lows.
- Ranges inform decisions about clients, projects, and business direction.
- This approach is crucial for effective leadership and day-to-day operations.
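One way to make the precision-vs-accuracy point concrete is to report a band instead of a single number. This is a hypothetical sketch, not Parakeeto's actual method, and the weekly figures are invented: it derives a low/high forecast range from the spread of recent history (mean plus or minus one sample standard deviation).

```python
# Hypothetical sketch: present a forecast as a range rather than one
# precise figure. Weekly numbers are invented for illustration.
from statistics import mean, stdev

weekly_design_hours = [128, 142, 120, 150, 135, 126]  # recent history

def forecast_range(history):
    """Return a (low, high) band: mean +/- one sample standard deviation."""
    m, s = mean(history), stdev(history)
    return m - s, m + s

low, high = forecast_range(weekly_design_hours)
print(f"Plan for roughly {low:.0f}-{high:.0f} design hours next week")
```

A range like this is less precise than a point estimate but more accurate as a statement about reality, which is exactly the trade the panel advocates for leadership conversations.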
Meet Our Guests
Ann brings over 14 years of experience driving operational excellence and enhancing project management practices across diverse industries. Ann holds her PMP and CSM certifications, and is currently pursuing a doctorate in Organizational Change and Leadership for ongoing professional development. You can tune in to hear more of Ann’s insights on The Everyday PM Podcast.
Gen AI is here to partner with us and inform us, not to direct us as human beings.
Ann Campea
Marcel is the CEO of Parakeeto, a company specializing in agency profitability tools and software. With a passion for optimizing agency operations, he’s helped hundreds of agencies around the world measure the right metrics and improve their operations and profitability. When he’s not helping agencies make more money, he’s probably watching “The Office” or “Parks and Rec” on a never-ending loop.
Data is messy and people are unpredictable.
Marcel Petitpas
Grant brings over 16 years of experience in project management and operations. He is passionate about helping organizations grow through culture, values, and collaboration. Grant leverages a strong business acumen, operations know-how, and SME-level expertise to define optimal strategy and drive revenue growth.
People don’t quit their jobs; they quit their managers.
Grant Hultgren
Resources From This Episode:
- Join DPM Membership
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Grant, Marcel, and Ann on LinkedIn
- Check out Parallax, Parakeeto, and TrueSense Marketing
Related Articles And Podcasts:
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Galen Low: Welcome to our panel discussion on using data and AI as your resourcing crystal ball. And not to oversell it, we're going to debate it. We're going to dig into it. We're going to find out what it's all about. And this is just something that we like to do every month with our members and our VIP guests.
Just as a way to directly engage with the experts and the folks who are contributing and collaborating with us here at the Digital Project Manager. For those of you who don't know me, my name is Galen, I'm the co-founder of the Digital Project Manager, and I'm also your host for today. So you're stuck with me for a bit, but I've also got with me an amazing crew of some of the top agency operations experts that I know — Ann Campea, Marcel Petitpas, and Grant Hultgren.
Resourcing is a massively popular topic in the agency world, and honestly, for a lot of organizations. And I think there's a good reason for that: not having the right resources available to deliver on key programs and initiatives is a pretty big risk that could jeopardize meeting your goals, and it could even jeopardize some of the jobs for you and your compatriots within your organization.
And I think that's why resourcing happens on so many different levels within an organization, from having the right team members on your projects, to having the right org design and talent within that to make it go. But when it comes to predicting the future, us humans are at best making a guess and at worst making rose-colored, over-idealized assumptions.
So sometimes we're assuming that Surinder will get a task done just as fast as Kathy would if they weren't on unexpected sick leave. And sometimes we assume that Kelvin can work on 8 different projects for 8 hours a day with no time lost in context switching. And sometimes we assume that Elena and Leila must have the same work preferences, knowledge, skills, and understanding of the work because they've got the same job title.
And at the end of the day, resourcing is hard, and it's hard because typically it involves variables that we don't control, namely, people. But while our approach to resourcing hasn't really evolved drastically over the past few decades, at least from my perspective, the technology has. So the question I found myself asking was, listen, given all the tools we're using and all the data that we're collecting, couldn't we be using it to make better guesses about our resources and what is going to happen in the future on our projects, on our initiatives, with our goals? So that's what we are going to explore today.
And honestly, I'm going to start with the biggest, broadest question for basically all my panelists, which is: with all these organizations becoming a lot more data-centric and with generative AI in the picture, why isn't resourcing and resource management just auto-magic already?
Maybe I'll throw it to Marcel first. I'll put you on the hot seat.
Marcel Petitpas: Sure. I mean, the simple answer is data is messy and people are unpredictable. And it's extremely exciting what Gen AI is going to be able to do in terms of helping us get value and interpret data, but the underlying assumption, and this is the thing that always irks me about every demo that we see of a Gen AI company saying, Oh, look, you can just automate your reporting.
And the critical assumption is you feed it perfectly clean, consistently structured data with a clear schema and consistent naming conventions and no mistakes, no outliers. And then, of course, Gen AI is going to be able to answer a question about the data and crunch it faster than a human. So I think that what's really exciting is that the way that we interact with data is certainly going to be disrupted.
And I think we're going to look back in a decade from now and say, clicking through a table and applying filters and applying sorting to get to the visualization that I want is going to feel archaic to us. But the problem that isn't going away is that you need good, clean data in order for AI to be able to give you those kinds of things.
And the unfortunate reality is that as product and operations managers, we're dealing with sets of data that are created by human beings, and therefore it is inherent that it will be imperfect. And we're going to have to plan for that in the way that we architect our processes and our data systems.
Galen Low: Boom. I love that clarification on the cleanliness of data. And then I think we'll get there later in terms of this data literacy, what does that mean in this day and age, and who needs to have it? Spoiler alert, probably everybody.
Ann, anything you want to pile on in terms of why resourcing isn't easier today with all the tech we've got?
Ann Campea: I mean, first off, going off of what Marcel just said, it's getting good data. And there's that human element behind trying to capture that. So in my experience, we built tools, we feel like we have the right reporting in place, we're looking at the right metrics, but then you go back to the data. So getting good data is like trying to catch lightning in a bottle, right?
It's tricky. It's sometimes unreliable because you have that human factor infused into it. And humans are wonderful, but they're also wonderfully unpredictable. So in the example that you gave when you opened up, where Susie might have 50 projects she's trying to crank out in a week, but then you've got Joe who had to go on PTO for three days because he wasn't feeling so well.
He cranked out fewer projects than Susie did. How do you really capture that human element of what's going on with the data when it comes to time tracking, when it comes to the complexity of the projects, when it comes to the ability of your staff to be able to produce? Everyone works a little bit differently.
So I do think to kind of piggyback on what Marcel was saying is getting that good data, and then building the tech and the AI around what that could be. And I would throw out one more example of, I've been in organizations where you're dealing with old tech versus new tech. Now you've got AI layered on top of that.
So how do you make sure all those things play well together? And then at the same time, still have the human in the loop.
Galen Low: I love that.
And it comes back to Grant, what you were saying in the beginning too, right? The tool isn't the like magic bullet solution. There's other variables. And maybe this is a good transition out of okay, yeah, this is complicated.
Why isn't it easier? Well, because of data cleanliness and humans. I think it begs the question: okay, well, what data should we be gathering? I know there's probably a huge set, but I don't know, Grant, from your perspective, what are some of the data points that an agency or any organization really needs to be capturing to resource more accurately?
What's something that's maybe not traditional, maybe controversial about some of this data that we'd gather?
Grant Hultgren: Yeah, and I think just to transition there what it comes down to is what do you trust, right? And as agency owners, when you start that journey, or maybe you're joining the agency, it's like agency is self agency, right?
And oftentimes it's I can do it better than the company I came from, or the person I worked with before. But that often means I'm going to define it a little bit differently, right? And all of a sudden, we're defining different data points differently. And I think Marcel would say, listen, we don't need to define this differently.
You can use your own terminology, but we're really talking about the same things. And when you look at core KPIs, I would absolutely agree. From our standpoint, and the SaaS company side of things, like when you look at it, you're looking at revenue per billable employee. You're looking at revenue per all employees.
You're looking at what your margins are at a project level versus what your P&L is on a quarterly and annual basis, right, in terms of overhead, fully baked costs. And there are a lot of commonalities in that. But as we look at, in particular, the digital project management side of things, it's all part of this ecosystem that is playing into these higher level KPIs, right?
And there's multiple tools, as Ann was saying, some could be from 1990 all the way up to more modern ones. And that's where it does become a challenge, but I would say, more than anything else: do we trust those internal systems to be reliable enough for AI models to make decisions and have optimal outcomes?
And I just don't think we're there yet. That's why people are so important. No one's going to hold AI accountable to missing a project margin, right? We're all going to look at that project manager and say, what happened here? Interpret the data for us. And to Ann's point, well, someone was sick. Okay. Why didn't we do anything?
Well, AI didn't catch it. That's not acceptable, right? So we might get there someday, but I think it is more, I don't think those metrics are going to change all that much, but it's how they level up into a common theme. Marcel, I see you nodding along, would you agree?
Marcel Petitpas: Yeah, I think, I'll just speak to what we do at Parakeeto to onboard someone into a forecasting system and think through the structure and the way that we work through this is very simple.
We start with understanding how they think about capacity in their organization. And this is going to change from one organization to another. Some organizations think of this in terms of skill sets. This person has design skills, project management skills, development skills, copywriting skills.
Other times they're thinking about it in terms of task sets. This person does this type of work, right? So whether it's task categories, role categories, job titles, what is the mental model that this firm thinks of their capacity in terms of? And thinks about making hiring decisions in terms of? And we want to try to create a simple layer of abstraction generally between individual contributors and larger buckets that we can group them up into.
And this is where I'll introduce an idea that I'll probably talk about a lot, which is precision and accuracy. Those are different ideas, and I think that this is the curse of most project managers is they conflate those things as being not only the same, but correlated, and they're often in conflict with one another.
I found that a lot of project managers set themselves up for failure because they try to create very precise systems that end up being impossible to maintain. And it actually ends up putting them in a situation where they just have a whole bunch of messy data all the time that they're struggling to get any value from.
And so that's a big part of what we're trying to do at this first step, to say: how do we develop this kind of simple model, five to, maybe on the high end, eight buckets of capacity that we think about. Then let's look at planning work in a way that looks the same. So the same naming conventions, we bucket our estimated time in the same way.
And then we go downstream to time tracking and say, how do we make sure that every time entry has a piece of metadata on it that we can connect back to one of those buckets? And if we can build structured data in that format over time, and we layer in maybe a couple of other objects, like what type of work was this, what type of client was this?
What service offering or product was this? Phases we can start to layer in as much metadata as we want, then linear algebra becomes really easy. Where a question like, for every dollar of website budget, how much design time on average do we need to plan for? Well, it doesn't actually take that large of a data set to start to get a really interesting line of best fit that gives us a pretty good sense of accuracy on what that's going to look like and also start to identify what stuff is predictable and what stuff is very unpredictable, which is an interesting way to think about, okay, where might we actually have a process problem that's setting us up for failure?
Because we don't actually do things very consistently and therefore predicting the future regardless of how good our data is actually going to be difficult because of how that happens in practice. And so just in terms of thinking through how do we structure the information, the data points are capacity.
What is our model for that? And how do we think about the schema planned work? What is our model for that? And how do we think about the schema? And then just matching up actual time spent to that same schema so that we're not having to translate three different sets of data structures to try and answer that question.
It's unnecessary friction, and it often means that we can't actually answer questions that we set out to get when we started this process.
Galen Low: I like that idea of there's predictable stuff and there's stuff you can't predict. And like even just that lens is really important because we're talking about chaos, unplannable things, right?
And our desire as humans is to solve for it, but maybe you can't. But the other thing that really kind of, resonated with me about what you're saying is that we need to have good clean data, but I think this is where the tech comes in because I remember building a skills matrix for the staff at the agency I was working at.
And it was like, Hey, on a scale of one to 10, how good are you at Drupal development? They're like, 10, I guess. And it was so much to collate and put together, and we're doing it in Google Sheets. And it was just a mess. Our intention was good. We were like, okay, how do you like to work?
Where are you? What kind of projects are on your career roadmap and all this stuff. And we're trying to sort of plan it all out, but like the volume of data for us to put into our consideration as humans was just too much. So as you're saying that, I'm like, wow, that's a lot of models.
That's a lot of taxonomy. What are we going to do about it? But I guess that kind of leads into that whole notion of yeah, this is where technology can help.
Grant Hultgren: Well, and you can architect it all, right? And you can get it all set, but it's going to change in an hour. That client's going to, they'll sign that statement of work, right?
That change order, they'll cut their scope. COVID will hit, right? Now do we need an office? Is that overhead cost we have to have? And I think exactly to Marcel's point, you construct it, and then you've got to put the cadence. How do we align the individual contributors who are on the front line that can inform this data the best possible way to make those incremental changes?
Like everything Marcel just said is frankly why our company exists. I know he's agnostic, right? But that is what we're trying to codify in this data, but it's the exact same mindset. It's the exact same means of measuring these different inputs that come into different solutions to that. And you can hire consultants like Marcel, who can help you through that or Ann who's going to say, yeah, actually, here's how I go about it this way with your team, or you try to codify into it.
But the problems are still all there of how do we get it to every layer, from owner all the way down, and we're all aligned, and, yeah, we know where the precision comes into play. We know where the accuracy is, and we all understand where the value is.
Galen Low: Let's go there because I think it's a really interesting question. Clean data seems like a myth, probably not from a data standpoint or a taxonomy standpoint, but from a human standpoint. I'm wondering if I could throw this one to Ann, just the human component: how do we get people on board with the idea that they will benefit from tracking more data?
People already hate time tracking. Marcel, you posted about utilization the other day on LinkedIn, like these are things that get people tied in knots. And they can really damage morale. And now we're going to ask them to record more stuff, right? More data, like more models. And like, how would you approach that in terms of getting people bought into this idea that this is going to be helpful?
Ann Campea: This feels like a very loaded question, but I think it's tough and it really is tough to get all the layers aligned. And what I'll say is, I think, shout out to Marcel, you put up a post on LinkedIn about trying to come up with that magic formula that your stakeholders can buy into to say, this is how we are aligning to measure resources, capacity, all of that good stuff.
And Grant, you just called out things change all the time. So how do you control for that moving variable? I think it's about aligning back to what Marcel was saying. If you can at least get the buy in to a structure, even if it's, something vastly different than what the other agencies are doing, but your leadership team, your culture as an organization, you're all opting to rally behind what that formula is to capture this data.
Then, at the very least, you've got the buy in, which is one of those biggest blockers that you might encounter. Because we can spend all this time building, we can spend all this time enhancing our tools, building out the reports, gathering the time tracking, squeezing the time from our employees to say, how long is it taking you to do different types of projects of different complexities?
How many can you take on in a given day or week? You can gather all of this stuff, but if you don't have that buy-in, that then becomes your biggest blocker. It's, again, going back to that human element. And I think what I found in dealing with trying to get stakeholders to buy-in, is sometimes there's this assumption that, you might be promoting very objective quantitative data.
And I do think, if we're being real and honest about the data, going back to that good, clean data, I think it comes down to the stakeholders understanding what it is you do have. I always present my data by saying, this is the time study, there's going to be a variance in this time study because you're asking humans to enter the data, right?
So always consider that there is going to be a bit of subjectivity, even in the quantitative data that you're gathering. Which then when you present it to your stakeholders for buy-in, you got to just keep it real in that way. You can't just say, this is the data, this is clean, this is what we're going off of.
It's 100 percent accurate because as we all have said already, it can't ever be that way, but it can get close. And I think that's number one in terms of selling that formula to your stakeholders.
Galen Low: I love that approach because it's collaborative, right? Intrinsically, what you're saying is don't, send it down from the top and cram it down people's throats.
But also I think it's like the framing of it. And I think we do have that tendency, especially in resource management, to be like, this is the definitive thing. Everyone's week adds up to 40. It's going to be perfect. Whereas, you're right, it's better if we present it with that variability.
All right, Marcel, I don't know if that ties into the sort of precision and accuracy thing.
Marcel Petitpas: Yes, it does. So I have two things on this. We talk a lot in the industry about these tactics to improve compliance. Could you gamify it? You get the Slack bot that puts people on teams and shows compliance. You get the thing that tracks the time, based on what they're doing on the computer and helps them fill out their timesheets.
You get resource-plan-based pre-filling of the timesheets. And all of those things are very helpful. But to me, the underlying issue is what Ann just mentioned, which is the buy-in. And another big component that I find often happens is the loop isn't being closed. And so the ICs in particular, the ones being asked to enter time, are never privy to the actual conversations, decisions, or reports that are being created as a result of that.
And so they're left to create their own story in their mind about how that time is being used. And it's rarely going to be a flattering one. No matter how well intentioned you are about telling them how you're using it, they need to see it to believe it. And to Ann's point, there's value in having them involved in that conversation.
And so what I found is a lot of management teams are hesitant to close that loop and start showing the data to their team because they're like, oh, but we know that the data is not good yet. And so it becomes this chicken and egg game. And what I always encourage them to do, and we've done this experiment with clients a number of times, is: have the meeting and pretend you don't know that it's bad.
And what you'll start to see is, A) how not having a really deliberate set of KPIs that you measure with the team can create incentives that are actually the opposite of what you want. Hyper-focusing on utilization, for example, creates a reaction that is often not desired. So when you go into that meeting, let's say you were really focused on client budget, and you have a compliance issue. You're like, guys, this is incredible. We were under budget on every single project. Our clients are thrilled. You guys aren't even that busy. I could go sell twice as much work next month and you still wouldn't be at capacity.
This is incredible. High five, everyone. This is great. And all of a sudden you might find the team goes wait, hold on a second. Maybe all of our time wasn't in the timesheets. Like we definitely can't handle twice as much work next month. And you're like, Oh, geez, I'm so glad you told me that. Cause this is the information that I'm using to plan into the future.
And I need this to be accurate so that I can make sure I don't put too much work on your plate and that we're resource planning things effectively. And then you might find that the pendulum swings in the other way. And then you have a different discussion about, Hey, we're going over budget a lot.
I don't know if this is sustainable, but over time, I think you'll find that as long as you're using the data correctly and having that conversation earnestly and from a place of curiosity, the pendulum will swing towards truth and the team will really start to understand how this serves them and protects them.
I think that's really important. And the last thing I'll add on precision and accuracy, another really good thing to keep in mind: if you're asking your team to track time to the subtask within the task within the milestone within the deliverable within the phase within the project, it's too much, right?
It's way too much friction. Simplify it and what you'll find is you'll have larger buckets of time that are actually more meaningful. You'll be quicker to get to statistical significance in terms of your insights and you'll have higher compliance generally. And the same thing is true about resource planning.
Individual allocations are not the path to a six month forecast. That does not work. That's way too much surface area, and there's way too many things changing all the time. So, match the methodology to the purpose of the question that's being answered.
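Marcel's point about bucket size has a simple statistical backbone: with the same total number of time entries, coarser buckets each collect more samples, so their averages stabilize sooner. A minimal sketch with made-up numbers (the entry counts and spread below are assumptions for illustration, not data from the episode):

```python
import math

def ci_halfwidth(std_dev, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a sample mean."""
    return z * std_dev / math.sqrt(n)

# Hypothetical: 200 time entries in a period, each with ~2 h of natural spread.
entries_total = 200
std_dev_hours = 2.0

# Fine-grained tracking: 20 subtask-level buckets, ~10 entries each.
fine = ci_halfwidth(std_dev_hours, entries_total / 20)

# Coarse tracking: 4 phase-level buckets, ~50 entries each.
coarse = ci_halfwidth(std_dev_hours, entries_total / 4)

print(f"fine buckets:   average known to within ±{fine:.2f} h")
print(f"coarse buckets: average known to within ±{coarse:.2f} h")
```

The coarse buckets' averages are meaningful with less than half the uncertainty, which is one way to read "quicker to get to statistical significance."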
Galen Low: In my head, I'm like, that's sneaky, but I get how it pulls out the qualitative aspect of it.
And coming back to Ann, what you were saying is that you start at the data, you don't end at the data. It drives a conversation about some of the things that maybe aren't in the data. And I like that you can build that culture of depressurizing it. Because of course, for everyone it's like handing bullets to somebody so they can shoot them back at you.
It's oh, what? You're only 70 percent utilized? Oh, it took you seven hours to do that five hour task? And nobody wants to volunteer that data. So they're already on the back foot. But yeah, I like that. Simple is okay. Simple is still better than trying to go over complicated and making your data "perfect".
I thought I'd maybe drive into some storytelling and I'm gonna conflate two questions because originally I was gonna be like tell me some horror stories and then tell me some success stories, but this is a free for all. You know, we've been talking about things that can hold you back.
We've been talking about things that can really push an organization forward. Let's do story time. Does anyone have a success story of where they were able to sort of bring sort of this buy-in and raise the level of data literacy within an organization so that resourcing and looking into the future is better? And then the other side like tell me some ghost stories about how if only they had done this thing but they got held back because, they hit this blocker and that's the thing that was mostly holding them back from removing the headaches around resourcing.
Grant Hultgren: I've got, I think a win story. I got out of my own way with the digital agency I was working with and got exactly on the pep talks that essentially Ann and Marcel just delivered, which is you got to trust your team, right? Educate them and help them understand why this is important.
So we went through why we're calculating margin on projects, what that actually means, cost per role, right? Like, we weren't giving out what we're paying people, but we started generalizing and showing, hey, a senior developer relative to a junior developer, like when we're resourcing, this is why it's important, right?
If you're going to be held accountable to hitting a margin. And I think a lot of times you're like, oh, is this going to come back to bite me? And there's all the reasons why you shouldn't do that, especially if you're in management. We ended up doing it anyway, and within a year, the greatest success was not a single KPI, right?
We didn't sit there. We hit our growth goals. We did all that. That's great. But we had three people start families. We had two people buy homes. And I remember sitting down with the owner, the CEO at the time, and he said, this is the most fulfilling year I've ever had. This is what I set out to do for the team.
And oftentimes we're so worried about, do we have work in three months? And that worry is never going to go away. In fact, that's kind of stewardship of leadership, in my opinion. I've come to learn that. But the greatest wins that we had was those team members sitting down and them demonstrating with their life choices that they had confidence in where they were at.
And you know what, they might have moved on, they might have taken different roles. But I think when you do this right, there are moments where you can sit down and say, you know what, I got it right for that moment. And maybe I didn't get it right the next day, right? Maybe I needed that data. But there's a reason why we're talking about AI relative to this, because it's not solved.
If it were easy to solve, any one of us would have solved this by now, but it takes a lot of effort. Like I said, I'm a wet blanket, but in that moment, I could sit back and say yeah, I'm really proud of what we did as a team. I'm proud of the decisions I made and the trust that, that I was able to showcase.
And I think that's the core understanding here. We'll always need to work on the data. We'll need to work on the process. We'll scale. We'll contract. We'll get lean and mean during 2024 with the hopes that 2025 will expand on those service offerings that we're developing now. And that all takes effort.
There's no shortcut in this path for digital agency owners or participants. But that's, in my opinion, what we're signing up for. And when we can collaborate and work together, we can actually make it an easier path for each other. And I think that's where the 'why' is: why are we doing this? For our project managers.
Let's not burn out our senior dev. Let's not be blind to where we need to help our junior devs or designers grow. Let's help inform the business exactly to Marcel's point with accuracy, not precision. Because when you look at a data point of one, you're myopic. When you can start to aggregate it all together, you can start to be strategic and inform a business roadmap that will only benefit those individuals and allow you to put growth paths in front of them that they now actually have professional development and fulfillment in their lives.
And it sounds very aspirational, but having lived in that moment, for that fleeting moment, I still look back at it and now I can also share some horror stories right where I got it wrong, but that was a moment for me that was really special.
Galen Low: I love that story because, A) I started with, oh, how can we improve data literacy? But where it went was: the why is not "if you understand data better, then you'll understand the reasons why we're gathering this data."
The why is, growth is not just agency growth, it's like personal growth, it's career growth, it's growth for your peers. And we're all here to do a thing. So we're like, our livelihood is okay and hopefully improving. And that's a pretty solid 'why'. It might not work in every work culture. I know there's some organizations.
I know some folks in the audience here are like, yeah, that would never land. Totally. It's "this just lines the pockets of the execs, it's all for a yacht." You've got to be genuine, I guess, but I love that story and that framing.
Grant Hultgren: Growth's not linear, right? There is no "if you do this, you will get this," right? The market gets to tell you what's successful and not.
And so being adaptive and iterative, even in terms of data literacy or data implementation, that's the piece: are we trying every day to inform what we're actually working on? Whether it's a timesheet, whether it's a project task. And there are efficiencies there, so we don't kill ourselves in the process.
Galen Low: Love it. Love it.
Ann Campea: It's very hard to follow up Grant's story, but I would say, not to get too specific, I think it's in those moments where you feel like the hero in this role: when you can take the data, tell a story to the audience, and have them buy into whatever it is that you are looking to achieve.
And in my case, oftentimes, it's preventing my team from burning out, talking about additional resourcing and staffing. How do we get a better understanding as an organization around the complexity of the work and how much of a toll it's taking on our staff and what we can do to better balance that. So it's in those little wins in those conversations on a day to day where you are able to use the good enough data and paint a picture for your leadership team around what's happening with the work with the employees and how everyone is really in it for the same goals, right?
We want to be profitable as a business. We want to be able to service our clients. We want to be able to put quality work out there. But the data will tell you that story of what's going on with your resources.
Galen Low: Marcel, you got a story?
Marcel Petitpas: Yeah, I'll share a turnaround story of a client that we worked with for a long time that on the surface, and this is so true, but a lot of agencies looked like they were absolutely killing it.
I mean, incredible logos on the website. The work was unbelievable, creative. They were winning awards, right? And then you look under the hood, and what you found was this culture of like very creative, but super burnt out people. They were on this feast or famine rollercoaster of going from being like 150 percent utilized, everybody working like crazy hours through the weekends and on evenings to then going into these long seasons of having no work and having to lay people off.
And of course, all of the kind of common challenges that you hear inside the agency of like PMs pointing to sales, saying that they got set up for failure. Or sales pointing to the creatives, saying they can't control their egos and they blow through budgets every time they do something. Just a complete misalignment of expectations and a really problematic intrinsic relationship between time and money as it related to scoping and pricing, where it's like the only way to change one was to change the other.
So it's, oh, the client doesn't have a lot of budget? Well, let's pretend that this is going to take less time than it actually will. Or the client has more budget? Then let's pretend that it's going to take more time than it actually will. Instead of just acknowledging that these two ideas are related but actually very distinct, and that a pricing conversation is for the client and a scoping conversation is for us.
And so we came into that organization and went through the things I talked about earlier. We started by separating those things. We got clear on what the structure of the data was going to look like. We started tracking what was actually happening. And over time, we were able to start to identify what work was higher risk and what work was lower risk.
We identified that one service offering in particular for them, which was video production was incredibly lumpy. Anyone that's worked in video production knows they're super lumpy projects. The production days or shoot days are just like absolutely insane. People are there for 18 hours a day burning out and they were the lowest margin thing that they were doing, but it was causing an incredible amount of stress and inconsistency in their business and a lot of risk.
And whereas they had these other things like brand and website design and strategy that were much more profitable. And so with all of that insight, we were able to do a couple of things. First, create linear algebra for the sales team so that there is no more subjectivity in the scoping or pricing process.
They took the inputs from the client, punched them into a calculator, and got a directionally accurate estimate. And then it was just a question of tweaking up or down basically on that, a PITA tax, like our pain-in-the-ass tax for that particular client, which then set realistic expectations for the project management team that they were already bought into.
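As a rough illustration of the kind of calculator Marcel describes, here's a minimal sketch. All role names, rates, baseline hours, and the multiplier are invented for illustration; the point is that scope falls out of a formula instead of a gut feel, and the "pain in the ass" multiplier adjusts effort (our cost) without touching the separate pricing conversation:

```python
# Hypothetical values throughout; nothing here comes from a real agency.
ROLE_RATES = {"designer": 120.0, "developer": 150.0, "pm": 100.0}  # cost per hour

# Baseline hours per page of deliverable, learned from past timesheets.
HOURS_PER_PAGE = {"designer": 6.0, "developer": 10.0, "pm": 2.5}

def scope_estimate(pages: int, pita_tax: float = 1.0) -> dict:
    """Directionally accurate scope from client inputs.

    pita_tax > 1.0 pads every role's hours for a high-friction client;
    it changes the scoping answer without dictating the price.
    """
    hours = {role: pages * base * pita_tax
             for role, base in HOURS_PER_PAGE.items()}
    cost = sum(hours[role] * ROLE_RATES[role] for role in hours)
    return {"hours_by_role": hours, "estimated_cost": cost}

estimate = scope_estimate(pages=8, pita_tax=1.2)
print(estimate["hours_by_role"])
print(f'internal cost: ${estimate["estimated_cost"]:,.0f}')
```

Because price is decided in a separate conversation, the margin on any deal is simply price minus this estimated cost, which is what makes the scoping step objective.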
And anyway, the end result was we took a firm that was experiencing all of those challenges to growing 60 and then 90 percent year over year while simultaneously increasing their profit margin by over 500%. But the thing I'm most proud of is that team worked substantially fewer evenings, weekends, and overtime hours in the process, things that people feel and tend to believe are in conflict with one another.
But I share all this to say: this is why project managers and operations managers are so important, because this is the power that you have. Agency operations is really just about taking our assumptions about reality and projecting them into the future. And so it really comes down to those two things: how sound are our assumptions about reality, and can we use data to remove the subjectivity from that and make it more consistent?
And then, how effectively can we project that out into the future? And to the extent that we can refine those two things, we can do a lot of good for all the stakeholders that are involved in our company.
Galen Low: Boom. As my quick story, I was at a table a few weeks back with some folks in higher ed and some agency folks and we were talking about time tracking and how problematic it is and they were all like, y'all track your time?
That sounds great because right now I have no way to tell my manager that I'm over capacity other than anecdotally, right? And you kind of start seeing this story of not just like how we're going to do this thing, but should we do this thing? Should we continue to do this thing where it's like burning people out and it's low margin? And, when we have that data, we can start that conversation. I think that's really interesting.
I want to get into some of the questions, some of the answers, because I see some really good stuff coming through. I thought maybe what I'd do is I'd cross cut one, because we have a question about generative AI, and data, and ethics, and the security of that data. So I'm going to read the question and I'm going to build on top of it.
So the question is, if we're putting project data into a gen AI tool, how do we ensure security of that data? For example, do we need to use a tool that the company we're working for has paid for, or can we use something else? And then I want to intercut that with, where is the ethical line, right, where we start kind of gathering all this information about people, maybe with the best of intentions, like work preferences and skill level with a tool and how much time it took them to do something similar last time on a project that was similar?
Where are we drawing a line that says actually, you know what, that's pretty invasive in terms of where this data goes and how we wield it. I just conflated two really big questions, but I thought I'd throw it out there just in terms of yeah, the security of data in tools, and sort of like corporate policy and compliance there. And then where does the rabbit hole go?
Can I throw to Grant?
Grant Hultgren: Sure. Yeah, there's a marketing agency that I worked with. And I think even when you start looking at how generative AI in particular is affecting production level marketing input, whether it's blog posts to social posts or whatever it may be, I think we're seeing it more and more.
And we're seeing like, okay, is the quality there? Maybe not, but the volume is getting out there. And the first thing we did was, one, are we disclosing that we're using it, and what does that mean? So, internally, we had to meet as a team to say, are we going to use this? Are we okay with this?
Is the quality there? And if it is, great, let's use it to our advantage. It's another tool in our tool belt. But I think there's got to be that internal conversation, especially if you're going into the market with it on a customer's behalf. If it's internal data, that conversation, at least in my opinion, gets a little bit safer, in that you can go and make sure you're compliant.
For us, we're SOC 2 compliant, we do GDPR, all that. And so we know there are limitations: there are some things we cannot do, and we cannot use AI on some of that customer data, and that's just what we signed up for, right? That's all part of it. But there are components to that too, where we can anonymize the data and take other steps to make sure we're not being too prescriptive about where it's coming from, naming particular customers or anything like that.
And I would just encourage that for anybody in that agency world, right? Unless your customer has explicitly said that you can, I would just assume you can't, right? Because it's just not worth risking the relationship, in my opinion. Many deals today are based off of that personal relationship. Even my whole background has been if I empower the right people, they'll find the right people, which would be our customers or clients who would give us the projects which generates revenue.
And even on that basis, it's an interpersonal relationship. And so I always err on the side of caution when it comes to aspects of this. And at the same time, there are generative AI tools in use throughout, like chatbots. I was looking at ClickUp more recently, and one of our customers is coming from that. And it's, well, that's pretty powerful, but they're using it at stages that are not informed with details.
They're using them to inform what that methodology could be, right? And now I'm going to tailor that and I'm going to use my craft and my experience to customize at that point. But at that point, generative AI has only informed the path. It's not giving you the diagnostic or giving you the exact next steps that you are bringing to a customer like straight on.
And I think there are kind of clear lines of delineation morally, at least in my mind, on that. It's a tricky area. ChatGPT, they're getting sued over what they're using as source data, right? We're all benefiting from that, right? I don't think it's resolved yet.
Galen Low: I like that you call that compliance though, because there's a proliferation of tools out there, some of which probably have gone that step to be like, listen, like security is our priority.
And some of them are just MVPs out there just to figure out if there's a market, if there's demand for this tool, if it's a...
Grant Hultgren: 100%. Yeah, a hundred percent. And we've seen it. And we've been really careful, even internally on what we can or should do, because just because you have the data doesn't mean you should be sharing it with these tools either.
Galen Low: Boom. Love that.
Ann Campea: I want to touch on the ethical line piece just a bit, just because we hit on earlier that Gen AI in itself carries a little bit of bias, right? So you have to be really cautious when using it as well, because as Grant pointed out, ChatGPT is in a little bit of hot water right now, just because of their source.
For those that are embedded in project management, PMI, Project Management Institute has come out with their own kind of AI platform as well that they're promoting to say, Hey, all the source information is coming from vetted project management sources. But I think the ethical line needs to be drawn around how you are utilizing the data.
I don't know if everyone has been in that situation where you're presenting data to a stakeholder and they say great, but that's not the data that I need to be able to get what I want. And so you have to be ultra cautious, PM or not, when you're using data, especially if it's coming from a Gen AI source, that it's there to inform you, as Grant has said.
But also, what are you being asked to produce with that data? And are you crossing that ethical line in terms of trying to doctor the data to present a certain purpose for somebody? So just be very careful there, because I think I've been in that situation, which I imagine some of you might have been in that situation where they're trying to take that data and use it for other purposes that are really definitely crossing that line.
Galen Low: I love that there's the other side of data storytelling, right? Like where you're kind of a nail looking for a hammer? I don't know, is that how that goes?
Anyway, it's hammer looking for a nail, I think it was. I think that's actually maybe a good transition too, because I think there's another question here, which I think is a really good one that ties it sort of back, not just ethics, but the humanity of it.
The question is: is the thinking that, in the future, the tools will be able to accommodate the humanness / project-complexity piece, or will this always be a manual intervention?
Marcel Petitpas: I have an opinion on this. I find it very hard to fathom that this will be able to be completely automated from end to end.
And I'd like to be proven wrong on that, but I think that we are years and years away from getting to that point. And just look at self driving, for example. We were talking about self driving cars being a real thing like a decade ago, and we're still really far away. And the reason for that is, that last 2 percent is really hard to get to.
And it's the reason that we still have bookkeepers and accountants, even though finance has been a thing and QuickBooks has been a thing for decades and decades, right? You can get 98 percent of the way there, and I think there is a ton of leverage, especially if you have, a really deliberate data structure and data pipeline in your organization to make that a hyper efficient process.
But there is a certain amount of judgment that is going to be really hard to have AI replace. And I think that even if you do have a data pipeline that's fully automated, it's going to be very important to have high observability, because this is one of the inherent challenges that we've all experienced with ChatGPT or with any of these other Gen AI tools: you feed it a prompt, it gives you an answer, but how it arrives at that conclusion is a complete black box.
And that's very problematic when you're talking about making changes and transformations to data, or making judgment calls about, for example, this person logged a 99-hour time entry over the weekend. Well, is that real, or is it because they forgot their timer was running? A designer logged some time to project management.
Is that a mistake? Did they choose the wrong task category? Or is it actually because they got pulled in to do project management work, right? These kinds of judgment calls are going to be very difficult for the AI tool to consistently and accurately be able to make the right assumption on and even if they do make assumptions, we're probably still going to want to have at least a final check on some of these edge cases for a human being to say, is this congruent with reality?
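Marcel's examples are really a triage problem: let the tooling flag suspicious entries and leave the judgment call to a person. A minimal sketch of such a human-in-the-loop check; the field names, entries, and thresholds are all invented for illustration, not any particular tool's schema:

```python
# Hypothetical timesheet entries.
entries = [
    {"person": "A", "role": "designer",  "task_category": "design",             "hours": 6},
    {"person": "B", "role": "designer",  "task_category": "project management", "hours": 4},
    {"person": "C", "role": "developer", "task_category": "development",        "hours": 99},
]

# Categories each role is normally expected to log time against.
EXPECTED = {"designer": {"design"}, "developer": {"development"}}

MAX_PLAUSIBLE_HOURS = 16  # above this, a timer was almost certainly left running

def flag_for_review(entry):
    """Return reasons a human should double-check this entry (empty list = looks fine)."""
    reasons = []
    if entry["hours"] > MAX_PLAUSIBLE_HOURS:
        reasons.append("implausibly long entry - forgotten timer?")
    if entry["task_category"] not in EXPECTED.get(entry["role"], set()):
        reasons.append("category outside this role - mistake, or real cross-over work?")
    return reasons

for e in entries:
    for reason in flag_for_review(e):
        print(f'{e["person"]}: {reason}')
```

The script never auto-corrects anything; it only surfaces the edge cases so a human can decide whether each one is an error or real cross-over work.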
Grant Hultgren: Yeah, I mean, I would absolutely agree. Honestly, for me, the meme is: I thought AI would help me do my dishes, not make art. And let's start there, right? Let's start with automating tasks and giving us those prompt points to say, you know what, you just added this task. Do you think you should account for more time for Marcel on this project?
Because you did that. That's a great prompt. Let's go for efficiencies first. It will naturally learn from that, I think, to help it build off of its models. But I don't think I would trust AI to run a business right now. How do you predict COVID, and the results of COVID? There are huge events that have occurred even in the last three years that would effectively fracture a whole model and change the course in very decisive moments.
Marcel hit it right on. Granted, that's not going to happen all that often, right, in terms of lifetime events. But there are comparable events at a per-project level that occur, and, I mean, even morally: should we write off that time because we were inefficient? Or should we bill for it because it's actually part of the financial agreement, the client kept asking these questions, and it actually made for a better project?
And so therefore, it's it's got value. Those are really hard things to decipher between human relationships and what the right judgment call is.
Galen Low: I like that notion of human in the driver's seat, but I want to go to a gray area, or what I think is a gray area, which is: you're getting advice from your Gen AI tool, and it's like, do you really want Tony on that?
Tony's been slow. Tony's like really slow. But Sergio is available, and Sergio is fast, and Tony just keeps going down and down, and there's no professional development plan. And Tony's on the bench all the time, Sergio's getting all the work. Is that an area where we would want the tool to be advising us?
Marcel Petitpas: I'll comment on this, and this is just my calculus as, like, a CEO: those are decisions and conversations and thoughts that happen, and that's just our job as executives. And so again, I think the question is, what is the thought process? What is the set of values? What is the set of standards?
What is the decision making process that goes into those things that are going to be a reality of running an organization? And if you can use data, Gen AI, any of these other tools to help enable that process, I think there's a lot of good that can come of that. I'll say that as a person who's responsible for making those decisions, and everybody else on this call, I'm sure has been in that position.
Having more objectivity is a gift because it can be very challenging to go into that conversation clouded by your emotions. We're all human beings. And so I think as long as you set some deliberate guardrails down around this, and this is going to be like my recurring theme, it's like there's thoughtfulness that needs to go into the implementation and there's maintenance of that system that needs to go on thereafter.
I think that it can become a really great lever to either walk you off the ledge when you're being emotional and you're about to do something that you shouldn't, or to walk you towards the decision that might be emotional but is actually in the best interest of the organization, which is your fiduciary duty as an executive or as a leader on that team.
And I think that could be a positive thing if it's, thoughtfully implemented.
Grant Hultgren: Yeah, people don't quit their jobs. They quit their managers, right? And if you are a manager or an owner or a COO who is there to profiteer off people, I hope your whole team quits, right? That is not the intent here.
And if we're using technology to enhance and make that more efficient, oof, that's a dark path to go down. Conversely, I would say, for every Tony, hopefully there's an Ann who's excelling, who's junior and showing such a great growth trajectory that we want to put her on more projects.
But it's got to be the right ones. And how do we enable that learning so that they're finding fulfillment, right? And I think we can balance that. There's no doubt, I mean, there are times when companies struggle through no fault of the salespeople or delivery or whatever it may be. But we've got to balance that, I think, exactly with what Marcel is saying, with what our intent and our values actually are.
Because then we'll actually have people who want to work for us, hopefully, because we can be fair in our judgments.
Galen Low: I love that lens too of just these are difficult decisions, no matter how much technology you have. Sorry, Ann.
Ann Campea: No, you're good. I was just going to tag on to what Marcel and Grant have been saying, in that Gen AI is here to partner with us. It's here to inform us. It's not here to direct us as human beings.
I love that PMI has this approach of humans in the loop, always. Anytime you're interacting with generative AI, have a human in the loop just to have that checkpoint. Interestingly enough, I fed this question into a Gen AI, and there are two things it said that I would call out as really valuable to this part of the conversation. The first is that Gen AI can help take care of the boring stuff so people can focus on creative, meaningful work.
And at the same time, if we think about how leadership has changed over time, the qualities of a good leader have shifted from being a manager to being a leader who guides with empathy. The other thing that Gen AI spit out was: machines can't give you a hug or understand your bad day like a real person can.
So I'll leave it with that.
Galen Low: Well-trained AI. I think we have time for one more question. Let me read it out and then we can go from there.
The question is: is collecting and presenting data as a range rather than an absolute a way to mitigate some of the human factors? From that question, I'm gathering this notion of, if we're saying, listen, this is not precise, and I hope I'm using that term right, Marcel. It's a range, and the rest of it is human stuff. Can that mitigate some of the misunderstanding around the unplannable factors?
Marcel Petitpas: This is a perfect example of precision versus accuracy. And the example I use here is what's the weather going to be today?
I could tell you it's going to be 76.4 degrees, or I could tell you it's going to be between 72 and 78 for most of the day. The latter is less precise, but it's a more accurate answer to that question. And one way to think about this is that, generally, the further up the organization you go, the more uncertainty and the less perfect the data you're dealing with.
And so you're making more probability based decisions. And ranges are a much better way to have that conversation, because it's more reflective of the reality of what you're dealing with. And so, I think it's a great idea. It's one that we're really embracing at Parakeeto in terms of how we talk about certain data points and how we help facilitate certain conversations and decision making.
So, yeah, again, does that feel like a more accurate model for what's being talked about? If that's the case, then I think it's a safe bet to say let's give that a try and, importantly, see how it impacts the conversation.
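Marcel's precision-versus-accuracy distinction can be sketched numerically. This is a hypothetical example, not anything from the episode: the hours data, the use of the mean as a point forecast, and the one-standard-deviation range are all assumptions chosen for illustration.

```python
import statistics

# Hypothetical weekly hours logged on ten similar past projects (assumed data).
past_weekly_hours = [31, 28, 35, 40, 26, 33, 38, 29, 36, 30]

mean = statistics.mean(past_weekly_hours)

# A "precise" point forecast: the mean, quoted to one decimal place.
point_forecast = round(mean, 1)

# A less precise but more honest forecast: a range of mean +/- one
# standard deviation, rounded to whole hours.
spread = statistics.stdev(past_weekly_hours)
low, high = round(mean - spread), round(mean + spread)

actual = 37  # what actually happened this week (also hypothetical)

# The point forecast is "wrong" almost every week; the range forecast
# is right whenever the actual lands inside it.
point_hit = actual == point_forecast
range_hit = low <= actual <= high

print(point_forecast)   # 32.6
print((low, high))      # (28, 37)
print(point_hit)        # False
print(range_hit)        # True
```

The point estimate looks more authoritative but misses the actual outcome; the range, though fuzzier, contains it, which is exactly the trade Marcel describes when he calls ranges "more reflective of the reality of what you're dealing with."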
Galen Low: Boom. I love that.
Grant Hultgren: Quickly: try maintaining a precise forecast four weeks out on your own work and see what the difference is, right?
I mean, let alone three to six months out, reality ends up drastically different from what you expect. That accuracy component, that range, is critical to understanding the highs and lows. That's what I used as my data point, right, to inform which clients I wanted to take on, which ones I wanted to jettison, which work would work and which wouldn't.
And I think that's critical to informing that leadership element of where the business is going and how we can help on the day to day then.
Galen Low: Boom. Yeah, I love that. You can't plan with too much precision.
Awesome. Well, I think that brings us to time. I just wanted to say thank you again, Grant, Marcel, Ann. Thank you so much for volunteering your time and being with us here today.
As always, if you'd like to join the conversation with over a thousand like-minded project management champions, come join our collective. Head over to thedigitalprojectmanager.com/membership to learn more. And if you like what you heard today, please subscribe and stay in touch on thedigitalprojectmanager.com. Until next time, thanks for listening.