Unlocking Predictable Growth Through Data-Driven, GenAI-Enabled Resource Management
Let’s face it: we’d all have a little more hair on our heads if resourcing our projects wasn’t a weekly ritual characterized by chaos, conflict, and overwhelm.
In fact, with better predictability in our projects and agency operations, we could be reducing stress, improving culture, and unlocking growth for the people we work with.
That’s always been a bit of a pipe dream.
But can a combination of good data and GenAI make that dream more of a reality?
Join us for a deep dive into the latest trends in data-driven, AI-enabled resource management and forecasting to explore whether emerging tech can help agencies operate with a clearer view of the future.
We’ve assembled a stellar panel of agency operations leaders and consultants to get their unique perspectives on how a blend of good data and generative AI can change the game for resourcing and forecasting — and where it can go terribly wrong!
What you’ll learn
This is an unscripted event, so anything could happen! But I’m reasonably confident you’ll leave with…
- A solid idea of the data and tools capable of enhancing resourcing and forecasting
- A few cautionary tales to help you avoid common pitfalls with data-driven resourcing and forecasting
- A pragmatic sense of what’s realistic to expect from the tech versus the human considerations
- A smile on your face ; )
We’ll also be fielding questions from DPM Members via our exclusive Slack space ; )
If you’re looking to bring more predictability and precision to your agency’s operations while unlocking new avenues for growth, this is a must-attend! Save your spot now and join us for a deep dive into the future of digital agency management!
[00:00:00] Galen Low: Mhm. Start with the traditional. Oh, are we live? Are we live guys? Are we? No, of course we know we're live. Welcome everybody. Uh, welcome to our panel discussion on using data and AI as your resourcing crystal ball and not to oversell it. We're going to debate it. We're going to dig into it. We're going to find out what it's all about.
Um, and this is just something that we like to do every month with our members, uh, and our VIP guests. Uh, just as a way to directly engage with the experts and the folks who are contributing and collaborating with us here at the Digital Project Manager. Uh, for those of you who don't know me, my name is Galen.
Uh, I'm the co-founder of the Digital Project Manager and I'm also your host for today. So you're stuck with me for a bit, but I've also got with me an amazing crew of some of the top agency operations experts that I know: Ann Campea, Marcel Petitpas, and Grant Hultgren. Um, we're gonna do formal introductions shortly.
But first, uh, let's lean into a bit of a tradition that we have here. Just let us know in the chat. Uh, first of all, let us know where you're joining from [00:01:00] and maybe just what your biggest project management challenge has been lately. It doesn't have to be like a long description. Even just give me two words to sum it up.
It could be like dysfunctional scrum, job security, no one ever knowing where to look to find the proper process documentation that took six weeks to create. Okay, that was more than two words. Uh, well, while you do that, I'm going to go through a little bit of housekeeping for today's session. Um, so I should let you know that today's session is being recorded and will be made available for members of our community shortly afterward.
Uh, we may also use, um, audio or video clips, uh, from the session on our website, on social, and what have you. Um, by default, your cameras and your microphones are off, so you will not appear in the recording, uh, unless, for whatever reason, you, you raise your hand and I, I put you on stage. Um, and the one last thing I should say is that we are going to make some time at the end for questions from our career builder members.
And if that's you, post a question in our live event Slack channel and we will answer as many as we can during the Q&A section at the end. Um, our members will get sort of priority, uh, status on that. But of course, if you've got questions along the way, pop them into the Q&A, pop them into the chat. Um, Michael's in the background, uh, just grabbing those down so that we can weave them into the conversation and make this as interactive as we can.
Um, and I know we do have a few VIP guests in the audience today, so if that's you, welcome. Uh, this is just one of a series of monthly sessions that we hold for our members, who get access to a number of other benefits, including our entire back catalog of session recordings, our library of templates, resources, and mini courses, as well as our flagship certification course, Mastering Digital Project Management.
Uh, you can join the fun by going to thedigitalprojectmanager.com/membership. We'll chuck that link into the chat as well so you can learn a bit more about us. Okay, let's dive in. Today's session is about exploring the merits and the pitfalls of using data and AI as your resourcing crystal ball.
So first up, let's meet our panelists. We've got some amazing folks here today. I'm gonna start with Ann, Ann Campea. Ann, you just let me know that tomorrow you are defending your doctoral dissertation, all the while you are also leading a team in your operations leadership role, and yet you've got this shelf full of Hasbro merch behind your workspace, and a few weeks back you turned a post from PMI CEO Pierre Le Manh into a TikTok dance.
Uh, so are you as much fun to work with as I think you are?
[00:03:31] Ann Campea: I'd like to think I am. I mean, I grew up thinking that I shouldn't take life too seriously. And I think I try to apply that to the professional setting when I can. I think this topic is going to be a lot of fun.
[00:03:42] Galen Low: Yeah, absolutely. And I love that because you weave the fun in and it's like, it's, it's very humanizing, uh, if I may say so myself,
[00:03:48] Ann Campea: I'm happy to be here.
Super fun.
[00:03:50] Galen Low: Awesome. Great to have you. Uh, and next I'm going to pick on Marcel, Marcel Petitpas. Uh, Marcel, you've been traveling around to multiple cities around the world, um, giving talks and sharing your knowledge. I'm just wondering what's been a standout challenge or topic that just seems to be common across all the people that you've been talking to in all these different geographies.
What's hot right now?
[00:04:10] Marcel Petitpas: I think what's hot right now is how cool the margins are in our industry. And, um, I think a lot of people are pointing to COVID, to the economy, to the election, for all the reasons that it's getting harder and harder to be profitable as an agency. But I actually think that this is just the industry maturing; just like in every other industry in history, over time, margins get tighter.
And so I think it's been interesting to watch agency owners wake up to the fact that, you know, um, as much as they don't really want to pay attention to their finances and their ops data, they're realizing that they don't really have a choice if they want to be sustainable into the future. So it's a bit of a bittersweet thing.
Um, I of course love to talk about this stuff. Unfortunately they don't, but I guess that's what we're here to do, isn't it?
[00:04:53] Galen Low: We're here to make it fun, right? We've got Ann, we've got you, we've got Grant. We're evangelizing a little bit. That's awesome, I love that. And, uh, last but not least, Mr. Grant Hultgren. Uh, Grant, you've got a really interesting history: you've gone from being the COO at a high-growth agency and multiple consultancies, and now you've switched over to, like, working in the SaaS world with Parallax. I was wondering, what's been a bit tricky to get used to about taking familiar operational challenges and then looking at them through a software lens?
[00:05:29] Grant Hultgren: Yeah, you know, I think, uh, what Marcel just said is a good transition: stuff they don't want to hear. And if you look at even the chat, at what people are pointing out, you have scope creep, you have people not following processes, and guess what? A tool is not your magic bullet. As much as I am representing Parallax here today, it's the people and the process, and the change management, and, as Simon Sinek likes to say, the why.
And I love those challenges, but we're dealing with people, even though we're talking about AI today, like, and we've got to find the way to make them both [00:06:00] successful here. So that's going to be the challenge, but that's why I get up every morning. So I'm excited to be here. Thank you for having me. Boom.
Great to have you here.
[00:06:07] Galen Low: Were you that person too, who's like, Marcel, I don't want to hear it? Don't tell me about the data I need to gather.
[00:06:14] Grant Hultgren: If Ann is the fun one, like, I've always been the wet blanket, right? I'm the ops guy. I'm like, give me less risk. Give me continuity. You know, I don't want to dance.
I expect just, you know, that we do what we say we do, right? And I guess, you know, I should have never joined the agency world, because it's never consistent and there are all sets of variables. And it's people like Ann and Marcel who I've really enjoyed working with, because they bring the fun, they bring the knowledge.
So excited to be here. Amazing.
[00:06:42] Galen Low: We'll get you dancing yet.
Uh, all right. Let me tee this up a little bit. I'm going to do a little monologue. I apologize, but I thought it would give some context. Um, resourcing. Resourcing is a massively popular topic in the agency world, and honestly [00:07:00] for a lot of organizations. And I think there's a good reason for that. It's because not having the right resources available to deliver on key programs and initiatives is a pretty big risk that could jeopardize meeting your goals, and it could even jeopardize some of the jobs for, you know, you and your compatriots within your organization.
Um, and I think that's why resourcing happens on so many different levels within an organization, from having the right team members on your projects, to having the right org design and talent within that to make it go. But when it comes to predicting the future, us humans are, at best, making a guess, and at worst, making rose-colored, over-idealized assumptions.
So, sometimes we're assuming that Surinder will get a task done just as fast as Kathy would if she weren't on unexpected sick leave. And sometimes we assume that Kelvin can work on eight different projects for eight hours a day with no time lost to context switching. And sometimes we assume that Elena and Leila must have the same work preferences, knowledge, skills, and understanding of the work because they've got the same job title.
Um, and at the end of the day, resourcing is hard, and it's hard because typically it involves variables that we don't control, uh, namely people. Uh, but while our approach to resourcing hasn't really evolved drastically over the past few decades, at least from my perspective, the technology has. So, the question I found myself asking was: listen, given all the tools we're using and all the data that we're collecting, couldn't we be using it to make better guesses about our resources and what is going to happen in the future on our projects, on our initiatives, with our goals?
So that's what we are going to explore today. Um, and honestly, I'm going to start with the biggest, broadest question for basically all my panelists, which is: you know, with all these organizations becoming a lot more data-centric and with generative AI in the picture, why isn't resourcing and resource management just, like, auto-magic already?
Maybe I'll throw it at Marcel first. I'll put you on the hot seat.
[00:08:57] Marcel Petitpas: Sure. Um, I mean, the simple answer is: data is messy and people are unpredictable. And, you know, it's extremely exciting what GenAI is going to be able to do in terms of helping us get value from and interpret data. But the underlying assumption, and this is the thing that always irks me about every demo that we see of, you know, GenAI companies saying, oh look, you can just automate your reporting,
is the critical assumption that you feed it perfectly clean, consistently structured data with a clear schema, consistent naming conventions, and no mistakes, no outliers. And then of course GenAI is going to be able to answer a question about the data and crunch it faster than a human. So I think that what's really exciting is that the way that we interact with data is certainly going to be disrupted.
And I think we're going to look back a decade from now and say: clicking through a table and applying filters and applying sorting to get to the visualization that I want feels archaic. But the problem that isn't going away is that you need good, clean data in order for AI to be able to give you those kinds of things.
And the unfortunate reality is that as project and operations managers, we're dealing with sets of data that are created by human beings, and therefore it is inherent that it will be imperfect. And we're going to have to plan for that in the way that we architect our processes and our data systems.
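To make that concrete: below is a minimal sketch of what "consistently structured data with a clear schema" could look like for time entries. The field names and the bucket list are invented for illustration; the point is that the schema gets enforced at write time instead of being cleaned up later.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative controlled vocabulary: every time entry must map to one of
# these capacity buckets, so downstream reporting never has to guess.
ALLOWED_BUCKETS = {"design", "development", "copywriting", "project_management"}

@dataclass(frozen=True)
class TimeEntry:
    person: str
    project_id: str
    bucket: str        # must be one of ALLOWED_BUCKETS
    hours: float
    entry_date: date

    def __post_init__(self):
        # Reject bad data at the source rather than after it pollutes reports.
        if self.bucket not in ALLOWED_BUCKETS:
            raise ValueError(f"Unknown bucket {self.bucket!r}; use one of {sorted(ALLOWED_BUCKETS)}")
        if not 0 < self.hours <= 24:
            raise ValueError(f"Implausible hours value: {self.hours}")

# A valid entry passes; a misspelled bucket fails loudly instead of silently
# degrading the dataset a GenAI tool would later be asked to interpret.
entry = TimeEntry("priya", "PRJ-042", "design", 3.5, date(2024, 10, 28))
print(entry)
```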
[00:10:14] Galen Low: I love that clarification on the cleanliness of data.
And then I think we'll get there later in terms of this, like, data literacy: what does that mean in this day and age, and who needs to have it? You know, spoiler alert, probably everybody. Ann, anything you want to pile on in terms of why resourcing isn't easier today with all the tech we've got?
[00:10:35] Ann Campea: I mean, first off, going off of what Marcel just said, it's getting good data. And there's that human element behind trying to capture that. So, you know, in my experience, we've built tools, we feel like we have the right reporting in place, we're looking at the right metrics, but then you go back to the data.
So getting good data is like trying to catch lightning in a bottle, right? It's tricky. It's sometimes unreliable, because you have that human factor infused into it. And humans are wonderful, but they're also wonderfully unpredictable. So in the example that you gave when you opened up: you know, Susie might have 50 projects she's trying to crank out in a week, but then you've got Joe, who had to go on PTO for three days because he wasn't feeling so well.
He cranked out a few fewer projects than Susie did. You know, how do you really capture that human element of what's going on with the data when it comes to time tracking, when it comes to the complexity of the projects, when it comes to the ability of your staff to produce? And, you know, everyone works a little bit differently.
So I do think, to piggyback on what Marcel was saying, it's getting that good data and then building the tech and the AI around what that could be. And I would, you know, throw out one more example: I've been in organizations where you're dealing with old tech versus new tech. Now you've got AI layered on top of that.
So how do you make sure all those things play well together? And then at the same time still have the human in the loop.
[00:12:08] Galen Low: Cool. I love that. And it comes back, Grant, to what you were saying at the beginning too, right? Like, the tool isn't the, like, magic-bullet solution. There are other variables. And maybe this is a good transition, but, you know, out of: okay, yeah, this is complicated,
why isn't it easier? Well, because, you know, data cleanliness, uh, and humans. Um, I think it begs the question: okay, well, what data should we be gathering? Like, I know there's probably a huge set, but, I don't know, Grant, from your perspective, you know, what are some of the data points that an agency, or any organization really, needs to be capturing
to resource more accurately? You know, what's something that's maybe not traditional, maybe controversial, about some of the data that we'd gather, um, and how does it play into the whole picture?
[00:12:54] Grant Hultgren: Yeah. And, and I think just to transition there, right, what it comes down to is what do you trust, right?
And as agency owners, when you start that journey, or maybe you're joining the agency, it's like agency is self-agency, right? And oftentimes it's: I can do better than the company I came from or the person I worked with before. But that often means I'm going to define it a little bit differently, right?
And all of a sudden we're defining different data points differently. And I think Marcel would say: listen, we don't need to define this differently. You can use your own terminology, but we're really talking about the same things. And when you look at core KPIs, I would absolutely agree. From our standpoint, on the SaaS company side of things, you're looking at revenue per billable employee.
You're looking at revenue per all employees. You're looking at what your margins are at a project level versus what your P&L is on a quarterly and annual basis, right, in terms of overhead, fully baked costs. And there are a lot of commonalities in that. But as we look at, in particular, the digital project management side of things, it's all part of this ecosystem that is playing into these higher-level KPIs, right?
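As a rough illustration of the KPIs Grant lists here, this is how the common, simple versions of those formulas compute. The figures are invented, and these are not necessarily Parallax's exact definitions.

```python
# Hypothetical annual figures for a small agency (dollars).
agency_revenue = 2_400_000
billable_employees = 14
total_employees = 20

# Hypothetical figures for one project.
project_fee = 60_000
delivery_cost = 38_000  # fully baked cost of the hours delivered

revenue_per_billable = agency_revenue / billable_employees
revenue_per_employee = agency_revenue / total_employees

# Project-level margin, before overhead allocation; the P&L view would
# layer overhead on top of this on a quarterly or annual basis.
project_margin = (project_fee - delivery_cost) / project_fee

print(f"Revenue per billable employee: ${revenue_per_billable:,.0f}")
print(f"Revenue per employee:          ${revenue_per_employee:,.0f}")
print(f"Project margin:                {project_margin:.1%}")
```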
And there are multiple tools, as Ann was saying, some from 1990 all the way up to more modern ones. And that's where, you know, it does become a challenge. But I would say, more than anything else: do we trust those internal systems to be reliable enough for AI models to make decisions on and have optimal outcomes?
And I just don't think we're there yet. That's why people are so important. No one's going to hold AI accountable for missing a project margin, right? We're all going to look at that project manager and say: what happened here? Interpret the data for us. And they'd better dance quite well. Someone was sick. Oh, okay. Why didn't we do anything?
Well, I didn't catch it. Like, that's not acceptable, right? So we might get there someday, but I don't think those metrics are going to change all that much; it's how they level up into a common theme. Marcel, I see you nodding along. Would you agree?
[00:14:52] Marcel Petitpas: Yeah, I think I'll just speak to what we do at Parakeeto to, uh, onboard someone into a forecasting system, and think through the structure and the way that we work through it.
It's very simple. We start with understanding how they think about capacity in their organization, and this is going to change from one organization to another. Some organizations think of this in terms of skill sets: you know, this person has design skills, project management skills, development skills, copywriting skills.
Other times they're thinking about it in terms of task sets: this person does this type of work, right? So whether it's task categories, role categories, or job titles, what is the mental model that this firm thinks of their capacity in terms of, and makes hiring decisions in terms of? And we want to try to create, uh, a simple layer of abstraction, generally, between individual contributors and larger buckets that we can group them up into.
And this is where I'll introduce an idea that I'll probably talk about a lot, which is precision and accuracy. Those are different ideas, and I think that this is the curse of most project managers, is they conflate those things as being not only the same, but correlated, and [00:16:00] they're often in conflict with one another.
I found that a lot of project managers set themselves up for failure because they try to create very precise systems that end up being impossible to maintain. And it actually ends up putting them in a situation where they just have a whole bunch of messy data all the time that they're struggling to get any value from.
And so that's a big part of what we're trying to do at this first step: how do we develop this kind of simple model, five to maybe, on the high end, eight buckets of capacity that we think about? Then let's look at planning work in a way that looks the same. So the same naming conventions; we bucket our estimated time in the same way.
And then we go downstream to time tracking and say, how do we make sure that every time entry has a piece of metadata on it that we can connect back to one of those buckets? And if we can build structured data in that format over time, and we layer in maybe a couple of other objects, like what type of work was this?
What type of client was this? What service offering or product was this? You know, phases, like, we can start to layer in as much metadata as we want, then linear algebra [00:17:00] becomes really, really easy, where a question like, for every dollar of website budget, how much design time on average do we need to plan for?
Well, it doesn't actually take that large of a data set to start to get a really interesting line of best fit that gives us a pretty good sense of accuracy on what that's going to look like, and also start to identify what stuff is predictable and what stuff is very unpredictable, which is an interesting way to think about, okay, where might we actually have a process problem that's setting us up for failure?
Because the answer is: we don't actually do things very consistently, and therefore predicting the future, regardless of how good our data is, is actually going to be difficult because of how that happens in practice. And so it's just a matter of thinking through how we structure the information, the data points. Our capacity: what is our model for that, and how do we think about the schema?
Planned work: what is our model for that, and how do we think about the schema? And then just matching up actual time spent to that same schema, so that we're not having to translate three different sets of data structures to try and answer that question. It's unnecessary friction, and it often means that we can't actually answer the questions that we set out to answer when we started this process.
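A minimal sketch of the "line of best fit" idea Marcel is describing, assuming a handful of past website projects whose estimates and timesheets share the same "design" bucket. The numbers are invented, and plain numpy stands in for whatever tooling you actually use.

```python
import numpy as np

# Invented history: (website budget in dollars, design hours actually logged),
# possible only because estimates and time entries share one schema.
budgets = np.array([20_000, 35_000, 50_000, 65_000, 80_000, 100_000])
design_hours = np.array([55, 90, 140, 170, 210, 270])

# Fit design_hours ~ slope * budget + intercept.
slope, intercept = np.polyfit(budgets, design_hours, 1)
print(f"Plan ~{slope * 1000:.1f} design hours per $1,000 of website budget")

# The residual spread is the interesting part: a wide spread means the work
# is unpredictable, which may point at a process problem, not a data problem.
predicted = slope * budgets + intercept
print(f"Typical miss on a prediction: +/-{np.std(design_hours - predicted):.0f} hours")

# Planning a new $70k website:
print(f"Suggested design budget: {slope * 70_000 + intercept:.0f} hours")
```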
[00:18:08] Galen Low: I like that idea that there's predictable stuff and there's stuff you can't predict. Even just that lens is really important, because we're talking about, you know, chaos, unplannable things, right? Uh, and our desire as humans is to solve for it. Um, but the other thing that really resonated with me about what you're saying is that we need to have good, clean data, but I think this is where the tech comes in, because I remember building a skills matrix for the staff at the agency I was working at.
And it was like, hey, on a scale of one to 10, how good are you at Drupal development? They're like, 10, I guess? You know, there was so much to collate and put together, and we were doing it in, like, Google Sheets, and it was just a mess. And our intention was good. We were like, okay, how do you like to work?
Where are you? You know, what kind of projects are on your career roadmap, and all this stuff. And we were trying to sort of plan it all out, but the volume of data for us to put into our consideration as humans was just too much. Um, and so as you're saying that, I'm like, wow, that's a lot of models.
That's a lot of taxonomy. Like, what are we going to do about it? But I guess that kind of leads into that whole notion of like, yeah, this is where tech, this is where technology can help.
[00:19:18] Grant Hultgren: Well, and you can architect it all right, and you can get it all set, but it's going to change in an hour. That client's going to, you know, they'll sign that statement of work, right?
That change order, they'll cut their scope. COVID will hit, right? Like, now do we need an office? Is that an overhead cost we have to have? And I think, exactly to Marcel's point, you construct it and then you've got to put the cadence in. And what we're even hearing in the chat is: how do we align the individual contributors who are on the front line, who can inform this data in the best possible way, to make those incremental changes? Like, everything Marcel just said is, frankly, why our company exists.
I know he's agnostic, right? But that is what we're trying to codify in this data. Um, but it's the exact same mindset, the exact same means of measuring these different inputs that come into different solutions. And you can hire consultants like Marcel, who can help you through that, or Ann, who's going to say,
yeah, actually, here's how I'd go about it with your team. Or you try to codify it yourself. But the problems are still all there: how do we get it to every layer, from owner all the way down, so we're all aligned, and we know where the precision comes into play, we know where the accuracy is, and we all understand where the value is?
[00:20:34] Galen Low: Let's go there, because I think it's a really interesting question, you know. Um, I see the comment from Matthew in the chat: clean data seems like a myth. And probably not from a data standpoint, not from a taxonomy standpoint, but from a human standpoint. I'm wondering if I could throw this one to Ann, just on the human component: how do we get people on board with the idea that they will benefit from tracking more data?
People already hate time tracking. Marcel, you posted about utilization the other day on LinkedIn; these are things that get people tied in knots, and they can really damage morale. And now we're going to ask them to record more stuff, right? More data, more models. How would you approach that, in terms of getting people bought into the idea that this is going to be helpful?
[00:21:23] Ann Campea: This feels like a, you know, a very loaded question, but I think it's tough. It really is tough to get all the layers aligned. And what I'll say is, a shout-out to Marcel: you put up a post on LinkedIn about, you know, trying to come up with that magic formula that your stakeholders can buy into, to say, this is how we are aligning to measure
resources, capacity, all of that good stuff. And Grant, you just called out that things change all the time. So how do you control for that moving variable? I think it's about aligning back to what Marcel was saying. If you can at least get the buy-in to a structure, even if it's, you know, something vastly different from what the other agencies are doing, but your leadership team, your culture as an organization, are all opting to rally behind what that formula is to capture this data, then at the very least you've got the buy-in, which is one of the biggest blockers that you might encounter. Because we can spend all this time building, we can spend all this time enhancing our tools, building out the reports, gathering the time tracking, you know, squeezing the time from our employees to say, well, how long is it taking you to do different types of projects of different complexities?
How many can you take on in a given day or week? You can gather all of this stuff, but if you don't have that buy-in, that then becomes your biggest blocker. It's again going back to that human element. And I think what I found in dealing with trying to get stakeholders to buy in is that sometimes there's this assumption that, you know, you might be promoting very, very objective quantitative data.
And I do think, you know, if we're being real and honest about the data, going back to that good clean data, it comes down to the stakeholders understanding what it is you do have. I always present my data by saying, you know, this is the time study; there's going to be a variance in this time study because you're asking humans to enter the data, right?
So always consider that there is going to be a bit of subjectivity, even in the quantitative data that you're gathering. Which then, when you present it to your stakeholders for buy-in, you've got to just keep it real in that way. You can't just say: this is the data, this is clean, this is what we're going off of,
it's a hundred percent accurate. Because, as we've all said already, it can't ever be that way, but it can get close. And I think that's number one in terms of selling that formula to your stakeholders.
[00:24:01] Galen Low: I love that approach because it's collaborative, right? Intrinsically, what you're saying is, like, you know, don't just send it down from the top and cram it down people's throats.
Um, but also, I think it's the framing of it. I think we do have that tendency, especially in resource management, to be like, this is the definitive thing, everyone's week adds up to 40, you know, it's gonna be perfect. Whereas, you're right, it's better if we present it with that variability.
Um, Marcel, I don't know if that ties into, like, the sort of precision and accuracy thing. Mm hmm.
[00:24:30] Marcel Petitpas: Yes, it does. So I have two things on this. Um, we talk a lot in the industry about these tactics to improve compliance. You know, could you gamify it? You get the Slack bot that puts people on teams and shows compliance.
You, you know, get the thing that tracks the time based on what they're doing on their computer and helps them fill out their timesheets. You do resource-plan-based pre-filling of the timesheets. And all of those things are very, very helpful, but to me the underlying issue is what Ann just mentioned, which is the buy-in.
And another big component that I find often happens is the loop isn't being closed. And so the ICs in particular that are being asked to enter time are never privy to the actual conversations, decisions, or reports that are being created as a result of that. And so they're left to create their own story in their mind about how
that time is being used, and it's rarely going to be a flattering one. No matter how well-intentioned you are about telling them how you're using it, they need to see it to believe it. And to Ann's point, there's value in having them involved in that conversation. And so what I've found is a lot of management teams are hesitant to close that loop
and start showing the data to their team, because they're like: oh, but we know that the data is not good yet. And so it becomes this chicken-and-egg game. And what I always encourage them to do, and we've done this experiment with clients a number of times, is to have the meeting and pretend you don't know that it's bad.
Right. And what you'll start to see is the way that maybe not having a really deliberate set of KPIs that you measure with the team creates incentives that are actually the opposite of what you want. So, like, hyper-focusing on utilization creates a reaction that is often not desired.
And so when you go into that meeting, let's say, for example, you were really, really focused on budgets, client budgets, and you have a compliance issue. You go into the meeting and you're like: guys, this is incredible. We were under budget on every single project. Our clients are thrilled.
You guys aren't even that busy. I could go sell twice as much work next month and you still wouldn't be at capacity. This is incredible. High five, everyone. This is great. And all of a sudden you might find the team goes, wait, wait, hold on a second. Maybe all of our time wasn't in the timesheets. Like we definitely can't handle twice as much work next month.
And you're like, oh, geez, I'm so glad you told me that. Because this is the information that I'm using to plan into the future. And, you know, I need this to be accurate so that I can make sure I don't put too much work on your plate and that we're resource planning things effectively, and then you might find that the pendulum swings in the other way.
And then you have a different discussion about: hey, you know, we're going over budget a lot, I don't know if this is sustainable. But over time, I think you'll find that, as long as you're using the data correctly and having that conversation earnestly and from a place of curiosity, the pendulum will swing towards truth, and the team will really start to understand how this serves them and protects them.
Um, I think that's really important. And the last thing that I'll add on precision and accuracy: another really good thing to keep in mind is, if you're asking your team to track time to the subtask within the task, within the milestone, within the deliverable, within the phase, within the project, it's too much, right?
That's way too much friction. Simplify it. And what you'll find is you'll have larger buckets of time that are actually more meaningful. You'll be quicker to get to statistical significance in terms of your insights. And you'll have higher compliance, generally. And the same thing is true about resource planning.
Individual allocations are not the path to a six month forecast. That does not work, right? That's way too much surface area. And there's way too many things changing all the time. So like match the methodology to the purpose of the question that's being answered. [00:28:00]
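One way to read "match the methodology to the question": for a multi-month forecast, compare demand against capacity per bucket per month rather than per individual allocation. A toy sketch, with invented capacity and demand numbers:

```python
# Monthly capacity in hours per bucket (e.g., 5 designers x ~130 billable hours).
capacity = {"design": 650, "development": 1040, "project_management": 260}

# Forecasted demand per bucket, from signed and pipeline work that was
# estimated with the same bucket names; no per-person allocations needed.
demand = {
    "2025-01": {"design": 580, "development": 1100, "project_management": 240},
    "2025-02": {"design": 700, "development": 900, "project_management": 250},
    "2025-03": {"design": 400, "development": 700, "project_management": 180},
}

for month, needs in demand.items():
    for bucket, hours in needs.items():
        utilization = hours / capacity[bucket]
        flag = "OVERBOOKED" if utilization > 1.0 else "ok"
        print(f"{month}  {bucket:<20} {utilization:>4.0%}  {flag}")
```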
[00:28:00] Galen Low: I'm like, where's the spicy emoji icon for the chat?
No, but I love that. In my head I'm like, that's sneaky, but I get how it pulls out the qualitative aspect of it. And coming back to Ann, what you were saying is that you start with the data, you don't end at the data; it drives a conversation about some of the things that maybe aren't in the data. And I like that, you know, if you can build that culture of depressurizing it. Because otherwise it's like handing bullets to somebody so they can shoot them back at you.
It's like: oh, what? You're only 70 percent utilized? Oh, it took you seven hours to do that five-hour task? Nobody wants to volunteer that data, so they're already on the back foot. But yeah, I like that: simple is okay. Simple is still better than, you know, going overcomplicated trying to make your data quote-unquote perfect.
I thought I'd maybe drive into some storytelling, and I'm going to conflate two questions, because originally I was going to be like, tell me some horror stories, and then, tell me some success stories. But this is a free-for-all. So, you know, we've been talking about things that can hold you back.
We've been talking about things that can really, you know, push an organization forward. Um, let's do story time. Does anyone have a success story of, you know, where they were able to bring about this buy-in and raise the level of data literacy within an organization, so that resourcing and looking into the future is better?
And then the other side: tell me some horror stories. It's almost Halloween at time of recording. Tell me some ghost stories about how, if only they had done this thing, but they got held back because, you know, they hit this blocker, and that's the thing that was mostly holding them back from removing the headaches around resourcing.
[00:29:40] Grant Hultgren: Uh, I can go first. I've got, I think, a win story. Um, I got out of my own way at the digital agency I was working with, and acted on exactly the pep talks that Ann and Marcel just delivered, which is: you've got to trust your team, right? Educate them and help them understand why this is important.
So we went through, um, why we're calculating a margin on projects, what that actually means, cost per role, right? Like, we weren't giving out what we're paying people, but we started generalizing and showing, like, hey, a senior developer relative to a junior developer, when we're resourcing, this is why it's important, right,
if you're going to be held accountable to hitting a margin. Um, and you know, I think a lot of times you're like, oh, is this going to come back to bite me? And there are all the reasons why you shouldn't do that, right, especially if you're in management. Um, we ended up doing it, and within a year, the greatest success was not a single KPI, right?
We didn't sit there. We hit our growth goals. We did all that, you know, that's great. But we had three people start families. We had two people buy homes. And I remember sitting down with the owner, the CEO at the time. And he said, this is the most fulfilling year I've ever had. Like this is what I set out to do for the team.
And oftentimes we're so worried about: do we have work in three months? Right? And that worry is never going to go away. In fact, that's kind of the stewardship of leadership, in my opinion, and I've come to learn that. But the greatest win we had was those team members demonstrating with their life choices that they had confidence in where they were at. And you know what, they might've moved on,
they might've taken different roles, but I think when you do this right, there are moments where you can sit down and say: you know what? I got it right for that moment. And maybe I didn't get it right the next day, right? Maybe I needed that data. But there's a reason why we're talking about AI relative to this: because it's not solved.
If it were easy to solve, any one of us, probably people in the chat, would've solved it by now. But it takes a lot of effort. And, like I said, I'm a wet blanket, but in that moment I could sit back and say: yeah, I'm really proud of what we did as a team. I'm proud of the decisions I made and the trust that I was able to showcase.
And I think that's the core understanding here, right? We'll always need to work on the data. We'll need to work on the process. We'll scale, we'll contract, we'll get lean and mean during 2024 with hopes that in 2025 we'll expand on those service offerings that we're developing now. And that all takes effort. There's no
shortcut in this path for digital agency owners or participants. But that's, in my opinion, what we're signing up for, and when we can collaborate and work together, we can actually make it an easier path for each other. And I think that's where it is: why are we doing this? For our project managers. Let's not burn out our senior dev.
Let's not be blind to where we need to help our junior devs or our designers grow. Let's help inform the business, exactly to Marcel's point, with accuracy, not precision, because when you look at a data point of one, you're myopic. When you can start to aggregate it all together, you can start to be strategic and inform a business roadmap that will benefit those individuals and allow you to put growth paths in front of them, so that they actually have professional development and fulfillment in their lives.
And it sounds very aspirational, but having lived in that moment, even for that fleeting moment, I still look back at it. And now I can also share some horror stories, right, where I got it wrong. But that was a moment for me that was really special.
[00:33:12] Galen Low: I love that story because, you know, I started with: oh, how can we improve data literacy?
Um, but where it went was the why. It's not just that if you understand data better, you'll understand the reasons why we're gathering this data. The why is growth, and not just agency growth. It's personal growth. It's career growth. It's growth for your peers. Um, and we're all here to do a thing, so our livelihood is okay,
um, and hopefully improving. And that's a pretty solid why. It might not work in every work culture; I know some folks in the audience here are like, yeah, that would never land, it's totally to line the pockets of the execs, it's totally for a yacht. Um, you've got to be genuine, I guess, but I love that story and that framing.
[00:33:50] Grant Hultgren: Gross not linear, right? There is no, if you do this, you will get this, right? Um, the market gets to tell you what's successful and not. [00:34:00] And so being adaptive and iterative, I think that's what that, Even in terms of data literacy or data implementation, that's the piece that it's, are we trying every day to inform what we're actually working on?
Whether it's a timesheet, whether it's a project task, um, and there are efficiencies there, so we don't kill ourselves in the process. Love it. Love it.
[00:34:23] Ann Campea: It's very hard to follow up Grant's story. But I would say, you know, and not to get too specific, but I think it's in those moments where you feel like the hero in this role,
sometimes, when you can take the data, tell a story to the audience, and have them buy into whatever it is that you are looking to achieve. And in my case, oftentimes it's preventing my team from burning out, talking about additional resourcing and staffing: how do we get a better understanding as an organization around the complexity of the work, how much of a toll it's taking on our staff, and what we can do to better balance that?
So it's in those little wins. Um, and I think that's a really important piece in those conversations day to day, where you are able to use the good-enough data and paint a picture for your leadership team around what's happening with the work and with the employees, and how everyone is really in it for the same goals, right?
We, we want to be profitable as a business. We want to be able to service our clients. We want to be able to put quality work out there. Uh, but the data will tell you that story of what's going on with your resources.
[00:35:36] Galen Low: Love that storytelling aspect for sure. What about horror stories? Marcel, you got a story?
[00:35:46] Marcel Petitpas: Yeah, I'll share a turnaround story, um, of, you know, a client that we worked with for a long time that on the surface, and this is so true about a lot of agencies, looked like they were absolutely killing it. I mean, [00:36:00] incredible logos on the website. The work was unbelievable, creative. They were winning awards, right?
And then you look under the hood, and, you know, what you found was this culture of very creative but super burnt-out people. They were on this feast-or-famine roller coaster of going from being, like, 150 percent utilized, everybody working crazy hours through the weekends and evenings, to then going into these long seasons of having no work and having to lay people off.
And of course, all of the common challenges that you hear inside the agency: PMs pointing to sales, saying that they got set up for failure; sales pointing to the creatives, saying that they can't control their egos and they blow through budgets every time they do something; just a complete misalignment of expectations, and a really problematic intrinsic relationship between time and money
as it related to scoping and pricing, where the only way to change one was to change the other. So it's like: oh, the client doesn't have as much budget? Well, let's pretend that this is going to take less time than it actually will. Or: the client has more budget? Then let's pretend that it's going to take more time than it actually will.
Instead of just acknowledging that these two ideas are related but actually very distinct, and that a pricing conversation is for the client and a scoping conversation is for us. And so, um, we came into that organization and went through the things I talked about earlier. We started by separating those things.
We got clear on what the structure of the data was going to look like. We started tracking what was actually happening. And over time, we were able to start to identify what work was higher risk, what work was lower risk. We identified that one service offering in particular for them, which was video production, was incredibly lumpy.
Anyone that's worked in video production knows they're super lumpy projects. The production days or shoot days are just, you know, absolutely insane; people are there for 18 hours a day, burning out. And it was the lowest-margin thing that they were doing, but it was causing an incredible amount of stress and inconsistency in their business, and a lot of risk.
Whereas they had these other things, like, you know, brand and website design and strategy, that were much more profitable. And so with all of that insight, we were able to do a couple of things. First, create linear algebra for the sales team so that there was no more subjectivity in the scoping or pricing process.
They took the inputs from the client, they punched them into a calculator, and it created a directionally accurate sense. And then it was just a question of tweaking up or down, basically, on the, uh, PITA tax, as I call it, or pain-in-the-ass tax, for that particular client. Which then set realistic expectations for the project management team that they were already bought into.
And anyway, the end result was we took a firm that was experiencing all of those challenges to growing 60 and then 90 percent year over year while simultaneously increasing their profit margin by over 500 percent. But the thing I'm most proud of is that the team worked substantially fewer evenings and weekends and less overtime in the process.
Things that people feel and tend to believe are in conflict with one another. But, um, I share all this to say: this is why project managers and operations managers are so important, because this is the power that you have. Agency operations is really just about taking our assumptions
about reality and projecting them into the future. And so it really comes down to those two things: how sound are our assumptions about reality, and can we use data to remove subjectivity from them and make them more consistent? And then, how effectively can we project that out into the future? To the extent that we can refine those two things, we can do a lot of good for all the stakeholders that are involved in our company.
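A toy version of the calculator Marcel describes: client inputs go in, historically derived ratios produce a directionally accurate hour estimate per bucket, and a client-difficulty multiplier (the PITA tax) nudges it up or down. The bucket names, ratios, and multiplier values are all invented.

```python
# Hours per $1,000 of budget for each bucket, standing in for ratios that
# would really come from a fitted model over past projects.
HOURS_PER_1K = {
    "strategy": 0.4,
    "design": 2.6,
    "development": 4.1,
    "project_management": 0.9,
}

def scope_project(budget: float, pita_tax: float = 1.0) -> dict[str, float]:
    """Directionally accurate scope; pita_tax > 1.0 for higher-friction clients."""
    return {
        bucket: round(rate * budget / 1000 * pita_tax, 1)
        for bucket, rate in HOURS_PER_1K.items()
    }

# A $70k website for a client who needs ~15% more hand-holding than average.
print(scope_project(70_000, pita_tax=1.15))
```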
[00:39:31] Galen Low: I love that. And, uh, just as my quick story: I was at a table a few weeks back with some folks in higher ed and some agency folks, and we were, you know, talking about time tracking and how problematic it is. And they were all like: y'all track your time? That sounds great, because right now I have no way to tell my manager that I'm over capacity, other than anecdotally.
Right? And you kind of start seeing this story of not just, how are we going to do this thing, but, should we do this thing? Should we continue to do this thing, you know, where it's burning people out and it's low margin? And, you know, when we have that data, we can start that conversation. Um, I think that's really, really interesting.
Um, I want to get into some of the questions from the audience, because I see some really good stuff coming through. But I thought maybe what I'd do is cross-cut one, because we have a question about generative AI and data, and, like, ethics and the security of that data. Um, so I'm going to read the question and I'm going to build on top of it.
So the question is: if we're putting project data into a GenAI tool, how do we ensure the security of that data? Like, for example, do we need to use a tool that the company we're working for has paid for, or can we use something else? And then I want to intercut that with: where is the ethical line, too, right? Where we start gathering all this information about people, you know, maybe with the best of intentions, like work preferences, and skill level with a tool, and how much time it took them to do something similar last time on a similar project.
And then, yeah, where are we drawing a line that says: actually, you know what, that's pretty invasive, in terms of where this data goes and how we wield it? Uh, I know I just conflated two really big questions. But I thought I'd throw it out there just in terms of, yeah, the security of data in tools, and sort of corporate policy and compliance there.
And then where does the rabbit hole go? I don't know. Maybe I'll throw it... can I throw it at Grant?
[00:41:29] Grant Hultgren: Sure, yeah. Um, there was a marketing agency that I worked with, and I think even when you start looking at how generative AI in particular is affecting production-level marketing output, whether it's blog posts or social posts or whatever it may be,
um, you know, I think we're seeing it more and more, and we're seeing, like: okay, is the quality there? Maybe not, but the volume is getting out there. And the first thing we did was, one, you know, are we disclosing this, that we're using it? And what does that mean? Right? So internally, we had to meet as a team
to say: are we going to use this? Like, are we okay with this? Is the quality there? If it is, great, let's use it to our advantage; it's another tool in our tool belt. But I think there's got to be that internal conversation, especially if you're going into the market with it on a customer's behalf.
If it's internal data, that conversation, at least in my opinion, gets a little bit safer, in that you can go and make sure you're compliant. For us, right, we're SOC 2 compliant, we're GDPR compliant, all that, right? And so we know there are limitations to that. We cannot do some things, and we cannot use AI on some of that data relative to our customer data.
And that's just what we've signed up for, right? That's all part of it. But there are components to that too, where we can anonymize it and all sorts of stuff, which would help ensure that we are not being too prescriptive on where that data is coming from, you know, naming particular customers or anything like that.
Um, and I would just encourage that for anybody in the agency world, right? Unless your customer has explicitly said that you can, I would just assume you can't, right? Because it's just not worth risking the relationship, in my opinion. Um, many deals today are based off of that personal relationship. Even my whole background has been: if I empower the right people, they'll find the right people, which would be our customers or clients, who would give us the projects, which generates revenue.
And even on that basis, it's an interpersonal relationship. And so I always err on the side of caution when it comes to aspects of this. At the same time, um, there are generative AI tools used throughout chatbots, throughout... I was looking at ClickUp more recently; one of our customers is coming from that, and it's like, well, that's pretty powerful. But they're using it at stages that are not informed with details.
They're using them to inform, and I saw Ann was kind of mentioning this, inform what that methodology could be, right? And now I'm going to tailor that, and I'm going to use my craft and my experience to customize at that point. But at that point, generative AI has only informed the path. It's not giving you the diagnostic or giving you the exact next steps that you are
bringing to a customer straight on. And I think there are kind of clear lines of delineation, morally, at least in my mind, on that. But it's a tricky area. Like ChatGPT, they're getting sued over what they're using as source data, even, right? And we're all benefiting from that, right?
Like, I don't think it's resolved yet.
[00:44:40] Galen Low: I like that you called out compliance though, because, you know, there's a proliferation of tools out there, some of which have probably gone that step to be like: listen, security is our priority. And some of them are just MVPs out there to figure out if there's a market, if there's demand for this tool, and it's a lot less.
[00:44:58] Grant Hultgren: Yep, a hundred percent. And we've seen it, and we've been really careful, even internally, on what we can or should do. Um, because just because you have the data doesn't mean you should be sharing it with these tools either.
[00:45:16] Galen Low: Oh, love that. Love that.
[00:45:18] Ann Campea: I want to touch on the ethical line piece just a bit, just because,
you know, we hit on earlier that GenAI in itself carries a little bit of bias, right? So you have to be really cautious when using it as well, because, as Grant pointed out, ChatGPT is in a little bit of hot water right now just because of their sources. Um, you know, for those that are embedded in project management: PMI, the Project Management Institute, has come out with their own kind of AI platform as well, which they're promoting to say, hey, all of this
is coming from vetted project management sources, right? But I think the ethical line needs to be drawn around how you are [00:46:00] utilizing the data. I don't know if everyone has been in that situation where you're presenting data to a stakeholder and they say, Uh, great, but that's not the data that I need to be able to get what I want.
And so you have to be ultra-cautious, PM or not, when you're using data, especially if it's coming from a GenAI source: it's there to inform you, as Grant has said. But also, what are you being asked to produce with that data? Are you crossing that ethical line in terms of trying to doctor the data to
present a certain purpose for somebody? So just be very, very careful there, because I've been in that situation, and I imagine some of you might have been in that situation, where they're trying to take that data and use it for other purposes that are really, definitely, crossing that line.
[00:46:54] Galen Low: I love that.
There's the other side of data storytelling, right? Like, where you're kind of a nail looking for a hammer. I don't know, is that how that goes? Anyway, it's hammer looking for a nail. Um, I think that's actually maybe a good transition too, because there's another question here, which I think is a really good one, that ties back to not just the ethics but the humanity of it.
Um, the question is: is the thinking that, in the future, the tools will be able to accommodate for the humanness slash project-complexity piece, or will this always be a manual intervention?
[00:47:39] Marcel Petitpas: I have an opinion on this. Um, I find it very, very hard to fathom that this will be able to be completely automated from end to end. And, uh, I'd like to be proven wrong on that, but I think that we are years and years away from getting to that point. Just look at self-driving, for example: we were talking about self-driving cars being a real thing, like, a decade ago, and we're still really far away. And the reason for that is that last two percent is really, really hard to get to. It's the reason that we still have bookkeepers and accountants, even though finance has been a thing and QuickBooks has been a thing for decades and decades, right?
Um, you can get 98 percent of the way there. And I think there is a ton of leverage, especially if you have, you know, a really deliberate data structure and data pipeline in your organization, to make that a hyper-efficient process. But there's a certain amount of judgment that is going to be really, really hard to have AI replace.
And I think that even if you do have a data pipeline that's fully automated, it's going to be very, very important to have high observability, because this is one of the inherent challenges that we've all experienced with ChatGPT or with any of these other GenAI tools: you feed it a prompt, it gives you an answer.
But how it arrives at that conclusion is a complete black box. And that's very [00:49:00] problematic when you're talking about, you know, making changes and transformations to data or making judgment calls about, for example, um, you know, this person logged a 99 hour time entry over the weekend. Well, is that real?
Or is it because they forgot their timer was running? Um, a designer, uh, logged some time to project management. Is that a mistake? Did they choose the wrong task category? Or is it actually because they got pulled in to do project management work? Right? These kinds of judgment calls are going to be very, very difficult for, uh, an AI tool to consistently and accurately.
be able to make the right assumption on. And even if they do make assumptions, we're probably still going to want to have at least a final check on some of these edge cases for a human being to say, you know, is this congruent with reality?
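To make that concrete, here is a minimal sketch of the human-in-the-loop check Marcel describes: flag suspicious time entries for a person to review, rather than letting an automated pipeline silently "correct" them. The field names and thresholds here are hypothetical, not taken from any specific tool.

```python
from dataclasses import dataclass

@dataclass
class TimeEntry:
    person: str
    role: str           # e.g. "designer"
    task_category: str  # e.g. "project management"
    hours: float
    is_weekend: bool

def flag_for_review(entry: TimeEntry) -> list[str]:
    """Return reasons a human should double-check this entry.

    These thresholds are illustrative; tune them against your own data.
    """
    reasons = []
    if entry.hours > 16:
        reasons.append("implausibly long entry (runaway timer?)")
    if entry.is_weekend and entry.hours > 8:
        reasons.append("long weekend entry")
    if entry.role == "designer" and entry.task_category == "project management":
        reasons.append("role/category mismatch (mis-logged, or pulled into PM work?)")
    return reasons

# Flagged entries go to a person, not straight into the forecast.
entry = TimeEntry("Sam", "designer", "project management", 99.0, is_weekend=True)
for reason in flag_for_review(entry):
    print(f"Review needed: {reason}")
```

The rules are deliberately simple so that anyone can see why an entry was flagged, which is exactly the observability Marcel is asking for before a human makes the final judgment call.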
[00:49:45] Grant Hultgren: Yeah, I would absolutely agree. Honestly, for me, the meme is: I thought AI would help me do my dishes, not make art, right? So let's start there. Let's start with automating tasks and giving us those prompt points to [00:50:00] say: you know what? You just added this task. Do you think you should account for more time for Marcel on this project? That's a great prompt. Let's go for efficiencies first. It will naturally learn from that, I think, to help it build out its models. But I don't think I would trust AI to run a business right now. How do you predict COVID, and the results of COVID? There are huge events that have occurred even in the last three years that would effectively fracture a whole model and change its course in very decisive moments.

And I think Marcel hit it right on. COVID is hopefully not going to happen all that often in terms of lifetime events, but there are comparable events at a per-project level. Even morally: should we write off that time because we were inefficient, or should we bill for it because it's actually [00:51:00] part of the financial agreement, the client was asking these questions, and it actually made a better project, so it's got value? Those are really hard things to decipher between human relationships and what the right judgment call is.
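As a rough illustration of the "prompt point" Grant describes above, here is a sketch of automation that nudges rather than decides. The event shape, names, and threshold are invented for the example; a real version would hang off whatever events or webhooks your PM tool exposes.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimated_hours: float

def on_task_added(project: str, task: Task, assignee: str,
                  hours_left_in_plan: float) -> str | None:
    """Return a human-facing nudge, or None if the plan still fits.

    Automate the prompt, not the decision: a person still chooses
    whether to rebalance the resourcing plan.
    """
    if task.estimated_hours > hours_left_in_plan:
        return (f"You just added '{task.name}' to {project}. "
                f"Should you account for more time for {assignee} on this project?")
    return None

nudge = on_task_added("Website Redesign", Task("Extra QA pass", 12.0),
                      assignee="Marcel", hours_left_in_plan=6.0)
if nudge:
    print(nudge)
```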
[00:51:16] Galen Low: I like that notion of a human in the driver's seat. But I want to go to a gray, or what I think is a great, area, which is: you're getting advice from your gen AI tool, and it's like, do you really want Tony on that? Tony's been slow. Tony's really slow, but Sergio is available and Sergio is fast. And Tony just keeps going down and down and down, there's no professional development plan, and Tony's on the bench all the time while Sergio is getting all the work. Is that an area where we would want the tool to be advising us?
[00:51:51] Marcel Petitpas: No one wants to touch this one. Surprise. [00:52:00] I'll comment on this, and this is just my calculus as a CEO. Those decisions, and conversations, and thoughts happen. They just do. That's our job as executives. And so again, I think the question is: what is the thought process? What is the set of values? What is the set of standards? What is the decision-making process that goes into those things, which are going to be a reality of running an organization? And if you can use data, gen AI, any of these other tools to help enable that process, I think there's a lot of good that can come of that.

I'll say that, as a person who's responsible for making those decisions, and everybody else on this call, I'm sure, has been in that position: having more objectivity is a gift, because it can be very, very challenging to go into that conversation clouded by your emotions. We're all human beings. So I think as long as you set some deliberate guardrails down [00:53:00] around this, and this is going to be my recurring theme: there's thoughtfulness that needs to go into the implementation, and there's maintenance of that system that needs to go on thereafter. I think it can become a really, really great lever, either to walk you off the ledge when you're being emotional and about to do something you shouldn't, or to walk you toward the decision that might be emotional but is actually in the best interest of the organization, which is your fiduciary duty as an executive or a leader on that team. And I think that could be a positive thing if it's thoughtfully implemented.
[00:53:33] Grant Hultgren: Yeah. People don't quit their jobs, they quit their managers, right? And if you are a manager or an owner or a COO who is there to profiteer off people, I hope your whole team quits. That is not the intent here. And if we're using technology to enhance and make that more efficient, whoa, that's a dark path to go down. But conversely, I would say: [00:54:00] for every Tony, hopefully there's an Ann who's excelling, who's junior and showing such a great growth trajectory that we want to put her on more projects, but it's got to be the right ones. And how do we enable that learning so that they're finding fulfillment? I think if we can balance there, there's no doubt. I mean, there are times when companies struggle through no fault of the salespeople or delivery or whatever it may be. But we've got to balance that, exactly as Marcel is saying, with what our intent and our values actually are, because then we'll actually have people who want to work for us, hopefully, because we can be fair in our judgments.
[00:54:39] Galen Low: I love that lens too: these are difficult decisions, no matter how much technology you have. Sorry, Ann.
[00:54:45] Ann Campea: No, you're good. I was just going to tag on to what Marcel and Grant have been saying: gen AI is here to partner with us, it's here to inform us; it's not here to direct us as human beings. I love that PMI has this [00:55:00] approach of humans in the loop, always. Anytime you're interacting with generative AI, have a human in the loop just to have that checkpoint.

And, interestingly enough, I fed this question into a gen AI tool, and there are two things it said that I would call out as really valuable to this part of the conversation. The first is that gen AI can help take care of the boring stuff so people can focus on creative, meaningful work. The second: if we think about how leadership has changed over time, the qualities of a good leader have shifted from being more of a manager to much more of a leader who guides with empathy, that sort of mentality. And what the AI spat out was: machines can't give you a hug, or understand your bad day like a real person can. So I'll leave it with that.
[00:55:58] Galen Low: Well trained. Well-trained AI. [00:56:00] Oh, listen, I know we've only got a couple of minutes left here. First of all, I wanted to say thank you to everyone who's here today, and also to our panelists. I've been loving this. I know some of you probably need to peel away to your next meeting, so I just wanted to say: if you liked this, I would love to see you at our next event, which is actually about conscious leadership for PMs who take on too much, with our speaker Matthew Fox, who I believe is actually in the audience. Matthew, wave and say hello. You can RSVP using the link that Michael is posting into the chat. And of course, if you are a guest today and you want to learn more about us, head over to thedigitalprojectmanager.com/membership; you can read all about us there.

The last bit I wanted to tuck in, and I think we can probably answer one more question, is that we love feedback. We thrive on raw, honest feedback. If that's your jam, I would love to hear from you. Michael has posted a link to a Typeform survey. Tell us what you enjoyed; tell us what we can [00:57:00] do better. Honestly, we do this for you. So if we can improve, if we can do something differently that makes it more valuable, then I want to know about it, and we can work it in. So there are a couple of links up there in the chat.

And I think I have time for one more question. Marcel, I know you might need to peel away; I don't think that's rude. You can salute us off. But I wanted to get to this one question. Let me read it out, and then we can go from there.
The question is: is collecting and presenting data as a range, rather than an absolute, a way to mitigate some of the human factors? From that question, I'm gathering this notion of, if we're saying, listen, this is not, I guess, precise (Marcel, I hope I'm using that right?). It's not precise; it's a range, and the rest of it is [00:58:00] human stuff. Can that mitigate some of the misunderstanding of the unplannable factors?
[00:58:06] Marcel Petitpas: This is a perfect example of precision versus accuracy. The example I'll use here is: what's the weather going to be today? I could tell you it's going to be 76.4 degrees, or I could tell you it's going to be between 72 and 78 throughout the day. The latter is less precise, but it's a more accurate answer to that question. One way to think about this is that, generally, the further up the organization you go, the more uncertainty and the less perfection you're dealing with in terms of data. So you're making more probability-based decisions, and ranges are a much better way to have that conversation, because they're more reflective of the reality of what you're dealing with. So I think it's a great idea, and one that we're really embracing at Parakeeto in terms of how we talk about certain data points and how we help facilitate certain conversations and decision-making. So yeah, again, it's: does that feel like a more accurate [00:59:00] model for what's being talked about? If that's the case, then I think it's a safe bet to say, let's give that a try, and, importantly, see how it impacts the conversation.
[00:59:08] Grant Hultgren: Boom. I love that. Quickly: try maintaining a precise forecast four weeks out on your work and see what the difference is, let alone three to six months. You end up drastically different from what you expected. The accuracy component, that range, is critical to understanding the highs and lows. That's what I used as my data point, right? It's great for informing which clients I want to take on, which ones I want to jettison, which work would work and which wouldn't. And I think that's critical to informing that leadership element of where the business is going and how we can help on the day-to-day.
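A minimal sketch of what presenting a range instead of a point estimate can look like in practice, assuming you have some history of actuals to draw on (the numbers here are made up):

```python
import statistics

# Hypothetical history: hours a similar team actually logged per week.
weekly_actuals = [38, 52, 44, 61, 40, 47, 55, 43, 49, 58]

# A single number is precise but overconfident...
point_estimate = statistics.mean(weekly_actuals)

# ...while a 10th-90th percentile band is less precise but a more
# accurate picture of what next week is likely to look like.
deciles = statistics.quantiles(weekly_actuals, n=10)
low, high = deciles[0], deciles[-1]

print(f"Point estimate: {point_estimate:.1f} hours/week")
print(f"Forecast range: {low:.0f} to {high:.0f} hours/week")
```

The design choice mirrors Marcel's weather example: the band trades precision for accuracy, which is usually the better framing for leadership-level resourcing conversations.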
[00:59:50] Galen Low: Boom. Yeah, I love that. You can't plan with too much precision. Awesome. Well, I think that takes us to time. I know some folks have already left, [01:00:00] but I just wanted to say thank you again, Grant, Marcel, and thank you so much for volunteering your time and being with us here today.

For everyone still in the audience, thank you very much. If you've got a second, please fill out that feedback survey. I love the feedback; just let us know what you thought of today's session. And yeah, I'll cap it there. I think this was a great conversation about data that actually ended up being kind of about humans. So thank you all again, and we will be with you all soon.