AI-powered PM tools promise everything from predictive risk alerts to automated status reports—but what happens when the tool you picked doesn’t actually deliver on the features that sold you in the first place?
In this episode, Galen Low sits down with Emmanuels Magaya, Director of Technology at Tech Legends and Founder & CEO of Project Managers Africa, to unpack what organizations should do when their AI-powered project management platform falls short. Drawing from his experience testing hundreds of AI tools, Emmanuels shares a practical framework for evaluating AI software, explains why organizations shouldn’t rely on one tool to do everything, and walks through how AI agents and automation workflows can augment existing PM ecosystems.
The conversation also dives into AI literacy, predictive PMOs, and why emerging markets have a unique opportunity to leapfrog traditional delivery models—if they approach AI strategically instead of reactively.
What You’ll Learn
- How to evaluate AI tools without getting distracted by hype or “shiny object syndrome”
- Why organizations should focus on use cases before selecting AI software
- What to do when your AI-powered PM tool doesn’t fully meet your needs
- How AI agents and workflow automations can extend existing PM platforms
- Why AI literacy is critical for successful AI adoption across teams
- How predictive AI can help PMOs identify delivery risks earlier
- A practical approach for building AI-enhanced workflows using tools like NotebookLM and n8n
- Why emerging markets have an opportunity to lead in AI-enabled project delivery
Key Takeaways
- Start with the problem, not the tool
Emmanuels emphasizes that teams often rush into AI purchases before clearly defining what they actually need help with. The better approach is identifying repetitive, high-friction work first, then evaluating tools against those specific use cases.
- Don’t expect one platform to do everything
Instead of replacing entire systems every time a limitation appears, organizations can augment existing PM tools with AI workflows, automations, and specialized agents that handle niche tasks like forecasting, reporting, or resource planning.
- AI adoption should happen in layers
Successful PMOs aren’t trying to become fully autonomous overnight. They’re introducing AI incrementally—starting with assistants, meeting summaries, alerts, and lightweight automations before moving into more advanced agentic workflows.
- AI workflows need architecture before automation
One of the strongest insights in the episode is the idea that tools like NotebookLM can help teams map workflows conceptually before building them in automation platforms like n8n or Zapier. In other words: blueprint first, automate second.
- Predictive AI changes how PMOs manage risk
Traditional PMOs often react to risks after problems surface. AI-enabled PMOs can analyze emails, chats, timesheets, and project data to proactively flag delivery risks before they become visible through standard reporting.
- AI literacy is now an organizational capability
Emmanuels argues that AI knowledge can’t sit only with technical teams anymore. Every role—from PMs to operations teams to administrative staff—needs baseline AI literacy to participate effectively in AI-enabled delivery environments.
Chapters
- 00:00 — AI PM tool gaps
- 03:01 — AI tool overload
- 07:25 — Testing AI tools
- 11:06 — Comparing outputs
- 16:57 — PM tool evaluation
- 20:10 — When tools fail
- 27:49 — AI agents in PMOs
- 34:03 — Resource planning workflow
- 45:20 — NotebookLM + n8n
- 51:24 — Predictive PMOs
- 56:35 — AI in emerging markets
- 57:05 — AI literacy
- 01:03:27 — Final thoughts
Meet Our Guest

Emmanuels Magaya is the Founder of Project Managers Africa, a pan-African organization focused on advancing project management excellence through PMO advisory, AI-driven solutions, training, and leadership development. A seasoned PMO advisor, global project leader, and international keynote speaker, he brings more than 20 years of experience delivering high-value technology and transformation projects across Fortune 500 companies, government agencies, and enterprises throughout Africa and beyond. Emmanuels is also a recognized voice on the future of project management, AI, cybersecurity, and digital transformation, helping organizations and professionals adapt to the evolving demands of modern project delivery.
Resources from this episode:
- Join the Digital Project Manager Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Emmanuels on LinkedIn
- Visit Project Managers Africa and Tech Legends
Related articles and podcasts:
Galen Low: Okay, it's been six months since you made your big purchase of an AI-powered project management platform, but a few gaps have started to reveal themselves. The time analysis tools are great, but the resource planning isn't as intelligent as advertised, or the automated status reports deliver seamlessly but you're not sure if you can trust the underlying data, or the proactive risk notifications have helped avoid some landmines but the risk identification itself is too broad for your industry.
So is it time to throw it all away and just start from scratch? Before you do that, give this episode a listen. I've got a PMO consultant who has tested hundreds of AI tools and has built a consultancy around augmenting and enhancing PM tool functionality using AI assistants, agents, and automated workflows.
We're gonna be talking through their method for evaluating AI tools in a world where hundreds of tools are introduced every day. We're going to be exploring what options are available when an organization realizes they may have placed all their bets on the wrong PM software. And we're gonna be stepping through how to augment any PM tool with a resourcing workflow using off-the-shelf tools like NotebookLM and n8n. Hope you enjoy the episode.
Welcome to The Digital Project Manager Podcast — the show that helps delivery leaders work smarter, deliver smoother, and lead their teams with confidence in the age of AI. I'm Galen, and every week we dive into real-world strategies, emerging trends, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it.
Okay, today we're talking about AI-powered project management software and what to do when you realize the tool you've selected doesn't do all the things you thought it would. We're gonna talk about how to test AI tools, how to make specialized tools play well within a broader ecosystem, and we're actually gonna be going through a practical example of building an AI-powered workflow for a specific project use case.
With me today is Emmanuels Magaya, Director of Technology at Tech Legends and the Founder and CEO of Project Managers Africa. Emmanuels is a seasoned tech leader, PMO advisor, and AI consultant. He advises C-suite executives and boards on strengthening PMO capability, portfolio governance, and delivery assurance to ensure that strategy translates into measurable business value. And as CEO of Project Managers Africa, he leads a fast-growing continental consultancy firm dedicated to elevating project leadership standards.
He provides thought leadership at executive roundtables, master classes, summits, and advisory committees. He is passionate about empowering the African community, and he champions governance excellence, capability development, and the future of AI-enabled delivery across Africa. And recently, he's been diving headfirst into AI tooling for program and project management at all sizes of organizations. As part of his role, Emmanuels has tested over 100 AI PM tools, and that's what we're gonna be diving into today.
Emmanuels, thank you so much for being here today.
Emmanuels Magaya: Thanks, Galen, for inviting me to this wonderful podcast. I look forward to having a very productive discussion today.
Galen Low: I'm looking forward to it as well.
We're trying something new for this one, so thank you for indulging me. I definitely want to chat through your experience helping teams find the right set of AI power tools for their projects, and I do hope we go down the odd rabbit hole here and there. But here is the roadmap that I've planned for us today.
So to start us off, I wanted to set the stage by hitting you with, like, a big, hairy question that my listeners want your take on. But then I wanted to zoom out from that and talk about three things. Firstly, I wanted to talk about your insights from having tested so many AI tools and worked with so many organizations.
Then I'd like to step through a practical use case with you for how a more specialized AI tool can fit within a larger project management ecosystem. And lastly, I'd like to get your take on what the impact of AI will be for emerging markets around the world and how that will change the global landscape of digital project delivery.
How does that sound to you?
Emmanuels Magaya: That sounds great. Let's go for it.
Galen Low: Awesome. Let's dive in. I thought I'd start us off with one big, hairy question. When I first met you, you threw a stat at me that there's now, like, 500 to 1,000 new tools launched every day. Some of those are probably enterprise tools, some of them are specialized tools, and I imagine that many of them are vibe-coded apps and other localized home brew tools.
Arguably, a vast majority of them include AI-powered features. So my big question is this: Given that there are so many tools entering the marketplace, thanks to the enabling power of AI, do you think that we're due for a backlash against new tools? Like, what is the upper limit of what the marketplace, and maybe even the human psyche can support?
Emmanuels Magaya: From my perspective, I don't think we will really reach a ceiling, per se, because as humans, we have an affinity for new things. We always want the next, you know, shiny object out there. They call it the shiny object syndrome. We enjoy trying new things, which is why social media is so popular these days, because there's always something new to look forward to, something you haven't seen, something you haven't heard.
So my thinking on this is that we will not really reach a ceiling, but each individual will probably start to have their own standard or their own pace, to say, "If I'm going to move from my current tool set, what do I look for?" It's not just going to be a matter of trying out everything out there.
It will be, "Now I have a basic understanding of AI, where it helps me, where it assists me with my day-to-day work or my day-to-day personal life. Now, what other new tools can I add to what I already have?" So you find some organizations or some individuals, they use AI maybe predominantly for one function, but you find individuals maybe will now test out new tools for other functions.
So for example, if your day-to-day work is a project manager, maybe you're also doing content creation, so you might still keep an eye out for new content creation tools. But on your project management side, maybe you might stick to one or two tools that you're comfortable with.
Galen Low: I like that. I like the sort of, like, purpose and business category angle to things, right?
There's a lot of things coming out. A lot of them, you know, I think the AI hype is that a lot of tools are saying they can do everything, and maybe they can, and I think that can be the overwhelming thing because you're like, "Okay, well, yeah, it seems to fit with what I do." But I like that idea of, like, filtering and narrowing based on where you might need the assist, you know, whether it's in content creation or somewhere else in, you know, the project management process, not necessarily sort of diving into every tool.
And I think just reading between the lines of what you said, like, also, not every tool is made for every size of organization. You know, there are certain things, certain bands that you can kind of look at and say, "Okay, well, listen, that looks like it's enterprise. I'm not enterprise. You know, maybe I don't have to worry about that for now."
But I think bending it around the purpose that you need, where you need the assist, I think, like, that's a great way to look at it. And I think you're right. I think humans, we like tools and we like new things. Like, those are two very human things. And so maybe there's no stopping this machine that we've created, but at the same time, yeah, you can make a lot of candy, you don't have to eat it all.
I thought maybe I'd zoom out a bit because not only are you quite a technical person, but you are also a passionate educator who can explain things in, like, a non-technical way. In fact, with Project Managers Africa, it seems like you're giving PMI a run for its money in terms of flying the flag for project management education.
And then when it comes to tech and AI side of things, you yourself have run ahead and tested hundreds of AI tools so that you can help teams make good decisions around the tools that they choose to use. But, you know, not everyone is as technical as you are. So how do you get less technical folks to start testing and experimenting with AI tools?
Like, where should they start, and what approach should they take to determine if it's a fit for them?
Emmanuels Magaya: Thanks for the question, Galen. So the first step is to understand what exactly do you want AI to do for you, and also based on the job function. So are you testing for work or for personal? If it's for personal reasons, then I would say you can really play around with just a lot out there.
But if it's for work, obviously there are boundaries and guardrails we need to put in place, because there's intellectual property, there are privacy acts regarding data, and all of that. But if you're a non-technical person, the first thing is to check: what use cases do I need to apply? What sort of use cases do I want AI to assist me with?
So for example... don't try to use AI for everything. Look at some of the simple things that you do in your day-to-day that are routine, that are mundane, that take you time. Then find a couple of tools that you then test. So let's use a simple example, which is common: you're a content creator. You realize, "I want a tool that can help me automate creation of videos or, you know, my blog."
So that's your use case, right? You want automated content creation, maybe automated posting to your social platforms. Now, where do I go for the tool? In the past, we used to use Google, and nowadays Google cannot be the go-to for that. It is good because it's integrated with Gemini, but I would say your first port of call would be to take one use case.
So in this case, let's take the first one, automated content creation. You can pick from the common tools, the ones I use daily: ChatGPT, Gemini, Claude. Let's use those three, for example. Then you go to each one of them and type a simple prompt to say, "List for me the latest tools that are used for automated content creation."
So that's one platform you can go to if you really wanna go out of your way to search, you know, a little bit manually. But if you want AI to do it for you, then just go on ChatGPT or Gemini, type, "What are the latest tools for AI content creation?" Then it will likely list a couple of them. The only downside with that is the major tools or our commonly used tools often are biased towards the most popular tools.
This is what I found, and this is what I didn't like in the first place. So in that case, you need to tweak your prompt. It comes back to your skill set in prompting, to say, "Do not list for me the common tools." One way to do it, Reddit is a good source. You're saying, "Search for me around blogs, search for me on Reddit.
What are people talking about in terms of AI tools that are free, or cheaper, or easily accessible, or where there's no sign-in required or no credit card required?" You customize your prompt to say, "Find me the latest tools. If it's project management, find me the latest tools in project management that are free or that are not the common tools.
Categorize them and list them according to ease of use." Another way to find them, a simpler way, which I did in the very early days: I simply typed on Google this time and said, "Can you give me the top five simplest tools to prepare a project plan? I need a Gantt chart. Can you just give me the simplest tool to do a project plan which is not your common..."
I even mentioned there, "Not your common Asana, Jira, and whatnot. I want a tool I've never heard of." I literally typed in the search, "I want a tool that not many people know about." What happens is it will list for you websites that list other AI tools, like AI Factory and whatnot. When you go there now, you actually discover more tools.
That's what then intrigued me to actually say, "Look, there's so many tools that we don't know that I'm not even getting visibility of." But because I searched in a specific way, I was exposed to other listings that had other tools that I'm now exposed to.
Galen Low: What about, like, once you've selected a tool and you've opened it up for the first time, and maybe you're comparing two different tools to do the same task.
Like, what is your methodology or approach to even, like, just start using a tool and seeing if it's the right one for you? Or how do you start comparing one tool to another for something like content creation or project planning?
Emmanuels Magaya: Great. So let's take content creation, for example. I need to see the quality of the output.
Now, content creation is a bit too broad, though, because we've got image generation, video generation, but we can use that. I look specifically at the quality of the output. So what I often recommend when I do AI training is: never use one tool to get AI results. The best way around it is to type the same prompt into Gemini, Claude, whatever tools you have access to, then compare the outputs.
Or another way: I've discovered quite a number of platforms where you can access all these models from one interface. One such platform is Galaxy.ai. You go to Galaxy.ai, and when you type, it checks all the models, consolidates the results of each model, and brings it up as one final result, so that way the output you get is much better and more accurate.
If it's just general prompting, let's say you just wanna find summarization of reports or research, there's this extension called Sider, sider.ai. Sider.ai does almost the same thing that Galaxy.ai does. It's just that Sider is more of a plug-in, or rather an extension, that you install in Chrome.
I'm not so sure if they now have compatibility with other browsers. So you can also use such a tool, sider.ai, right? This is what I need done. You type there, "Can you prepare for me an analysis of..." and it analyzes, 'cause it checks all the other models and gives you a final result. Or you can go to a tool such as Galaxy.ai.
There are several of them. I can list them maybe after this, for the benefit of our viewers. And if it's on the image generation side, again, similar concept. Also, you need to check how frequently the tool is updated. Right now, there's so much competition amongst the major players. They release updates sometimes on a daily basis.
And for me, it's one of the exciting things, but also one of the struggles I have. Keeping up to date with the latest drop on Claude, on Gemini, sometimes it's difficult. They're doing amazing things, these big companies. One other way also is to stay in touch with other AI experts in the industry.
For me, that's one of the things that has worked. There are newsletters I get where I get notified of things. For example, I got to know about ChatGPT putting ads through a newsletter, not even by researching by myself. So when you stay in touch with what other industry players are doing, you're always up to date.
Your peers are kind of feeding you information, you know. Also, Reddit is always a good source because it's people-driven. It's community-driven. When the community is talking about something, they have tested it, they've tried it, they've experienced the highs and the lows, the good and the bad, so they can give you a more accurate picture of, you know, their experiences.
Yes, I think to answer your question: if I were to compare the tools side by side, the quality is number one. How relevant is it to my question? And remember that the prompt you give it determines the output. If you are not good at prompting, you may not get a good result even with a good tool.
So you have to develop the prompting skill. Even just in general AI use, not just in project management, you need to understand what is the best way to write a prompt. There's a way around it. And then also comparing the tools, how up-to-date is it? What sort of models is it connecting to? How much data is it producing as an output?
And what I found sometimes with these tools, there are simple tweaks on the tool which, if you click, you get a different result. And sometimes people don't realize, like on ChatGPT, there's a mode that says fast, then there's a mode that says deep research or, you know, more complex. Often people just jump into ChatGPT, they type, they go.
You don't actually realize with any tool that you have, it comes with so many embedded features that we're not using. So play around with your tools, try out different features. You find that even the tool that you think is not that good might actually serve the purpose. But for specific function type of tools, for example, yesterday I was testing this tool called civils.ai.
It's for people in the civil engineering space. Some of the tools you find are a little bit harder to compare to other tools because they are so much of a niche, and that niche again maybe is based on region, based on industry, so it becomes a little bit harder. That's where now you need to say, "Okay, do I want one tool that does everything, or do I want a suite of tools that I call my go-to AI team, my AI army, where one is my main function, then there's others that complement the tool that I have?"
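For readers who want to put the same-prompt, multiple-models comparison into practice, it can be sketched as a small fan-out helper. This is a minimal illustration only: the model names and `send` callables below are stand-ins, not real vendor APIs; in practice each callable would wrap an API client for ChatGPT, Gemini, or Claude.

```python
def compare_models(prompt, models):
    """Send the same prompt to several models and collect the outputs.

    `models` maps a model name to a callable that takes a prompt string
    and returns a text response. Here the callables are stubs; real use
    would swap in wrappers around each vendor's API client.
    """
    return {name: send(prompt) for name, send in models.items()}


# Stub "models" for illustration only.
stub_models = {
    "model_a": lambda p: "model_a says: " + p,
    "model_b": lambda p: "model_b says: " + p,
}

results = compare_models("List niche AI project-planning tools", stub_models)
for name, output in results.items():
    print(name, "->", output)
```

Keeping every model behind the same one-argument callable interface is what makes the fan-out trivial: adding a fourth model is one more dictionary entry, not a code change.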
Galen Low: I like that. So basically, you're figuring out your use case. Why do I need a tool? Where do I need an assist? You're doing some research. You're finding more than one tool, ideally, to compare. You're putting a well-crafted prompt, the same prompt, into different tools. You're evaluating quality and relevance to see if it's actually going to meet your use case.
And in addition to that, you're looking at the different modes, making sure that you're not missing the forest for the trees by shoving a prompt into it and realizing it's, you know, not what you expected, whereas you could have used, you know, deep research mode and it would have given you a better output. Let me ask you a question though.
You've tested hundreds of tools. Do you have, like, a time threshold or, like, an upper limit? Like, how much time are you spending going through each one of these tools? Because I know a lot of folks, you know, they're already feeling time-starved. There's already hundreds of tools out there. Like, how do you know when you're like, "Okay, yeah, that's enough time I've spent testing this.
Like, I'm moving on to the next one"?
Emmanuels Magaya: That's a great question, Galen. So first of all, if somebody wanted to do what I'm doing in terms of testing, what exactly is your objective? From my perspective, I want to stay a thought leader in the space, so I can't just generalize my testing. I can't just say, "Can it do a Gantt chart?
Can it help with sprint planning?" and if it does that, I'm happy. Now, for me, I have a very structured framework. I actually have a template that I use to test with, too. For example, just at a high level: what size of organization? What specific features does it have that are relevant to a project management office?
Are we working with a hybrid, an agile environment, or a waterfall one? Is it infrastructure projects? What type of project is it? So there are so many criteria that I look at before I actually say, "Am I gonna take five minutes or ten minutes?" So I don't actually have a set time to say I give myself five minutes or ten minutes or twenty minutes.
I have tools I've tested for two days, sometimes even three days, because there's just so much to it. But what I've found is that the error with some of the people showcasing tools out there, especially on LinkedIn and TikTok and whatnot, is that it's kind of influencer-driven, where you're just told, "There's this new tool."
Oh, I remember the time I discovered this tool called Plotato. I don't know if you know about Plotato. It's a tool that automates creation of content, so it can create for you Instagram, all of that. Like, you just give it a concept, give it your colors, it creates content for the next seven days. It's really quite good.
But I discovered it through an influencer who just tests whatever tools they get to test. Literally, if you listen to them, their video is meant to be 60 seconds or at most two minutes, so you can't really give much value even when you wanna talk about that tool. So for me, I go in depth, especially in the project management space.
There are tools that are good at risk management. There are tools that are good at capacity management, you know, allocating resources. So for me, I look at project management: what are our day-to-day functions? What do we do every day? What do we do every week? What are the non-negotiables of being a PM in terms of our work?
Then that's my baseline for project management tools specifically, because I don't just test project management tools, but in our space, project management, I make sure it must serve the functions of a project manager on a day-to-day basis. If it can't do that bare minimum, then when I'm screening tools through my checklist, I can already score it, 'cause I've got a scoring dashboard.
I can already score it to say, "Here, it's wrong; here, it's wrong." So you find simple tools. For example, the day I tested Gantt Chart AI, that was a very easy tool to test. It's very simple. I just gave it, "I want a project plan for an ERP." That's exactly what I wrote: "I want a project plan for an ERP.
Can you prepare it for me?" And it did it, and I could see this is not really a tool I can use for a Fortune 500 company. I wouldn't even recommend it. But it's good for a small business. It's good, you know, for a smaller project. So size of project, size of team, all of those criteria are part of my checklist.
So I have to have a checklist, and I don't have a specific timeframe for how long I test. But at most, I can't go for more than two days tool testing because, you know, that's a little bit too long.
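To make the checklist-and-scoring idea concrete, here is a minimal sketch of a weighted scoring rubric. The criteria, weights, and ratings are illustrative assumptions, not Emmanuels' actual dashboard:

```python
# Illustrative criteria and weights for screening an AI PM tool.
# These are example values, not a real rubric.
CRITERIA_WEIGHTS = {
    "org_size_fit": 3,       # does it suit the organization's scale?
    "methodology_fit": 3,    # agile / waterfall / hybrid support
    "core_pm_functions": 4,  # the day-to-day non-negotiables of a PM
    "industry_fit": 2,       # relevance to the team's industry/region
}

def score_tool(ratings):
    """Turn 0-5 ratings per criterion into a weighted percentage score."""
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    max_total = 5 * sum(CRITERIA_WEIGHTS.values())
    return round(100 * total / max_total, 1)

# Example: a simple Gantt-chart generator rated against the checklist.
gantt_tool = {"org_size_fit": 2, "methodology_fit": 3,
              "core_pm_functions": 2, "industry_fit": 2}
print(score_tool(gantt_tool))  # -> 45.0
```

A rubric like this makes the "screen test" repeatable: every tool gets scored against the same non-negotiables, so a low score on core PM functions disqualifies it early instead of after two days of testing.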
Galen Low: That makes sense. Yeah, there's so many tools out there, but at the same time, they're not all gonna take the same amount of time to test.
I really like that framework that you mentioned that takes into account org size, industry, ways of working or methodology. I think that's really strong. Actually, kinda leads me, you know, to the next thing I wanted to ask you, which is like, you know, right now there's, like, huge numbers of organizations of all different sizes who are making purchase decisions on project management software based on their AI-powered features.
A friend of mine, Olivia Montgomery from Capterra, she published a report that said 55% of all PM tool purchases have cited AI features as being a driving factor. But the only problem is that sometimes teams get further down the path and realize that the tool that they've selected doesn't quite fit 100% of their use cases or ways of working.
So what do you advise teams who have found themselves in a situation where the AI-powered project management tool they've chosen to standardize on doesn't quite fit their needs? Where do they go from there?
Emmanuels Magaya: So in such a case, I would take a step back and say, first understand what you expected from the tool.
It's easy to miss certain functions because some of these tools, what they sell you or what they advertise or what they promote is so good that you kind of focus on that and you don't focus on the other features that you want. So first thing you need to do is to understand what was your original requirement?
What is it that you had as your, I would say, your base for what you were expecting? Then you do what we call a gap analysis. You do a gap analysis of expectation, what I'm expecting to do versus what is it actually delivering. That's the first step. And with most of the tools of today and the challenge also with a large organization, what I would say is there's a lot of consultation that needs to happen.
Is this tool only for the project management team? If it's only for the project management team, that's fine. So let's understand the gaps. Let's do a gap analysis. If we see we're going down a rabbit hole, nothing is really forthcoming, and it's actually messing things up, we actually have to stop the deployment.
So also, at what stage had we adopted this tool? Had we used it for a month or six months? When did we discover the gaps and the limitations? If it's earlier on where we can reverse and, you know, kind of take a step back, then we take a step back, we put a pause, and then we do a gap analysis. What does it have?
What can't it have? And then is there a possibility that the developer can add these features? If there's no possibility that they can add the features, then now we are now stuck with a tool that we obviously will struggle with, and there's probably a financial impact because most companies don't buy month-to-month licenses.
They'll probably buy a 12-month license. You need to check now, if this tool is incapable, what is it breaking? That's the next point. We are stuck with it. What is it breaking? What are the friction points? Where is it actually making our way of work worse? If we really realize that it's messing up our work, then we have to put a hard stop, deployment pause.
Let's revert back to not having the tool, and we now analyze, you know, what we can do in the meantime to make up for whatever impact was caused. And then from there, we need to find a way to let go of this tool if it can't be enhanced. And again, this is where some of the SaaS vendors are very intelligent when they sell you the tool: they lock you into long-term contracts, and they get you to embed it, because if it's a large organization, you're probably gonna integrate it with so many other tools you have.
So the impact of making a wrong choice can be massive. It can even cost millions, and it can also show up in customer satisfaction, customer delivery, and product quality. All of that can be affected by having the wrong tool, because if your tool is not helping, the quality of delivery suffers, the team gets frustrated, and the quality of the product goes down because you're not getting the output you want.
So you need to really check the friction points: where are we, and can we exit from it? If we can exit, or if it can remain but be used by fewer users, then for those fewer users, okay, what functions can they still use? On the tool they have, are there any other functions we can actually recover, you know, from these tools, or is it really as good as just not utilizing it?
Then, and a very important part as well, how much data have we fed this tool? This probably is the most important point after the friction points. How much data have we fed it? Yes, companies often, you know, wanna make sure it's localized, that there's no external access to the data, but maybe we've trained it. It has actually been trained on our data.
Here's a very good point relating to this, an actual practical point. We had an AI masterclass, ToolsSpec, that I taught, and one of the questions that came up was: let's say you've got a tool, you've fed it information, it has learnt your systems, it has learnt your way of work, your company culture, because you're feeding it data continuously.
What do you do now when you need to change? In this case, it's not even that it's the wrong tool; maybe it's just become legacy. So in such a case, you now need to look at how much data you can recover. And the challenge with AI tools, by the way, is that when, let's say, ChatGPT learns your data, learns about your way of work, all that data you feed it, that's not something you can transfer to Gemini, right?
Because this model has its own brain. It's like you can't transfer your brain to me, Galen, and then I function like you. You know, you can try and teach me a few things, but you can't transfer what you've learnt in life, your experiences. So it's the same. The tool has had experiences where on Monday the secretary typed this, Tuesday the PMO manager typed this, Thursday the project admin typed this.
So it has learnt our way of work. So if we discover this way down the line, the impact of reversing will be a lot harder because we now need to retrain a new tool based on more lessons along the way. So the sooner you can discover it, the better. When we're adopting tools, let's make sure we have kind of like a screen test that we give these tools, at least a one-month or three-month trial.
So what I'm even saying is perhaps don't go for a 12-month or 36-month subscription; go for a three-month or a shorter term so that you test this tool, you bullet-test it to see how rigid it is, so that you don't have more work in terms of reversing the damage, particularly the learning part, because the learning part is where it really gets interesting, because now you have to export tasks.
I mean, the data itself, in fact, exporting tasks, that's one part. What it has learnt is where the issue really is, because you might not even be able to recover that, you know? So I would say those are the core steps I would recommend, Galen. And then, okay, you also have to evaluate your alternatives.
Can you even find an alternative that serves your purpose? You might feel this one's not good enough, but are there alternatives that can fit what you want? How much time is it going to take? Are we sure this alternative will not take us back where we were with the other one? That's where I often recommend don't go for one tool for everything.
Allow yourself to have different tools. And with agentic AI now, you know, with AI agents, the beauty of it is you can pull out the agents you don't need and pull in the agents you do need. So if you establish an agentic PMO particularly, or if it's just your normal business, if your business is now AI-powered or AI-enabled to the point where it's an agentic-enabled business where you've got AI agents doing different functions, like on the PMO side, you've got an agent that does, let's say, your risk prediction.
You've got your AI agent that does, let's say, capacity planning. You've got your AI agent that just does the scheduling or the calendar management where it checks people's calendars, it sends notifications. There's an email that has come, "Can you do this? Let's check your diary." So your different agents, you can always pull out the agent that you feel you don't need, and you can carry on.
So I would say find a tool that does maybe your core work, but don't rely on it for everything. That's my take on it.
Galen Low: Maybe let's go there in terms of, 'cause one of the things you were talking about was enhancing or doing a gaps analysis and potentially, like, building a feature that fills that void for a tool.
I totally understand what you mean. I think it's a really good point about how, you know, the further you get down that path with a tool, the harder it is to sort of extract yourself from it because of what it's learned. But I like this idea of, like, AI agents supplementing the feature set of a larger, more enterprise project management tool.
But, like, what can that look like? I think, you know, a lot of folks are, you know, they do wanna build agents, but how can we get these agents to play well together with the broader project management software that we've chosen? Like, what are some of the entry points, or is it more of just like a tool set and the humans are choosing what tools to use for what, and maybe those systems aren't necessarily talking to one another?
Emmanuels Magaya: So when I look at the space, even globally, and in Africa I think we're even a little bit more behind in this, what we're finding is some PMOs don't even know what AI agents are. They haven't even matured to the point of having AI agents, much less an agentic environment where AI agents are literally working with a central system, you know, a full AI-powered PMO.
So what we found in this case is that if you've got a tool already that you have, first of all, does it support AI? If you've been using, you know, the mainstream tools like Monday.com, Asana, Jira, then they are doing a lot of good work to stay up to date with the AI enablement. But not all organizations use those tools for various reasons.
Some companies have standards or, you know, limitations and all of that. They have their own way of work. But we find many companies at least use Microsoft products. So you can then check, if we look at the Microsoft Suite, you can start from there to say just by using Copilot and how it works well with the Microsoft Teams and the Microsoft Office Suite, what are the agents that we can start with from there?
So you're starting it low-key, starting small. Maybe it's just an AI agent that checks in after meetings: let's pull out the transcript, let's create action items, and then we send an email. So that's just a simple use case. You're just having many agents that are doing little things. So that's helping you, because there's also a change management element here, Galen, where the team needs to now adjust to having AI as part of their team.
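The simple post-meeting agent Emmanuels describes can be sketched in a few lines. This is a hypothetical, keyword-based stand-in: a real agent would call an LLM for extraction and an email API for delivery, and all names and phrases below are purely illustrative.

```python
import re

# Naive phrase patterns that suggest a commitment; a real agent would
# replace this keyword scan with an LLM call.
ACTION_PATTERN = re.compile(
    r"\b(will|to do|action|by (monday|tuesday|friday)|follow up)\b", re.I
)

def extract_action_items(transcript: str) -> list[str]:
    """Pull lines that look like commitments out of a meeting transcript."""
    items = []
    for line in transcript.splitlines():
        line = line.strip()
        if line and ACTION_PATTERN.search(line):
            items.append(line)
    return items

def draft_summary_email(items: list[str]) -> str:
    """Format the extracted items as a plain-text email body."""
    if not items:
        return "No action items were detected in today's meeting."
    bullets = "\n".join(f"- {item}" for item in items)
    return f"Action items from today's meeting:\n{bullets}"

# Hypothetical transcript snippet
transcript = """
Galen: Thanks everyone for joining.
Priya: I will send the revised budget by Friday.
Sam: Noted. Action: update the risk register before Thursday's review.
Galen: Great, see you all next week.
"""
email = draft_summary_email(extract_action_items(transcript))
print(email)
```

In practice this would be one small agent among many, triggered when the meeting platform posts a transcript, with a human skimming the email before it goes out.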
So I taught this at the masterclass: going forward as a PMO, when you're now doing your head count and you're saying, "Our team..." Let's say your team is 50; now you need to count the AI agents as part of the team. So if you've got, let's say, 10 AI agents, you are actually a team of 60, because we have what we call an AI RACI, an AI-enabled or AI-augmented RACI chart, where on the RACI chart, who's responsible, who's accountable, who's consulted, and who's informed, there's a section where AI is performing certain functions.
You know, if we had time, I was actually gonna show you a sample of one. So you now need to look at what the agent is responsible for, and then what the team is responsible for. So as you are slowly adopting AI, you need to start looking at, okay, now we're getting a bit more comfortable letting AI do some of the, you know, the manual work, the repetitive tasks.
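The AI-augmented RACI chart Emmanuels mentions can be modeled as plain data. This is a rough sketch with invented task and agent names; the only point it demonstrates is that AI agents occupy RACI roles alongside humans, while accountability stays human.

```python
# A sketch of an AI-augmented RACI chart: AI agents appear alongside
# humans, so a "team of 50" with 10 agents is really a team of 60.
# All names and tasks here are hypothetical.
raci = {
    "Draft weekly status report": {
        "responsible": ["reporting-agent"],  # an AI agent does the drafting
        "accountable": ["PMO Manager"],      # a human stays accountable
        "consulted":   ["Project Admin"],
        "informed":    ["CIO"],
    },
    "Approve scope change": {
        "responsible": ["Project Manager"],
        "accountable": ["PMO Manager"],
        "consulted":   ["risk-agent"],       # AI consulted, never accountable
        "informed":    ["Team"],
    },
}

AI_AGENTS = {"reporting-agent", "risk-agent"}

def ai_touchpoints(chart: dict) -> dict[str, list[str]]:
    """List, per task, which RACI roles are filled by AI agents."""
    out = {}
    for task, roles in chart.items():
        hits = [f"{role}: {name}"
                for role, names in roles.items()
                for name in names if name in AI_AGENTS]
        if hits:
            out[task] = hits
    return out

print(ai_touchpoints(raci))
```

Listing the AI touchpoints per task makes the oversight question concrete: every row where an agent is responsible should have a human in the accountable column.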
Obviously, there has to be human oversight regardless. So as you are moving slowly into getting more comfortable with the agents, that helps you have a better and clearer roadmap. But what we found with some companies is maybe they don't use Microsoft, or they're not really that knowledgeable on how to embed AI agents, which is part of what we assist businesses with: helping companies actually adopt AI and implement these tools.
So what we normally recommend is, you know, it's like building a house. Trying to buy so many tools is like that: you know a house needs cement, it needs this and that, but you're buying everything all at once. Maybe some things you're gonna need six months down the line, so don't buy everything all at once. So in terms of AI tools, if you wanna start using agents and AI assistants, and by the way, it's got levels as well, if you wanna really do it right, go step by step.
You know, it's a tiered model. It's like a pyramid hierarchy where you start with your AI assistants. Maybe it's time to also look at some of these tools and see what they're able to do. Then you go back and you check your tool. Do we need to buy or integrate with another tool to perform the same function, or do we get Jira or ClickUp, which does everything, and bring it in?
Or, as you know, we've got Zapier, Make.com, you know, n8n. You go with those tools that already allow the automation to happen. So you keep your tool, but just make sure you have somebody that understands the automation process and can do the workflows for you. Then you can still survive with all those other tools that you're trying out, as long as they're integrated properly.
Galen Low: So in other words, if you've picked a tool and you're noticing, okay, maybe it doesn't do this thing the way we thought it would, a lot of these platforms are built where you can either automate a workflow or integrate your agents, but you will need to go and build those agents. Maybe don't build them all at once.
Maybe don't build a whole army. Treat them like they're a part of your team. They are sort of working autonomously; they're making decisions. Set up some AI assistants for your team to kind of fill in that gap. And I think you're right. A lot of these platforms like Jira and Asana and ClickUp and what have you are cleverly built: they know that if the platform doesn't do something you want it to do, you're gonna go build your agent anyway, so they're building in more functionality to create your own agents within the platform and also integrate your workflows, which I think is great, and it's in the ecosystem. And I like your point: don't go full bore into this.
Like, this is basically, treat it almost like you're hiring or adding to the team to augment some of those areas where, you know, you need certain tasks done or certain functionality, and you're enhancing your platform rather than either just suffering through it at great cost and with great pain or, you know, throwing the baby out with the bathwater and trying again, at the risk of maybe even landing exactly where you were with a different tool.
You have a workflow that you wanted to show us. I was wondering if we can... Yeah, we can, like, maybe step through this. For the folks who are watching this on YouTube or on Spotify, we can put something up on screen. I'll do the voice track for my listeners as well, but I was wondering if you could just, like, walk me through an AI workflow that you've built that, you know, could slot into a broader project management tool ecosystem.
You know, something that could work alongside maybe something like a Monday or an Asana or even, like, Microsoft Project. Could we go through something like that?
Emmanuels Magaya: Definitely, Galen. So the example I'm going to give Galen, it's actually a practical use case that came up as a question two weeks back when we were hosting the AI training that we did.
So the question that came up was from a director of a project management office for a very large company based in Europe that has branches across the world. They've got a PMO that has different resources in different locations, specifically in this case, 15 resources that work together. But the challenge that the director of PMO has is, number one, visibility on the capacity of each resource.
He's got resources, he knows he's got a lot of work that he needs to assign them, and they also have work they already have on their desk. But how do I actually see the capacity of my, you know, the load of my team? What's the load that each person has? You know, can I give them more work? Are they managing with the number of projects we have?
So that's kind of the scenario in a nutshell. So what I then did to simplify it, to give you context why I did it this way: the audience had not just PMO managers, it also had CIOs and CEOs from banks and mining and telecom. So it had leaders that don't really have much project management knowledge per se, like deep PMO management type of work.
But I needed to make it as simple as possible for everyone to understand. So I created kind of like a flow using NotebookLM. NotebookLM is a very good tool if you wanna conceptualize a process. And this process I'm showing you here, we can then take it further into n8n or one of the automation tools and then actually apply this.
So how it would work is this. We've got our resource requirement, right? We've got the team that has 15. Maybe let me take a step back. What I did in NotebookLM, I said, "Look, I've got a team of 15. I need assistance in creating a proper capacity planning, capacity visualization plan. Can you help me frame this in terms of an automation flow?"
So that was my prompt, and I did this live in front of the audience. So then it pulled it up. First of all, you know, NotebookLM gives you some sort of textual analysis, and then I said, "Now let's go to the mind map." So we then went to the mind map. Now, for the benefit of everyone, you just go under Studio and you click on Mind Map.
It simply gives you a similar flow to what you'd get in n8n. So that's just the background around it in case somebody wondered how we got here. So then we call it integrated multi-system resource capacity automation, right? So it's now giving you the, I would say, the workflow or the flow. So we've got resources.
What we need is, we understand the resource side of things: our project managers, project administrators, whoever is in the team, program managers and whatnot. Then there's integration required. 'Cause mind you, just to give context as well, when you're creating workflows, you need to look at the bigger picture, not just the tasks themselves, to say go from point A to point B to point C to get this result.
If you wanna do a proper workflow or automation, have the full visual image of how the thing needs to work. And in this case, NotebookLM can actually help you create that, which is what we did here. So it told us we need to look at resources, the integration, the workflow automation, and then the final part, which is where the help was needed, was the capacity planning.
Galen Low: And just for my listeners as well, so we've got a mind map view that we've generated in NotebookLM.
It starts with name of the workflow that we're trying to build, the integrated multi-system resource capacity automation, and then it's branching out into four different branches, and that's the project resources, the system integration, the workflow automation, and the capacity planning.
Emmanuels Magaya: Thank you for that Galen.
So under project resources, we look at our team. Our team consists of 15 members, which is the resource pool we have. We need to look at each individual's capacity: how much capacity does the individual have? And then when we're allocating, we need to allocate according to their skill set or their specialization within the organization, so that we know who needs to do what, basically.
And then we need to look at the integration now. It needs to connect to multiple systems. So here, the director of the PMO indicated that they use ServiceNow and Jira as their core tools, and then, of course, the Microsoft Office Suite and Teams as their day-to-day. So that's where we need the integrations when we're doing our workflow now, 'cause this workflow, we can then automate using n8n or any automation tool.
It doesn't matter. But we need to first know what we want to do. 'Cause if we go straight to n8n, the danger is, do we know where we wanna go? We could have an idea, but we need to actually have a visual mind map of what we wanna do. So here we know we need to integrate to Jira, we need to integrate to ServiceNow.
So how it works is tickets come through or requests come through from ServiceNow. Some companies use Zendesk, they use whatever ticketing tool. It comes as a request maybe from a customer or from a fix that's required. It comes in, now it needs to be categorized to say, is it a mini fix? Is it actually meant to be a project, or is it actually meant to be, let's say, a program?
So that's where that filtering happens. So we need that integration so that we know whatever's coming, we still know we have our resources that we need to allocate the tasks. So we need to have that integration to all the systems. All the systems are integrated, that means we need to have API endpoints, synchronization of data, so data flows across all of those applications.
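The filtering step Emmanuels describes, deciding whether an incoming ticket is a mini fix, a project, or a program, might look like this in outline. The effort thresholds and field names are invented for illustration; a real PMO would set its own cut-offs, or hand the decision to an AI classifier.

```python
def categorize_request(estimated_hours: float) -> str:
    """Route an incoming ticket by rough effort estimate.

    Thresholds are illustrative assumptions, not a standard.
    """
    if estimated_hours < 40:
        return "mini fix"   # handled inside support, no PM needed
    if estimated_hours < 400:
        return "project"    # assign a project manager
    return "program"        # a set of related projects

# Simulated tickets arriving from a ticketing tool such as
# ServiceNow or Zendesk (field names are hypothetical)
incoming = [
    {"id": "SN-101", "summary": "Password reset page typo", "estimated_hours": 2},
    {"id": "SN-102", "summary": "Migrate reporting DB",     "estimated_hours": 120},
    {"id": "SN-103", "summary": "Roll out new ERP",         "estimated_hours": 2000},
]

for ticket in incoming:
    ticket["category"] = categorize_request(ticket["estimated_hours"])
    print(ticket["id"], "->", ticket["category"])
```

In an n8n-style workflow this function would sit in a routing node right after the ticketing-system trigger, so each request lands in the right queue before any resource is allocated.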
Then this is where it gets interesting now: the automation itself. For the automation itself, we need to collect the data. The data collection: how are we getting the data? Very crucial. The team completes timesheets. So when they're completing timesheets, we can actually get an idea, if we're using AI in this case, from a historical view: okay, let's analyze the timesheets for the past, let's say, three months, past six months, past month, whatever, even past week.
We use a timesheet entry as part of our data collection point. So when we're setting up the automation, it must have integration to our timesheet tool. It might be the same tool we use for project management, it might be a separate tool. What I found is some companies have a separate tool just for time tracking to say, "How much time did you spend on this project or on this task?"
So we need that data fed into this workflow through a timesheet, very important, and then the scope as well, because the scope determines how much work needs to be done, so that's another collection point. We're collecting data on the project and collecting data on the resources. That's basically what this flow is about.
We need to collect the data from some source, and then we process it. How are we processing it? The capacity calculation. Now we're working on it: first of all, the timesheet came in, and then there's probably also a forecasting element, because some of the project management tools actually allow you to forecast what tasks you're gonna do in, let's say, July, or November onward.
So we already have the capacity calculation based on the data we've been given, or it can estimate based on the timesheet entry to say, "Normally, this person spends fifty hours a week on this type of a project." So it has matched the timesheet entry versus the project scoping. So when you're doing capacity calculation, it's using the estimations based on the information you gave it.
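The capacity calculation from timesheet history that Emmanuels outlines can be sketched roughly like this. The 40-hour week, the four-week lookback, and the flat timesheet format are assumptions for the sketch, not the export format of any particular tool.

```python
from collections import defaultdict

WEEKLY_CAPACITY_HOURS = 40  # assumed standard work week

def average_weekly_load(timesheets: list[dict], weeks: int) -> dict[str, float]:
    """Average logged hours per person per week over a lookback window.

    `timesheets` is a flat list of {person, hours} entries, the kind of
    simplified export a time-tracking tool might give you.
    """
    totals = defaultdict(float)
    for entry in timesheets:
        totals[entry["person"]] += entry["hours"]
    return {person: total / weeks for person, total in totals.items()}

def utilization(load: dict[str, float]) -> dict[str, float]:
    """Express each person's weekly load as a fraction of capacity."""
    return {p: round(h / WEEKLY_CAPACITY_HOURS, 2) for p, h in load.items()}

# Four weeks of illustrative timesheet entries
entries = [
    {"person": "Billy", "hours": 150},
    {"person": "Mary",  "hours": 100},
    {"person": "Billy", "hours": 30},
]
load = average_weekly_load(entries, weeks=4)
print(utilization(load))  # values above 1.0 mean over capacity
```

Anything above 1.0 signals over-allocation; values well below it flag idle capacity, which is exactly the signal the later alerting step consumes.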
Okay, capacity calculation; now we can balance the workload. So this is where the data processing is coming through. From there, most of the work has actually been done in this section here where we have the processing. This is where it's crucial. Then, still on the automation, we need to set up another flow, whether it's in n8n or whatever, where it does automated reporting, because what the director of PMO is saying is, "I need alerts.
I need notifications to say, 'Hey, your resource so-and-so, Billy, is running behind,' or, 'The project that normally takes six weeks, he's now on eight weeks. That means capacity-wise, we're going to have a problem.'" So it's helping you preempt that before it happens. So those are automated reports that are sent based on what it has already determined, and then resource allocation alerts.
So perhaps the system is a little bit more advanced now. It can even send alerts to say, "Billy is free, Mary is free." It actually automatically allocates projects because there's a capacity there. That makes the work of the, you know, director of PMO a lot easier because he doesn't have to then say, "Billy, do you have capacity?
Let me give you a project." No. Projects are already prioritized in our PMO through maybe a platform that we have. So projects are just waiting. Whenever somebody has capacity, it pushes a project to them. Whenever there's a gap, it pushes the task, or if there's not enough capacity, then it sends an alert as well to say, "There's a project that we had planned to finish by May.
Now we are in April, and we haven't allocated anyone, so we're gonna have a problem." So it also triggers a risk, and that also helps you as a PMO manager to know what's going on. Then the capacity planning side of it, which is the last flow here, is availability tracking now. So as all of this is happening, we are tracking availability of the resource, who's available based on the work.
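The "planned to finish by May, it's April, nobody's allocated" alert Emmanuels just described reduces to a simple date check. Field names and the 30-day lead time below are illustrative assumptions.

```python
from datetime import date

def allocation_alerts(projects: list[dict], today: date,
                      lead_days: int = 30) -> list[str]:
    """Flag projects close to their planned finish that still have
    nobody assigned. `lead_days` is an assumed warning window."""
    alerts = []
    for p in projects:
        days_left = (p["due"] - today).days
        if p["assignee"] is None and days_left <= lead_days:
            alerts.append(
                f"RISK: '{p['name']}' is due in {days_left} days and unallocated"
            )
    return alerts

# Illustrative portfolio: one unallocated project nearing its deadline
projects = [
    {"name": "ERP rollout",  "due": date(2025, 5, 15), "assignee": None},
    {"name": "DB migration", "due": date(2025, 9, 1),  "assignee": "Mary"},
]
print(allocation_alerts(projects, today=date(2025, 4, 20)))
```

In the full workflow this check would run on a schedule, pulling `due` and `assignee` from the project tool's API and posting any alerts to email or Teams.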
Maybe somebody doesn't have too much work on the project they're working on. Based on the timesheets that we're tracking, we can see he or she has enough capacity. Also forecasting, very important, I think, which I touched on earlier: if we look at your work today, we can forecast that you'll be free in May to take on a specific project.
So I wanna give you just kind of a link between these points. You see the forecasting, and if we go up here, we've got project scoping. These two work together very well, because what happens is, when we're forecasting, let's say resource number one, Billy, is running a medium-sized project.
A medium-sized project we estimated will take, let's say, three months, just as an example. So we know the scope because we collected the data, and we've checked the availability tracker. We can already tell Billy can take on a medium-sized project from March up until the three months are up, you know, getting into July, August then.
So it's already helped us with our planning going forward. And then utilization allows us to say, "Okay, are our teams over-capacitated? Do they have enough workload? Maybe some have more workload than others, or do we have more projects than what we can handle, or do we have a lot of idle hands, you know, available?"
So this is basically what the flow is. So I'm gonna stop sharing my screen. I wanna show you something quickly that relates to this, which is just a step-by-step. I'm not gonna build the actual flow, because of time, but I will list the steps that you now need to follow when you are applying this in your n8n environment.
Galen Low: While you're pulling that up, I just wanna do a quick recap for my listeners. So we went into NotebookLM with a use case around enterprise resourcing for a team that is geographically distributed, and NotebookLM has created a bit of an architecture for us in the form of a mind map that shows us the facets of the workflow that we'll need to plan for and how they plug together.
And Emmanuel, back over to you to walk us through the steps of how you would then build that out in n8n.
Emmanuels Magaya: So there was just a scenario that I put there, you know: what do they need to do? Have a view of which resources have capacity and which ones don't. Reallocate projects based on priority and the capacity of the individuals.
Get alerts when resource capacity is depleted, running low, or still available. So we needed to create a workflow for that. So the next slide is where the meat of it comes in. Here is just a flowchart, a visual flow of how this all works together. And then here's a step-by-step basically explaining what is going on.
So here we're talking about that first point, where I was talking about the data sources. So here we've got... It could be project reports, lessons learned, governance frameworks, templates, and policies, and also those timesheets and all of that. Whatever it is, we're feeding it into NotebookLM.
So this is what we had already done before that flow was developed. But the crucial part, which our listeners would need to just be aware of, is this part now where we're talking n8n. Right, the ingestion: we're talking n8n pulling project logs and status reports from a designated folder based on, you know, the PMO setup.
Now we sync these into NotebookLM. So what we were doing here was to first give the tool as much information as possible. So it's not just about feeding it timesheets and whatnot. It needs to understand your environment. It needs to understand, when it looks, let's say, at project logs or status reports, that that's historical data that will help it make decisions going forward as to how you manage your capacity.
So the crucial part, I think, where most of the work was, is on the n8n side, where we create the agent that then creates triggers and verifies those steps we were having, to say, right, with the information we have, obviously the APIs are already done, data integration is done. It has processed the data, processed your timesheets, processed the project scope.
Now it's giving the output. So basically, this is what I was sharing earlier on, basically showing that flow. It's simply a visual of what we're talking about. In summary, that's the flow. Galen, I don't know if there's any part you wanna query; we can maybe do the actual practical in a separate session, because this was a setup for a large organization, so there are a lot of touchpoints that would need integration.
So we can kind of do a model of this where we simulate a similar process, or we can even get a use case from the users to say, "Look, my case is this. How can I handle it?" Because this was a specific use case that was raised in the masterclass.
Galen Low: No, I actually really like that. Thank you for taking us through that, because it would take a lot of time for us to step through it end-to-end.
But folks, if you're listening and you wanna see that, send me a note, let me know, and we can definitely do that. But what I like about this is it ties back into the idea that if you've chosen a platform, and maybe it's not so great at the resourcing and forecasting, maybe it doesn't have time tracking capabilities, or at least not to the extent that you want, you can use AI to grab that data and do something with it, to help take proactive action, to, like, alert you or, you know, flag risk.
This is something that you could build. And just to kind of go through those steps again, and actually, Emmanuel, so if you can go back to the previous slide, I thought it was really interesting because, you know, in the steps, yes, you use NotebookLM to kind of build the architecture of your workflow, but then you're also using NotebookLM as like a data repository.
So what you've got at the top is, you know, project reports and lessons learned and other context like governance frameworks, templates, and folders, feeding that into what you've termed a knowledge engine, right? That's the data source that gives your tools some context. And then you're building out the workflow in n8n based on those: how it's querying that data, how it's making decisions to, you know, send an alert to somebody if it's noticing that some of the capacity planning isn't lining up to what we had expected, or if there is some kind of exception in terms of resourcing. Someone unexpectedly went on leave; somebody maybe is allocated X number of hours but needs more time. And then it can actually notify somebody so you can make decisions about that, or surface it to a dashboard so that you can look at that day over day, week over week, whether you're a PMO or, like you said at the beginning, sometimes it's a CIO, a CTO, different individuals who might need to be seeing this and planning this, folks in operations.
And what I like about it is that now this is something that could be using data, project status reports, other data from your project management tool, feeding it into an end-to-end workflow that can then be the notification system; that's, you know, its job now. It's like the resource risk agent, basically, where instead of going, "Oh, I wish my tool had done this, I guess I need to throw it out now," you could actually augment the functionality, not necessarily by like...
This is not even necessarily developer-level stuff, like building a new feature into the mix, but actually building an agent, almost like you'd build a job description for a human on the team to do this job. Okay, well, what would we need them to do? And then automating it through something like n8n.
It's very sort of enlightening, freeing, maybe overwhelming for folks who are making big decisions on software purchases. But I think there's comfort for me in knowing that a lot of these software platforms, they understand that we're gonna be augmenting, that we're gonna be plugging different things in, different systems are gonna need to be talking to one another.
Yes, they do wanna offer as much as possible to you so that they secure your loyalty, but just because you encounter a limitation doesn't mean that, yeah, you made the wrong decision and now you're stuck. You can go either way. There's, you know, you roll it back if you haven't gotten too far down the road, or push forward and augment and enhance the functionality through tools like n8n, NotebookLM, Make, Zapier, any platform where you can be building an intelligent automated workflow or an agent.
That's very cool. Emmanuels, thanks for that.
Emmanuels Magaya: Yes, and for those that have never... 'cause I don't want us to assume. For those that have never used n8n or Make, I mean, this is just an interface to show kind of how it all links up. When you're in n8n, you can literally start creating, you know, that flow we were having there, practically, for example.
So when you start, you just go: first step, you set your nodes. What do you want it to do? Do you want to activate it manually, or must something trigger it? A very good practical example, maybe one we can look at at a later stage, is you can have a workflow that checks emails coming into your Outlook or into your Gmail.
As the emails come every morning, it checks: okay, what are the action items? Or what are the potential risks? This is one scenario that I also presented at the masterclass. When you have an AI-enabled PMO, right, with AI agents and assistants working cohesively, it can actually tell you risks based on just the conversations in the emails, the conversations in Microsoft Teams.
Literally, when your team is chatting, saying, "Oh, by the way, have you sent that report? It was due today at 2:00 PM because it's needed for one, two, three," or, "Have you done that update?" With AI now part of your ecosystem, AI can now trigger, you know, kind of like a heads-up that, look, based on the conversations that were happening within the team or the emails that were sent today, there's a potential risk that could actually happen.
So it's predictive. One thing I really like about AI in terms of project management is the predictive side of things, where you can look ahead at a project. Before, traditional PMOs were managing risks reactively: oh, something's about to happen, okay, let's fix it. But with AI, AI can tell you 12 months down the line what might happen, just based on conversations, based on data, based on status logs, based on, you know, just emails that are coming in, based on so much of the data it's reading from all these sources.
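The conversational risk signals Emmanuels describes can be illustrated with a naive keyword scan over chat and email text. A real predictive setup would send the messages to a model; the phrase list here is purely an assumption for demonstration.

```python
# Illustrative risk phrases; a production system would use an LLM
# or a trained classifier instead of a fixed list.
RISK_PHRASES = ("running behind", "due today", "haven't sent",
                "blocked", "overdue")

def scan_for_risk(messages: list[str]) -> list[str]:
    """Return the messages that contain any known risk phrase."""
    return [m for m in messages
            if any(p in m.lower() for p in RISK_PHRASES)]

# Simulated team chat, modeled on the examples in the conversation
chat = [
    "Have you sent that report? It was due today at 2:00 PM.",
    "Lunch at noon?",
    "We're running behind on the data migration.",
]
flagged = scan_for_risk(chat)
print(f"{len(flagged)} potential risk signals")
```

Even this crude version shows the shape of the idea: the workflow watches conversation streams and raises candidate risks for a human to confirm, rather than waiting for a status report.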
So if you are, you know, still doubtful about AI in your PMO, you are actually doing yourself a disservice, because AI can really supercharge your PMO. I mean, on this workflow, you could literally say, "On a schedule, every day at 12:00 midnight, I expect you to do this." You know, you create that trigger, add a rule: must it happen at midnight?
How many days? What needs to happen? You tell it: are we integrating with Asana, for example, in this case? Or is it Jira? In Asana, what must it do? That's the trigger the workflow will have. So in the capacity planning case, we first need the data to be up to date, which means we probably don't need to create the projects there.
So we can look at a function that creates a task, a task that we need to check, and you connect it. I'd already done my connections there. But basically, what I want to show everyone is that you can create these workflows to serve so many functions, so many purposes, based on what you want the flow to be.
And the integrations are there. I showed Asana; there's Jira, there's Gmail, all these integrations. We were talking earlier about ServiceNow, and it's there too. There's ServiceNow, there's Zendesk. So the flow that I showed already has those integrations available in n8n. It's simply a matter of connecting the APIs and enabling those workflows.
And I know it looks a little bit daunting for a non-technical person, but that's where you need experts to assist you, which is part of what we do as well. But you need to know what you want to do. That's why I shared the flow in particular: let's have a mind map of what we wanna do first.
This step becomes easier when you have the mind map, because it's like you've drawn the plan of a house, and now you're building the house. But you can't start building a house without the plan, and that's where many people make mistakes. When they hear n8n, Make, Zapier, they run straight to building workflows.
But what is your workflow all about? Create a mind map in a tool like NotebookLM. It helps you visualize what you wanna accomplish. Then, when you start building, you've already considered all the elements you need to take into account.
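[Editor's note: n8n is a visual tool, so the transcript doesn't include code, but the scheduled-check workflow Emmanuels describes can be sketched in plain Python. Everything here is illustrative: the `Task` fields and alert format are hypothetical stand-ins for an n8n Schedule Trigger node plus an Asana/Jira task node, not any real API schema.]

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical task record; field names are illustrative, not a real Asana/Jira schema.
@dataclass
class Task:
    name: str
    due: date
    done: bool

def scheduled_check(tasks, today):
    """Runs on a schedule (e.g. daily at midnight, like an n8n Schedule
    Trigger) and returns alerts for overdue, unfinished tasks."""
    alerts = []
    for t in tasks:
        if not t.done and t.due < today:
            alerts.append(f"OVERDUE: {t.name} (due {t.due.isoformat()})")
    return alerts

# Example run: one overdue open task, one task already completed.
tasks = [
    Task("Capacity report", date(2025, 3, 1), done=False),
    Task("Status update", date(2025, 3, 10), done=True),
]
print(scheduled_check(tasks, date(2025, 3, 5)))
```

In n8n the same logic would be a trigger node wired to a task-source node and a filter, with the alert routed to email or Teams instead of printed.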
Galen Low: I love that. Yeah, and you know, for folks listening who are like, "Well, yeah, this is just automation.
We've had this for decades," I think what you said earlier about how AI figures into this is the key. Specifically, I'm thinking of the natural language processing, where it's like, okay, someone said something in an email or a chat or in a meeting summary. I know what that means. I don't need the structured data.
I can go, "Oh, well, someone said this could be running late," and that can be my input and trigger: not necessarily just a schedule, not necessarily just a webhook, but conversations themselves. That, I think, is really appealing, because a lot of the conversations we're having now are in something like Teams or Slack.
You know, it is digital, but it's not structured data. It's just conversation. And I like that idea to be like, "Whoop, that's a risk. I'm gonna notify someone," or, "I'm gonna, you know, flag this in the dashboard." That's very cool.
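[Editor's note: a minimal sketch of the conversation-as-trigger idea discussed above. In practice an LLM or NLP model would classify messages; the keyword patterns below are purely illustrative assumptions, shown only to make the flow concrete.]

```python
import re

# Phrases that often signal delivery risk in team chat.
# A real system would use an NLP model or LLM, not a fixed list.
RISK_PATTERNS = [
    r"running late",
    r"was due",
    r"haven't (sent|done|finished)",
    r"blocked",
    r"slipp(ing|ed)",
]

def flag_risks(messages):
    """Scan unstructured chat/email text and flag potential delivery risks."""
    flagged = []
    for msg in messages:
        if any(re.search(p, msg, re.IGNORECASE) for p in RISK_PATTERNS):
            flagged.append(msg)
    return flagged

# Example: two of these three messages should trip the risk filter.
chat = [
    "Have you sent that report? It was due today at 2:00 PM.",
    "Lunch at noon?",
    "The migration is running late again.",
]
for risk in flag_risks(chat):
    print("Potential risk:", risk)
```

The point is the shape of the pipeline: unstructured conversation in, structured risk signal out, which can then feed a dashboard or notification exactly as Galen describes.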
Emmanuels Magaya: Thanks, Galen.
Galen Low: I wonder if maybe we could round out just by maybe talking about the future.
Like arguably, AI is not just an enabling technology, but it's also like an equalizer. In other words, people are saying that it's like leveling the playing field. I'm just wondering, what is your optimistic prediction for what AI will do for emerging economies around the world over the next, like three to five years?
Like, what are the opportunities, and what is the mindset that companies in these emerging markets need to take to be in the lead on AI-enhanced project delivery?
Emmanuels Magaya: So what I found from an AI point of view is that there is a lot of interest in organizations, and there's a lot of investment going in. But the downside that I'm seeing, just by observing, listening, and being invited to advise, is that sometimes there are rushed purchases being made.
There are rushed AI strategies; some companies don't even have proper AI strategies because they've not done enough research. A typical example: we hosted a project leader roundtable on the sixth of February, right? And one of the PMO managers who attended made a comment that was a bit concerning to me. It may sound good, but it also has a downside.
She said that what she told her team was, "Team, go and test as many AI tools out there that have to do with project management, and come and give a report." The danger with that, Galen, is, as I already mentioned to you, that there are more than five hundred new AI tools launched every single day.
You're gonna be chasing a moving target when you do that. If you are a PMO leader, a department head, or even the CIO, somebody who has influence over the decision-making of this AI implementation, you need to have a framework in place before you rush to tools. This is the mistake many people make.
Don't rush to the tool. First of all, does your process support AI? What we found is that there are broken processes out there, and putting AI on top just helps them be more beautifully broken. You don't wanna find yourself in that situation. So the potential in emerging markets is there, but I feel there has to be an understanding of where AI fits in, and an easy way to do that is to create an AI RACI.
Create a RACI at the organization level. Say, right, at the organization level we've got finance. Within finance, where does AI fit in? What functions are we moving from the human side to the AI side? Then you thrash it out theoretically before you test any tool, because you can get an AI tool to do almost anything.
So when you're laying out your strategy and your framework, remove the tool from the picture. Get the tool out of the picture and put the process, the principles, and the framework around it, to say, "Right, if we bring AI into finance, into logistics, into human resources, where is AI going to help us?"
That way, when you go out to the market, you're getting a tool that solves specific functions. You're not just buying because, "Oh, our competitor is using this tool. Let's get it. We'll figure it out, guys. We'll send you for training." Maybe that tool is irrelevant for you, so let's not make that mistake.
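[Editor's note: one hedged way to picture the "AI RACI before tools" exercise is as a simple data structure: for each business function, list the tasks and whether they stay human-owned or move to AI. The functions, tasks, and ownership labels below are invented examples, not Emmanuels' actual framework.]

```python
# Hypothetical AI-RACI sketch: decide which tasks move to AI
# per business function BEFORE evaluating any tool.
AI_RACI = {
    "finance": {
        "invoice data entry": "AI",
        "budget approval": "Human",
    },
    "pmo": {
        "status report drafting": "AI",
        "stakeholder negotiation": "Human",
    },
}

def ai_candidates(raci):
    """Return the (function, task) pairs earmarked for AI.
    This list becomes the use-case shopping list you take to market,
    instead of shopping for tools first."""
    return [
        (fn, task)
        for fn, tasks in raci.items()
        for task, owner in tasks.items()
        if owner == "AI"
    ]

for fn, task in ai_candidates(AI_RACI):
    print(f"{fn}: {task}")
```

The value is in the exercise, not the code: once the AI-owned tasks are explicit, tool evaluation becomes a matching problem against a fixed list rather than a chase after five hundred new launches a day.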
And also, AI literacy is key. None of this can happen effectively without AI literacy; it's a vital part of the process, and without it, we are just wasting time as an organization. I actually had a slide that talks about what you need to do when it comes to AI literacy, because the AI literacy element is where everything starts.
If that part is not done right, we are just going to go round and round in circles. So AI literacy, check your team. What is their skillset? Do a gap analysis, right? In the team, you have people that think AI is ChatGPT. The only thing they think of when they hear AI is ChatGPT, and even in the world at large, that's the sentiment.
When you talk AI, they use it synonymously with the word ChatGPT. "Oh, you're using AI? Yes, I'm also using ChatGPT." But it's not the same. That's what we've also found in some of these organizations: people don't actually understand what AI is. They don't understand what models are out there.
Some people think every tool that does prompting is called ChatGPT, even when they're using Gemini or Claude. That's another misconception. So you need to get everybody on the same level of thinking, analyze your team's skillset, and do a gap analysis. And again, this is something we saw was a need in the market, and we even created an evaluation framework that says, if you want to evaluate your team, this is what you need to look out for.
It's not just saying, "Okay, do you know how to prompt? Do you know how to do deep research? Do you know how to create images?" There's a way you can actually measure the skillset of your team, 'cause there's general AI skill, and then there's AI skillset as relating to your job. That's where you want to get your company, your team to be, where they understand not just general AI, but AI as relating to their day-to-day.
And then, you know, you create your safe data environment where they test. Obviously, there need to be guardrails, governance, ethics, avoiding bias and all of that, and then you test it slowly. So going forward with any organization that wants to adopt AI, regardless of whether it's a PMO or it's the organization as a whole, you need to look at the AI literacy element.
You cannot avoid it, you cannot bypass this part, because if the literacy level is low, your implementation will fail, I can guarantee you that, because AI is required at every touchpoint. At one of the events we had, one of the companies even said, "We want our receptionist to be using AI."
Meaning everybody in the organization, even the driver, needs to be able to use AI. That means AI literacy is no longer a white-collar-only skillset. Everybody needs to learn AI based on their job functions, so companies need to put this at the core: AI literacy in place, your AI strategy defined, the role of AI in the organization clear, and then, when it comes to choosing tools, don't rush before you know what you want.
Know what you want AI to do, and then create a scope document that says, "This is what we expect from our AI," so that you avoid the gaps you'd otherwise only discover later on. And one good way to do this, Galen, as a last point: imagine yourself using AI effectively 12 months down the line. How would your organization look?
How would your customers feel? That's how you need to look at it. So when you put your strategy in place, you say, "Oh, December 2026," like a vision-board type of thinking. December 2026: our customers are so happy, AI is helping solve customer queries, it's helping answer phone calls, it's helping draft project plans.
All across the organization, everything's working seamlessly. That's what you work backward from, saying, "That's our best-case scenario. How do we get there?" Before you even pick a tool, that's how you define it. So that's my advice on that, Galen.
Galen Low: My big takeaway there is that, especially for folks in emerging markets, it may seem like you have to move fast to keep up, and that you should dive into AI almost recklessly.
But actually, the way to win is to slow down: build that literacy, have a vision for what you want AI to do for your business and your teams, put the guardrails in place, and then launch. While everyone else is running at it without thinking and probably failing, you can get ahead, because you'll have already done that thinking, you'll have already done that planning, and you won't have those missteps along the way.
Emmanuels, thank you so much for spending the time with me today. It's been a lot of fun. Where can people learn more about you?
Emmanuels Magaya: I'm very active on LinkedIn; just search for me, Emmanuels Magaya. I try to post every day if my schedule allows, and people can also find me through our company, Project Managers Africa.
If they go to www.projectmanagers.africa or www.projectmanagers.co.za, they can find us. We are based in South Africa, but we serve the African market and even globally; we also have a presence in the Middle East, in Dubai. We're moving aggressively in terms of AI empowerment and enablement, so that's where they can find me, and then we can get in touch.
Galen Low: Awesome. I will include the links there in the show notes for this episode. And Emmanuels, thank you again.
Emmanuels Magaya: Thank you, Galen. It was a pleasure.
Galen Low: All right, folks. That's it for today's episode of The Digital Project Manager Podcast. If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies, and playbooks, create a free account with us at thedigitalprojectmanager.com.
Until next time, thanks for listening.
