AI transformation isn’t just a tech upgrade—it’s a human one. In this episode, Eric Porres (Chief AI Officer at Logitech) pulls back the curtain on what year two of an enterprise-wide AI transformation actually looks like. From shifting mindsets to scaling real adoption, this conversation gets into the messy middle: where curiosity turns into capability, and experimentation starts becoming systems.
What stands out is how much of this journey has nothing to do with picking the “right” model—and everything to do with behavior change. Eric shares how Logitech moved from AI curiosity to AI competency, what it takes to build a culture of creators (not just users), and why thinking of AI as a teammate—not a tool—changes everything.
What You’ll Learn
- Why AI transformation is fundamentally a human behavior change challenge—not a technology problem
- How to move teams from AI curiosity to real, measurable competency
- What “AI as a teammate” actually looks like in practice
- How organizations can encourage a creator mindset across non-technical teams
- The key signals that indicate AI maturity inside a company
- Why prompting is less about inputs—and more about structured thinking
- What skills will matter most as AI becomes embedded in everyday work
Key Takeaways
- Start with behavior, not tools
New models will keep coming. The real leverage is building habits—curiosity, experimentation, and collaboration—that outlast any specific tech.
- Treat AI like a teammate, not a vending machine
If you ask once and walk away, you’ll get average results. The value comes from dialogue—refining, iterating, and thinking together.
- Show your work early and often
Adoption doesn’t come from mandates. It comes from weekly demos, shared use cases, and visible wins that make AI feel practical and accessible.
- Creation > consumption
The shift happens when people stop just using AI and start building with it—custom assistants, workflows, and shared tools that benefit others.
- Progress beats perfection (especially in AI)
Waiting to build the “perfect” solution slows learning. Small, imperfect experiments create momentum—and insight.
- Think in micro-ROI
Saving 30 minutes here, reducing friction there—it adds up. Like compound interest, small gains scale into meaningful impact over time.
- Delegate like a manager—even with AI
The skill isn’t doing everything yourself. It’s deciding what to offload, guiding the work, and stepping back.
- You don’t need to organize everything upfront
AI can help you retrieve, refine, and reconnect ideas later. Don’t let structure become a blocker to experimentation.
- Curiosity is the real differentiator
The people who stay relevant aren’t the most technical—they’re the ones willing to explore, question, and keep learning.
Chapters
- 00:00 – Year two of AI transformation
- 03:47 – AI as behavior change
- 05:58 – From curiosity to competency
- 09:00 – Systems, workflows, agents
- 12:07 – Asking better questions
- 14:56 – AI as a teammate
- 17:18 – Driving adoption at scale
- 20:52 – Measuring AI maturity
- 25:09 – Building a creator mindset
- 28:00 – Overcoming resistance
- 33:46 – Managing tool sprawl
- 35:23 – Experiment vs. scale
- 42:30 – Future skills that matter
- 46:45 – Curiosity and micro-ROI
Meet Our Guest

Eric Porres is the Chief AI Officer at Logitech, where he leads enterprise-wide AI strategy and transformation, working to build an AI-fluent global workforce and embed intelligent systems into everyday workflows. With a background spanning marketing leadership, product innovation, and entrepreneurship, he previously founded and exited a company focused on improving organizational collaboration before joining Logitech full-time. Known for his hands-on, practitioner-driven approach, Eric focuses on scaling real-world AI adoption—training teams, identifying internal champions, and developing practical frameworks that help organizations move beyond experimentation to meaningful impact.
Resources from this episode:
- Join the Digital Project Manager Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Eric on LinkedIn and Substack
- Check out Logitech
Galen Low: Organizations who feel like they're just starting out on their AI transformation journey may find themselves wishing they had a crystal ball. So I've brought in the next best thing, the Head of Global AI for a household technology brand that has just entered its second year of aggressive global AI transformation.
Together, we're going to talk about the challenges that came up in year one and how the teams overcame them. We're gonna be revealing what key skills this company is focusing on in year two and beyond, and we're gonna wax a little bit poetic about whether AI has maybe just turned us all into gamers. I hope you enjoy this episode.
Welcome to The Digital Project Manager podcast—the show that helps delivery leaders work smarter, deliver smoother, and lead their teams with confidence in the age of AI. I'm Galen, and every week we dive into real world strategies, emerging trends, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it.
Okay, today we are talking about what year two of AI transformation looks like for a global tech enterprise, and what different industries and different sizes of organizations can learn from their journey.
With me today is Eric Porres, Head of Global AI at Logitech. At Logitech, Eric leads strategic AI development and implementation across the entire organization, harnessing AI to drive innovation, cognitive augmentation, and sustainable growth, both in Logitech's products and experiences and in its day-to-day operations.
But before that, Eric was already a four-time CMO, who had led marketing and advertising technology companies into award-winning work, and who had co-founded his own digital marketing agency, Underscore Marketing. He has also served on the board of Two Twelve, New York's Interactive Advertising Club, and frequently speaks at industry events.
Eric, thanks so much for being with me here today.
Eric Porres: Galen, thanks for having me. It's great to be here on this chilly day in New York.
Galen Low: We're both in deep freeze. I'm north of you in Toronto, and literally the snow was like past my waist. It was ridiculous.
Eric Porres: It's a little worse for you. It's only 15 inches of snow out here for me. Could be worse, I suppose.
Galen Low: I love that sentence. Only 15 inches, right? I'm really excited to dive in today. You and I, we've had some really good chats leading up to this. Obviously, I want to dive into your AI transformation journey, but also I think we can go all over the map today. I made a loose plan, but maybe let's treat it as a sketch.
Here's what I was thinking. I was thinking that maybe to start us off, I could just set the stage, but maybe hitting you with like a big hairy question that my listeners want your hot take on. Then I'd like to just zoom out from that and talk about maybe three things. Firstly, I wanted to talk about what your organization's greatest challenges and greatest accomplishments have been throughout the first 12 months in your role as head of global AI.
And then I'd like to explore some of the tactics that have maybe had the most impact on Logitech's transformation, and also how you've been measuring them. And lastly, I thought maybe we could dive into a bit of future talk: I could get your POV on what will have to change for folks in order for us all to keep moving at the speed of AI.
Eric Porres: Yeah, I'll try and do my best on that. What is it in Julius Caesar, right? Ambition should be made of sterner stuff. So, an ambitious agenda, but done delicately, like a sketch from Miles Davis and Kind of Blue. So I'm picking up what you're throwing down. This was not prerecorded, we don't actually have a script, and we've only met once before.
It was a good conversation. So we don't know where this is gonna go, but we'll try and keep it spicy and interesting. Not necessarily in that order.
Galen Low: Live and unhinged. I thought I'd maybe start off with like one big hairy question and maybe I'll just give it some context. So when you started as the head of global AI at Logitech, like about a year ago, you took on a massive task of driving AI innovation across the entire 7,000 plus person organization, and now you're entering year two.
So my big hairy question is this, will your next stage of transformation continue, like business as usual, or is the path ahead fundamentally different, requiring different approaches to solve different challenges?
Eric Porres: So it's a good question, Galen, and I think I would approach it like this. The AI transformation is as much about human transformation and behavior change as it is about technology, right?
We are a Swiss company, after all, so we want to try and keep the peace and let the tokens flow wherever the value gets delivered. And we've seen this over time in terms of how people converse with AI on the front end, and in how developers will iterate and say, okay, 5.1 was good, but now I wanna do 5.2.
Oh wait, 4.5 just came out. Oh wait, how about Gemini 3? So you don't want to be in a place where you're committing to any one model. Human behavior change is more important than whatever technology gets thrown at us collectively over the next year, two years, three years. So I think the most important quotient that we look for, that I look for certainly in terms of people I work with and our own mindset, is a curiosity quotient.
Which is, you know, you have to be curious. You have to be in it to win it, or at least to experience it. You may not win, but at least you get to experience it. And in our case, again, long answer, we're very privileged that we haven't made a single-vendor choice: everyone, all 7,000-plus people at Logitech,
have an opportunity to work with the three most frontier of frontier models on the planet. What an amazing opportunity for each one of us to then be able to take that into our homes, into our schools, into other professional contexts in the future, into our communities, to be these emissaries from the future, so to speak. Just a touch, a touch from the future.
Galen Low: I love that. I really like that framing because as I was shaping that question, that's what I had in mind. It was like, you know, technology's moving so fast that like, maybe year two is gonna be really different, like different challenges with the technology, different version numbers of these, you know, LLMs, different mixes.
But I like that take that, it's like this, the human transformation that is at the core of this. And yes, the technology's gonna change, but fundamentally it's a human sort of change management journey. That Logitech is on, that many of us are on, that isn't just gonna get like tossed about by changes to the technology itself.
Eric Porres: Well, that's right. And I would say, look, when I first took on this role, we said, okay, what do we need to be thinking about? Historically, we've written for humans when it comes to building an iconic brand. So how do you build an iconic brand in the age of AI? You have to be able to think in two modes, where you're saying, okay, I need to talk to humans with heart and emotion, and I need to make my website and my content addressable and accessible to a quote-unquote rational thinking machine.
Not that we are not rational beings, but we often behave irrationally, especially when it comes to purchase decisions and brand affinity, et cetera. So that's one of the early observations that I made, which is now bearing fruit: we recently got an award from Semrush, which looked at us as one of the emerging stars in terms of thinking about AI engine optimization.
Again, GEO, LLMO, AIO, it doesn't really matter what you call it. The point is, how do you show up as a presence there? So it was an acknowledgement, I think, of the thinking that we put into it starting in February of last year. The other question that you asked before, though, I think is an important one. If I think about human behavior change, I would say we went from an AI-curious to an AI-competent organization over the year. Whereas before, it would be this little cabal of futurists doing AI things, and it was like, ooh, that's scary, ooh, I want to lean in,
now everybody has some common currency and common context and common usage and patterns of behavior. Now we can actually talk about, hey, okay, how do we wanna build agents for your line of business? How do we also wanna materially think about the systems of record that we have historically used to process information to make better decisions?
How do we turn those into AI workflows, or workflows that are enhanced by AI in terms of understanding and concepts? So if I were to say that year one was about behavior change, of course, that's not the destination, it's still a journey. Year two is about software and process change.
And we also saw, and I'll just riff on this a little bit 'cause you gave me the floor (it's Kind of Blue again, who knows how long this solo will go), something else I observed, and we made a bet on it, and the bet was right. In February, March of 2025, almost every system of record was going, MCP what? MCP who?
And I said, wait, trust me, by the end of this year, every one of these companies will either adopt MCP or A2A. So you have Anthropic's protocol, and you have Google's version of the same concept, which is: how do I decouple the UI experience? It's funny, we talk about how AI is a stochastic parrot, how it's all probabilities, how it's basically reverting to the mean. Which, (a) is not true, that just means people don't prompt well, but (b), all the other software we use is software that reverts to the mean, right? Your UI experience of anyone's software, whether that be Salesforce,
that's one instance, or Workday, Oracle, Google, et cetera. So now, with the advent of this middle layer of protocol and communication, we can say to ourselves, okay, what is preventing me from making decisions driven by data? Over the last 30-plus years, SaaS has stood in the middle as that software layer
in between, effectively, a database and a UI, and you're trying to inform, search, summarize, generate, or take action in some way, shape, or form through that medium. And now that medium has totally exploded. There are certainly a bunch of challenges we could talk about there, in terms of maturity and whatnot.
But overall, I'm very bullish on this future. And not every company can do this either, Galen. I'm privileged that I am part of an organization that has a software engineering team, so I didn't have to come in and build a software engineering function from the ground up. We could take the best of the best and say, okay, hey, we're going to stand up
what is still a very small team. I'm very proud of the work that the team does, because it's the mouse that roared in terms of the team size relative to the output that we have. But now, because we have this team, we don't need to rely on third parties. We don't need to rely on consultants,
consultants that are really themselves doing this for the first, second, third time. So, you know, one of the takeaways of this, Galen, I would say, is that if you are in an organization that has some engineering capabilities, you're already ahead of the game. You have to give those people room, you have to kind of get out of their way, and create an environment, a set of guardrails, frameworks: if you're public versus private, what kind of data are you transferring or processing?
How does that flow across the web, across countries, across the world, et cetera? Those are non-trivial questions. How does authentication work? Do you want service accounts or not? What are you gonna do there? How does that work with MCP? How do you make these connections between different bits of the system? How do you chain these things together? What happens when the chains break? What about your audit log? How about telemetry? All of that is interconnected, but that's the world we live in now. And so, again, I'm very bullish on where the future is. As Andrej Karpathy said at the start of 2025, this is the decade of agents, and it absolutely is.
And so I'm pretty excited by it. So there you go. That was my long Miles Davis riff for you, a sketch.
Galen Low: I love jazz, I'm with you. This is why I was so keen to have you on the show. I love that take, the idea that, you know, MCP is this abstraction layer, kind of like APIs.
It's bringing things together. I like what you said about the interface, though, right? In some ways, if I'm picking up what you're putting down, it's taking away some of the excuses. It's flattening out the playing field: can you make different, data-driven decisions now that we're not using an interface that is designed for the mean and that might have a different learning curve?
This is a dialogue with access to data from different systems of record. Can we interface with that to make good decisions? And it's also a forcing function that distills it down to not the technical ability to use a tool, but the strategic ability to ask it the right question.
Eric Porres: Galen, that's a great point, a very hot take from you, and absolutely on point. It's sort of the why, the how, the what. It's the why of what we do. That's the value that you're describing really well.
Galen Low: I like the build that you had after that, which is like, I dunno, part of me exiting 2025, right?
We're like, okay, when you say this is gonna be the decade of agents, I'm like, gosh, I gotta like get deep and become an engineer and like build agents myself, because that's what's gonna be expected of me. And in some ways, yes, it is like the capability, but I like what you said about if you've got an engineering team, like get outta the way and let them cook because they're gonna understand all of those pieces of like, you know, just because I can fit a pipe into another pipe doesn't mean I should like do the whole plumbing for my house.
There are things, and like, I think like what you said about like year two, right? It's like, a tools focus almost as like maturity, right? Where it's like now we're almost not maybe done being curious. Hopefully we'll always be curious, but there is this sort of figuring it out, getting everyone in the pool.
But it sounds to me that like now we need to sort of build serious systems that will scale not just because we can, but because they're the tools that we need.
Eric Porres: Yeah, no, I think that's right. And look, there's a pro and a con of the conversational interface that we're currently living through. The pro is that we've lived through an interface like that for 30-plus years. Thank you, Google, and to a lesser extent Bing, AltaVista, Lycos, and all the rest.
Galen Low: Am I dating myself?
Eric Porres: About.com, all the other search boxes that appeared before us, right? The challenge has been, and I've said this before, but I'll say it again: thank goodness OpenAI called it ChatGPT, and not, like,
AnswerGPT, right? Because it really is a conversation. Where I found the challenge, I would say, between a year ago and now, is that many people, internally as well as externally, didn't treat it like a conversation. They treated it like an answering machine, where you put a coin in: I put in a question, I get an answer, and I'm done.
You're not done. It's, (a) how do you ask the right question, and (b) how do you work with this thought partner that you have access to, to develop the right question, so that you can then work with that thought partner further to frame the answer? Oh, you don't like the answer? Guess what, you can ask again. You're not stuck with the first answer.
It's like if you and I went on a first date and I said hi, and you said hi, and it's like, all right, I'm done, fine, we're done. You know? In the real world, it doesn't happen that way, and yet we've treated this box like that. So that was the challenge.
The plus is that, for those who have chosen to go from pure curiosity to competency, now I can extend and enhance this conversation, thanks to the later models, certainly Claude and Cowork. You can even look at other things like Manus, and even Claude Bot, which has been trending over the last couple of weeks.
Now we can have more than just conversations: we have conversations with hooks.
Galen Low: Right, yeah.
Eric Porres: And those hooks are tools. Those hooks are using Claude Cowork to access your desktop, being able to manipulate and change files, doing file management, doing simple agentic tasks like, hey, can you help me fill out a continuation of coverage form for one of my kids?
Their allergy doctor's healthcare provider changed, so we have to do this continuation of coverage. I'm like, great, I don't need to do that. I'm gonna give that to Cowork, have Cowork do it in the background for me and come back to me with the result. So it's now personal.
Agentic AI means you really now have this smart, capable intern that you can turn on, and for a hundred bucks a month, it's one of the best value levers you can pull as an individual, whether you apply it to your work or, if you're not able to apply it to your work, to all of these other life tasks, these tiny frictions that exist in just being a human:
paying bills, managing kids' schedules, creating calendar files, and all of these other things that get in the way of thought capacity.
Galen Low: The continuation of coverage thing is such a great use case. I'm stealing that.
Eric Porres: No, it is. It's like, oh, we have to fill out this form? Fantastic. Okay, what do you need, Claude? Oh, you need a name, date of birth, et cetera. And you have this history: here are a couple of historical allergy forms I have. Fantastic.
Galen Low: Yeah.
Eric Porres: Go do your thing. Do the continuation of coverage.
Galen Low: I love that.
Eric Porres: Five minutes later it's done. It's accurate. And it saved me an hour and a half of time.
Galen Low: There you go.
Eric Porres: And that time stacks up.
Galen Low: Can I zero in on what you said about the learning curve? You know, I think we all kind of started at different places. We're all at a different stage in our journey when it comes to AI. A lot of us did grow up with search boxes: ask one question, get a page of answers.
Don't go past that first page because, you know, there be dragons, and then you're done. Some of the stories I've heard about you is that you personally sat down one-on-one with over 800 of your colleagues at Logitech to build this culture of augmented intelligence. And when I first heard that, I'm like, okay, yeah, training.
But it sounds like it was as much about training people as it was about interviewing them about AI and figuring out where they're at. So I thought I'd ask: what's something that you're the most proud of from the past 12 months? A big challenge that you and your teams have overcome?
Or even a mindset shift, or a skillset shift, or just a different way of working? Where have people gotten to on that journey that you're proud of?
Eric Porres: Yeah. Look, as I mentioned, I'm certainly proud of the team, but I would say it goes back to the first thing we talked about: human behavior change and culture is more important than technology. And the more conversations you have with other humans,
the more you understand what their pain points are and why their curiosity quotient is or isn't there. How do you create weekly examples of AI in practice? How do you make sure that your CEO is on board with the transformation, and that she herself not only is an active proponent of it, but also shares her use cases?
Also make sure that the leadership team has an AI-in-action moment every week, that every global huddle we have has an AI-in-action moment, and that it's not just one person doing it, it's people across the whole organization doing that. So I would say I'm proud of the fact that we've gone from, again, certainly curious to competent-plus.
Because again, it sets the stage, as I've described, for these more intelligent conversations that we can have in ways that we weren't able to before. And I can look at it on a metric basis. A year ago, I would say we were in the lowest quintile of what I would call heavy usage of AI.
And now, a year later, that's entirely shifted, as we've gone roughly from 20/80 to 80/20. Again, I can't give you exact numbers, but that's the general concept. And I'm proud of the fact that we have the right instrumentation to think about what AI maturity is, and that it's spread across six different vectors:
consistency, intensity, creation, breadth, impact, and training. Consistency is how often you're showing up with AI as a teammate. Ah, that's the other thing too, sorry, I should have said that before: AI as teammate, not tool. That's probably the other mindset shift that has happened over the last year, where I really can collaborate with it.
It is a coworker, a copilot (weird name), a teammate, a collaborator, and not just a piece of dumb software. Excel is a tool; AI is a teammate. And the more you say that to yourself, the better place you are in to really collaborate. I'll give you an example.
It was Matt Damon, I think, recently. And I love Matt Damon, big fan, just watched The Martian again with my kids a couple weeks ago. But he made a comment somewhere about how AI, this thing we talked about before in terms of reverting to the mean, will only give you the average. And when I hear people say that, it's clear to me: well, you really haven't worked with AI the way
I've worked with AI, the way other people I know have worked with AI. Jeremy Utley is a great example, someone I admire and aspire to in terms of what he does for the Stanford d.school. He has a weekly newsletter that he forces himself to publish every week. He's a good friend.
And I'm like, man, people like that are talking from real, first-hand experience, not from second-, third-, or fourth-hand impressions. I couldn't talk about acting the way Matt Damon or Ben Affleck can, and they shouldn't be talking about AI the way they do.
Because until you're actually in it, really in it, really collaborating with it, you can have a point of view, but it's an opinion, not a data-driven opinion. And that's where, in the Twitterverse and other places, we get a lot of opinions, but opinions that are not backed by data.
Which is, again, a human foible that we try and overcome with data.
Galen Low: I really do like the teammate angle as a mindset shift. You listed off a couple of, I'm gonna call 'em heuristics, but I don't think that's what they're called, data points: ways to measure uptake and adoption of AI.
Eric Porres: Oh yeah. Sorry. So going back to that, if you want to.
Galen Low: Yeah, I'd love to.
Eric Porres: Consistency is: are you showing up every day? Most AI reporting systems have a sense of number of active days. Intensity you can then look at in terms of depth, with depth of conversation being number one and number of tokens burned per turn being number two. Creation: in our case, we have our own platform, and I can call it a wrapper, but it's more than that.
It's a wrapper-plus. And don't underestimate the value of owning the user experience that you do have; then you can create very bespoke solutions for different teams and functions, and AI truly becomes meaningful for them. Creation is: are you creating, in the common parlance, custom GPTs or projects (in our case, assistants) that not just help you, but help other people?
So there's a creator mindset about using it beyond just the surface area of conversation. Are you creating gems? Are you creating assistants? Are you creating workflows? Or, like in Workspace Studio, are you creating micro-agents? The impact piece is: well, I can create, but if a tree falls in the forest and no one's there, did it actually hit anybody?
What is the impact of your creation? Are you inspiring and bringing others along? Are other people using the custom GPTs or assistants you've created, because you're thinking more broadly than just yourself? I think one of the challenges for senior leadership in many organizations right now is
the question of ROI. There are hard measures and soft measures, but part of it is: where is the value accruing? If the value is accruing to the individual, well, that's not as exciting for me as a CEO or a CFO.
Whereas if I see the value accruing to others, okay, then that makes sense to me. Breadth is the fifth one, and then training. Breadth: in our case, we have what we call LogiQ. That's our governed source, where we tap into Microsoft Azure for OpenAI models,
and we tap into AWS Bedrock for Anthropic models, plus Gemini, plus Stability, plus some other bells and whistles. So are people in fact using all of those, the entire surface area of AI opportunity, plus Google Workspace? Are you using Gemini? Are you using Gemini within Sheets or Docs?
Are you creating gems? Are you creating agents? So that's the breadth part of it. Then the training part is also pretty important. On training, we were an early adopter of ProfAI out of Section, and this goes back to treating the box like a search box versus treating it like an intelligent teammate.
And I think Section does an outstanding job. They're not the only one, but they're one of the best, and they codified prompting (I don't wanna call it engineering) into role, context, task, output, boundaries, reasoning. So now 85% of the company is certified in ProfAI, and what that means is that everybody has gone through, it's not hours and hours of time, but at least a couple hours' worth of time thinking about prompting in a different way.
Ooh, guess what? Now I can create a Gem. What do I do beyond ProfAI? Okay, well, now I can create a Gem where any idea you have, you give it to this Gem as a thought partner, and it will turn back to you a simple role-context-task-output-boundaries-reasoning prompt, or give you a more complex one with some variations, and then you compare the output.
Before, I just had this question; now I have this richer, deeper question with context and the output I want. And what are the boundaries? Just like any system: what are the boundaries, the guardrails, the constraints? What kind of reasoning do I want to apply? Is it more of a Socratic dialogue?
Is it something where I'm actually looking for a decision? Do I want open-ended questions or closed ones? Again, going back to the Matt Damon thing, that is definitely not the mean. Depending on how you converse with AI, you can get a much richer experience than perhaps you're used to.
So anyway, those are just some thoughts around the topic that you raised.
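The role / context / task / output / boundaries / reasoning structure Eric describes can be sketched as a small prompt builder. This is a hypothetical illustration, not Section's or Logitech's actual tooling; the helper name and the example field text are invented:

```python
# Minimal sketch of the role/context/task/output/boundaries/reasoning
# prompt structure described above (hypothetical helper, for illustration).

def build_prompt(role, context, task, output, boundaries, reasoning):
    """Assemble a structured prompt from the six fields."""
    sections = [
        ("Role", role),
        ("Context", context),
        ("Task", task),
        ("Output", output),
        ("Boundaries", boundaries),
        ("Reasoning", reasoning),
    ]
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections)

prompt = build_prompt(
    role="You are a seasoned product marketing strategist.",
    context="We are launching a budget webcam aimed at remote-first teams.",
    task="Draft three positioning statements for the launch page.",
    output="A numbered list, one sentence each, under 25 words.",
    boundaries="No competitor names; no unverifiable performance claims.",
    reasoning="Ask clarifying questions first, then use Socratic follow-ups.",
)
print(prompt)
```

The point of the structure is less the template itself than the habit: forcing yourself to state boundaries and reasoning turns a search-box query into a brief for a teammate.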
Galen Low: I really love all those. Breadth is really interesting to me. But I wondered if we could dive into the creation piece. Can we lift the lid? Can you talk about some of the methods and tactics that you and the teams have been using to encourage that creator mindset? It strikes me that you now kind of have to be a bit of a product designer, slash someone who's going to make a business case for impact, slash someone who's operating with a deeper understanding of the business and what people might need.
But then I was like, of course you can also use AI to help you think that through. What kind of barriers did you run into, and what tactics did you use to get people over that hump of not wanting to share the thing they created? Because they're like, I kind of just made this for myself; I don't know if anyone's going to use it. Or even leaders, the C-suite, asking them to share and be vulnerable and put it out there: maybe it's not the best custom GPT that's ever been built, but we all need to create. With that in mind, how do you get there?
Eric Porres: It's going to sound trite, but you get there through weekly demonstrations of value. For instance, when we started ProfAI training, ProfAI was really my calling card on the leadership team: having some conversation with the entirety of the leadership team weekly, and then breaking off into triage sessions. Hey, there's a country, whatever the country is, let's call it Taiwan, although Taiwan did a great job. Let's say Taiwan was a little light in terms of their ProfAI completion. Okay, now that's a targeted intervention with the country manager or an individual.
Hey team, let's look at where you are in relation to the rest of the world, the rest of the countries, the rest of the region. We have a competitive mindset, right? You don't want to be the one who gets called out. I certainly don't want to be the guy or the girl who gets called out by Hanneke in a global huddle: all right, how about these numbers?
So gentle cajoling through demonstrated excellence is probably the best. Again, human behavior change. In May, I started a volunteer organization, not voluntold, a volunteer organization of AI champions: 130, now 134 or 135, AI champions around the world, across every business function, across every major region in which we operate.
And with those champions, every week there's no shortage of things to do, no shortage of things to demonstrate. Every week I would publish something: here's a new Gem I created, here's a new custom assistant I created, here's a new custom assistant that someone in legal created. We have two folks on the legal team who created what they called Leo, right?
Give it a personality: Legal Expert Optimizer. It's not giving legal advice, okay, thank you. But it is giving legal value: when I need to think about an NDA, or if I've got contest terms, or whatever it is. And that was an example of two people who had zero formal coding experience.
I was able to work with them in a one-on-one session. Going back to what you said before, 2024 was the year of one-on-ones. While I had my day job, I was like, hey, I'm just going to start talking to people and training people in some way, shape, or form, because I had a slightly ahead-of-the-curve edge versus other folks.
So you have to show your work, right? Show your work, and show it in as many different places as possible. There's another AI best-practices group we have, a Google Chat space with 1,500 people around the company; that's a broadcast channel. Slack: 3,000-plus people in a Slack channel; that's a broadcast channel.
AI champions tend to get the goodies first. Hey, we've just created a Gem that does the following: give us your feedback, show us what you've done, how do we improve this? Oh, you want to help edit it? Great, fantastic, I'm making you an editor. Now it becomes a co-creation process. Going back to that, though, now that I think about it, probably the mindset shift, and this is still a work in progress, is:
I need to talk to someone in engineering, or the artist formerly known as IT. We merged organizations last year; now we have this one digital office of 800 or so people with these different capabilities: hardcore software engineering, IT, business analytics, et cetera. And I think the historical mindset has been, oh,
I'm just going to call somebody else to do this, right? And so what I push for with everyone is, you know, teach a man or woman to fish and they'll eat for a lifetime. No, let's sit together. We are going to collaborate on this. I am not the person who is going to create this for you. We are going to create it together.
And the more we create things together, even if they're imperfect: progress is better than perfection in the age of AI, because, like we talked about before, all the models keep changing, everything keeps moving. So you need to make progress even if it is incremental, and then there's also yet more to share, right?
Everyone who works in AI in some way, shape, or form goes through a J curve, where it's like, shit, this is really hard. Oh, this is taking me more time. Do I really need to do this? Because I can do it faster if I just do it myself. And that's the same mindset young managers have before they learn how to decide, delegate, and disappear. If you do everything yourself, you will never
bring other people into their maturity, so you need to be able to delegate tasks to others. The same is true in the age of AI. So I would say that how people operate with AI, whether they're leaders or laggards, is also a bellwether for how effective they are as managers. What do you need to do as a manager?
What do you need to do when you onboard a new person? You give them some training, some onboarding, and help them understand the ropes. Give them smaller tasks, give them some oversight, and then gradually release yourself from that oversight: yeah, I have to let this bird fly.
So I think about that a lot too. And look, I have a reasonable amount of professional experience; it's not my first rodeo in terms of managing people and managing change. But this one is important: we are going to do this together. I am not going to do this for you.
I will demonstrate things, and I will always be demonstrating on a weekly basis. I ran weekly office hours, I have weekly trainings, I have weekly sessions, and I'm always talking to someone, always talking to at least one or two or multiple people on the leadership team: what are your challenges?
So you have to have that kind of energy too. And again, it's not for everybody. So like if you're gonna take on this kind of role, you have to know that you work on, you know, AI time.
Galen Low: Yeah.
Eric Porres: Which is, again, exciting. And for someone like me, I'm quite certain I have ADHD in my profile, based on family history and other things.
It's great for me, because, as they say, I would always rather have a faster brain than a slower brain. And so I would always rather be working where the pace of change is fast, where you're only limited by your imagination in embracing the pace of change you're working with and through.
Galen Low: I love that it is frenetic change, right? It's not like the slow version of change management. It's on AI time, and that's a whole different thing.
Eric Porres: Well, it's frenetic and kinetic, right? Kinetic by way of examples: okay, here are examples. What can I learn from this example? How can I create a seed? A lot of what I did last year was creating these seeds, whether that be a template of system instructions and response guidelines for a given business use case, which then gets published and becomes immediately available to 130 champions, then available to 1,500 people. It's like, oh, okay, great, I want to build a set of system instructions myself.
Leo, for example, was built on the back of a set of system instructions and response guidelines that I originally worked on with our People and Culture organization, to create a People and Culture assistant to help manage some Q&A. And then, if you understand good prompting, right: role, context,
task, output, boundaries, reasoning. The role is: okay, you are now a legal expert. The context is: here are some really great system instructions and response guidelines that were created for a different purpose, but for this task I want you to work with me to think about how we convert this into something that's going to be net useful in a legal setting versus a people and culture setting.
The output is a set of system instructions. The boundaries are: okay, these are the boundaries of our system, what we have. And then reasoning: okay, make sure you ask me all the questions you need to, and keep challenging me; don't stop until we've made at least four iterations of this.
Then I can test and experiment with it. So again, that's a lot of information. We said we're going off script, or there's no script, so I'm doing this live, but these are some of the ways I think about how to get people on board.
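The conversion move Eric walks through, reusing one assistant's instructions as seed material for another domain, can be sketched as a meta-prompt template. This is a hypothetical illustration; the field wording, function name, and default iteration count are invented, not Logitech's actual artifact:

```python
# Hypothetical meta-prompt for adapting one assistant's system instructions
# to a new domain, following the role/context/task/output/boundaries/reasoning shape.
CONVERSION_TEMPLATE = """\
Role: You are an expert in {target_domain}.
Context: Below are system instructions written for a {source_domain} assistant:
{source_instructions}
Task: Work with me to adapt them into instructions for a {target_domain} assistant.
Output: A revised set of system instructions and response guidelines.
Boundaries: {boundaries}
Reasoning: Keep challenging me with questions; do not stop before {min_iterations} iterations.
"""

def make_conversion_prompt(source_domain, target_domain, source_instructions,
                           boundaries, min_iterations=4):
    # Fill the template; the result is pasted into whatever chat tool you use.
    return CONVERSION_TEMPLATE.format(
        source_domain=source_domain,
        target_domain=target_domain,
        source_instructions=source_instructions.strip(),
        boundaries=boundaries,
        min_iterations=min_iterations,
    )

print(make_conversion_prompt(
    source_domain="people and culture",
    target_domain="legal",
    source_instructions="Answer HR policy questions clearly and politely.",
    boundaries="Give legal value, never legal advice.",
))
```

The iteration floor in the Reasoning field is what turns a one-shot request into the back-and-forth dialogue Eric keeps emphasizing.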
Galen Low: I like the collaboration aspect, and I like the fact that everyone's taking these steps together, all at the same time. No one's running ahead while everyone else feels like they have to catch up. We're all stepping through it together. It is collaborative.
Eric Porres: Galen, one thing on that: it's okay to be a little bit ahead. Some of the things I demonstrate are ahead of the average, but not so far out there that it's like, oh boy, I could never do that. A year ago it would've been like that.
Galen Low: Just, yeah, okay.
Eric Porres: A year ago it would've been, oh, there's no way I could do that. And now, through training, repetition, consistency, intensity, et cetera, people are able to move more quickly. The learning curve has that J shape, right?
With that exponential part over time, we've shortened the distance in going from ideation to experimentation to, hopefully, one day, mastery.
Galen Low: Maybe we can take it there. Looking ahead, you've got all these folks creating assistants, Gems, GPTs, what have you. How are people managing the information overwhelm?
Is there also the same pressure to be aware of what assistants exist in the ecosphere within the organization, to test out and try and build on top of? Is there a bit of overwhelm there? And then also, will they all continue to exist, and will experimentation continue? Or does it exit this quote-unquote honeymoon period,
where now we've got to double down and build scalable software in the areas that are having the most impact, and maybe leave some of these assistants behind, because it was great to get everyone learning, but they're not going to be the tool we use going forward?
Eric Porres: I wish I had a perfect answer for you on that. You made a good point before, which is, are we all product managers now? Or, as Reid Hoffman said in that interview, I think it was with the Replit CEO, are we all gamers now? I do believe this, that we are in this world of Minecraft, and my kids, more so than I, play Minecraft.
It's like, oh, you don't have a tool? Just invent it. Great, now I have the tool I need to build the following thing. However, and this is especially the case for younger companies, for startups: startups are great at building; they're not necessarily so great at maintaining. And so you have to find a rhythm, because there's always that squirrel, that bright, shiny new thing.
It's great to keep building something new, but your customers bought the thing you sold them three months ago, which was based on somebody's product roadmap from six months ago, and they're actually getting a lot of value out of that too. So I do think the skills of project management and product management are more important than ever: being mindful of the sunrise and sunset of the product lifecycle.
Also, therefore, what do I need to be mindful of as a creator? When should I look at my custom instructions? As an example: I work in ChatGPT, I work in Claude, I work in LogiQ, I work in Gemini. I use SimTheory from the This Day in AI guys, which I think is fantastic.
I use NotebookLM; I occasionally use Perplexity. I use all of these systems because I have to, or I can't not. But in ChatGPT, as an example, where I've arguably had more conversations cumulatively over the last three years than in any of those other systems, I have created a task that reminds me every month: hey, based on the conversations we've had over the last 30 days,
how should I be evolving my system instructions, my custom instructions rather, such that I can get more meaningful output from you? You've now seen me communicate with you over the last 30 days; what can I do to make it better? Better, faster, smarter, more comprehensive, less comprehensive, more nuanced, less nuanced, what have you.
And now most of the major platforms and systems, right, Gemini, Claude, GPT, do these little helpful reminders that are, again, continuing to collaborate with you for continuous improvement. So I think that's one way of managing the chaos: you don't necessarily throw out what you did before. But it also reminds me of Jack Dorsey, founder of Square and Twitter.
There was an interview he did a number of years ago where he said, I have said no to more things than I've said yes to, and I've left more things on the cutting room floor than what actually appeared in product. And it's okay to leave things on the cutting room floor, because, as the old saying goes, happy customers tell three people, angry customers tell 3,000.
If there's something someone is relying on that you're responsible for, you're going to hear about it. So I think that's a really important point too. Don't get too caught up in, oh my God, I have to make sure all these things are up in the air, spinning at all times.
If something you've created is really important to someone, again, working on behalf of somebody else, you're going to hear about it, and then you're going to collaborate with that group, individual, or team to upkeep it, maintain it, make it better, improve it.
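The monthly "review my custom instructions" task Eric describes amounts to a recurring prompt over recent conversation history. A minimal sketch, assuming you keep your own list of conversation summaries; the function name and wording are invented, and the actual scheduling is left to whatever task or reminder feature your chat platform provides:

```python
from datetime import date, timedelta

def monthly_review_prompt(conversation_summaries, days=30):
    """Build the recurring 'evolve my custom instructions' prompt."""
    since = date.today() - timedelta(days=days)
    bullets = "\n".join(f"- {s}" for s in conversation_summaries)
    return (
        f"Based on the conversations we've had since {since.isoformat()}:\n"
        f"{bullets}\n\n"
        "How should I evolve my custom instructions so that your output becomes "
        "better, faster, smarter, and more appropriately nuanced? "
        "Propose concrete edits, each with a one-line rationale."
    )

print(monthly_review_prompt([
    "Drafted launch copy for a webcam line",
    "Debugged a Sheets automation",
]))
```

The design choice worth copying is the cadence: the prompt asks for concrete edits with rationales, so each month's output can be pasted straight back into your custom instructions.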
Galen Low: I love that. I love the Minecraft reference, and I'm picturing that scene in the first Lego movie where he builds a triple-bunk couch or something. Just because you can doesn't mean you should; or maybe you should, because you're using that muscle. And you can use AI as that maintenance team, to help you make decisions around hygiene, whether you should keep it or kill it. I like the idea that it's okay to create these things even if they're not going to be the perfect be-all, end-all tool that saves the company.
Eric Porres: Because you're going to learn something from the creation process that you can then apply in the future. I do this all the time. I take little bits of things: oh right, what was that project? Sometimes I like to use AI to remind myself: hey, there was that project we did three months ago, I can't quite remember. Oh right, it was that piece of code sitting in a repository somewhere. What can I learn from that?
That then gives context, whether to an agent swarm or just to Gemini thinking through something, whatever the case may be. It's that little kernel of something you can then explain, right?
What is explainable AI, or what are explainable tasks? It can only inform and infuse more context into the conversation, which then creates a better output than you had before.
Galen Low: I actually really like that point, and what you said earlier about ADHD. I feel like I need to be so organized to keep all this stuff straight as I experiment and start using AI, to have this methodology. But actually, it seems to be okay to just kind of zip around, because AI can also help you keep some of these things straight and keep you organized. You may not need to get stuck on a naming convention and a folder structure for all these things. You can just go and cook, you know.
Eric Porres: Once I got over the Outlook-to-Gmail hump, you know, ten years ago, where suddenly, oh, you mean I don't have to create folders?
Galen Low: Yeah, exactly.
Eric Porres: Everything is available at all times. And then, oh wow, that is super helpful for me. Now, having said that, I still have certain things that I do. I create a notebook every week: okay, here's interesting information I came across this week that I don't have time to read.
I'm going to drop it into a notebook, and I'm going to create myself a short audio podcast I can listen to while I'm running or whatever it is, again, to try to maintain my own sense of what's new or not, what's coming or not. Spicy take: I think Facebook overpaid for Menace.
Galen Low: You heard it first here.
You're not kidding about operating in AI time. It's all available to us; we just need to find a way to harness it. I love that notebook idea; totally stealing it. We've kind of touched on this, but I wanted to look a bit into the future. There's been this underlying thread in the whole conversation about skills.
We've been talking about managers, how to scale, how to not just do everything yourself, how to get past that friction point in the J curve. I almost want to look into the future a bit, and for my audience, I'm going to exploit the fact that you were a producer in your early agency days, which us digital folks know just means project manager.
Eric Porres: It was cool to be called a producer back in the nineties.
Galen Low: Yeah, it was a less stiff title, but then we realized it didn't really describe anything. Folks who still have that title, it's okay; we feel you.
Eric Porres: Code Ninja versus Actual Ninja. I studied Ninjitsu for 20 years.
Galen Low: We'll cover Code Ninja in another podcast. We had hilarious titles on business cards back in the day, which is actually probably a good springboard, because on a long enough timeline, titles stop meaning things; it's going to be about skills. If we fast forward a few years and find ourselves managing people as well as managing AI, and maybe even managing our own digital twins, what are the key skills that will matter most for project managers, people managers, and, I guess, professionals in general? What should we be focusing on to stay relevant?
Eric Porres: I referenced it before, and I say it in jest, but I do mean it: decide, delegate, and disappear. And disappear doesn't mean, all right, I'm suddenly off to the next thing. Disappear means you just get out of the way.
You need to get out of the way of your own sense of ownership. You can have mastery without ownership, and to build great teams, you're rarely a team of one. Now you can potentially be a team of thousands. Of course, that has its own headaches, because you're like, well, shit, suddenly instead of 500 lines of code, I have 5 million lines of code. How am I ever going to get through that? Well, now I need to spawn another agent swarm, whether it be from Reuven Cohen's work, which I also highly recommend for those who aren't followers of rUv; he's done some pretty extraordinary work.
Another Canadian-based tech wizard. So you do need to be mindful of agent sprawl, just as you have to be mindful of human sprawl. But great teams create solutions, and I think the most important learning is that you need to bring people with you,
not just around you. Surround yourself with experts, okay, that's fine. But, and maybe this gets to part of your question, I look for people who don't necessarily have a T-shaped experience but more of an M-shaped experience: you have broad shoulders, but you also have certain areas of subject-matter depth.
It's not just one domain-specific thing that you do, and that's all you do, and you know how to do that, and that's amazing. Great, but I'm actually looking for someone with more shoulder experience. And this is something for younger people, and something I've been talking about more with schools now: you need to make sure that kids know how to use AI,
period, slash know how to use it responsibly. Because there are certain schools, and I could talk about this another time, many schools, that have done a great disservice by banning AI. It's just like the shadow IT world at work: ban this? Okay, well, I'm going to find a way around it.
Galen Low: Right, yeah. We're still gonna use it.
Eric Porres: Exactly, they're still going to use it, and they may use it way more irresponsibly, as a rebellion against the "you can't do X." The flip side is that you're then not training the next generation to be responsible users, and innovators and creators,
with this collaboration capability that didn't exist four years ago. I mean, AI has existed since the fifties, but as we think about it now, it's still something very new. So it's a cautionary tale: if you're a parent out there, make sure your kids are embracing AI in a way you would want to see it modeled. Even if, yeah,
okay, kids are going to cheat, right? We all cheat. We cheat in games; we look for cheat codes, shortcuts. How do you work smarter, not harder? And so then we could talk about, well, what's the fundamental kernel of knowledge you need to have? Again, curiosity and a beginner's mindset. You don't know everything.
You especially don't know everything in the age of AI. So stay curious, but also be able to compartmentalize your work, delegate it, get out of the way of it, and see what happens. That's truly, I think, the best collaboration: mentorship and apprenticeship.
You're mentoring AI as much as AI is apprenticing with you. And the more you have that apprentice model, then as agents become increasingly capable, the more work you can confidently give an agent to do, maybe for you, not just with you. And that's okay, and I'll take it.
And that is okay. Ethan Mollick has talked about this too, and others, Conor Grennan, Allie Miller: the jagged frontier still exists. A lot of the research papers that have come out, if you look at arXiv and elsewhere, are based on models like 4.0 and 4.1, models that existed a year ago, and they're still finding extraordinary
capabilities. And it's like, oh, we want the next model. Okay, you're going to model-chase yourself to death. Use what you have, and you'd be surprised by the results you can achieve.
Galen Low: That's a human condition, right? Always chasing a shiny object without recognizing the value of what you have. But it kind of all starts with curiosity, doesn't it?
This bold curiosity; that's how you get there to begin with, and that's how you end up in a position where you can delegate and get out of the way, right? And maybe we've been builders all along. This is not necessarily unique to AI; it just gives us a capability we didn't have before to scale something.
Eric Porres: Well, right. And conjure, right? Ethan Mollick mentioned this last week: a funny thing happened on the way to the forum. Weird things have happened over the last month and a half, where you can increasingly give pretty short prompts and get pretty extraordinary outputs, this N-of-one software.
If it solves a problem for you, where you can reduce a tiny friction by 10 or 15 or 20% and get back that time to think more curiously, more creatively, more broadly about other things? Worth it. I'd pay that every day through Sunday.
Galen Low: I love that. Yeah. It doesn't all have to be wholesale mega ROI. The increments are just as valuable.
Eric Porres: We can talk about micro ROI, because we talk about atomic habits, right? Atomic Habits, one of the best-selling books of the last 20 years, James Clear. Hopefully you've read it. The book opens with exactly that: what's the difference? A 1% improvement every day for a year is a hell of a lot better, right,
compounded day after day. And so that micro ROI in the short term becomes macro ROI in the long term, even if you don't see it in front of you right away.
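The Atomic Habits arithmetic behind that point is easy to check: a 1% daily improvement compounds to roughly 37x over a year, while a 1% daily decline decays toward zero.

```python
# Compounding 1% daily gains vs. 1% daily losses over a year (365 days).
gain = 1.01 ** 365   # roughly 37.8x
loss = 0.99 ** 365   # roughly 0.03x
print(f"1% better every day for a year: {gain:.2f}x")
print(f"1% worse every day for a year:  {loss:.2f}x")
```

The daily deltas are invisible in the moment, which is exactly why the micro ROI only shows up as macro ROI over the long term.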
Galen Low: Boom. Love it. It's probably a good place to leave it. Eric, thank you so much for joining me today. I appreciate your insights. Always love chatting with you.
For folks who enjoyed this, where can they go and find out more about you?
Eric Porres: Yeah, you can find me on LinkedIn, of course, and I started a Substack last year that I need to get back to. I was traveling a lot in the month of January, so I need to get back on the stick and really force myself to do it. The name of the Substack is Beyond Reason, and it's at promptedbyeric.substack.com.
I'll riff on a bunch of different things I find interesting, generally related to AI in some capacity: what is agentic commerce; now that we've got all this time back on our hands, what do we do with it; what are these tiny frictions; the death of dashboards; and what it actually means when you can build solutions and conversational capabilities you didn't have before.
Anyway, there's some fun stuff there that I hope people will find value in. Also, how I stopped worrying about hallucinations: hallucinations can be a good thing depending on how you use them, and that goes back to the creativity question we talked about before. So that's a good one too.
Galen Low: I'll include those links in the show notes for folks who are interested. I'm going to go subscribe right now. Eric, thank you so much again for coming on the show.
Eric Porres: Galen, thank you very much for having me. I really appreciate the opportunity, and, you know, great questions. I loved how you set it up: a nice jazzy riff that we've been doing for about an hour.
Good time. Thank you for having me.
Galen Low: Alright folks, that's it for today's episode of The Digital Project Manager podcast. If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies, playbooks, create a free account with us at thedigitalprojectmanager.com.
Until next time, thanks for listening.
