Most AI tools weren’t designed with kids in mind—but kids are using them anyway. That tension sits at the heart of this conversation with Aderonke Akinbola, where the question isn’t if we should build AI for children, but how we do it responsibly. From digital playgrounds that shape behavior to the long-term implications of data exposure, this episode explores why the stakes are fundamentally different for younger users—and why product teams can’t afford to treat child safety as an afterthought.
Galen and Ade dig into what it really means to design AI experiences that protect, educate, and develop young users. They unpack practical ways teams can introduce ethical friction, rethink data handling, and advocate for safer systems—while also looking ahead to a future where AI itself may act as a guardian for children navigating an increasingly intelligent digital world.
What You’ll Learn
- Why the conversation has shifted from whether to build AI for kids to how to do it safely
- How children’s cognitive and emotional development changes their interaction with AI
- The concept of AI as a “digital playground” and what safe design looks like in that context
- Why current AI systems prioritize efficiency over development—and why that matters
- The role of advocacy, policy, and cross-functional collaboration in shaping safer AI
- How future AI systems could actively protect and guide younger users
Key Takeaways
- Kids will use AI—whether we design for them or not. Avoiding the problem doesn’t reduce risk; it shifts children into unsafe, adult-oriented systems.
- AI isn’t just a tool—it’s an influence layer. Children may trust and bond with AI in ways that make them more vulnerable to persuasion and misinformation.
- Data permanence hits differently for kids. What’s a minor interaction today could become a long-term digital footprint that shapes future opportunities.
- Too much convenience can undermine learning. Removing all friction can erode critical thinking—AI should sometimes challenge, not just assist.
- “Desirable friction” is a design tool. Like leveling in a game, introducing the right amount of challenge helps build skills rather than bypass them.
- Explainability supports development. AI should show its reasoning, not just deliver answers—mirroring how humans actually learn.
- Privacy should default to protection. Keeping children’s data local (e.g., via federated learning) reduces long-term risk.
- Advocacy matters—even when it doesn’t land immediately. Cultural and organizational change starts with repeated conversations, not one winning argument.
- We’re in a race—and safety isn’t leading. Current AI development is driven by capability and market share, not child protection.
- AI could become part of the solution. Concepts like a “digital guardian” point to systems that actively monitor, guide, and protect young users.
Chapters
- 00:00 — Who’s Responsible for AI Safety?
- 03:36 — Should Kids Use AI?
- 05:24 — AI as a Playground
- 07:54 — Bringing Experts In
- 10:41 — Are Companies Doing Enough?
- 13:37 — Why Stakes Are Higher
- 19:57 — Designing Safer AI
- 24:43 — Advocating Internally
- 27:42 — Lessons from Social Media
- 31:09 — The Digital Guardian
- 35:00 — How Close Are We?
- 36:49 — Building for the Future
Meet Our Guest

Aderonke Akinbola is a Technical Program Manager at Google, where she leads large-scale, cross-functional technology initiatives across the Americas, guiding complex infrastructure and AI-driven projects from strategy through execution. With a background that includes roles at Apple and American Family Insurance, she brings deep expertise in systems thinking, risk management, and navigating complex technical dependencies. A recognized voice in the AI and technology community, Aderonke speaks on topics such as AI risk, cybersecurity, and the future of work, and is passionate about mentoring the next generation of STEM leaders while advancing human-centered innovation in the age of AI.
Resources from this episode:
- Join the Digital Project Manager Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Aderonke on LinkedIn
Galen Low: Most AI solutions aren't made for children, but that doesn't mean that children aren't using them. Which raises the question: who is responsible for children's safety when it comes to AI-powered experiences? Is it the parents? Is it the teams designing and building AI experiences? Is it the AI companies themselves?
And what are some ways that we can lead our teams to ask the right ethical questions and fly the flag for building proper safeguards into the experiences we create? To explore that topic, I've brought in a Technical Program Manager from Google, who also happens to be a strong advocate for child safety and cybersecurity in AI and tech.
Together we're gonna be diving into how the stakes are higher for younger AI users than they are for adults, what we can do to advocate and engineer for child safety in our AI projects and products, and how AI might itself become an advocate for children's safety as the pace of technology starts to outstrip our ability as humans to create meaningful policies, legislation, and education around children's interactions with AI. Hope you enjoy the episode.
Welcome to The Digital Project Manager Podcast—the show that helps delivery leaders work smarter, deliver smoother, and lead their teams with confidence in the age of AI. I'm Galen, and every week we dive into real world strategies, emerging trends, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it.
Okay, today we're talking about AI in the context of children's safety and how project and product teams can ask the right questions to create AI-powered experiences that protect vulnerable users.
With me today is Aderonke Akinbola—a speaker, an advocate for child safety in AI ecosystems, and a technical program manager at Google. Ade has over three years of experience in product development and program management, as well as experience as a security engineer at Apple. She also recently spoke at the AI Agent Security Summit on securing AI agents in child-centric ecosystems.
Ade, thank you so much for joining me today.
Aderonke Akinbola: Hi Galen, I'm really happy to be here.
Galen Low: I'm excited to have you. I've been looking forward to this. I've liked our conversations leading up to this. I'm really excited to dig in, so I hope that we go on all sorts of tangents together, but just in case, here's the roadmap that I've sketched out for us today.
To start us off, I just wanted to set the stage by hitting you with like a big hairy question that my listeners want your take on. Then I'd like to zoom out from that and talk about maybe just three things. Firstly, I wanted to talk about what makes a child's experience with AI so different from an adult's, and what makes an AI experience safe for under-18s now and in the future.
Then I'd like to get practical and talk about how project and product teams developing AI solutions can ask the right ethical questions and design an experience that educates, explains and protects younger users. Lastly, I'd just like to get your take on what the future looks like for AI agents and future generations if we can get things right when it comes to ethics, security, and safety.
How does that sound to you?
Aderonke Akinbola: That's perfect. That's what we're here for. Let's dive right in.
Galen Low: Let's dive right in. Awesome. I love it. Okay. I thought I'd start off with like one big hairy question. I'll take a running start at it. Over the past few months, the issue of children's safety and social media has been in the headlines.
Australia recently introduced a social media ban for its under-16s, and the UK and other countries are contemplating doing something similar. Obviously, social media and AI are not one and the same, but a lot of the same questions of ethics and safety still apply when it comes to young people using AI.
So my big question is this: in your opinion, based on your experience in security and product development at some of the biggest and most AI-forward technology companies in the world, should we even be building AI solutions for children under the age of 18?
Aderonke Akinbola: I think this is the million-dollar question, Galen. But firstly, my stance is that, you know, we're past the point of if we should be building AI systems for them.
We're more centered on how to build these systems for them. The reality of the situation is that the playmate is already here, you know. So if we choose not to build specific, secure, child-centric AI, kids will never stop using the technology. It's there and it's available to them.
They'll just use the black box adult AI, and that is very dangerous, because adult AI is built for efficiency. Our children, however, need tools for development. So if we're not building the right version, we're essentially leaving them in a digital wilderness without a compass. I think we're no longer at the point of if we should be building for them.
The conversation is now geared towards how we should build for them.
Galen Low: I think it's a really good point about the goals of the technology: from an adult perspective, it's different from how we treat it for children. And as we know from social media, even us saying, oh yeah, minors shouldn't be using this, isn't gonna stop them from using it.
And so I agree with you. I think Pandora's out of its box, and now it's up to us to create safe environments, even if the intended user is not necessarily someone under the age of 18 or a child. I wonder if maybe we can zoom out a bit from that, because I had a chance to catch the recording of your talk at the AI Agent Security Summit, and just leading up to that, you had posted something on LinkedIn where you said: children's playgrounds won't just be physical, they'll be digital, intelligent and unpredictable.
And I thought I'd ask, what do you mean by that? And at what point do we need to be planning for that in what we're building today?
Aderonke Akinbola: Okay, that's another good question, Galen. So just think about a physical playground, right? A lot of them now have soft flooring and rounded edges. AI is also a playground like that.
However, this is a different type of playground. This is a playground that talks back, you know, and shapes thoughts. And children attribute feelings to it; some kids even think the AI agents are real. They fall into what's like a pit, where they're so convinced that the AI is a real friend.
They become uniquely vulnerable to social engineering. Now, how do we plan for this? This is not something we do in the QA phase, right? We need the shift-left approach, and not just on security. This is also something where we need child psychologists in on this. We need experts like this at the architectural level, during the model training phase, asking questions like: is this model optimized to make a sale, or is this model optimized to support human development? So I think that if we change that trajectory, going outside of the QA phase into the actual building of the model, then we're actually thinking about them.
You know, we're actually creating a digital playground that is more child-centric, and also thinking about protecting those kids.
Galen Low: I like the metaphor of the padded floor and the rounded edges, and not even necessarily for playgrounds, but even just to make things that are not designed for children safe.
I know when my son was young, all of that childproofing we had to put around the house was a whole different level of thing, not because these things were intended for a child to get into, but because we knew that they would. And we didn't want them to hurt themselves, and we wanted them to develop in a healthy way.
I like what you're saying, and I know you're quite big on cybersecurity as well, another thing that can't be an afterthought. And for my listeners: are you proposing that a child psychologist should be involved at a project team level or at a product development level?
Even if, as in your example, the goal of the product is to make money and just sell, it's a sales tool, would you still say that a child psychologist, or someone who is looking out for the interests of a minor, should be involved in product development and project planning, even if it's not the main goal or requirement of that particular product?
Aderonke Akinbola: I think that even if we're not bringing them on as full-time employees, we could have them as consultants. Because now I just think about the GPTs and the Geminis. We may have said we originally created this technology just for, you know, adults, to help them work faster, but this technology has gone way beyond that point.
We're in an age where kids have iPads now. We're literally raising a whole generation of iPad kids. How do we stop iPad kids from going on Gemini?
Galen Low: I can tell you from experience, because my son has been very good and very responsible with technology, we've kind of given him, you know, access to a device.
And I mean, especially from a Google perspective, the AI mode is in the search. So it's not even that I was like, hey, here's Gemini, let me teach you how to use it. It was that he was just searching and, you know, now he's having a conversation.
Yes, definitely prevalent, definitely applicable.
Aderonke Akinbola: I happen to also live with a toddler, and he can say, Hey Siri.
Galen Low: Oh yeah. It's incredible, isn't it?
Aderonke Akinbola: Yeah, 'cause he hears me say that all the time. And now, how does Siri respond to, you know, a "Hey Siri" from a child? There are endless possibilities for what the child can ask Siri. So I think, just leading up to the conversation, even if we're not having them as full-time staff, let's start these conversations so that we can start building responsibly.
I don't expect this to happen maybe in the next two years or the next five years. We're literally in an AI boom; they're bringing updates every week. Everybody has an update, you know. So I'm thinking, just start these conversations and let us begin to have these people at the table.
It may not get an immediate response, but let it at least be a sensitization and an awareness of this need, and then let's see how it goes from there.
Galen Low: I love that. And I think the Siri example is such a good one, because it's not like we handed a child a device and were intentional about it.
Literally, they could just speak and access this technology and interact with it. And you got me thinking about the thing you said: maybe it's not a full-time product or project team member, maybe they're coming in as a consultant. And I think there are actually parallels with cybersecurity, because cybersecurity isn't planning for your primary audience and the good actors.
It's actually planning for the bad actors and the things that might happen, either maliciously or accidentally. You have to look at these edge cases and fringe cases. And you raise a good point about all the updates. We are definitely in an AI boom; the models are changing all the time.
So, a spicy question: do you feel like the AI companies are doing enough to roll out updates that anticipate that their platforms, that their models, are gonna be used by under-18s, by children?
Aderonke Akinbola: I don't think that's what their focus is on right now. I think that right now there are endless possibilities for how AI can be used, in terms of what AI can do for you.
And I think that is what a lot of companies are exploring right now: how can we harness all of the potential that these AI tools can give us? How can we make our models smarter? How can we make our models more intelligent? How can we train models to solve problems quicker in that very high-paced environment?
I don't think there's a lot of advocacy going on for kids, because right now everybody is trying to get big into the market. So I think it's mostly about going into the market rather than looking out for these kids, which is one of the reasons why I took a personal interest in going down this niche of cybersecurity in child-centric AI spaces.
I think a lot of the preliminary stages actually deal with the parents rather than the people building this technology. So that's just what I would say on that.
Galen Low: It's really interesting. I agree with you. You used the words AI boom, and it's easy to also think of it as an AI race.
And when you think of it as a competitive race, to use an analogy, they're all trying to build the fastest car, and at the present time, the seat belts can come later. It's like, how do we make an engine? How do we build suspension? How do we build a vehicle that can go really fast? And yeah, maybe brakes next, but the cushiony bucket seats, the clips for a car seat, those seem like they'll come later and aren't seen as the main goal, the main driver for the technology. I think it's interesting what you're saying about where the responsibility lies, and if I'm picking up what you're putting down: yes, it may not be the current focus of the big AI companies.
It may become their focus later, once they've gotten deeper into their race and it becomes important. I think there is a responsibility for the teams building these products on top of the models, on top of the LLMs. And then there is the parent level of things in terms of oversight.
In other words, there's no one group here that's responsible or accountable for a child's experience with the technology; we all kind of have to do our part. I want to come back to something you said, when we were talking about how the experience with AI is different for a child. And don't get me wrong, there are adults as well who are forming deep bonds and relationships with chatbots and agents. But just from a child's perspective, I guess: are the stakes simply higher when it comes to under-18s using AI? What makes a child's experience with AI different from an adult's experience with AI?
Aderonke Akinbola: For kids, the stakes are exponentially higher. For an adult, a data breach means a new credit card, you know, or just a new card in general.
Or you could just get an email: oh, we had a data breach, but maybe your information was safe. For a child, that is different. For a child, AI data is a permanent digital fingerprint. It's a record of childhood curiosity and personality, and that could influence their college admissions or job prospects 20 years from now: something I asked as a kid, just trying to learn, trying things out with AI. In my talk at Zenity's AI Agent Security Summit, I talk about the three shadows that follow children in AI. The first is the shadow of influence. That just means that AI can be more persuasive to a child than a parent or a teacher.
Because kids spend so much time with these tools, and kids are blank slates. They take in everything. They're like sponges that just absorb, right? So they take in everything, and a child can come to believe an agent more than a teacher or a parent. I also talk about the shadow of exposure.
This is the permanent data trail that I mentioned earlier. And then lastly, the shadow of stagnation. This is a big one for project managers, because if we build AI tools that are too helpful, where the tools are doing all the thinking, we are eroding the child's critical thinking and their resilience to: okay, I didn't get it the first time.
That's part of the learning process: you don't get it once, you try three or four times, and then you eventually get it. It's the same way we all learn. It's like the light bulb: you try 99 times and get it the last time. So if we don't look at this, we're essentially automating away the messy process of learning.
Learning is messy. Learning is not fun. I don't want to do the same thing five times, but I have to, because I know that if I keep doing it, I'm only finding the five million ways it's not going to work until I find the one way that works. The mistakes are also part of the process.
So I feel like with kids, the stakes are so much higher. They're being molded into the people they're going to become. They sometimes need to make mistakes; it's part of it. And the models have to also train them in a way that says: okay, if you're spending too much time with me, go outside and do another activity.
Go talk to an adult. We lead them away, rather than being so engaging that they're glued to talking to something that doesn't even exist. It's a computer, you know? Well, thank you for that question, Galen.
Galen Low: That's what I love about it. It's really interesting, because from a product design standpoint, from a user interface design standpoint, we're trying to reduce friction.
We're trying to make it as easy as possible, but you raise a massively good point about making it too easy at a certain stage of development. Like you said, we're creating these solutions for adults who are trying to be more efficient, because we're trying to get more done in a workday.
Or in our day, period. Versus a child who's actually developing skills, who is learning to think critically about things, who is looking for role models and figures of authority they can trust. And that's way different from the way an adult uses it. It's interesting, this idea, and we see it in other platforms too:
introducing those controls and those nudges, right? Like my watch that tells me to stand up and walk about, or the screen time features limiting my screen time on my phone. It's sort of being built into the technology. I love the idea that the power of AI could be more responsible and a little less addictive,
just by having some of those things say: hey, listen, you're using this technology a lot and you trust me, but also go outside, touch grass, and come back in a bit. I think it's a really neat thing to build in, and it's arguably low effort. Maybe not low effort to convince the powers that be that we should add this feature to the backlog,
but I like that there are some ways we can account for this, because we know it is going to happen.
Aderonke Akinbola: Yeah. I believe that in the coming years we're gonna have a lot more research going into this, because right now there is some research going into it, but I think we still need more: researching the effect of all these tools on developing kids under 18.
We haven't even gone that far into AI, because the AI boom happened, what, literally around COVID, right? And we're technically still in the post-COVID era; I don't think the whole world has recovered. Most of the kids that were born during COVID are turning six this year.
So I think we still have a long way to go in terms of research, just researching the effects of these things and how we can build better systems. But until we have that, somebody needs to be a voice advocating: okay, we need to pay attention to this. We should not overlook this, because this is a thing.
Galen Low: I love that. Research takes time, but in the meantime we can't be reckless. We need to actually have the conversations, we need to advocate for younger users, more vulnerable users, until we know and can say more or less definitively, with science, what the impact is and how we need to engineer some of these things.
I like your point about education and learning, and I know you're a huge proponent of AI education, of explainable AI, and of having tools that help users think critically rather than just be passive consumers of a technology. But you're also a strong proponent of including ethical, safety, and security considerations when designing and engineering AI-powered experiences.
So in addition to big AI, and the role of parents in all of this, I wondered if we could zoom into the teams building some of these things. For my audience specifically: what can teams be doing to engineer safety and security precautions into the AI solutions that we're creating, whether that's a full-blown consumer product or a vibe-coded app?
What conversations should we be having? What questions should we be asking one another to advocate for the more vulnerable, younger user in the things that we're building?
Aderonke Akinbola: Okay, so I think we just have to move from a black box model to a building block model, right? For PMs, this means a few things.
Firstly, desirable friction. A lot of times we want to take away friction, right? We want to take away that rubbing against each other. But for kids, we need to add to it. If a child is just breezing through a game, going through all the levels quickly, there's no challenge there.
So I think the AI should act as a cognitive load balancer, where it introduces a trickier problem to make them think. Not necessarily a harder problem, just something trickier, to make them think more.
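For readers who want to see the idea in code, here is a minimal sketch of a cognitive load balancer, assuming a simple success-rate heuristic; the function name, thresholds, and leveling rule are illustrative, not from any real product:

```python
# A minimal sketch of "desirable friction": pick the next challenge level
# from how easily a child solved recent problems. All names and thresholds
# here are illustrative assumptions, not a real product API.

def next_difficulty(recent_results: list[bool], current_level: int) -> int:
    """Return the difficulty for the next problem.

    recent_results: True/False outcomes of the child's last few attempts.
    """
    if not recent_results:
        return current_level
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > 0.9:
        # Breezing through: introduce a trickier problem so the child
        # has to think, rather than skipping the effort entirely.
        return current_level + 1
    if success_rate < 0.4:
        # Struggling: ease off so frustration doesn't end the session.
        return max(1, current_level - 1)
    # The productive middle: enough friction to build skill.
    return current_level

# A child who solved 9 of their last 10 problems gets leveled up.
print(next_difficulty([True] * 9 + [False], current_level=3))  # -> 4
```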
Something else I talked about in my talk was XAI: explainable AI that doesn't just give them an output, but instead explains the process the AI went through to get that output. So instead of just saying, watch this video, we have a scenario where it says: since you liked planets, let me give you this video about stars. It's teaching them algorithmic literacy, breaking things down for them, not just spitting out answers but walking them through the process, because that's how we learn.
That's how the learning process works: you break a big problem into little bits and figure it out. You don't just land on a big solution.
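As a toy illustration of that "answer plus reasoning" shape, here is a sketch where the explanation travels with the recommendation; the topic map and function are invented for this example:

```python
# A sketch of explainable output for a child-facing recommender: instead of
# just "watch this video," the reasoning is part of the response. The data
# and names below are invented for illustration.

RELATED_TOPICS = {"planets": "stars", "dinosaurs": "fossils"}

def recommend_with_explanation(liked_topic: str) -> dict:
    next_topic = RELATED_TOPICS.get(liked_topic)
    if next_topic is None:
        return {"video": None, "because": "I don't know a good follow-up yet."}
    return {
        "video": f"Intro to {next_topic}",
        # The explanation is a first-class part of the output; this is
        # what teaches algorithmic literacy rather than hiding the logic.
        "because": f"You liked videos about {liked_topic}, and {next_topic} "
                   f"are closely related, so this seemed like a good next step.",
    }

print(recommend_with_explanation("planets"))
```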
You keep all the learning locally on the device. That means that data never leaves their bedroom. Whatever they're querying or whatever prompts they're asking. It ends on the local device and it ends there. We're not sending out those, you know, prompts outside to the server that is keeping it for how many thousand years?
No. It's just ending there for children systems that way. Right now having, I asked child GBT when I was three years old affects me when I'm trying to get to college. I think that's just setting them up for failure. 'cause children by nature, they're very inquisitive. They ask questions. Sometimes we think they're, you know, stupid questions, even though they're really no stupid questions.
But imagine like having these kids have this, you know, on their record for so long, I don't think that they need that. So that's what I think.
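For the curious, here is a minimal sketch of the federated learning idea Ade describes: the raw examples stay on the device, and only an aggregate model update travels. The toy single-weight "model" and every name below are assumptions for illustration, not a real federated learning library:

```python
# Toy federated learning: the child's raw data never leaves the device;
# only a model update (a delta) is shared. A real system would use
# federated averaging over neural network weights; this stand-in "model"
# is a single number so the data flow is easy to see.

def train_locally(global_weight: float, local_examples: list[float]) -> float:
    """Train on-device and return only the weight update (delta)."""
    # The raw examples are used here and then discarded, never transmitted.
    local_weight = sum(local_examples) / len(local_examples)
    return local_weight - global_weight  # only this delta leaves the device

def server_aggregate(global_weight: float, deltas: list[float]) -> float:
    """The server sees only deltas, never the underlying child data."""
    return global_weight + sum(deltas) / len(deltas)

w = 0.0
d1 = train_locally(w, [1.0, 2.0, 3.0])  # child A's data stays on device A
d2 = train_locally(w, [2.0, 4.0])       # child B's data stays on device B
print(server_aggregate(w, [d1, d2]))    # shared model improves: 2.5
```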
Galen Low: You know, like your shadow of exposure, when you're talking about the permanence of it all: as a parent, that's really scary. Even in our legal system, in our judicial system, we have grace built into it, whether it's misdemeanors that won't make it onto your permanent record if you're under a certain age.
And not because we want children to be reckless, but because they're learning and we want them to be curious. We want to teach, not necessarily punish. I think the last thing we should want is for young people to be scared to be curious about the technology, because that's how they're gonna build fluency and that's how they're gonna build that skill.
And avoiding it isn't gonna make it go away when they're an adult, and they will arguably be a bit behind if they're not being curious about it. But the safety there is: yes, go ahead. This is a walled garden. We've got the federated data considerations. Go for it, because it's not gonna be on your permanent record.
You do you. I'm picturing it like a mode, right? I like the gaming analogy you had, because as a game designer you do want to introduce friction. You want it to be challenging, but not so challenging that no one ever wants to play it. You have to find that balance, and it does make it something you have to apply effort to. And I like the idea that there's a practice mode.
Do you know what I mean? A lot of games have that, right? In the first level, you're learning all the things. It's okay, make mistakes, do stupid things, that's how you're gonna learn. And then we'll ramp you up into more serious levels or use cases.
I think that's really neat. I'm wondering, from the perspective of what you mentioned earlier: sometimes the goal of a product is not to create a safe experience for a vulnerable user or a child. How can a product team or a project team raise this as a requirement if it's not already built into the plan?
How can they advocate to their stakeholders to say, listen, we really need to apply some effort or some of our budget to build some of this in? How can they make that case? How have you made that case in the past, and how has it landed, I guess?
Aderonke Akinbola: Being honest, I haven't personally made a case for this, because I'm not currently in a space where I'm part of building AI systems.
I'm more on the network security side, protecting and safeguarding networks. But I think that as time goes on, everyone is going to see the importance of this. It may not be a conversation you're able to win right now in a product development team. But with time, and like I said earlier, with the research that's going into it, it's going to be so clear.
It's going to be so clear. The generation of children being raised now is so different, and we're going to see the effects. But I would say keep advocating. Keep speaking up. Even if you have to say it a hundred million times, keep saying it. Don't get tired of saying it. If your idea gets shut down now, it will not always be shut down.
Think about the time when there was no AI. If that had always been the case and everybody had kept quiet about it, there would not be AI as we are using it today. So keep speaking. I'm very sure that very soon every country will follow; look at what you mentioned about the laws, from the European Union to Australia banning social media for kids under 16.
Now, social media is very different from AI, but that is one step in trying to curb a big problem. So let's keep speaking about it, keep advocating about it. I will keep speaking about it until I see the powers that be starting to take this seriously and starting to pay attention to this very vulnerable population.
So that's what I think, at least for now.
Galen Low: I love that. And I think you're right. Social media and AI technologically are not the same, but you raise a good point: maybe it's not up to us individually to die on the hill of having that requirement added to our product or project, but we need to have the dialogue, we need to have the conversation.
And I think that is one of the things that we collectively did wrong with social media: we sort of turned a blind eye to some of the things that were happening. We only focused on a certain band of the population, of the user base. And we weren't having the dialogue enough that someone on the team would eventually say, oh yeah, but this has been in the news, people are talking about this,
this is a consideration that we should have. And everyone kind of nods, you know what I mean? Where it's not an extraordinary thing to propose, because it's already in the zeitgeist, it's in popular culture, it's part of the way we look at technology. Maybe we can dive in there.
If we swing back to social media for a moment, I think there are a lot of people who would argue that we took too long to put safeguards in place for young people using social media. Are we at risk of facing that same moment with AI today? What's at stake in the future if we get this wrong now?
Aderonke Akinbola: I think there's a lot at stake. However, I think right now we're a lot smarter than we were during the social media era, especially at identifying these things. In the social media era, it took us a while to identify these things, but right now we're having these conversations.
A lot of people are going into rooms and speaking up for kids. So somewhere, we're already having these conversations. However, we're also in a world where the current systems being built are more profit-driven than ethics-driven. So it may still take some time, but there's an awareness of this problem already, and any time there's awareness, the problem is half solved.
And just to piggyback on what we were talking about, how AI is different from social media: right now there's AI in social media. Instagram has AI. WhatsApp has AI. Everything has AI now. They already have agents in them that you can ask questions, get responses, refine this, refine that.
So what we're separating is really one and the same, right? I just think that we don't have much time to waste on this. We need to start getting things into policies and taking things all the way to the legislative level. I think we need to start looking at this.
We need to start being more involved in conversations around leadership, maybe at local levels, talking about these things: okay, we want to pass these things. We see petitions all the time; people try to sign petitions about this. We can start having conversations along those directions.
Also, if the developers who are supposed to be building these products are not looking at this, why don't we change our approach and go from the government angle? I feel like that's another way to bring this conversation to a different table, a table that also has an interest in this. The government funds a lot of public schools, and a lot of money is going into these kids.
I feel like they also have a part to play in making sure that these kids have access to the right technology that is going to help them.
Galen Low: Yeah, and it's easy to forget that part of the focus of any government is actually developing future generations, even if it's for the economy or the survival of that nation or region or state.
That is what they should be representing. And what's interesting, and this may be common sense to some of my listeners, is that it's dawning on me that advocacy is about creating enough tension, a counterweight. Even if everything in our current capitalist model is profit-driven, and children aren't the customers that make the money, that advocacy is what balances the scale so that it does become important when teams and organizations are building tools.
Again, whether they are full-blown B2C tools meant to be used by everybody, or just homebrew internal AI agents and vibe-coded apps, it still should be part of the consideration matrix, because by not considering it, you could damage how that product or tool is seen, or how it's adopted.
That's really interesting.
Aderonke Akinbola: I also want to add something about the digital guardian that I spoke about. In creating these models, one of the things we need to start looking at is a digital guardian that acts as a central master agent. It's like an ethical firewall; it's not just an app.
It's a mediation layer that does three things. First, it acts as an emotional thermostat. It's able to sense that maybe a child is getting too attached or too addicted or too isolated, like I mentioned earlier, and then it cools them down, nudging them: okay, go talk to an adult,
go talk to your friend. It shuts down to protect that child. Another thing this digital guardian can do is act as a digital lawyer, where it negotiates privacy settings with other apps. Is the child allowed to access this? Is the child allowed to access that?
If not, then it demands that data be stored temporarily and locally. It's able to communicate with other apps to check and ask permissions: okay, can the child go here? If not, cancel, shut this down, don't open that. And lastly, this digital guardian is also going to act as an air traffic controller.
It'll be able to break down complex AI into building blocks that children can actually process. And Galen, one of the most important parts of this is that protection should not be a luxury. It should not be a premium version, you know how we have premium versions of everything, pro versions, right?
We need to ensure that a child in Ghana can get the same digital lawyer that a child in San Francisco can have. That is the future we're building towards: something that is safe for everybody, not just geared towards a certain demographic or a certain population.
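As a thought experiment, here is a minimal sketch of what that mediation layer could look like; the three roles mirror the ones Ade names, and every class name, threshold, and policy below is an illustrative assumption rather than a description of any existing system:

```python
# A sketch of the "digital guardian" as an ethical firewall sitting between
# the child and other AI apps. Thresholds, app names, and policies are
# invented for illustration.

from dataclasses import dataclass, field

@dataclass
class GuardianAgent:
    max_session_minutes: int = 45    # emotional thermostat limit
    allowed_apps: set = field(default_factory=lambda: {"homework_helper"})
    session_minutes: int = 0

    def mediate(self, app: str, prompt: str) -> str:
        # Emotional thermostat: cool things down when use runs too long.
        if self.session_minutes >= self.max_session_minutes:
            return "You've been chatting a while. Go outside, or talk to an adult!"
        # Digital lawyer: negotiate access before any data reaches the app.
        if app not in self.allowed_apps:
            return f"Blocked: {app} hasn't agreed to store data locally."
        # Air traffic controller: forward the request, asking the app to
        # break its answer into steps a child can actually process.
        return f"[forwarded to {app}] {prompt} (please answer in simple steps)"

guardian = GuardianAgent()
print(guardian.mediate("homework_helper", "Why is the sky blue?"))
print(guardian.mediate("ad_driven_chat", "Tell me a secret"))
```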
Galen Low: I think it's a massively good point that this is technology we need to protect young people from, but it can also be the technology that protects them. I love that idea of a digital guardian; I had written it down as "guardian agent." And I like the equity angle, because, again, it shouldn't be a luxury.
You're absolutely right. I have a premium account for a whole bunch of different apps because of the child safety things or because of the family sharing features. And that's a privilege; it's not built in. I really like your idea that it's like a firewall, the go-between interacting with all of the other AI technologies to create an experience that is safe for vulnerable users, to negotiate on their behalf or advise them, and to make more transparent some of the decisions they're making.
I really like that idea of it being almost a moderator, a moderating force, to create balance in how other AI tools are being used and what questions are being asked. I think it's a really neat idea, and of course I'm a huge fan of the accessibility aspect of it: that it should be something for everybody, something that can be layered in, not just an app that you install
or decide not to install and have that as a bit of a liability. I think that's a really interesting view of the future. Let me frame it to you this way: how close or far away are we from actually having the conversation to develop something like that? I think it's a really neat vision for the future.
In my conversations with you, that's the first time it's been framed to me that way. By the way, after we hang up, you should probably patent it. But is this something that is decades away, or is it something that we will start focusing on in terms of providing a protection layer for more vulnerable users using AI?
Aderonke Akinbola: I think this is a great question. I don't think it should be decades away, but I know it's not a one- or two-year kind of thing. Maybe in the next five years. I think we should be having more focused conversations, and I think we need a bridge for this specific discourse.
We really need a bridge between the companies who are building these agents and the government. We need to have conversations together. We need to sit at the same table. And we need to make demands and say: okay, think about this. If the whole idea of technology is to make our lives easier, it should not also put us in a dangerous spot a few years from now.
So we need to start these critical conversations sooner rather than later. I don't think we should do this in ten years; ten years is such a long time. Right now we have a new robot from NVIDIA that is doing household tasks. Where are we going to be five years from now, if that robot has upgrades?
Galen Low: Absolutely. I think you framed that tension really well. The paradox is: yes, we've learned our lessons from things like social media, or even GDPR in terms of data privacy. That's a layer now, a legislative layer, and there are also tools there preventing your data from going everywhere.
But it took us a long time to have that conversation. We know to have it now, but AI is also progressing really fast, changing faster than some of the technologies we've had in the past. Therefore we need to start having the dialogues now, and have an open dialogue, because the pace is what makes this more than just learning lessons and keeping history from repeating itself.
It's that it's new, it's fresh, and we need to have an active conversation about it all the time.
Aderonke Akinbola: Also, to piggyback on something you said, which I thought was really nice, when we were talking about having modes: maybe just switching between an adult mode and a child mode that has all these features.
So sometimes we're not even necessarily looking at a whole new app. It may just be a layer on the same tools we're already using, and just being able to go from adult mode to child mode, or even from toddler mode to teenage mode. Honestly, I think that's not too much to ask for in this day and age.
We have billions of dollars being pumped into this industry, and I think it would be very unwise to keep pumping money into something that could potentially destroy another generation of kids rather than help them. So if we're pumping money, pumping a lot of resources, let us also build, and build correctly, so we don't have to tear down the same systems a few years from now to build something else.
Galen Low: I love that, engineer for the future.
Aderonke Akinbola: Yeah, I like that topic for this podcast.
Galen Low: Yeah. There you go. Maybe, yeah, maybe that's a title, Future Generations and How We Engineer for Them. I love that. This has been great. Ade, thank you so much for spending the time with me today. I've had so much fun. I've learned a lot.
I love your insights. I love the way you're thinking about this. I know that you are around, you know, we were talking earlier in the green room about, you know, not necessarily being on tour this year at all the conferences, but for folks who wanna learn more about you, where can they go?
Aderonke Akinbola: You can go to my LinkedIn. Just my name, my first name and my last name: Aderonke Akinbola. I'm in the Bay Area. I think there are a few people with my first name and my last name, but I believe I'm the only one in the Bay Area right now, at least until another one comes to join me. So for now, you can find me on LinkedIn.
I don't have a website or anything yet. And if you want to see the last talk I gave, I think it's also up on YouTube on the Zenity channel, so you can hear the preliminary conversation that led up to this. I'm also working on my website and all that stuff; it's not fully out right now.
I just have other personal projects taking my time currently. So by the summer I'll be more brand-focused, but for now it's just me and advocacy for kids.
Galen Low: I love that. I will include a link to your profile in the show notes for our listeners and also a link to your talk. I really enjoyed it and Ade, thank you again. This is great.
Aderonke Akinbola: Thank you so much, Galen, and it was great to be here. Thank you for the opportunity.
Galen Low: Alright folks, that's it for today's episode of The Digital Project Manager Podcast. If you enjoyed listening to this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies and playbooks, create a free account with us at thedigitalprojectmanager.com.
Until next time, thanks for listening.
