Insights from Women Shaping How We Lead in the Age of AI: A Keynote with Geraldine Clark

At Truing, we work with teams and leaders across the full spectrum of AI adoption—from those who started experimenting and integrating tools on day one, to those who are still asking a very real and reasonable question: Is this something we can use? And if so, how?

Last night I went to a panel on leadership in the age of AI. It's not a topic I engage with often, but I was excited because, while I started as an AI skeptic—and still hold deep ethical and theoretical questions about where this all leads—I also know this much: pretending AI isn't already here, reshaping how we work, decide, and lead, isn't the path forward.

At this point, I’d describe myself as an enthusiastic adopter of AI that helps me be better and do better. Not uncritical. Not starry-eyed. But relentlessly curious.

More and more, my work is about helping leaders understand where they are on the adoption spectrum—and how to move intentionally toward where they want to be. Especially for teams that are just beginning to explore what this moment asks of them.

That’s why the keynote from Geraldine Clark really stuck with me. Not because it was flashy or predictive, but because it was disciplined, grounded, and deeply human.

Here are a few things I’m still chewing on.

  1. There is no shortcut to mastery.

    Powerful tools don’t eliminate the need for learning—they raise the stakes of it. Mastery still requires time, practice, and repetition. You have to be willing to get things wrong—often—before you start getting them right. That’s not a failure of leadership; it is leadership.

  2. AI automates tasks. Humans own the consequences.

    This one landed hard for me. Speed doesn’t remove responsibility—it concentrates it. I’ve learned this firsthand through a few fumbles of my own. The output may be automated, but the judgment call never is.

  3. Small mistakes are part of responsible learning.

    If leaders and teams aren’t allowed to experiment, miss, and adjust in low-stakes ways, they’re far more likely to make costly mistakes when the stakes are high. Psychological safety isn’t a “nice to have” in AI adoption—it’s infrastructure.

  4. When problems feel overwhelming, leadership is about decomposition.

    Break the problem down. Understand the parts. Then reassemble it with intention. This is systems thinking, not tool worship—and it’s where clarity starts to emerge.

  5. The machine has the power. Leaders bring the purpose.

    Tools don’t decide what matters. They don’t set direction. They don’t name tradeoffs. That work still belongs to people.

And this may be the through line that ties it all together:

AI doesn’t replace judgment. It raises the bar for it.

As tools keep changing, it’s up to leaders to model how we set purpose, exercise judgment, and cultivate strong cultures—skills that have to be practiced and refined over time.

This moment isn’t just about adoption. It’s about alignment. About whether our systems, norms, and leadership practices are strong enough to hold the tools we’re bringing in.

Let me know what’s resonating—and where your team is being intentional about how it meets the moment.
