From clueless to clued-in: 38 days immersed in AI governance
It’s been 38 days since I joined AI Governance Group (AIGG).
In that short span I’ve been on listening calls with EdTech founders, college and university leaders, K-12 administrators, executive coaches, and more.
In those conversations I’ve felt the pulse of real-world challenges, and begun to shift how I see AI from a shiny (and sometimes scary) efficiency tool to a governance, trust, and human-impact undertaking.
Here’s what’s unfolded, what I’m learning, and how I’m intentionally moving from clueless to clued-in.
The back-story: from “Write this email like a pirate” to governance
About a year ago, while working at a travel-tech start-up, I deliberately engaged with ChatGPT for the first time when my employer asked me to experiment with AI to train my team. My early exposure looked like logging into a free account and prompting: “Write this email like I am a pirate. Okay, now do a bear. He He He...”
Like many people, I started by playing around but didn’t really know where to take my exploration or how to actually drive productive outcomes.
A lot has changed in a year, and even more in the last 38 days immersed in the world of AI governance. What quickly became clear in my listening calls is that my experience is very common: we’re told to use this technology, but not how, and not how to use it safely with our personal or company IP.
I realized there’s a gap between “Hey, try AI” and “Here’s how you integrate it safely, ethically, with your assets, your brand, your people.” That gap became the doorway into me wanting to lean into AI governance.
How education prompted me
Last summer I attended the Coalition of Oregon School Administrators (COSA) AI-themed summer conference. I watched education leaders wrestle with excitement and stress. They were exploring generative tools, but they were also asking: “What happens to student data? What about staff time? What about trust if a tool gets it wrong?”
Seeing their energy, I knew I wanted to be part of the solution, to help bridge the tool side and the governance side. I’ve always fancied that I carry the spirit of a social worker in my work, striving to bring actionable solutions to client pain points.
And so when the opportunity at AI Governance Group arrived, I embraced it.
The first 38 days: Listening calls, patterns, questions
In these first weeks since coming aboard to support the AI Governance Group, I’ve been deep in listening calls with corporate prospects, education leaders, and EdTech vendors, and clear patterns are emerging. Again and again, people tell me they’re being pushed to “go adopt AI” but have no idea how to do it wisely.
I’ve watched organizations treat AI like any other software: plug it in, roll it out, hope for the best, while governance, IP protection, bias mitigation, and shadow-AI risks sit untouched in the background.
Underneath the operational questions is an emotional layer I didn’t expect: a mix of excitement, anxiety, and caution. People want guardrails. They want clarity. They want confidence.
Across these calls, leaders keep naming the same tension: speed, pressure, and uncertainty colliding with real governance gaps.
A charter-school CIO told me, “we’re trying to go slow to go fast, because if you’re going to use these tools, you have to take responsibility for A to Z, not just A to B.”
A university dean raised the organizational blind spot, saying, “you can craft documents all day long, but if it never gets presidential blessing, nothing actually gets implemented.” And a higher-ed advisor captured the everyday risk on the ground: “we’ve been told not to use ChatGPT but to only use Copilot, yet there are no explicit policies…people are pasting employer documents into it without knowing if that’s a data breach.”
Together, their comments point to the same issue: institutions are adopting faster than they’re governing, and everyone feels the gap. These voices reinforce the reason I signed on: governance matters. Not just because it’s “nice to have,” but because adoption without governance is risky, unpredictable, and unsustainable.
Women In Big Data (WiBD)
Attending the Women in Big Data event in Portland early in my AIGG journey helped set the tone for everything that followed. The room carried a charge that was equal parts energy, inspiration, and solidarity. It wasn’t just data scientists talking shop. It was educators, engineers, analysts, product leaders, founders, and public-sector voices all naming the same thing in different ways: AI is here, it is powerful, and it requires responsibility.
Conversations kept circling back to governance, ethics, inclusion, bias, trust, and the real human stakes behind every system we build. Listening to women describe their career journeys, their fears, their experiments, and their insistence on equity made the work feel less abstract. It reinforced that AI governance isn’t a technical side project. It is people, policy, culture, and consent. It is organizations trying to build guardrails that respect human dignity. It is community.
And it reminded me that even as I spend my days buried in frameworks, rubrics, and interviews, the heart of the work is simple: protect people, empower them, and make sure the systems we build don’t leave anyone behind.
What I’m learning: framework and mindset shifts
Here are a few key lessons emerging in these 38 days:
Governance is not a tick-box
It’s easy to think of “AI governance” as the compliance layer bolted on after deployment. But my listening calls show that governance must be integrated upfront: in data collection, model selection, vendor contracts, rollout plans, and culture.
The human dimension matters more than the algorithm
Tools will evolve. But if people don’t trust a tool, don’t know how to use it, or are uncertain about who owns its outputs, the ROI will suffer.
Shadow AI is real
Many organizations already have “rogue” AI use, with teams using generative tools without governance oversight. That’s less a people problem than a governance gap.
Listening is a strategic tool
My listening calls are not just for relationship-building; they’re building data. Patterns are emerging: what people ask for, what they fear, what they don’t know to ask. These become the basis for meaningful governance frameworks.
Adoption = culture + process + technology
If you focus only on the technology, you’ll miss culture and process risks. If you focus only on process, you may get bogged down. The sweet spot is where technology meets culture meets governance.
My personal workflow: going from clueless to clued-in to the AI governance landscape
I’m building a working doc of call-themes: from IP concerns to “what-if” questions about data privacy, from model drift to vendor lock-in.
I carve out an hour each day to read articles and blog posts on AI governance and the global AI landscape.
I engage in communities (like WiBD) so I don’t stay siloed in “governance for X” but see cross-industry learnings.
I ask naïve questions intentionally (even if I do know some answers) because they help uncover gap areas in my listening calls.
I challenge myself: if someone says “we’ll just adopt and scale,” I ask “okay — how will you govern at scale?”
I reflect weekly: what surprised me this week? What question didn’t I anticipate? What push-back did I hear?
I’ll talk to ANYONE interested in chatting AI with me and I soak up their fears, curiosities, and excitement like a sponge (even when their particular theories feel totally bonkers!)
What’s next (and how I invite you to join me)
In the next phase I’m looking ahead to:
Converting call-themes into tools/templates (for example: a vendor-governance checklist, or a staff-AI-uptake readiness guide)
Engaging deeper in the education-sector side (where I first felt called) and connecting with K-12/ed-tech use cases
Sharing more about how organizations are translating from “play” (fun prompts) to “purposeful use + safe use”
Building peer-learning groups around governance for non-tech leaders (because many of the people I talk with are not data scientists but operations/HR/education leaders)
Elevating real stories of those who felt successful and those who stumbled, because governance isn’t about perfection, it’s about progress.
If you’re reading this and you’re navigating AI adoption in your organization, whether you’re a leader, practitioner, or educator, I invite you to reach out. Let’s have a listening call. Let’s swap stories. Let’s build trust together.
Final thoughts (for now…)
After 38 days I’m only at the beginning, but the immersion is real. I’m no longer seeing AI as a novelty or efficiency tool; I’m seeing it as a governance challenge and an opportunity for trust-building, for scaling responsibly, for future-proofing.
My early “pirate email / bear joke” phase of AI feels far behind me. I’m into the meat now: ownership, oversight, impact, people.
If you’re moving into this space too, know this: you’ll need curiosity, courage, humility, and yes, you’ll need to listen. Stay curious. And stay agile. Because the most valuable insights come not from the tool, but from the people using it, the context around it, the unknowns we haven’t asked about yet.
Thanks for letting me introduce myself here. If you’d like to hear more from my 90-day reflection or get a snippet of one of my call-themes lists, just say the word. And let me know if you’d like me to write my emails to you like a bear. Or a pirate.
Yarr Mateys!
Resources from AIGG on your AI Journey
At AIGG, we understand that adopting AI isn’t just about the technology; it’s about people. People using technology responsibly, ethically, and with a focus on protecting privacy while building trust. We’ve guided businesses through digital transformations before, and we’re here to guide you every step of the way.
No matter your type of organization, whether a school district, government agency, nonprofit, or business, our team of C-level expert guides, including attorneys, anthropologists, data scientists, and business leaders, can help you craft bespoke programs and practices that align with your goals and values. We’ll also equip you with the knowledge and tools to build your team’s literacy, responsible practices, TOS review playbooks, guidelines, and guardrails as you leverage AI in your products and services.
Don’t leave your AI journey to chance.
Connect with us today for your AI adoption support, including AI Literacy training, AI pilot support, AI policy protection, risk mitigation strategies, and developing your O’Mind for scaling value. Schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.
Your next step is simple. Let’s talk together and start your journey towards safe, strategic AI adoption and deployment with AIGG.

