Blog
All Things AI.
Shadow AI is Already Here. Why Your Insurance Likely Won’t Cover What’s Next
Your vendors are adding AI capabilities to software you already own, often without meaningful notification or your consent. Two-thirds of SaaS applications now have AI features. Your legal team reviewed the original contract three years ago. But did anyone review last quarter's terms of service update? Probably not.
Human Dimensions of AI Literacy: Why Teaching Tech Misses the Point
The job market has fundamentally transformed. We’re preparing a workforce for jobs that don’t yet exist. Skills that took years to master are being automated, while entirely new capabilities - from prompt engineering to AI ethics - are suddenly the latest discussion points. This isn't about robots taking jobs; it's about understanding which human skills matter more than ever, and which ones won't save you.
Building the Future of Education: My Journey into Durable Skills
Attended a Lightcast workshop on durable skills for the AI age. Then I actually tried to GET the data. What followed was a perfect lesson in the very skills we should be teaching: problem-solving through failure, asking better questions, and collaborating with AI as a learning partner.
The irony? Learning about AI-age skills BY using AI to learn. Sometimes the process teaches more than the product.
Escaping Local Maxima: What Grief, High Jumping, and AI Taught Me About Transformation
A year after loss, I'm still finding my way forward. Grief taught me what AI is now teaching workplaces: sometimes progress means going backward first. Like Dick Fosbury revolutionizing high jumping by looking foolish, we must escape our comfortable valleys to discover what's possible on higher ground.
3 Ways AI Governance Actually Speeds You Up (Not Slows You Down)
Budget Season is upon us. If you’re sitting down with 2026 numbers, you already know the pressure:
Cut costs where you can.
Find growth where you must.
Show the board a clear return on every line item.
Here’s the mistake too many teams will make in next year’s budgets: they’ll throw money at AI pilots or vendor contracts without investing in governance. It feels faster in the short term. But it costs more in the long run. Here’s why governance is not just risk management; it’s the thing that actually makes AI adoption faster, safer, and budget-friendly.
Patterns in time, minds in motion
The O’Mind helps forward‑thinking leaders systematize knowledge, capture IP, and integrate human and AI‑powered processes, creating a living organizational intelligence that can be used to boost productivity, accelerate onboarding, or prepare for successful M&A.
The countdown has begun
On August 2, 2026, organizations that sell into the EU, or whose AI-enabled products or services are used in the EU, must prove they provide AI Literacy education and training to their employees. Fines for those who don’t will follow.
America’s AI Action Plan: What’s in It, Why It Matters, and Where the Risks Are
This article sets out to inform the reader about the AI Action Plan without opinion or hype. Let’s dig in: On 23 July 2025, the White House released Winning the Race: America’s AI Action Plan, a 28-page roadmap with more than 90 federal actions grouped under three pillars: Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy & Security. The plan rescinds the 2023 AI Bill of Rights, rewrites pieces of the NIST AI Risk-Management Framework, and leans on January’s Executive Order 14179 (“Removing Barriers to American Leadership in AI”). If fully funded and executed, the plan would reshape everything from K-12 procurement rules to the way cities permit data-center construction.
AI Literacy and AI Readiness - the intersection matters most
As leaders consider how to adopt and scale new AI-enabled solutions, we’re hearing two phrases increasingly surface in strategic conversations: AI literacy and AI readiness. They are related, but not interchangeable, and understanding the distinction (and the magic of the intersection of the two) could determine whether your organization is poised to thrive in an AI-driven landscape.
The First AI Incident in Your Organization Won’t Be a Big One. That’s the Problem.
Your first AI incident won’t be big. But it will be revealing. It will expose the cracks in your processes, the ambiguity in your policies, and the reality of how your team uses AI. If you wait for a significant event before acting, you’ll already be behind. Building responsible AI systems doesn’t start with compliance. It begins with clarity and a willingness to take the first step before the incident occurs.
FAIR Principles: a data governance foundation
Unlock the power of robust data governance with the FAIR principles—Findable, Accessible, Interoperable, and Reusable.
Embedding the FAIR principles into your data governance framework doesn’t just keep you compliant; it unlocks agility, trust, and measurable business value. As scrutiny of data practices intensifies, make FAIR your organization’s data governance north star.
Building trust and resilience on a GPAI Code foundation
The EU’s GPAI Code of Practice takes effect in a couple of weeks, and anyone building AI into their products would benefit from aligning with the Code, especially those who want to build trust early in the AI-enabled world.
EU AI Act: Yes, it’s Critical in the US
The EU’s GPAI Code of Practice provides both a roadmap and an incentive structure for responsible development and integration of advanced AI systems. If you’re deploying AI in specialized sectors like edtech, health tech or fintech, building a foundation with these standards is quickly becoming not just a matter of legal compliance, but one of building trust and readiness for the global future of AI regulation.
How to use AI as a tutor, boosting the brain
I’ve recently been connecting with some wonderful people in the education and ed-tech community online.
In one conversation, my new connection Eric Hoffman expressed concern about the lack of playbooks for students using AI. Neither of us had seen any, but I promised to look for them to see just how educational they might be.
Rather than a playbook, I found an amazing prompt - turning AI into a tutor. Wow.
Your AI Feature Isn’t the Problem. The Trust Gap Is.
AI is everywhere in EdTech—automated feedback, adaptive learning paths, grading support, and content generation. If you’re building smart, AI-powered tools for K–12, you’re in the right race. But many vendors hit the same wall: enthusiastic interest from district leaders, then a long stall… or silence. The reason? Your product is technically impressive, but governance blind.
AI use in US workplaces has doubled in two years (so has trouble)
We talk a lot about using AI, both as an organization and as consultants to businesses, education, and non-profits. And we always do so from a responsible-use perspective. In reading and exploring AI use (and abuse) just this week, I’ve found some incredibly important reasons why policies and playbooks (guidelines and guardrails, AKA governance) are more important than ever.
Because trouble is out there, and it has nothing to do with AI and everything to do with human behavior.
FERPA, COPPA, and Beyond… Bridging the EdTech-Education Compliance Gap
I’ve been speaking a lot over the last month, with a focus on governance and helping organizations prepare for a future with AI.
One recent event - the AI for Good conference - underscored how software and security measures have added significant expenses to the budgets of schools, districts, colleges, and universities alike.
But are they aligned properly for maximum effect?
Hidden Dangers in Terms of Service: Why One-and-Done Reviews Are Dangerous
At a recent event - the AI Empowered EDU conference - I shared findings that underscored the need for educators and software providers to better understand and uphold governance.
Teachers in the audience gasped out loud when I told them some of the stats in this article. They work so hard to protect student privacy, and hearing that some vendors shared student PII liberally blew them away.
Shadow AI Is Already Happening, and It’s a Governance Problem, Not a People Problem
If you think your workforce is calmly waiting for an “official AI rollout,” think again. From sales decks to code snippets, generative tools are already woven into daily workflows—only most of that activity is invisible to leadership.
From Pilot to Performance: Turning AI Pilot Programs into Scalable Strategy
Discover why most AI pilot programs stall and how governance turns early wins into enterprise value: practical steps, stats, and next actions.

