Blog

All Things AI.

America’s AI Action Plan: What’s in It, Why It Matters, and Where the Risks Are

This article sets out to inform the reader about the AI Action Plan without opinion or hype. Let’s dig in: On 23 July 2025, the White House released Winning the Race: America’s AI Action Plan, a 28-page roadmap with more than 90 federal actions grouped under three pillars: Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy & Security. The plan rescinds the 2022 Blueprint for an AI Bill of Rights, rewrites pieces of the NIST AI Risk Management Framework, and leans on January’s Executive Order 14179 (“Removing Barriers to American Leadership in AI”). If fully funded and executed, the plan would reshape everything from K–12 procurement rules to the way cities permit data-center construction.

Read More
AI Literacy and AI Readiness - the intersection matters most

As leaders consider how to adopt and scale new AI-enabled solutions, we’re hearing two phrases increasingly surface in strategic conversations: AI literacy and AI readiness. They are related, but not interchangeable, and understanding the distinction (and the magic of the intersection of the two) could determine whether your organization is poised to thrive in an AI-driven landscape.

Read More
July 2025 Brand Brief – AI in EdTech: “When Governance Gaps Become Brand Risks”

Each month, we spotlight real-world challenges that software creators face when trying to align with school district expectations, data privacy laws, and public trust—because in today’s market, your AI doesn’t just need to work, it needs to pass inspection.

Read More
The First AI Incident in Your Organization Won’t Be a Big One. That’s the Problem.

Your first AI incident won’t be big. But it will be revealing. It will expose the cracks in your processes, the ambiguity in your policies, and the reality of how your team uses AI. If you wait for a significant event before acting, you’ll already be behind. Building responsible AI systems doesn’t start with compliance. It begins with clarity and a willingness to take the first step before the incident occurs.

Read More
FAIR Principles: a data governance foundation

Unlock the power of robust data governance with the FAIR principles—Findable, Accessible, Interoperable, and Reusable.
Embedding the FAIR principles into your data governance framework doesn’t just keep you compliant; it unlocks agility, trust, and measurable business value. As scrutiny of data practices intensifies, make FAIR your organization’s data governance north star.

Read More
EU AI Act: Yes, it’s Critical in the US

The EU’s GPAI Code of Practice provides both a roadmap and an incentive structure for responsible development and integration of advanced AI systems. If you’re deploying AI in specialized sectors like edtech, health tech, or fintech, aligning with these standards is quickly becoming not just a matter of legal compliance, but one of building trust and readiness for the global future of AI regulation.

Read More
“HIPAA doesn’t apply to public schools.” That statement is technically correct, and dangerously misleading.

For years, the education sector has operated on the belief that FERPA (Family Educational Rights and Privacy Act) is the only law that matters when it comes to student data. And for much of the traditional classroom environment, that’s true. But the moment health-related services intersect with educational technology, whether through telehealth platforms, mental health apps, or digital IEP tools, the ground shifts. Suddenly, the boundary between FERPA and HIPAA isn’t just academic. It’s operational, legal, and reputational.

Read More
How to use AI as a tutor, boosting the brain

I’ve recently been connecting with some wonderful people in the education and ed-tech community online.

In one conversation, my new connection Eric Hoffman expressed concern about the lack of playbooks for students using AI. Neither of us had seen any, but I promised to look for some and see just how educational they might be.

Rather than a playbook, I found an amazing prompt - turning AI into a tutor. Wow.

Read More
Schools Don’t Just Buy Software. They Buy Trust.

The best product doesn’t always win. In fact, in K–12, it often doesn’t. You can have the cleanest UI, the sharpest onboarding flow, and the most impressive AI feature set in your category AND still get dropped in procurement. Not because of price. Not because of a competitor’s edge. But because the district couldn’t say yes with confidence. They couldn’t explain your AI use to their superintendent. They couldn’t get your DPA past legal in under six weeks. They couldn’t bet their district’s reputation on a product that might be compliant. And so, they passed. Not because they didn’t like you, but because you didn’t feel safe enough to approve. In K–12, Trust Isn’t the Last Thing. It’s the First.

Read More
Caught in the Middle: How AI Is Forcing K–12 EdTech Vendors to Scramble

AI is no longer optional for K–12 EdTech vendors. Whether it’s to meet investor expectations, compete in crowded product categories, or add meaningful personalization, AI features are fast becoming table stakes.

Read More
June 2025 Brand Brief – AI in EdTech: Building Trust, Not Just Tools

This month, we look at how EdTech brands are navigating the complex trust landscape of AI in K–12. From new Department of Education guidance on shared accountability, to school district concerns about equity, reliability, and privacy, the stakes for brand clarity have never been higher. As AI tools become more common in the classroom, branding has a critical role in turning innovation into confidence—for parents, superintendents, and school boards alike.

Read More
Your AI Feature Isn’t the Problem. The Trust Gap Is.

AI is everywhere in EdTech—automated feedback, adaptive learning paths, grading support, and content generation. If you’re building smart, AI-powered tools for K–12, you’re in the right race. But many vendors hit the same wall: enthusiastic interest from district leaders, then a long stall… or silence. The reason? Your product is technically impressive, but governance blind.

Read More
AI use in US workplaces has doubled in two years (so has trouble)

We talk a lot about using AI as an organization and as consultants to businesses, education, and non-profits. And we always do so from a ‘responsible’ use perspective. Just this week, in reading and exploring AI use (and abuse), I’ve come across some incredibly important reasons why policies and playbooks (guidelines and guardrails), a.k.a. governance, matter more than ever.

Because trouble is out there - and it has nothing to do with AI and everything to do with human behavior.

Read More
FERPA, COPPA, and Beyond… Bridging the EdTech-Education Compliance Gap

I’ve been speaking a lot over the last month, with a focus on governance and helping organizations prepare for a future with AI.

One recent event - the AI for Good conference - underscored the significant expenses that software and security measures have added to the budgets of schools, districts, colleges, and universities alike.

But are they aligned properly for maximum effect?

Read More
Hidden Dangers in Terms of Service: Why One-and-Done Reviews Are Dangerous

At a recent event - the AI Empowered EDU conference - I shared some points that underscored the need for educators and software providers to better understand and uphold governance.

Teachers in the audience gasped out loud when I told them some of the stats in this article. They work so hard to protect student privacy, and hearing that some vendors shared student PII liberally blew them away.

Read More
Master Your Medical Appointments with AI: Ask Smarter, Remember More, Decide Confidently

Many people walk into a medical appointment feeling uncertain and walk out even more confused. You might forget to ask a key question, struggle to remember what was said, or leave unsure about your next steps. It’s a common experience—and one that can affect your outcomes.

Read More
Shadow AI Is Already Happening And It’s a Governance Problem, Not a People Problem

If you think your workforce is calmly waiting for an “official AI rollout,” think again. From sales decks to code snippets, generative tools are already woven into daily workflows—only most of that activity is invisible to leadership.

Read More