Shadow AI Is Already Here. Why Your Insurance Likely Won’t Cover What’s Next

A few weeks ago, I stood in front of a room full of Oregon civic leaders at the League of Oregon Cities annual conference. They were there for a full-day introduction to AI - and they told me something important right away.

The attendees were curious and eager to learn, even while running cities, making budget decisions, and trying to serve their communities responsibly - most without “big city” resources.

Just before my presentation on "the magic in the middle" of AI governance, Sean McSpaden spoke. Sean is the Principal Legislative IT Analyst and Administrator for Oregon's Joint Legislative Committee on Information Management and Technology. Which means when he talks about what's actually happening in government IT environments, he's not speculating. He's reporting.

And what he reported made the room go quiet.

The Oregon AI/GenAI Use Case Inventory

The state had recently conducted an AI/GenAI Use Case Inventory, requesting data from the 19 agencies with the largest IT organizations. The assumption was straightforward: this inventory would capture the majority of AI-enabled software and systems deployed within Oregon's IT operating environment.

The preliminary inventory identified 208 unique products from 133 unique software vendors.

"Notably, many of the systems within the inventory were not originally procured with AI or Generative AI in mind," he explained. "That means these capabilities are now embedded within the products and available for use by agency employees with or without the customer agency's knowledge or consent."

Let that sink in.

State agencies didn't buy 208 AI tools. They bought software. Then their vendors added AI capabilities. And now state employees have access to AI features that process government data through systems that were never reviewed, never approved, and in many cases, never even disclosed to the people responsible for data governance and security.

I watched people shift in their seats. You could see the mental math happening:

If Oregon's state government - with formal IT controls, legislative oversight, and procurement requirements - has 208 AI-enabled products they didn't knowingly procure... what about my city?

In my part of the presentation, I showed them the most recent statistics from Zylo's SaaS Management Index. The average organization is using 275 applications. IT owns just 26.1% of spending on those apps - but still holds 100% of the accountability when something goes wrong.

That's when the "oh shit" moment hit.

Because here's what I told them next, and what I'm telling you now: Panintelligence data showed that more than 67% of SaaS software and applications have added AI features. If roughly 74% of your application spend sits outside IT's view, and two-thirds of your applications now have AI capabilities... do the math on your actual exposure.
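To make that math concrete, here's a back-of-envelope sketch using the figures above. It's an illustration only - the 74% is a share of spend, not a count of apps, and Zylo's separate finding that IT owns only 15.9% of apps suggests the real picture is worse, not better.

```python
# Back-of-envelope exposure estimate using the figures cited above.
# Illustrative assumption: the share of apps outside IT's purview roughly
# tracks the share of spend outside IT (~74%).

total_apps = 275          # average SaaS portfolio size (Zylo)
outside_it_share = 0.74   # ~1 minus the 26.1% of spend IT owns
ai_feature_share = 0.67   # share of SaaS apps that have added AI (Panintelligence)

outside_it = total_apps * outside_it_share
outside_it_with_ai = outside_it * ai_feature_share

print(f"Apps likely outside IT's purview: ~{outside_it:.0f}")         # ~204
print(f"Of those, likely AI-enabled:      ~{outside_it_with_ai:.0f}")  # ~136
```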

We're not talking about people sneaking in one AI tool. We're talking about AI being peanut-buttered into tools they're already using. Tools that were approved last year, before they had AI. Tools that auto-update their terms of service. Tools that now process your data through large language models you never agreed to.

Tools that, as Sean put it, now have capabilities "without the customer agency's knowledge or consent."

And if you think your cyber insurance will cover the fallout when this goes wrong, I have a story from Hamilton, Ontario that should terrify you.

An $18.3 Million Wake-Up Call

In February 2024, ransomware attackers crippled roughly 80% of the City of Hamilton's network. Business licensing, property-tax processing, transit-planning systems - all encrypted. The attackers demanded $18.5 million.

Hamilton refused to pay the ransom. They did the right thing.

Then they filed their cyber insurance claim to cover the cleanup and recovery costs.

The insurer denied it.

Why? Because Hamilton hadn't fully implemented multi-factor authentication at the time of the breach. That gap in their security controls - incomplete MFA coverage - gave the insurer grounds to deny the claim. Taxpayers are now on the hook for the entire $18.3 million recovery bill.

Let me be crystal clear about what this means:

Your cyber insurance has exclusions for violations of security policies. Shadow IT - applications your employees are using without IT's or Legal's knowledge or approval - is a violation of security policies. One employee, one unapproved tool, one data exposure, and your coverage could be void.

But here's what really hurts: Hamilton's case was about their own security controls. Now imagine your insurance claim gets denied not because of something you failed to do, but because one of your software vendors had inadequate security. Because a vendor you've never heard of - a subprocessor buried three layers deep in your supply chain - got breached and exposed your data.

You inherit your vendors' risk whether you know it or not. And right now, most organizations have no idea what their vendor ecosystem actually looks like.

Your Supply Chain Is Your Attack Surface

The latest Zylo data tells a stark story: In 2024, the average company's SaaS portfolio grew to 275 applications, with annual spending hitting $49 million. For the first time since 2021, both portfolio size and spending increased year-over-year - 2.2% growth in apps, 9.3% growth in costs.

But here's what those numbers don't show:

Every one of those applications is a potential entry point. Every vendor is a link in your security chain. And every time a vendor adds AI capabilities - which most of them are doing - they're changing what they do with your data, often without meaningful notification or your explicit consent.

Let’s think about what's actually happening when software vendors add AI features.

They're sending your data to third-party AI providers (OpenAI, Anthropic, Google) that you never contracted with directly. They're using your inputs to train or improve their models unless you know to opt out - and sometimes even if you do. They're storing your data in new jurisdictions to leverage AI infrastructure. They're creating new subprocessor relationships buried in updated terms of service that no one reads.

And they're doing all of this while shifting liability back to you.

I've read scores of AI-augmented software terms of service in the past year. The pattern is consistent and alarming: Vendors reserve the right to use AI "to improve the service." They disclaim responsibility for AI-generated outputs. They require you to ensure compliance with applicable laws when using AI features. They update these terms with minimal notice, often with changes buried in lengthy legal documents that assume you'll just click "I agree."

Your legal team probably reviewed the original contract when you signed with that vendor three years ago. But did they review the terms of service update that arrived last quarter, adding AI capabilities and fundamentally changing the data processing agreement? Did anyone even notice that update happened?

According to Zylo's research, lines of business now control 70% of software spending and own 50.5% of applications. These organizational leaders are smart, capable people making reasonable decisions about tools their teams need. But they're not equipped to evaluate data processing agreements, assess subprocessor risk, or understand the implications when a vendor's AI features route data through infrastructure in countries with different privacy laws.

That's not a criticism. It's a structural problem. We've distributed purchasing authority without distributing the expertise required to manage the risks that come with it.

Why AI Changes Everything About Vendor Risk

Traditional vendor risk management looked like this: You sign a contract. Legal reviews it. You do a security questionnaire (maybe). You check in annually. The vendor's capabilities and risk profile stay relatively stable.

AI-augmented vendor risk looks like this: You sign a contract for a CRM tool. Six months later, the vendor adds AI-powered lead scoring. Now your customer data is being processed through their AI models. They've added OpenAI as a subprocessor. They've updated their terms to allow using your data for "model improvement" unless you opt out through a setting buried in admin preferences. Your data residency requirements just got complicated because the AI processing happens in different data centers than the core application.

None of this required a new contract. None of this triggered a legal review. IT might not even know the AI features were added if they're not actively using this particular tool. And your security questionnaire from last year is now dangerously outdated.

This is exactly what Sean found in Oregon's inventory. Many systems "were not originally procured with AI or Generative AI in mind" - meaning agencies did their due diligence when they bought the software, but then vendors fundamentally changed what those systems do. The AI capabilities are now there, often without the agencies even knowing.

Think about the governance implications: Oregon did everything right according to traditional procurement processes. They had formal IT governance. Legislative oversight. Procurement requirements. And they still ended up with 208 AI-enabled products processing government data through systems that were never reviewed for AI-specific risks.

If a state government with those controls can end up in this situation, what does your vendor ecosystem look like?

This is happening across your entire software portfolio right now.

Spending on AI-native tools jumped 75.2% year-over-year. ChatGPT went from the 14th most-expensed application to the 2nd.

But the bigger risk isn't the AI tools people are consciously adopting - it's the AI capabilities being quietly added to the software applications you already have, each one potentially changing their data processing practices, their subprocessor relationships, and their risk profiles.

And here's the brutal economics: Zylo reported organizations were spending 9.3% more on software, while also wasting 52.7% of their licenses - which translates to $21 million in unused software annually for the average organization. We're paying more, using less, and inheriting exponentially more risk with every vendor relationship.

The cost increase isn't the real problem.

The real problem is what you're buying: exposure you don't understand, through vendors you don't control, with terms of service that favor them and liability that falls on you.

The Governance Gap We Should Be Screaming About

I've written recently about durable skills - the human capabilities that remain valuable regardless of technological disruption. Skills like critical thinking, ethical reasoning, judgment in ambiguous situations, and the ability to assess risk when the rules haven't been written yet.

We're seeing exactly why these skills matter now. But we're also seeing why they need to be applied collaboratively across functions that traditionally operated in silos.

This is what I call the magic in the middle - the intersection where data governance meets people governance to create actual AI governance. But a third strand has become critical: legal governance of your vendor ecosystem.

Right now, in most organizations, software acquisition looks like this:

  1. Business unit identifies a need

  2. Someone finds a tool that solves it

  3. Maybe IT gets a courtesy heads-up (maybe not)

  4. Purchase happens through expensing or a simple contract

  5. Legal reviews it if it's over a certain dollar threshold

  6. Tool gets deployed and forgotten until renewal

This workflow made sense when software was relatively static and risks were primarily about cost and basic security. It's dangerously inadequate now.

Here's what software acquisition should look like in an AI-augmented world (a minimal intake-record sketch follows this list):

  1. Business unit identifies a need

  2. IT evaluates: What data will this tool access? What other tools does this duplicate or integrate with? What's our visibility into usage and risk?

  3. Legal evaluates: What are the data processing terms? Who are the subprocessors? What AI capabilities exist or might be added? What's our liability if AI features generate problematic outputs? What happens to our data if we terminate?

  4. Security evaluates: What's the vendor's security posture? How do they handle AI model security? What's their incident response history? What certifications do they maintain?

  5. Privacy evaluates: Does this comply with our data handling requirements? What consent do we need? What rights do data subjects have? How do AI features impact our privacy obligations?

  6. Procurement negotiates: Based on all of the above, what contractual protections do we need?
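One way to make that checklist operational is to treat each acquisition as a structured intake record that every function signs off on before purchase. The sketch below is a minimal illustration in Python - the field names and escalation rule are assumptions, not a prescribed schema; adapt it to whatever GRC or procurement tooling you already run.

```python
from dataclasses import dataclass, field

@dataclass
class VendorIntake:
    vendor: str
    tool: str
    business_need: str                                                 # 1. Business unit
    data_accessed: list[str] = field(default_factory=list)            # 2. IT
    subprocessors: list[str] = field(default_factory=list)            # 3. Legal
    ai_capabilities: list[str] = field(default_factory=list)          # 3. Legal / 4. Security
    security_certifications: list[str] = field(default_factory=list)  # 4. Security
    privacy_review_done: bool = False                                 # 5. Privacy
    contractual_protections: list[str] = field(default_factory=list)  # 6. Procurement
    approved: bool = False

def needs_escalation(record: VendorIntake) -> bool:
    """Flag tools that touch customer data before subprocessors are documented."""
    touches_customer_data = any("customer" in d.lower() for d in record.data_accessed)
    return touches_customer_data and not record.subprocessors

# Hypothetical example: an AI-augmented CRM purchase request
intake = VendorIntake(
    vendor="ExampleCRM Inc.",
    tool="CRM with AI lead scoring",
    business_need="Sales pipeline visibility",
    data_accessed=["customer contact data"],
)
print(needs_escalation(intake))  # True -> route to Legal/Security before purchase
```

Even a lightweight record like this forces the cross-functional questions to be asked once, in one place, before the tool is live.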

This isn't bureaucracy. This is basic risk management when your vendors are your attack surface.

And yet, according to Netskope's Cloud Confidence Index (CCI) assessment of the applications in Zylo's dataset, 60.8% of expensed software had a "Poor" or "Low" CCI score. These aren't obscure tools buried in forgotten departments. These are applications employees are actively expensing, using, depending on - with security postures that should alarm anyone responsible for organizational risk.

Your legal team should be screaming about this. Because when the breach happens, when the insurance claim gets denied, when the regulatory investigation starts, the question won't be "did IT have good security controls?" It will be "did you have reasonable processes to evaluate and manage vendor risk?"

And if your answer is "we trusted employees to choose good tools" or "we relied on IT to discover what was being used after the fact," you're going to have a very expensive problem.

We Need to Climb Out of the Comfortable Valley

In my recent writing, I've explored the concept of local maxima - those comfortable valleys where every small step in any direction seems to lead uphill, toward more pain, more change, more uncertainty. So we stay put, optimizing what we know instead of seeking higher ground.

Organizations have been optimizing for speed and autonomy for years. Decentralized purchasing. Empowering lines of business to choose their own tools. "Move fast and break things." It worked, more or less, when the tools were relatively simple and the risks were mostly about duplicated spending or brand inconsistency.

But the ground has shifted beneath us.

We're no longer just managing software costs. We're managing an expanding attack surface where every vendor relationship is a potential security incident, every AI feature update is a potential compliance violation, and every terms of service change is a potential liability transfer.

The numbers make this painfully clear: Organizations without active SaaS management programs could see portfolio growth upwards of 33.2% each year. That's 90 new applications annually for the average company. That's 90 new vendor relationships. 90 new contracts to review. 90 new security postures to evaluate. 90 new opportunities for AI capabilities to be added without your knowledge.
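That "90," by the way, is just the average portfolio compounding at the unmanaged growth rate - a quick sanity check, using Zylo's averages:

```python
# Quick check of the growth figure above (illustrative, using Zylo's averages).
current_apps = 275
unmanaged_growth_rate = 0.332   # annual growth without active SaaS management

print(round(current_apps * unmanaged_growth_rate))  # ~91 -> the "90 new applications"
```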

And with IT owning just over a quarter of spend and only 15.9% of apps - yet remaining ultimately accountable for security, risk mitigation, governance, and compliance across all of them - the structural mismatch between authority and accountability is becoming untenable.

Think about that for a moment.

Your business units have the authority to create vendor relationships that could tank your cyber insurance coverage. Your legal team doesn't see the contracts until they're over a certain dollar threshold (if then). Your IT and security teams are trying to protect an attack surface they can't fully see. And your insurance company is writing exclusions based on security controls and vendor management practices you may not even know you're failing to implement.

This isn't sustainable. And the vendors know it.

Terms of Service Are Not Your Friend

Here's an uncomfortable truth: Software vendors are not incentivized to protect your interests when they add AI capabilities. They're incentivized to add features quickly, differentiate from competitors, and shift liability away from themselves.

I've watched this pattern repeat across industries.

A productivity tool adds AI writing assistance. The updated terms of service (buried in an email you probably didn't read) now state that your inputs may be used to "train and improve AI models" unless you manually opt out in settings. They've added OpenAI as a subprocessor. They disclaim accuracy of AI-generated content. They require you to ensure compliance with applicable laws when using AI features.

A customer service platform adds AI chatbot capabilities. The new data processing addendum (which you'd only see if you specifically requested it) reveals they're now routing customer conversations through multiple AI providers depending on language and complexity. Your customers' personal information is being processed by vendors you've never heard of, in jurisdictions you didn't agree to.

A project management tool adds AI task prediction. They've updated their retention policies because training AI models requires keeping data longer. Your project data - including potentially sensitive strategic information - is now retained for "model improvement purposes" beyond your original retention agreement.

These aren't hypothetical scenarios. These are real patterns I'm seeing across the SaaS ecosystem as vendors race to add AI features.

Remember: Oregon identified 208 unique AI-enabled products from 133 vendors, and most were "not originally procured with AI or Generative AI in mind." Each of those 133 vendors made unilateral decisions to add AI capabilities. Each one updated their terms of service. Each one created new data processing relationships. And state agencies - with procurement oversight and more formal governance - only discovered this when they specifically went looking for it.

How many of your software ecosystem vendors have done the same thing?

When was the last time someone in your organization actually read an updated terms of service document?

Meanwhile, the legal liability flows downstream to you. When the AI generates discriminatory hiring recommendations, you're liable. When the AI hallucinates false information that damages someone, you're liable. When customer data gets exposed because a subprocessor three layers deep had inadequate security, you're liable - and your insurance might not cover it because you never properly vetted your vendor's vendor's vendor.

Your legal team needs to be at the table for software acquisition. Not just for contracts over $100K. Not just for "enterprise" deals. For every application that touches your data, serves your customers, or processes information that could create liability.

Because the vendors' lawyers are definitely at their table, writing terms that protect them and expose you.

This isn't sustainable. And the vendors know it - which is why their terms of service are written the way they are.

The Skills Gap No One's Talking About

Most employees have been trained to ask: "Does this tool solve my immediate problem?" Very few have been trained to ask:

  • What am I agreeing to when I click 'accept'?

  • Where does my data go when I upload it?

  • Who are the subprocessors, and what are their security practices?

  • What happens if this tool breaks, or gets breached, or uses our data in ways we didn't anticipate?

  • Does this create risks my organization isn't prepared to handle?

  • What liability am I creating for my employer?

These aren't technical questions. They're judgment questions. Governance questions. Questions about understanding the second-order and third-order consequences of our choices.

And they require collaboration between functions that traditionally operated independently:

IT understands the technical integration, the data flows, the security implications of how systems connect.

Legal understands the contractual obligations, the liability frameworks, the regulatory requirements we're bound by.

Privacy understands the data handling requirements, the consent mechanisms, the rights we need to preserve.

Security understands the threat landscape, the vendor risk indicators, the controls needed to mitigate exposure.

Organizational leaders understand the operational needs, the user experience requirements, the value proposition that makes a tool worth considering.

But here's the problem: These perspectives rarely converge before a purchasing decision happens. Someone in Marketing finds a customer engagement platform with AI-powered personalization. They check with IT about integration. IT says it works technically. The tool gets deployed. Six months later, Privacy discovers it’s sending customer data to three AI subprocessors in different jurisdictions. Legal has never seen the data processing agreement. Security didn't know to evaluate the vendor's AI-specific controls.

The conversation happened - just in sequence, not in collaboration. And by the time all the right questions get asked, the tool is already embedded in workflows, contracts are signed, and the risk is already live.

The magic in the middle requires these functions to collaborate before decisions are made, not discover problems after deployment. Not handoffs. Not 'IT will handle the technical stuff, Legal will handle contracts, Security will scan for vulnerabilities.' Actual collaboration where different expertise converges on the same decision at the same time.

Right now, most organizations don't have forums, processes, or even relationships that make this kind of collaboration possible. They have approval chains that assume someone else is asking the hard questions.

It's Budget Season: Are You Budgeting for the Right Things?

Right now, organizations are finalizing their 2026 budgets. IT leaders are making cases for new tools, infrastructure upgrades, and security solutions. Organizational leaders are advocating for the applications their teams need to stay competitive. Boards are asking: how are you using AI to get ahead?

But here's what I'm not seeing enough of in those budget conversations: Investment in the governance infrastructure that makes your vendor ecosystem manageable instead of catastrophic.

Are you budgeting for:

  • Legal resources dedicated to reviewing software terms of service - not just six-figure enterprise contracts, but the dozens of mid-tier tools that collectively create massive exposure?

  • Vendor risk management platforms that give you visibility into your entire supply chain, track subprocessor relationships, and alert you when vendors add AI capabilities or change data processing terms?

  • Cross-functional governance processes that bring IT, Legal, Privacy, Security, and Business together before purchasing decisions happen, not after incidents occur?

  • AI literacy training that teaches employees to evaluate vendor risk, understand data processing implications, and recognize when they're about to create exposure?

  • Contract negotiation expertise to push back on vendor terms that unreasonably shift AI-related liability to customers?

  • The time and headcount to actually review the terms of service for the hundreds of applications you already have, let alone the new ones coming in at a 33% annual growth rate?

Hamilton's MFA gap cost them $18.3 million in denied coverage. And that was for a relatively straightforward security control failure. What happens when the insurance adjuster asks:

  • "Did you know Vendor X added AI capabilities that process your data through third-party subprocessors?"

  • "Did you review and approve the updated terms of service?"

  • "Did you conduct due diligence on their AI security practices?"

  • "Did you have processes in place to track vendor changes that materially impact your risk profile?"

If your answer is "we trusted our employees to choose good tools" or "we relied on IT to discover things after deployment," your claim might be denied. And unlike Hamilton's $18.3 million, your exposure could be exponentially larger if it involves customer data, regulatory violations, or AI-generated harms.

The growing cost of software - that 9.3% year-over-year increase - isn't the problem. The problem is that we're spending more while managing less, understanding less, and controlling less of what we're actually buying.

You're not purchasing software anymore. You're purchasing relationships with vendors whose security practices, data handling policies, AI capabilities, and subprocessor networks could determine whether your cyber insurance covers you when something goes wrong.

Budget accordingly.

We Need Better Digital Citizens Across Every Function

I don't believe IT leaders are solely responsible for fixing this. I don't believe legal teams should carry this alone either. I think the responsibility lies with every leader in every area of every organization.

IT and InfoSec can protect, but they need to know what exists in the environment and have authority to enforce standards. They need partners, not surprises.

Legal teams can negotiate contracts and assess liability, but they need to be brought into software acquisition decisions early, not just for enterprise deals. They need collaboration, not handoffs after tools are already deployed.

Privacy professionals can ensure compliance, but they need visibility into data flows and vendor relationships. They need involvement, not after-the-fact notification.

Business leaders can innovate and drive productivity, but they need to understand that vendor selection is a risk decision, not just a utility decision. They need education, not blame.

Employees can be productive, but they need training that goes beyond "how to use this software" to "how to be a responsible steward of our organization's data and reputation." They need context, not just tools.

This is about collective digital citizenship. It's about recognizing that in an AI-augmented world where more than two-thirds of SaaS applications have added AI capabilities - often "with or without the customer agency's knowledge or consent," as Sean discovered - every person who touches technology is making decisions that create or mitigate risk for themselves, for the organization, and for everyone whose data you hold.

The durable skills that matter most right now aren't Python or prompt engineering. They're:

  • Critical thinking to evaluate whether a vendor's AI features create risks that outweigh their benefits

  • Ethical reasoning to recognize when terms of service shift unacceptable liability to your organization

  • Risk assessment in situations where vendor AI capabilities are changing faster than regulations can keep up

  • Judgment about what data should and shouldn't be processed by vendor AI systems

  • Collaboration across IT, Legal, Privacy, Security, and Business to create governance that enables innovation while managing exposure

These are human capabilities that become more valuable as AI becomes more capable and more embedded in your vendor ecosystem. And they're capabilities most organizations aren't systematically developing across functions.

Going Backward to Go Forward

I've written before about Dick Fosbury, who revolutionized high jumping by turning around and going over backward. He looked ridiculous. People laughed. But he understood something others didn't: foam mattresses had changed the fundamental conditions of the sport. Entirely new approaches were now possible.

AI is our foam mattress. But it's not just changing what we can do - it's changing what our vendors are doing with our data, often without our knowledge or meaningful consent.

Most organizations are still perfecting their scissors kick - the technique Fosbury left behind - optimizing decentralized purchasing, trusting vendor assurances, signing terms of service without deep legal review, treating software acquisition as a utility decision rather than a risk decision.

Going backward might look like:

  • Admitting we don't actually know what our vendor ecosystem looks like or what they're doing with our data

  • Acknowledging that the speed-first, trust-based vendor selection process isn't working anymore

  • Recognizing that Legal needs to be involved in software acquisition decisions regardless of dollar amount

  • Accepting that we need cross-functional collaboration, not siloed decision-making

  • Slowing down vendor selection long enough to understand what we're agreeing to and what risks we're inheriting

This feels uncomfortable. It feels like adding friction when everyone's trying to move faster. It feels like going downhill when we thought we were climbing.

But sometimes moving forward means going backward first. Sometimes reaching the global optimum - the truly secure, truly innovative, truly responsible organization - requires the courage to climb out of our comfortable valley and build something better.

Because the alternative is standing in that valley when the breach happens, when the insurance claim gets denied, when you realize that the vendor you never properly vetted just exposed your customers' data through an AI subprocessor you didn't know existed.

An Urgent Plea: Invest in the Magic in the Middle

I started with that room full of Oregon civic leaders, watching them process Sean’s revelation that state government had 208 AI-enabled products from 133 vendors - most "not originally procured with AI or Generative AI in mind," with capabilities now "available for use by agency employees with or without the customer agency's knowledge or consent."

I'll end with the same urgent plea I made to them:

Your employees need to understand why vendor relationships create risk. AI literacy isn't optional anymore, and it's not just about how to use AI tools. It's about understanding what happens when vendors add AI capabilities, where your data goes, what you're agreeing to, and what responsibilities you have as stewards of organizational and customer data.

Your vendor ecosystem needs active governance. The old model - sign a contract, maybe do a security questionnaire, check in at renewal - doesn't work when vendors can fundamentally change what they do with your data by adding AI features and updating terms of service. You need visibility into your entire supply chain. You need processes to track vendor changes. You need Legal involved in acquisition decisions. You need the ability to assess risk before it becomes crisis.

The magic happens in the middle. Data governance without people governance is incomplete. People governance without data governance is dangerous. And both are insufficient without legal and security governance of your vendor relationships. AI governance requires all of these working together - IT understanding the technical implications, Legal understanding the contractual obligations, Privacy understanding the data handling requirements, Security understanding the threat landscape, and Business understanding the operational needs.

We're three years into the generative AI era.

We've had time to see the patterns, learn from the breaches, understand the risks. The insurance companies certainly have - they're now denying claims when security controls aren't in place. Soon they'll be asking about your vendor risk management practices too, if they aren't already.

The question isn't whether your vendors are adding AI capabilities without proper oversight. The question is whether you're going to treat this as a problem to be managed reactively after incidents occur, or as a transformation moment requiring investment in cross-functional collaboration, governance frameworks, legal resources, and collective responsibility.

At AIGG, we were born out of recognition of these growing risks. We work with organizations to build AI governance programs that connect data protection with people development and vendor risk management. That create visibility without creating bottlenecks. That bring Legal, IT, Privacy, Security, and Business together to make informed decisions about vendor relationships. That develop the durable skills - the judgment, the critical thinking, the ethical reasoning, the cross-functional collaboration - that make AI safe instead of catastrophic.

Because here's what I know after years of living through technology transformations: The organizations that survive and thrive aren't the ones with the most AI tools or the lowest software costs.

They're the ones with the best digital citizens across every function. The ones where IT, Legal, Privacy, Security, and Leadership teams understand their shared role in governance. The ones where authority and accountability actually align. The ones where people ask "should we, and what are we agreeing to?" before they ask "can we?"

Your insurance company is already asking these questions. Your auditors will be soon. Your board should be asking them now. And your vendors are writing terms of service that assume you won't ask them at all.

The comfortable valley - where we optimize for speed and trust and assume our vendors will protect our interests - isn't safe anymore. The foam mattress has arrived. The conditions have changed. Your supply chain is your attack surface.

It's time to turn around and jump backward, even if it looks foolish, even if it means going slower before we can go faster, even if it requires admitting we don't have vendor risk management figured out yet.

Because the alternative - the $18.3 million alternative, the denied insurance claim alternative, the "we had no idea our vendor's subprocessor was processing our data that way" alternative - is worse.

Let's build the magic in the middle. Together. Across functions. Before the next budget season becomes the next crisis, and before your vendor ecosystem becomes your liability.

Onward.

Resources from AIGG on your AI Journey

At AIGG, we understand that adopting AI isn’t just about the technology - it’s about people. People using technology responsibly, ethically, and with a focus on protecting privacy while building trust. We’ve been through businesses’ digital transformations before, and we’re here to guide you every step of the way.

No matter your type of organization - school district, government agency, nonprofit, or business - our team of C-level expert guides, including attorneys, anthropologists, data scientists, and business leaders, can help you craft bespoke programs and practices that align with your goals and values. We’ll also equip you with the knowledge and tools to build your team’s literacy, your responsible practices, TOS review playbooks, guidelines, and guardrails as you leverage AI in your products and services.

Don’t leave your AI journey to chance.

Connect with us today for your AI adoption support, including AI Literacy training, AI pilot support, AI policy protection, risk mitigation strategies, and developing your O’Mind for scaling value. Schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.

Your next step is simple. Let’s talk together and start your journey towards safe, strategic AI adoption and deployment with AIGG.

Let’s invite AI in on our own terms.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
