Human Dimensions of AI Literacy: Why Teaching Tech Misses the Point

Margaret used to send me the robot emoji when my emails got too “stiffy,” as she called it. 🤖

After twenty-plus years of living through tech transformations together, she'd perfected the art of being deeply human in digital spaces, an art I was still learning.

She's been gone a year now, and I find myself thinking about that 🤖 emoji a lot these days. Especially as I watch organizations scramble to prepare their people for an AI-augmented world by teaching them to write better prompts. To use ChatGPT like a more efficient search engine. To train AI sales agents to connect with their prospects.

We're optimizing the scissors when the foam mattress has already arrived.

Here's what I mean. We're teaching people to do their current jobs faster with AI, when we should be asking an entirely different question: what becomes possible when the fundamental conditions of work have changed?

And more importantly: What makes humans irreplaceable when machines can handle so much of what we used to do?

The Taxonomy I Couldn’t Access (Until I Could)

Last week, I listened to a webinar where Elena Magrini from Lightcast shared research that stopped me cold: eight of the top ten most-requested AI skills in job postings are human skills.

Not Python. Not machine learning. Communication. Critical thinking. Leadership.

She was talking about something she called “durable skills” - the human capabilities that remain valuable regardless of technological disruption. And she mentioned that Lightcast maintains a taxonomy of more than 30,000 of these skills, hundreds of which critically apply to leveraging AI in jobs of the future.

I needed to see that list.

They offered it via an API.

And here's the thing: I'm not a programmer. The Lightcast API required credentials, authentication, scripts I didn't know how to write. I had a clear goal and no obvious path to reach it. Sure, I knew folks who could help me. We have smart people here at AIGG. But we're all incredibly busy.

So I did what anyone preparing for an AI-augmented world should do: I partnered with AI to learn something I didn't know.

Claude and I spent the better part of an afternoon iterating. The first script had syntax errors. Then authentication issues. I asked embarrassingly basic questions: “Do I need quotes around my credentials?” Each failure was a learning opportunity. Each error message taught me something new.

We were practicing exactly the skills the data would later reveal as most valuable: problem-solving when things don't work, persistence through multiple failures, adaptability as we adjusted our approach, critical thinking to understand error messages, collaboration between human and AI, learning agility to acquire new technical knowledge on the fly.

After several attempts, the script worked. We pulled 475 of those AI-critical, durable skills from Lightcast's database.
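For the curious, here's roughly the shape of what Claude and I ended up with. This is a minimal sketch in the spirit of that afternoon, not Lightcast's documented API: the endpoint paths, scope name, and response fields below are my illustrative assumptions, so check their documentation and use your own credentials.

```python
# A rough sketch of the kind of script we arrived at. The endpoint paths,
# scope, and response fields are illustrative assumptions, not Lightcast's
# documented API - consult their docs before running anything like this.

import requests

CLIENT_ID = "your-client-id"          # issued by Lightcast
CLIENT_SECRET = "your-client-secret"  # and yes, you do need quotes
                                      # around your credentials in Python

# Step 1: exchange credentials for a short-lived access token
# (an OAuth2 client-credentials flow - this is where my
# authentication issues lived).
token_response = requests.post(
    "https://auth.emsicloud.com/connect/token",  # assumed auth endpoint
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "client_credentials",
        "scope": "emsi_open",  # assumed scope name
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Step 2: request the skills list, authenticated with the bearer token.
skills_response = requests.get(
    "https://emsiservices.com/skills/versions/latest/skills",  # assumed path
    headers={"Authorization": f"Bearer {access_token}"},
)
skills_response.raise_for_status()

# Step 3: read through what came back.
for skill in skills_response.json()["data"]:
    print(skill["name"])
```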

And here's what hit me as I read through them: Technical skills - like the Python script I had just created without knowing the language - are being turned over to tools.

The durable human skills were what made the learning possible.

Tomorrow's workers will likely use entirely different tools. But they'll still need these same fundamental, durable human capabilities.

The irony would have made Margaret smile. She'd spent twenty years encouraging me to be more human in digital spaces, and I was being coached through my failures by a machine, while sharing those lessons here - publicly exposing my very human attempts and failures. Twenty years of robot emojis reminding me to be more human had somehow prepared me for this moment.

The Magic in the Middle

I spend most of my days in AI governance conversations. And I've noticed something troubling: Too many organizations obsess over data governance while barely thinking about people governance, if they think about it at all.

They'll spend months building frameworks to protect data exposure, manage risk, ensure compliance. All critical work. But then they'll deploy AI tools to their workforce with minimal thought about who needs what kind of training, how people actually learn new capabilities, which human skills become more valuable as AI handles routine tasks, or what happens to workers whose expertise becomes automated.

It's the same mistake I see in AI “rollouts.” Organizations focus on the technical side, “readying” the organization by launching Copilot without restricting other “shadow AI” - because that launch feels concrete. Measurable. A goal to be knocked off a list.

But remember what employers are asking for in their job postings. When Lightcast filtered for roles requiring AI skills, they found that eight of the top ten most-requested capabilities are human skills.

Not Python. Not prompt engineering. Not machine learning expertise. Human skills like communication, critical thinking, leadership, creativity, and ethical judgment.

This is where the magic happens. In the middle. Where technical capability meets human wisdom.

Just like governance. You can have perfect data governance frameworks, but if you don't understand how people will actually use the systems, you'll fail. You can have excellent people policies, but if your data hasn’t been governed properly, you'll perpetuate problems or make assumptions based on misinformation.

The real transformation happens where people governance meets data governance. That's proper AI governance.

And the same is true for AI skills. Technical literacy matters. Human capabilities matter more. And the real opportunity is at the intersection.

Marketing roles prioritize generative AI and natural language processing. Manufacturing needs robotics and computer vision. Finance demands machine learning and predictive modeling. HR is bringing in AI from the bottom up. Different careers need dramatically different combinations of technical skills.

But here's what almost nobody is teaching: How each of these technical clusters intersects with durable human skills to create actual value. We're so focused on which technical capabilities matter for which roles that we're missing the more fundamental question: what happens when you combine technical AI literacy with the durable, human skills that amplify it?

That intersection between knowing how AI works and knowing how humans work. Between technical possibility and human need. Between what machines can do and what only people can do. That's where the real competitive advantage lies. That's where transformation happens. That's where the future of work must be built.

The Scissors We’re All Still Perfecting

As I wrote a couple of weeks ago, Dick Fosbury revolutionized high jumping by turning around and going over backward. He looked ridiculous. People laughed. Coaches shook their heads.

But Fosbury understood something profound: the conventional techniques had reached their ceiling.

For decades, high jumpers had refined their approaches. First the scissors technique (an upright, leg-scissoring hop over the bar), then the more sophisticated belly-down straddle. Athletes spent years incrementally improving these methods, squeezing out fractions of inches. They were all trapped on the same hill, competing to see who could extract one more centimeter from fundamentally limited approaches.

The difference? Foam mattresses had arrived. Before foam landing pits, jumping backward could break your neck. The scissors and straddle weren't just tradition, they were survival. You had to land on your feet or risk serious injury.

When conditions changed, entirely new approaches became possible. But only if you were willing to look foolish while you figured them out.

We're at that same inflection point.

AI isn't just a tool for doing our current jobs faster. It's the foam mattress. It's a fundamental change in conditions that makes entirely new approaches possible.

But most training programs are still teaching people to perfect their scissors.

“Here's how to use ChatGPT to write emails faster.” “Here's how to automate your meeting summaries.” “Here's how to generate first-draft presentations.”

We're optimizing for efficiency when we should be reimagining what's possible.

The executive who stops using AI to write better memos and starts questioning whether memos are the right communication medium at all. The teacher who moves past “AI can grade essays faster” to “I think assessment itself needs reimagining when AI can generate essays.” The financial analyst who realizes that when AI handles pattern recognition, her real superpower emerges: explaining why patterns matter, translating data into story, helping people make meaning from numbers and trend lines.

This requires going backward first. Deliberately climbing out of comfortable valleys. A period where you're less competent, not more. Where you look like you're falling off the back of a truck.

It's terrifying. I know because I've lived it.

What Grief Taught Me About Transformation

After Margaret died, I was lost. The frameworks that used to work didn't fit anymore. The person I'd been didn't exist, but the person I was becoming hadn't emerged yet.

I stumbled into something called gradient descent. Not the mathematical concept I'd later understand, but the lived experience of it. The daily recalculation of which direction leads toward something resembling okay.

Every small decision - do I get out of bed, do I answer that email, do I pretend today is normal - adjusts your trajectory by degrees you can't measure in the moment.

Machine learning algorithms use gradient descent to find optimal solutions. They take a step, measure the error, adjust, and step again. Never a straight line, but a wandering path that feels its way toward the lowest point in the loss landscape.

But here's what the mathematics reveals: gradient descent can trap you.

You find a comfortable valley, a local minimum where things feel stable enough. Every small step in any direction seems to lead toward more pain, more uncertainty, more change. So you stop. You optimize for where you are. You convince yourself this is as good as it gets.

The technical term is “local minimum.” You're stuck in a valley that isn't the lowest point. It's just the one you could reach from where you started.
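If you want to see the trap in miniature, here's a toy illustration (mine, not from any of the research I've cited): gradient descent on a curve with two valleys settles into whichever one is nearest to where it starts, not the one that's actually lowest.

```python
# A toy illustration of gradient descent getting stuck in a local minimum.
# The curve f(x) has two valleys - one shallow, one deep. Which one you
# end up in depends entirely on where you start.

def f(x):
    return x**4 - 3 * x**2 + x  # a shallow valley near x = 1.1,
                                # a deeper one near x = -1.3

def grad(x):
    return 4 * x**3 - 6 * x + 1  # derivative of f

def descend(x, lr=0.01, steps=1000):
    for _ in range(steps):
        x -= lr * grad(x)  # take a step downhill, adjust, step again
    return x

# Start on the right and you settle in the shallow valley: stable enough,
# every small step out looks worse, but not the best you could do.
print(descend(2.0))   # ends near x = 1.1 (the comfortable valley)
print(descend(-2.0))  # ends near x = -1.3 (the better minimum,
                      # invisible from the other side of the hill)
```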

Sometimes moving forward means going backward first. Sometimes reaching the best possible version of your life requires the courage to climb out of your comfortable valley and descend into the unknown.

This isn't just about grief.

It's about every transformation we face.

And right now, every organization is standing in a comfortable valley, feeling the ground shift, wondering whether the path forward might require going down before we can climb higher.

The Curriculum Nobody’s Teaching (But Everyone Needs)

Here's what keeps me up at night: We're preparing people for jobs that don't exist yet, using skills frameworks designed for jobs that are disappearing.

Traditional education asks: “What should students know?” Corporate training asks: “What tools should employees master?”

We should be asking: “What capabilities will make humans irreplaceable as AI handles more of what we currently do?”

And then (and here's the hard part) we need to teach those capabilities in ways that don't become obsolete in six months.

Because here's the brutal truth: I can teach you to use today's AI tools, and that knowledge will be outdated by next quarter. I can teach you Python syntax, even as new interfaces make hand-written code unnecessary. I can teach you prompt engineering, just as the models evolve past needing it.

But I can teach you how to think critically about AI outputs, and that skill endures. I can teach you how to recognize when AI misses important context, and that judgment remains valuable. I can teach you how to ask which problems are worth solving (not just which ones AI can solve), and that wisdom becomes more precious, not less.

This is the curriculum transformation we need:

Not “here are the nine AI clusters, memorize them and pick your specialization.” But “here are the distinctly human capabilities that become MORE valuable as AI becomes MORE capable. Let's develop those deliberately while building fluency with AI tools.”

Not “learn these technical skills so you can compete with AI.” But “develop these human skills so you can collaborate with AI to create value that neither humans nor machines could generate alone.”

The magic in the middle. Always the magic in the middle.

Margaret knew this instinctively. We worked in technology for decades, but her talent was never in mastering the latest tool; she did that easily. It was in bringing humanity to digital spaces. In helping people feel seen. In translating between what's technically possible and what actually matters to the person on the other end.

She's not here to see this transformation. But her approach to technology - always human first, always asking “who does this serve?” - is more relevant now than ever.

What This Means for You

Stop perfecting your scissors.

Whatever your equivalent is. The technique you've been refining, the process you've optimized, the expertise you've built. Recognize that it might be optimized for conditions that no longer exist.

Start asking different questions:

Not “how can AI help me do what I currently do?” but “what becomes possible when these constraints disappear?”

Not “which technical AI skills should I teach?” but “which human capabilities become more valuable as AI handles routine work?”

Not “how do I protect my job from AI?” but “how do I become the kind of human that AI makes more powerful, not redundant?”

And then do the hard work of going backward to go forward. Admit that you don't know. Learn in public. Look foolish while you experiment. Collaborate with people who think differently. Ask questions that don't have obvious answers.

I don't know exactly what jobs will exist in five years. Neither does anyone else, despite what the think pieces claim.

But I know this: The jobs that matter will require humans who can do what AI cannot. Ask questions AI wouldn't be trained to ask. Recognize patterns of meaning, not just patterns of data. Make ethical judgments in ambiguous situations. Connect authentically with other humans. Create something new, not just optimize something existing. Hold (and share) complexity with grace. Lead through uncertainty with integrity.

These aren't optional nice-to-haves. They're the core competencies of the AI-augmented future.

Margaret used to simply say, “onward.” Not forward, not upward, just onward. Moving. Learning. Becoming.

That's what we're doing here. Becoming the kinds of humans who make AI more productive while remaining profoundly, irreplaceably human ourselves.

The foam mattress has arrived. The conditions have changed. The question is whether we have the courage to turn around and jump backward, even though we might look foolish, even though we've spent our careers perfecting our scissors.

Let's do this together. Onward.

Note: This analysis draws on Lightcast's global labor market database of billions of job postings, including 1.3 billion from the U.S. alone, combined with Stanford's AI Index Report and patterns from real-world AI adoption across industries, along with my friend Claude. But more than data, it draws on lived experience navigating transformation. In technology, in education, in loss and in learning to be publicly human in digital spaces.

What's your intersection? Where do your human capabilities meet AI possibilities? I'd love to hear what you're learning.

Resources from AIGG on your AI Journey

At AIGG, we understand that adopting AI isn’t just about the technology, it’s about people. People using technology responsibly, ethically, and with a focus on protecting privacy while building trust. We’ve been through businesses’ digital transformations before, and we’re here to guide you every step of the way.

No matter your type of organization - school district, government agency, nonprofit, or business - our team of C-level expert guides, including attorneys, anthropologists, data scientists, and business leaders, can help you craft bespoke programs and practices that align with your goals and values. We’ll also equip you with the knowledge and tools to build your team’s literacy, your responsible practices, TOS review playbooks, guidelines, and guardrails as you leverage AI in your products and services.

Don’t leave your AI journey to chance.

Connect with us today for your AI adoption support, including AI Literacy training, AI pilot support, AI policy protection, risk mitigation strategies, and developing your O’Mind for scaling value. Schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.

Your next step is simple. Let’s talk together and start your journey towards safe, strategic AI adoption and deployment with AIGG.

Let’s invite AI in on our own terms.

Janet Johnson

Founding member, technologist, humanist who’s passionate about helping people understand and leverage technology for the greater good. What a great time to be alive!
