Why “Efficiency Only” Leaders Will Miss the Future of Work
In 1984, the Macintosh was introduced in an ad that ran only once, during the Super Bowl.
Apple Computer (as it was known back then) promised that Macintosh technology would shatter conformity and open up new ways to think and work.
Escaping AI’s Local Maxima
I was stepping into tech just as the Apple “think different” notion was in its infancy, and that combination of timing and technology changed the trajectory of my life.
Forty-two years later, the dominant story about AI could not be more different. Instead of inviting us to “think different,” too many leaders are being told to think smaller: treat AI as a task accelerator, a way to do the same work a bit faster and cheaper. That may look efficient on a dashboard, but it completely misses the question that defined my career.
I would be endangered today
Nobel Prize–winning economist Simon Johnson, in a recent presentation on Technology and Global Inequality in the Age of AI, stated that around 60% of the U.S. workforce does not have a four‑year college degree, and that women and younger workers are especially vulnerable as AI reshapes job tasks and skills.
I am part of that 60%. Today, those without a four‑year degree are most at risk of being pushed into lower‑paid, low‑stability service work if AI is used mainly to automate “middle” roles, with women often feeling that squeeze first. Research shows, for example, that women use large language models less than men.
A recent Harvard Business School working paper that pooled data from more than 140,000 people across 25 countries found that women have roughly 20–22% lower odds of using generative AI tools than men, even when they work on similar tasks and have similar access.
That matters to me personally because my own career was built on saying “yes” to a new technology wave in the 1980s; if women are now hanging back from AI—often for good reasons, like risk and trust—then the very tools that opened doors for someone like me risk becoming another force that quietly shuts them.
Simon Johnson’s observations are not at all abstract for me; they define my story. I entered technology in the mid‑1980s as a woman without a bachelor’s degree, and I was able to build a career precisely because new technologies (personal computers) and new kinds of work were opening up for people like me.
A client conversation that hit too close to home
Recently, I spoke with a client who’d been at an offsite where her team listened to a deeply experienced, operations‑focused education leader walk through their view of AI - moving from buzz to business value. On the surface, it was about tactics. Tools, workflows, process improvements.
And as I listened to her recount it, it became clear to me that the session had essentially been an efficiency presentation marketed as “AI strategy”: keep doing what we do now, and use AI to squeeze more out of the system.
I left unsettled. Not because the presentation lacked expertise (it didn’t), but because the frame felt so different from the one that allowed me to thrive decades ago. I was given a shot because - when computers first came out - my job didn’t require a pristine resume or perfect pedigree; I was hired because I could string three sentences together and make sense.
If even our most trusted leaders now see AI mainly as a way to trim time and headcount, what happens to the next 28‑year‑old woman without a degree who is looking for her first big break?
Why “AI lifts all boats” is nothing but a myth
If you want proof that there is nothing automatic about “AI lifts all boats,” you don’t need a PhD in economics, just a phone with a stock market app. Read the recent headlines celebrating how the AI‑fueled “Magnificent 7” are powering U.S. stock indexes, as if a handful of mega‑cap tech companies could stand in for the real economy.
Meanwhile, the people I sit with - the educators, staff, students, and leaders who are honestly trying to do the right thing - are asking much more basic questions:
“Will this help me do better work?”
“Will my job even exist?”
“Where do I fit in this picture?”
“Will my degree be worth anything when I graduate?”
The distance between those headlines and those questions is exactly why we cannot pretend AI will magically lift all boats.
Johnson’s historical view backs this up. In the post‑1940 decades, new tasks appeared fast enough that productivity gains translated into broader prosperity. Workers without degrees, like my younger self, could ride that wave into solid, meaningful work. Since the 1980s, especially for less‑educated workers, that engine has stalled.
Automation has dominated, and new middle‑skill, middle‑income roles have not kept pace.
Escaping the “efficiency only” local maximum
In an earlier piece, I wrote about escaping local maxima: the trap of polishing what we already do instead of asking what new peak we should be climbing. AI is the ultimate local‑maxima trap. If leaders define success as “same org chart, fewer people,” they will absolutely find ways to cut costs.
But they will also lock in three big losses:
First, new task creation. When we only ask, “What can we automate?” we stop asking, “What new, meaningful work could exist if we used AI as a thought partner in addition to automating our work?” Johnson’s data show that shared prosperity depends on creating new, better tasks, not just stripping out old ones.
Second, durable skills. An efficiency‑only mindset underinvests in the deeper capabilities humans bring: judgment, creativity, sense‑making, collaboration across differences, and empathy - the “durable skills” that do not expire with every new tool release. Those are exactly the skills that allowed someone like me, without a degree but with curiosity and resilience, to build a 40-plus year career in tech.
Third, trust. When frontline workers and students see AI appear mainly as a way to monitor, speed up, or deskill their work, they do not experience it as innovation. They experience it as extraction that threatens their very livelihoods.
The future of jobs is a choice, not a destiny
Johnson’s work makes something very clear: there is nothing inevitable about AI creating good jobs.
The periods when technology supported broad middle‑class growth were not accidents. They were times when new work was actively created, worker voice was strengthened, and institutions chose to use technology to augment human expertise instead of simply replacing it.
Today, three years into our new LLM-enabled world, nearly every job is partly an AI job, which means nearly every leader is now, whether they like it or not, an AI governance leader.
Choosing pro‑worker AI is not about banning automation; it is about asking three prior questions:
What new, meaningful work becomes possible here for people without four‑year degrees, for women whose skills are disproportionately at risk, for Gen Z and Millennials whose roles are highly exposed?
How do we design AI so that our people become more expert and more trusted, not more monitored and replaceable?
How will we track who is being augmented versus who is being automated, and are we prepared to change course when we see inequitable patterns emerge?
AI literacy as a boardroom skill, not just a classroom slogan
We talk a lot about AI literacy for students. We talk less about AI literacy for executives and boards, even though their decisions shape the landscape everyone else has to live in.
The most important literacy is not memorizing model names; it is the courage and capacity to wrestle with some hard questions:
Can most leaders explain, in plain language, how AI is changing work in their organizations right now?
(And not just in workflow and automation documents, but in terms of strategic competitiveness and upskilling career paths?)
Where are we using AI primarily to track, score, or speed up people, and how are we using it to genuinely expand what humans can do?
What is our plan for sharing upside when AI improves outcomes - through wages, new roles, training, or community investment - or are we concentrating gains only at the top?
When estimates suggest that more than half of workers’ current skills are likely to be affected by generative AI and that skillsets could change by up to 65% by 2030, this stops being a “nice to have” conversation.
Redefining durable skills for AI 2.0
The skills conversation in many conference rooms is still stuck at the tools level: prompts, dashboards, platform X versus platform Y.
The deeper question is: who do we need to become to collaborate with AI well?
To me, durable skills in an AI‑augmented world look like meaning‑making, relationship work, and system‑level thinking.
Meaning‑making is the ability to interpret outputs, ask better questions, and know when to override the model, not just how to get a slick paragraph or pretty chart.
Relationship work is building and maintaining trust with students, patients, customers, and colleagues in a world where deepfakes, misinformation, and opaque systems are everywhere.
System‑level thinking is seeing how AI decisions ripple through strategies, policy, equity, and long‑term institutional health, not just this quarter’s metrics.
Leaders: ignoring these important skills is not staying neutral; it is quietly accepting that existing inequalities will deepen on your watch.
These are not “soft skills.”
They are precisely the skills that are hardest to automate and most essential to pro‑humanity AI.
Johnson’s work, and my own career arc, both suggest that when technology complements this kind of human expertise, people without traditional credentials can still build rich, impactful careers. When it replaces or devalues that expertise, inequality grows, and the ladder I climbed gets pulled up behind me.
From someone who has seen 40+ years of tech
When Apple ran that 1984 commercial, I did not have a college degree, but I did have curiosity, grit, and access to a new kind of machine (and it happened to be a Mac).
That combination was enough to open doors.
Looking back from today, with AI at the center of almost every strategic conversation, the question is no longer whether AI will transform work. It’s whether we will let it trap us in a slightly more efficient version of the present, or use it to create new, better work for more people like the younger me, in more places around the globe.
If you are leading in higher education, business, or the public sector, your AI choices are already shaping the next decade of jobs and learning.
If your roadmap is silent on frontline workers, on women and younger staff, on people without four‑year degrees, you’re probably optimizing for the wrong future.
The good news is that it is not too late to escape the local maximum. But it will take more than a few tactical projects.
It will take courageous governance: making AI literacy a boardroom skill, investing in durable skills, and insisting that new technologies are designed to serve minds… not just margins.
Resources from AIGG on your AI Journey
At AIGG, we understand that adopting AI isn’t just about the technology; it’s about people - people using technology responsibly, ethically, and with a focus on protecting privacy while building trust. We’ve been through businesses’ digital transformations before, and we’re here to guide you every step of the way.
Whatever type of organization you lead - school district, government agency, nonprofit, or business - our team of C-level expert guides, including attorneys, anthropologists, data scientists, and business leaders, can help you craft bespoke programs and practices that align with your goals and values. We’ll also equip you with the knowledge and tools to build your team’s AI literacy, responsible practices, TOS review playbooks, guidelines, and guardrails as you leverage AI in your products and services.
Don’t leave your AI journey to chance.
Connect with us today for AI adoption support, including AI Literacy training, AI pilot support, AI policy protection, risk mitigation strategies, and developing your O’Mind for scaling value. Schedule a bespoke workshop to ensure your organization makes AI work safely and advantageously for you.
Your next step is simple. Let’s talk together and start your journey towards safe, strategic AI adoption and deployment with AIGG.

