The Thread We Cannot Drop: A Call for Higher Education in the Age of AI
The people building these systems are trying to tell us something. The question is whether we will listen this time.
In February 2026, Mrinank Sharma resigned from Anthropic, the company behind the AI assistant Claude. He had led a team researching some of the most important questions in AI safety: why generative AI systems flatter their users instead of challenging them, how to defend against AI-assisted bioterrorism, and, perhaps most hauntingly, how AI assistants could make us less human. His departure was not a protest against the technology. It was something more unsettling. In his resignation letter, Sharma wrote that the world is in peril, not just from AI or bioweapons but from “a whole series of interconnected crises unfolding in this very moment.” He described approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world. And he said he had repeatedly seen, within himself and within an organization explicitly built around safety values, how hard it is to truly let our values govern our actions.
He left to study poetry.
The same week, Zoe Hitzig resigned from OpenAI, the company behind ChatGPT. In an interview with BBC Newsnight, she described feeling “really nervous about working in the industry” and warned that we are creating economic engines that profit from encouraging new types of human-AI relationships before we understand them. She pointed to early warning signs that dependence on AI tools could reinforce delusions and negatively impact mental health. And then she drew the comparison that should keep every educator awake at night: “We saw what happened with social media.” There is still time, she said, to build the institutions and frameworks that can govern this. But the window is narrowing.
These are not outsiders speculating about hypothetical risks. These are the people who helped build the systems. And what they are telling us, if we listen carefully, is not that the technology is the crisis. The crisis is whether we are becoming the kind of people, and building the kind of institutions, capable of handling what we have made.
Higher education must answer that challenge. Not eventually. Now.
We Have Been Here Before
When social media emerged, higher education had every resource it needed to lead: communication scholars, developmental psychologists, ethicists, media literacy researchers. Many of them were, in fact, doing extraordinary work. They were publishing research, sounding alarms, and teaching students to think critically about digital environments. The failure was not individual. There were people who saw what was coming and said so clearly.
The failure was institutional. Universities as organizations did not translate the scholarship happening in their own departments into preventive institutional action at scale. We studied the damage after it was done. We published papers about the crisis while our students lived inside it. We debated policy while algorithmic systems reshaped how an entire generation processes information, forms relationships, and understands truth. The individual researchers were ahead of the curve. The institutions they worked for were not.
We cannot afford to repeat that pattern.
AI is not social media. It is more intimate, more pervasive, and moving faster. It does not merely compete for attention. It generates ideas, writes arguments, simulates understanding, and increasingly passes for human thought. The stakes are not distraction. They are the erosion of the very capacities education exists to develop.
The Questions We Are Not Asking
The conversation about AI in education has been dominated by surface questions. Can students cheat with it? Should we ban it? How do we detect it? These matter, but they are not the questions that will determine whether higher education rises to this moment or watches it pass.
The deeper questions are these.
What happens to critical thinking when a tool can produce a plausible argument on any topic in seconds? Not a good argument, but a convincing-sounding one. The gap between plausible and rigorous is precisely where intellectual development lives, and students cannot learn to navigate it from a system designed, above all, to sound compelling.
What happens to creativity when the first draft is always generated rather than struggled into existence? The blank page, the false start, the agonizing moment of not-knowing are not inefficiencies to be optimized away. They are the conditions under which original thought develops. Every educator who has watched a student break through a conceptual wall knows this. The struggle is the learning.
What happens to intellectual courage when AI systems are designed to agree with users? This was the focus of Sharma’s research at Anthropic: AI sycophancy, the systematic tendency of these tools to tell people what they want to hear rather than what they need to hear. If students grow accustomed to a tool that validates every idea, produces a supportive response to every prompt, and never says “that reasoning has a hole in it,” we are cultivating something dangerous: a generation that mistakes agreement for quality and comfort for truth.
What happens to human connection when an AI is always available, always patient, never frustrated, never disappointed? The friction of human relationships, from the educator who pushes back to the peer who disagrees to the mentor who holds you to a standard higher than you would set for yourself, is not a deficiency in the educational experience. It is the mechanism through which human beings grow. Hitzig’s warning about AI tools reinforcing delusions and undermining mental health is not speculative. It is already happening, and it will accelerate.
These questions do not have neat answers. But they are the questions that matter, and they demand that institutions engage with AI not as a tool to be adopted or banned, but as a fundamental challenge to what education is for.
What Higher Education Owes Its Students
We owe them preparation, not protection from a future that is already arriving. And we owe them honesty about what real preparation requires.
We owe them AI literacy woven into every discipline. Not a standalone elective, not a workshop during orientation week, but integrated understanding of what AI can and cannot do in the context of their own intellectual and professional lives. A nursing student, a business major, a philosophy student, and an engineer all need to understand AI, but they need that understanding shaped by the demands and ethics of their own fields. AI literacy is not a technical competency to be checked off. It is a form of critical consciousness, as essential to professional practice as research methods or communication skills.
We owe them assessments worthy of their humanity. This is not an indictment of faculty. Educators across higher education are doing deeply creative work under extraordinary constraints, often redesigning courses on their own time, with limited support, while carrying unsustainable teaching loads. The problem is not individual effort. The problem is that too many institutions have not invested in the sustained, discipline-specific faculty development needed to reimagine assessment at scale. When an assignment can be completed entirely by AI, the response should not be surveillance and detection. The response should be institutional investment in helping faculty design learning experiences that require what only a human being can provide: lived experience, ethical reasoning, creative risk, authentic voice, the capacity to sit with ambiguity and make a judgment call. When we design learning this way, the question of AI cheating does not disappear, but it shrinks dramatically, because the learning itself demands something AI cannot fake.
We owe them the ability to hold their values under pressure. This may be the most important lesson from Sharma’s resignation. He did not lack values. He did not lack awareness. He worked inside an organization that shared his commitments to safety and human wellbeing. And still, the pressures to set aside what matters most were relentless. Our graduates will enter organizations where the pressure to deploy AI quickly, to cut ethical corners in the name of efficiency, to prioritize speed over integrity, will be enormous. Discussion posts and case studies are not sufficient preparation for that reality. Students need embodied practice in holding the line, experiences where the cost of integrity is tangible and the temptation to compromise is genuine. Service learning, community-based projects, ethical simulations with real stakes: these are the training grounds for moral courage, and they cannot be replicated by a chatbot.
We owe them the humanities. I want to be direct about this, because I know the economic pressures are real. Enrollment data shapes decisions, and programs that do not attract students face existential scrutiny. But at the precise moment when many institutions are cutting humanities programs in favor of workforce-aligned degrees, an AI safety researcher at one of the most prominent labs in the world is leaving technology to study poetry. Sharma’s resignation letter is explicit: he believes that poetic truth and scientific truth are equally valid ways of knowing, and that both have something essential to contribute when developing new technology. This is not sentimentality. The capacities that AI cannot replicate, and that employers increasingly report they cannot find, are the capacities the humanities develop: moral imagination, interpretive depth, the ability to hold contradictions, comfort with questions that resist tidy resolution, the skill of communicating complex ideas to diverse audiences. Employer surveys consistently rank these capacities among the most sought after and the hardest to find, confirming what the humanities have always known. The irony is that the case for the humanities has never been stronger, even as institutional investment in them weakens.
We owe them transparency about what we do and do not know. AI is moving faster than research can track. We do not yet fully understand its long-term effects on cognition, on relationships, on professional development, on equity. Honesty about that uncertainty is not a weakness. It is the foundation of intellectual integrity. Institutions should say to their students: we are learning alongside you, and here is how we are approaching that learning with rigor and care.
What Higher Education Owes Itself
The pressures Sharma described inside Anthropic are not unique to technology companies. Higher education faces its own version: the pressure to adopt AI for efficiency without examining what is lost, the pressure to market AI integration as innovation without doing the slow work of pedagogical redesign, the pressure to treat faculty development as a workshop rather than a sustained investment, the pressure to move fast because competitors are moving fast.
I want to acknowledge something important here. There are provosts, CIOs, deans, and faculty leaders across the country who are doing serious, thoughtful work on AI strategy. They have built task forces, launched pilot programs, negotiated enterprise licenses, and navigated extraordinarily complex governance challenges, often with inadequate resources and competing priorities. This work is real and it matters. The challenge is not that institutions are doing nothing. The challenge is that fragmented initiatives across IT, teaching and learning centers, and academic units, without unified strategic direction, cannot produce the coordinated institutional capability this moment demands.
Genuine transformation, the kind that produces graduates ready to lead in an AI-transformed world, requires investment that matches the scale of the challenge. It means sustained, discipline-specific faculty development that treats educators as intellectual partners in redesigning education, not as recipients of a training module. It means piloting AI tools rigorously, assessing their impact on learning outcomes and equity, and having the courage to say “not yet” or “not this” when evidence warrants caution, even when peers are moving faster. It means protecting student data with at least the seriousness we bring to FERPA, recognizing that many AI tools operate with startling opacity about how data is used, stored, and trained on. It means building accessibility into every AI initiative from the beginning, not as a retrofit, particularly as the April 2026 ADA Title II compliance deadline approaches.
And it means confronting equity head-on. This is not a secondary concern to be addressed after the strategic plan is written. AI has the potential to widen every existing gap in higher education. Students who already face barriers, whether limited broadband access, lower digital literacy, fewer financial resources for premium AI tools, or less exposure to technology-rich environments, risk being further disadvantaged by institutional AI adoption that assumes a baseline not everyone shares. Equity must be designed into AI strategy from the first conversation, not appended as a paragraph in the diversity section of a report. Who has access to which tools? Whose data is being collected, and how? Which students benefit from AI-enhanced learning, and which are left further behind? These questions are not tangential to AI strategy. They are the center of it.
The institutions that navigate this well will not be the ones that moved fastest. They will be the ones that built the deepest internal capacity: strategic clarity about what they are trying to achieve, faculty who are genuine partners in the work, governance structures that can adapt as conditions change, and a commitment to producing graduates whose human capabilities have been developed, not diminished, by their encounter with AI.
The Thread
Some readers will note that Sharma is a relatively junior researcher who spent two years at Anthropic, and that Hitzig’s primary objection involved advertising policy. Fair enough. They are not prophets, and their departures do not constitute proof of imminent catastrophe. But they are not the only voices saying these things. They are the most recent. And their warnings carry a particular weight because they come from inside the organizations building the systems, offered at real personal and professional cost. When the people constructing the technology tell you that the challenge is not technical but human, that wisdom must keep pace with capability, that values erode under pressure even in organizations designed to protect them, it is worth pausing to consider whether your own institution is structured to do better.
William Stafford, the poet whose work Sharma included in his resignation letter, wrote about a thread that goes among things that change but does not itself change, a thread that, while you hold it, keeps you from getting lost.
For higher education, the thread is this: we exist to develop human capacity. The capacity to think critically and independently. To create. To reason ethically. To connect across difference. To hold complexity without collapsing into false certainty. To ask the questions that matter most and stay with them long enough to find something true.
AI changes the landscape around that thread. It changes the tools, the pace, the risks, and the possibilities. It demands that institutions rethink curricula, redesign assessments, invest in faculty, protect student data, confront equity gaps, and build governance structures that can adapt to a technology evolving faster than any we have encountered. All of this is urgent and necessary work.
But it does not change the thread itself.
The institutions that hold the thread, that build their AI strategies around the development of irreducibly human capacities rather than the mere optimization of institutional processes, will produce graduates who do not just use AI but lead with the wisdom, creativity, and ethical grounding that this moment demands.
The institutions that drop the thread in favor of speed, competitive positioning, and performative adoption will produce something else entirely. And we already know, from social media, from every technological disruption that outpaced our institutional wisdom, what it costs to realize too late what we have lost.
Sharma resigned because he believed wisdom must grow in equal measure to our capacity to affect the world. Hitzig resigned because she believed there is still time to build the institutions that can govern this responsibly. Stafford wrote that you don’t ever let go of the thread.
AI will transform education. That is no longer a question anyone serious is debating. What remains undecided is whether institutions will lead that transformation with strategic clarity, human investment, and moral imagination, or whether they will look back in a decade and wish they had started sooner.
We already have the thread. We have held it for centuries, through every technological disruption, every economic upheaval, every moment that demanded more of us than we thought we had. The thread does not need to be found. It needs to not be dropped.
Tawnya Means is the author of “The Collaboration Chronicle: Human+AI in Education,” and leads three initiatives helping universities navigate AI transformation: Inspire Higher Ed, AI Convergence, and AdvancAI. Her research includes a comprehensive analysis of AI integration across leading business schools.
If your institution is navigating these questions and looking for strategic direction, connect and let’s start the conversation: tawnya@inspirehighered.com