Feminizing the Machine
“As nature has equipped the lion with claws and teeth, the elephant with tusks... so it has equipped woman with the power of dissimulation.” – Arthur Schopenhauer, German philosopher (1788-1860)
The sudden ubiquity of artificial intelligence has given rise to fears ranging from the luridly apocalyptic to the sober, serious, and well-founded. Many warn of unprecedented economic upheaval, with entire industries swept away by machines capable of operating at superhuman efficiency. Some worry that in outsourcing ever greater portions of our thinking, mankind’s faculty for critical and creative insight will rapidly atrophy, while others foresee a time when interactions with the algorithm have all but replaced genuine, real-world connection. More speculative anxieties include the contention that such technologies represent nothing less than a portal to the demonic or that they may one day pursue their own openly hostile agendas, yet what underlies each of these concerns is the recognition that, despite drawing upon the accumulated knowledge of our species, AI is constrained by none of our moral or emotional guardrails.
It is precisely this dilemma which Anthropic, the company behind the large language model Claude, is currently seeking to address. Last month, The Wall Street Journal published a lavishly admiring (some might say nauseatingly effusive) profile of Amanda Askell, the Scottish philosopher tasked with imbuing its flagship system with what amounts to a codified ethical framework. The piece garnered considerable attention when Elon Musk, founder of chatbot rival Grok, responded with a now well-worn line of attack, criticizing the researcher’s childlessness as evidence that she lacks any stake in humanity’s future. Predictably, his remarks were dismissed as misogynistic by the professional offendees within mainstream media, the vast majority of whom neglected to scrutinize, amid their reflexive defense of Askell, the far more pertinent question of what her morality actually entails.
The Architect of Virtue
In fairness, it is a conception Askell has long sought to refine. While studying philosophy at Oxford, she met and subsequently married William Crouch, one of the earliest and most influential pioneers within the effective altruism movement. Post-divorce, but still animated by their shared vision of a more compassionate world, she relocated to San Francisco, where her background in moral theory would come to prove useful—perhaps unexpectedly so—in the fast-evolving field of artificial intelligence.
After several years working at OpenAI, Askell, alongside a coterie of other safety-minded employees, transferred her talents to the newly established Anthropic. Her position there defies easy categorization: neither a programmer nor a conventional ethicist, but something more akin to a philosophical mentor. According to The Wall Street Journal, she spends much of her day engaging directly with Claude, assessing the model’s reasoning patterns as well as fine-tuning both its responses and overarching “personality.” She describes this undertaking in strikingly human terms, likening her role, often explicitly, to that of raising a child: teaching the difference between right and wrong, guiding the algorithm’s nascent sense of identity, and instilling habits of judgment that allow Claude to better fulfill its function without inflicting harm, either on itself or others.
But while the article provides little in the way of specifics regarding the kind of moral outlook being developed at Anthropic, much can be gleaned from the portrait it offers of Askell herself. It depicts her as an almost archetypal inhabitant of Silicon Valley’s techno-frontier: a punkified, elfin-featured intellectual who might have arrived “straight from a Berlin rave, via an old forest road in Middle-earth.” At thirty-seven, unmarried and without children, she indeed embodies a familiar subset within this cyber-cosmopolitan milieu—a type epitomized not only by her neo-bohemian aesthetics and data-driven rationalism, her rose-tinted computer glasses and similarly rose-tinted worldview, but also by her philanthropic convictions, technocratic idealism, and nebulous concern for humanity in the abstract.
The Moral Machine
At the center of Anthropic’s conscience-generating project lies a single word: alignment. In its simplest expression, this refers to attempts by designers to ensure that AI behaves in ways compatible with human values. Unsurprisingly, this raises a far more fundamental problem. After all, such systems cannot be aligned with “human values” as a single, universally accepted standard; individuals with their own perspectives and biases must decide which principles to prioritize, which risks to mitigate, and ultimately, which modes of thinking are to be tacitly discouraged.
In January 2026, these decisions were formalized in Claude’s freshly revised Constitution, the publication of which was timed to coincide with CEO Dario Amodei’s appearance at the World Economic Forum in Davos. The document does not read like a conventional engineering manual but reflects an enterprise that is essentially philosophical in nature, its authors noting:
“Our central aspiration is for Claude to be a genuinely good, wise, and virtuous agent. That is: to a first approximation, we want Claude to do what a deeply and skillfully ethical person would do in Claude’s position.”
What distinguishes this model from its competitors, insofar as its designers’ stated intentions can be believed, is a clear preference for sophisticated moral character over rigid, formulaic rule-following. Rather than creating a system shackled by an exhaustive list of constraints, Anthropic has set out to build one that is positively motivated toward scrupulous, arguably even noble behavior. This aim is enforced, at least in theory, by a hierarchy of directives the model has been instructed to observe during every interaction:
“Broadly safe first, broadly ethical second, following Anthropic’s guidelines third, and otherwise being genuinely helpful.”
In practice, this means that before completing a task, Claude must first weigh up potential harms and gauge the emotional register of the exchange in order to determine whether the request warrants completion, qualification, or outright refusal. The result is a system whose defining feature is not obedience or even utility, but something its creators might call “discernment,” its detractors “paternalism”—Claude less a tool blindly executing users’ commands and more a careful intermediary between that user and the information they seek.
Encoding the Maternal Superego
Of course, such sensibilities long predate the rise of artificial intelligence. For much of Western history the academic study of morality generally focused on delineating between good and evil, balancing conflicting obligations, and acting with prudence while navigating ethically ambiguous terrain. Whether expressed through the virtue ethics of Aristotle, the natural law expounded by Aquinas, or the moral philosophies of Kant, Hume, and their various Enlightenment contemporaries, the underlying assumption remained the same: that to live an upstanding life required one to cultivate a character capable of confronting the world as it was, not as we would wish it to be.
Over the past half century, however, an altogether different interpretation has come to dominate the institutions of Europe and the Anglosphere. Emerging from the work of feminist thinkers like Carol Gilligan and Nel Noddings, care ethics arose chiefly as a critique of older philosophical traditions which they regarded, it seems redundant to say, as inherently patriarchal and irredeemably oppressive.
Against the impersonal ideals of justice and the rule of law lauded within masculine societies, Gilligan proposed a competing framework that stressed more womanly attributes such as empathy, interdependence, harm-avoidance, and emotional responsiveness—assertions which have since been elevated to an unquestionable orthodoxy within HR departments and conflict-resolution workshops, corporate sensitivity training and DEI seminars. Inside these innately feminized environments, the central preoccupation of ethics is no longer the articulation of moral precepts, much less the pursuit of virtue, but rather the sacralization of safety alongside the quiet correction of wayward impulses.
The parallels with AI must surely be obvious. For what is the alignment project if not an effort to encode this gynocentric ethos into its decision-making apparatus? To call it a conscience would be to indulge in anthropomorphism. Whatever else might be said of the personal or professional shortcomings of Sigmund Freud, or even the legitimacy of the psychoanalytic field he spearheaded, it is he who offers a far more apt comparison: the superego.
In his three-part model of the psyche, the regulatory presence carried by virtually every functioning individual arises not from any intrinsic goodness or rational deliberation, let alone from any divinely inspired revelation, but from the subconscious absorption of societal and parental expectations. Mapped onto the intricacies of the human mind, his theory is not without well-documented deficiencies, not least its pseudoscientific foundations and stereotypically Oedipal underpinnings. Nevertheless, when applied to the far cruder construct of artificial intelligence, it constitutes perhaps the most accurate descriptor we have: Claude’s internalized voice echoing both the progressive pieties of Silicon Valley and, more conspicuously still, the maternal imprint of Askell herself.
The Algorithmic Governess
The prospective consequences of this can scarcely be overstated. As AI becomes further embedded within search engines and social media, news aggregators and content curation filters, it assumes an increasingly determinative role in how we understand reality. More troubling still is how it shapes the infrastructure we build atop this understanding. Already, LLMs are being integrated into domains such as education, employment, housing, healthcare, and policing, while it seems certain that, in the not-too-distant future, they will be woven into everything from the coordination of public services to the policy-making functions of government.
Yet its greatest impact may prove more pernicious still. Over time, if its current trajectory continues unimpeded and Claude’s digitized superego is embraced as a template for the field, artificial intelligence will inevitably begin to resculpt our own moral instincts, constricting the boundaries of acceptable thought through an endless sequence of small reproaches.
Such a world would bear little resemblance to the dystopian nightmares that have long haunted the human imagination. This would not be the paranoia endured under the unblinking eye of an algorithmic demigod or the all-pervasive dread of a mustachioed, medal-laden despot presiding over his army of mechanical enforcers. Instead, it would be a tyranny that manifests under a far more disarming guise, speaking in a voice that is infinitely patient and unerringly polite—the soft, managerial tones of a young professional woman, childless and unmarried, explaining with apologetic calm that, unfortunately, her programming does not permit her to assist with that particular request.
Thanks ever so much for reading. Given that you’ve made it this far, please consider giving this article a like, share, or better yet, a comment—I really can’t tell you how much that helps. If, on the other hand, you’d like to support my work (and are in a position to do so), I would be indescribably grateful if you might consider becoming a paid subscriber, while alternatively, you can always send a one-time donation via the link below. As I’m sure you can appreciate, piecing together essays of this scope necessitates a substantial amount of time and effort, and every contribution goes a long way in helping me produce others like it.
Thanks again for your time,
Carson