From Moral Questions to Concrete Moves: Ethics, Technology + Public Policy

The most valuable courses don’t hand you answers; they hand you tools and an expectation that you’ll use them. Over the past seven weeks, Stanford’s Ethics, Technology + Public Policy (ETPP) program encouraged me to turn abstract moral unease into concrete questions for my own work in AI regulation and governance. This post shares two key ideas that stayed with me. I’m publishing it to invite critique, collaboration, and better ideas, because the real work starts after the final session ends.

A quick note on the course

The course paired deliberately uncomfortable readings with live framing, faculty and guest lectures, and small-cohort discussions. Weekly topics ranged across algorithmic decision-making and fairness; child safety and responsibility; political economy and power; data, privacy, and civil liberties; and AI and the future of work.

We also treated AI use transparently: use it, know what you’re using, review it, cite it, own the output. This mirrors the governance posture organisations should take toward AI. Augmentation is acceptable; accountability is non-negotiable. Curiosity and experimentation are necessary, but they must sit inside clear guardrails, monitoring, and an awareness of AI’s limitations.

  1. Omelas, Um-Helat, and the costs we choose to see

Ursula K. Le Guin’s The Ones Who Walk Away From Omelas and N.K. Jemisin’s The Ones Who Stay and Fight (set in the city of Um-Helat) are both allegories of prosperity bargains. In Omelas, a society’s joy depends on the suffering of a single child. Most of Omelas’ citizens choose comfort over guilt and stay; some reject the bargain and leave. In Um-Helat, ‘social workers’ erase corrosive ideologies through violence to preserve an egalitarian order. One society tolerates hidden harm; the other flirts with illiberal control. The allegories force you to answer two questions: what harms are you prepared to normalise, and what are you prepared to suppress in the name of the collective good?

Omelas’ central metaphor – a benefit underwritten by a suffering we choose not to confront, embodied by an abused child locked in a dark room that no citizen of Omelas helps – maps uncomfortably well onto parts of the modern technology stack: extractive data practices, hidden labour costs, environmental damage, and concentrations of power. Kate Crawford’s Atlas of AI makes that visible, tracing the material, labour, and political infrastructures that subsidise ‘frictionless’ AI experiences. Once you have seen these patterns, it becomes harder to treat them as unfortunate externalities rather than design choices.

That tension is not abstract for me. The same infrastructure that enables my day-to-day work, and even this post, is far from ethically clean, and that dissonance is distracting. Does doing ‘good’ work within the system offset complicity in its harms? Is public critique hypocritical if you run on the tools you critique? Or is refusing to look away, and pushing for institutional change from inside, part of the moral work?

Albert Bandura’s account of selective moral disengagement describes how harmful systems are maintained by ordinary, often decent, people through the diffusion of responsibility, sanitising language, and the displacement of responsibility onto the market. This doesn’t resolve the discomfort, but it reframes the task: accept an imperfect stack, and work to close the gap between what we know and what we’re willing to act on.

For regulators, policy teams, and internal governance functions, this means treating hidden costs as part of the risk surface, not as someone else’s problem. And it means building environments and processes that make it harder for firms and individuals to ignore the metaphorical locked dark room.

  2. Algorithmic fairness as a socio-technical choice, not a checkbox

The algorithmic decision-making and fairness module reinforced something I’d felt in practice – ‘algorithmic fairness’ is not a single metric we can optimise for once and then forget.

You have to identify which notion of fairness you’re optimising for in a given context – equal opportunity, calibration, error-rate parity, or something else – and be honest about why. You must also decide (and disclose) who bears the residual error when the system is wrong. That is where ethics, product decisions, and public policy collide.
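To make that concrete, here is a minimal, hypothetical sketch (mine, not the course’s) of how those notions are typically measured for a binary classifier: per-group true positive rate for equal opportunity, per-group false positive rate for error-rate parity, and per-group precision as a rough calibration-style check. The data and threshold below are invented purely to show the mechanics.

```python
# A minimal, illustrative sketch of measuring three common fairness notions
# for a binary classifier. All data here is synthetic and hypothetical.
import numpy as np

def group_rates(y_true, y_pred, group, g):
    """Return (TPR, FPR, PPV) for the rows belonging to group g."""
    mask = group == g
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean() if (yt == 1).any() else float("nan")  # compared across groups for equal opportunity
    fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")  # error-rate parity compares this as well
    ppv = yt[yp == 1].mean() if (yp == 1).any() else float("nan")  # per-group precision, a calibration-style check
    return tpr, fpr, ppv

# Synthetic toy data, purely to show the mechanics.
rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)
y_true = rng.integers(0, 2, size=n)
scores = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=n), 0, 1)
y_pred = (scores > 0.5).astype(int)

for g in ["A", "B"]:
    tpr, fpr, ppv = group_rates(y_true, y_pred, group, g)
    print(f"group {g}: TPR={tpr:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")
```

In general, when base rates differ across groups, these quantities cannot all be equalised at once, which is one more reason the choice of fairness notion has to be made explicitly and disclosed.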

The course and readings underscored that these choices are inherently plural and political. Different jurisdictions, communities, and institutions will draw the line on fairness definitions and measurement mechanisms differently. As Richard Susskind explains in ‘How to think about AI: A Guide for the Perplexed’, “Silicon Valley technologists’ conception of responsibility and ethical AI is unlikely to be the same as the thinking and practice of policymakers in China, North Korea, and Russia”. There is no universal slider setting that is the ethical one – philosophers have debated these questions for millennia with no clear answer. The choice cannot be left solely to engineers, nor solely to lawyers or ethicists.

What follows is a more demanding model of governance: interdisciplinary design of decision systems, documentation of value trade-offs, engagement with impacted communities, empirical monitoring of impacts, and a willingness to adjust when harms show up in places the original model didn’t anticipate. As Arvind Narayanan argued in his guest lecture, this looks less like hunting for a perfect formal fairness definition and more like building ‘algorithmic bureaucracies’ – a shift that includes abandoning mathematically precise fairness definitions and embracing empirical, social-scientific methods instead.

For someone in my position, working on AI regulation now and previously on internal AI governance, this resonates. Our role is not to bolt compliance on at the end, but to ensure that questions like “fair to whom?”, “under which legal and social standards?”, “who pays for mistakes?”, and “how could we do this differently?” are asked early, recorded, and revisited.

What I’m doing next

“Don’t walk away from Omelas” is easy to say and hard to operationalise. But the course challenged us to translate moral caffeination into moral actions. Here are mine:

  • Bridging regulation and design earlier – Use my legal and policy role to feed emerging regulatory, ethical, and societal signals into the design of AI systems sooner, so that fairness, transparency, and harm-reduction are treated as product requirements, not retrofits.
  • Pushing for explicit trade-off records – Advocate for decision logs that document chosen fairness notions, assumptions, and known limitations in high-stakes systems (a sketch of what one entry might look like follows this list). If we can’t say what ‘fair enough’ means in context, we shouldn’t be shipping.
  • Surfacing hidden costs – Routinely ask “what are the environmental, labour, data, and power implications of this system?” Who is metaphorically in the locked room, and what would it take to open the door?
  • Investing in the long game – Anchor this work in a 10-year horizon, not a 12-month one. The systems we normalise now will define what feels inevitable later. The course’s final session drove home the message that we underestimate how much we can shift the default over a decade.
  • Normalising interdisciplinary review – Support processes where technical, legal, policy, safety, and front-line teams jointly interrogate models and datasets, not just rubber-stamp them.
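To illustrate the trade-off records mentioned above, here is a minimal, hypothetical sketch of what a single decision-log entry could capture. The FairnessDecision class, its field names, and the example values are my own illustration, not a standard schema or anything prescribed by the course.

```python
# Hypothetical sketch of a decision-log entry for a high-stakes system.
# Field names and example values are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FairnessDecision:
    system: str                      # which model or decision system this entry covers
    fairness_notion: str             # the notion chosen, e.g. equal opportunity
    rationale: str                   # why this notion fits this context
    residual_error_bearer: str       # who absorbs mistakes, and what recourse they have
    known_limitations: list[str] = field(default_factory=list)
    next_review: date = date(2026, 6, 1)  # when the trade-off is revisited

entry = FairnessDecision(
    system="credit-pre-screening-v2",
    fairness_notion="equal opportunity (equal true positive rate across protected groups)",
    rationale="false negatives deny access to credit, so equal access for qualified applicants is prioritised",
    residual_error_bearer="declined applicants; manual review channel with a published turnaround time",
    known_limitations=["postcode may act as a proxy variable; not yet fully audited"],
)
print(entry)
```

Even a lightweight record like this makes the “fair to whom?” and “who pays for mistakes?” questions visible to reviewers, regulators, and affected people, rather than leaving them implicit in model code.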

An open invitation

Introspection and imagination for change are moral actions and can form the first step toward progress. Human beings consistently overestimate the amount of change that can occur in twelve months, and that can be disheartening. But we consistently underestimate the magnitude of change that can occur over ten years. Do not underestimate what you can do, even if the challenge now seems insurmountable.

“Don’t let anybody, anybody convince you this is the way the world is and therefore must be. It must be the way it ought to be” – Toni Morrison, The Source of Self-Regard: Selected Essays, Speeches, and Meditations.

If you’re working on similar problems – choosing fairness metrics under pressure, governing complex AI systems, dealing with safety risks, building safety teams, or trying to align internal AI adoption with public-interest obligations – I would like to be in conversation. I’m especially keen to speak with:

  • Policy and regulatory leaders experimenting with practical guardrails;
  • Product and engineering teams who’ve embedded ethics into real workflows; and
  • Researchers and advocates pressure-testing what ‘responsible AI’ looks like beyond a checkbox.

The ETPP course asked us not to walk away from complexity, but to stay and fight – to name our bargains, confront our blind spots, and act from where we sit. If any part of this challenges you, or if you disagree, I genuinely welcome your pushback. That’s where the work gets real.

References:

Bandura, A. (2002) ‘Selective Moral Disengagement in the Exercise of Moral Agency’, Journal of Moral Education, 31(2), pp. 101–119. Available at: https://www.tandfonline.com/doi/abs/10.1080/0305724022014322

Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press. Available at: https://katecrawford.net/atlas

Jemisin, N.K. (2020) ‘The Ones Who Stay and Fight’, Lightspeed Magazine, Issue 116 (January). Available at: https://www.lightspeedmagazine.com/fiction/the-ones-who-stay-and-fight/

Le Guin, U.K. (1973) The Ones Who Walk Away from Omelas. [PDF] Available at: https://files.libcom.org/files/ursula-k-le-guin-the-ones-who-walk-away-from-omelas.pdf

Morrison, T. (2019) The Source of Self-Regard: Selected Essays, Speeches, and Meditations. New York: Alfred A. Knopf. Available at: https://www.penguinrandomhouse.com/books/566846/the-source-of-self-regard-by-toni-morrison/

Susskind, R. (2025) How to think about AI: A guide for the perplexed. Oxford: Oxford University Press. Available at: https://academic.oup.com/book/59718
