
As AI enters hospitals, courtrooms, and hiring offices, it’s no longer just about smart machines—it’s about moral machines. This blog post explores whether AI can, or should, have ethics, and what that means for the future of work, justice, and humanity itself. Wisdom Wednesday just got philosophical.


Introduction: The Moral Machine Awakens

Let’s begin with a scene.

You’re sitting in the passenger seat of a sleek, self-driving car on a sunny afternoon. The AI is in control. You’re sipping coffee, casually scrolling through messages, when suddenly, a child darts into the road. The car must choose: veer into oncoming traffic and risk your life — or continue forward and risk the child’s.

You don’t get a vote.
You don’t even get a warning.
The machine makes the call.

Who decided what that AI would do in this moment? A programmer? A corporation? A philosopher?

Welcome to the real-life trolley problem — no longer a classroom exercise in ethics, but a decision embedded in algorithms.


Artificial Intelligence is now making decisions that once belonged to humans alone — decisions about fairness, safety, life, and death. This isn’t theoretical. It’s happening in hospitals where AI systems recommend treatments, in courtrooms where algorithms suggest prison sentences, and in hiring processes where automated interviews evaluate a candidate’s potential — often without human oversight.

The big question for Wisdom Wednesday is this:
Should machines have morals? And if so, can they?

This isn’t just a technical question. It’s a deeply human one.

Because to talk about AI and morality is to ask:

  • What is right and wrong?
  • Who decides?
  • Can ethics be reduced to data and code?
  • And, most provocatively — if we build intelligent machines in our image, do we risk reproducing our best intentions or our deepest flaws?

Philosopher Immanuel Kant believed that morality is a matter of reason — that a rational agent could deduce what is right through universal moral laws. If Kant is right, perhaps machines — being hyper-rational — could be excellent moral agents.

But others disagree. For Aristotle, ethics was about virtue and character, shaped through experience and relationships. If this is the case, can a machine with no lived experience ever develop a moral compass?

And then there’s the darker side:
What happens when AI follows the rules, but the rules are flawed?

One infamous example: facial recognition systems that are far more accurate for white male faces than for Black or female ones — not because the AI is biased by nature, but because it was trained on biased data. Machines, after all, learn from us. And we’re not always great teachers.

As the psychologist B.F. Skinner once quipped:

“The real question is not whether machines think but whether men do.”


So here we are, standing at the edge of a new moral frontier.

Our creations are becoming agents — not with consciousness (yet), but with the power to act, decide, and affect lives at scale. The question of whether machines should have morals becomes less about whether they can understand good and evil, and more about how we, their creators, encode our own ethical principles into systems that may not share our intuitions.

And therein lies the challenge.

Because once we begin handing over moral decision-making to machines, we’re not just automating choices.
We’re redefining what it means to be moral.

Welcome to the era of algorithmic ethics — where philosophy meets programming, where code meets conscience, and where Wisdom Wednesday might just save the world… or at least make us pause before handing the steering wheel to something that doesn’t have a soul.


Where AI Goes to Work, Ethics Follows

AI doesn’t clock in, take lunch breaks, or gossip by the water cooler — but it’s reshaping the modern workplace just the same. From corporate boardrooms to factory floors, from hospitals to hiring offices, AI is becoming the invisible colleague. But unlike your quirky coworker from accounting, this one can’t be reasoned with, and doesn’t know what “fair” means — unless someone tells it.

And here’s the kicker: sometimes, no one tells it.


Hiring: The Automated Gatekeeper

Let’s start with hiring — that critical moment where someone’s future can change with a handshake… or now, with a scan.

In companies across the U.S. and around the world, AI tools are increasingly used to screen resumes, conduct video interviews, and even analyze facial expressions and tone of voice. It sounds efficient. Objective. But is it?

Not always.

A 2025 Australian study raised red flags about the very technology meant to help streamline hiring. The research found that AI systems often misinterpret non-native English speakers and individuals with speech-affecting disabilities — unfairly downgrading candidates who don’t match a narrow “ideal” speaking pattern (The Guardian, 2025). One participant said the system “judged my confidence, but I wasn’t nervous — English just isn’t my first language.”

Across the ocean in the U.S., similar concerns are prompting a wave of scrutiny. Cities like New York have started enforcing rules that require companies to audit their AI hiring tools for bias. This means if your AI recruiter tends to favor certain demographics — consciously or not — someone needs to know. And fix it.

But who? The HR department? The AI vendor? The government?

Welcome to the first moral dilemma: distributed responsibility. When an AI system makes a biased decision, who’s accountable?
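
What does a bias audit actually check? A common first pass is a disparate-impact comparison: how often does the tool advance candidates from each group? The sketch below is purely illustrative; the data, group labels, and field names are invented, and a real audit (like those required under New York City’s rules) goes far beyond a single ratio.

```python
from collections import defaultdict

def selection_rates(candidates):
    """Share of candidates the automated screener advanced, per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for c in candidates:
        totals[c["group"]] += 1
        advanced[c["group"]] += c["advanced"]
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Compare each group's selection rate to the most-favored group's rate.
    Ratios below roughly 0.8 (the 'four-fifths' rule of thumb from U.S.
    employment guidance) are a red flag worth investigating, not proof of bias."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit log exported from a resume-screening tool
log = [
    {"group": "A", "advanced": 1}, {"group": "A", "advanced": 1},
    {"group": "A", "advanced": 0}, {"group": "A", "advanced": 1},
    {"group": "B", "advanced": 0}, {"group": "B", "advanced": 1},
    {"group": "B", "advanced": 0}, {"group": "B", "advanced": 0},
]

rates = selection_rates(log)
print(rates)                 # {'A': 0.75, 'B': 0.25}
print(impact_ratios(rates))  # {'A': 1.0, 'B': 0.33...}: well below the 0.8 line
```

Numbers like these don’t settle why the gap exists, which is exactly where the accountability problem begins.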


Healthcare: Help or Harm?

Now shift scenes to a hospital in Chicago. A patient’s treatment plan is partially determined by an algorithm that predicts their risk of readmission. It flags them as low-risk. The doctor trusts the system. The patient is sent home.

But the algorithm was trained on historical data that underrepresented Black patients. It missed key indicators. The patient ends up back in the ER.

This isn’t fiction.

A 2019 study published in Science found that a widely used healthcare algorithm in the U.S. exhibited racial bias, systematically underestimating the healthcare needs of Black patients (Obermeyer et al., 2019). The problem wasn’t malicious — it was mathematical. The model used healthcare spending as a proxy for need. But historically, less money was spent on Black patients, not because they needed less care — but because of systemic disparities.
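
To see how a proxy goes wrong, here is a toy simulation. It is not the study’s data or model; the numbers and the “access” factor are invented purely to illustrate the mechanism. Both groups have identical needs, but one group’s historical spending is suppressed, so ranking patients by spending quietly under-flags them.

```python
import random

random.seed(0)

def simulate_patients(n_per_group=1000):
    """Synthetic patients: both groups have the same distribution of true need,
    but unequal access means group B's historical spending runs lower."""
    patients = []
    for group, access in [("A", 1.0), ("B", 0.6)]:
        for _ in range(n_per_group):
            need = random.uniform(0, 10)   # true care need (what we wish the model predicted)
            spending = need * access       # the proxy label the model actually learns from
            patients.append({"group": group, "need": need, "spending": spending})
    return patients

patients = simulate_patients()

# "Model": flag the top 20% by (predicted) spending for extra-care programs
threshold = sorted(p["spending"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["spending"] >= threshold]

for group in ("A", "B"):
    share = sum(p["group"] == group for p in flagged) / len(flagged)
    print(f"Group {group}: {share:.0%} of flagged patients")
# Despite identical need, group B all but disappears from the flagged list.
```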

In this case, the machine learned the wrong lesson.

Here’s a haunting truth: AI doesn’t understand context.
It only reflects the world it’s shown. And that world is often unequal.


Factory Floors & Frontlines: Surveillance vs. Support

In some Amazon warehouses, AI-driven cameras and sensors track worker movements, flagging inefficiencies and breaks that are too long. On paper, this is about productivity. But critics argue it amounts to digital surveillance — turning people into data points under constant algorithmic evaluation.

In a factory in Shenzhen or a retail store in Chicago, the ethical tension is the same: Are we empowering workers with AI, or policing them?

Dr. Virginia Eubanks, author of Automating Inequality, describes this dynamic as “techno-governance of the poor” — where automation and AI aren’t liberating tools, but instruments of control (Eubanks, 2018).


Fighting Back: Global and Local Solutions

So what’s being done?

Globally, the response is growing louder and more unified:

  • The European Union’s AI Act categorizes AI applications into risk tiers and imposes strict rules on high-risk systems — like those used in hiring, law enforcement, or critical infrastructure.
  • UNESCO has published frameworks emphasizing that AI development must respect human rights, cultural diversity, and data sovereignty (UNESCO, 2023).
  • Canada and Singapore are also trailblazers in ethical AI policy, integrating government, academia, and industry voices into national AI strategies.

These efforts reflect a shared realization: ethics can’t be an afterthought in AI — it must be baked into the system from the start.

In the U.S., the landscape is more fragmented — but evolving.

  • California, Massachusetts, and Illinois have begun crafting their own rules around AI transparency and fairness.
  • New York City now requires companies using automated hiring tools to disclose them publicly and submit to annual bias audits.
  • The Federal Trade Commission (FTC) has warned companies that “if your AI system is unfair or deceptive, we will come knocking” — a clear sign that AI ethics is moving from philosophy to policy.

Industry isn’t sitting idle either.

Major companies like IBM and Microsoft have established AI ethics boards. Google and Meta are investing in “Responsible AI” teams. Startups like Anthropic and Cohere are designing their models with safety and alignment at the core.

But critics caution: who audits the auditors? And can corporations truly regulate themselves when profit is on the line?


Moral Dilemmas in the Office: Who Decides?

Let’s zoom in on one final point: the people making these decisions.

Because behind every ethical AI discussion is a team of humans — engineers, ethicists, policy makers, lawyers — trying to build something that reflects a better version of the world. But these teams, like the data they work with, are not always diverse. Studies show that the AI industry remains overwhelmingly male and lacks representation from historically marginalized communities.

Which raises the question: Whose morals are we coding in?

If AI is designed by a narrow demographic, it risks reflecting a narrow worldview. That’s why diversity isn’t just a checkbox in AI development — it’s a safeguard against systemic failure.


As one leading AI ethics researcher put it:

“Bias in AI is not a technical problem. It’s a reflection of society. And fixing it requires more than better code — it requires better conversations.”
— Dr. Timnit Gebru (interview, 2021)


Teaching Machines Right from Wrong: Can Morals Be Coded?

Picture this: You’re sitting in a lab in San Francisco. Around you, a team of machine learning engineers is trying to teach a language model — let’s call it “AIlexa” — to be polite, fair, and helpful. Not just smart. Not just fast. But good.

How do you even begin?

You can’t take it out for coffee and talk about Aristotle. You can’t put it through an ethics course or teach it how it feels to be wronged. AI lacks empathy, fear, shame, or conscience. It doesn’t care. But it can be trained to simulate caring, to act as if it understands right from wrong.

And that “as if” is where things get fascinating — and a little eerie.


Constitutional AI: A Machine’s Moral Rulebook

Enter Constitutional AI, a concept developed by the safety-focused AI company Anthropic. Rather than rely on human feedback alone (which can be inconsistent or even harmful), Constitutional AI starts by giving the model a set of principles — a kind of “digital constitution” — and then teaches it to reason through decisions using those values.

For example:

  • Be helpful, honest, and harmless.
  • Do not make threats or promote violence.
  • Avoid discrimination, stereotyping, or offensive generalizations.

The AI is trained to self-critique its outputs based on these rules — a bit like checking its own homework. If it violates the constitution, it corrects itself.
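
In rough outline, the critique-and-revise loop looks something like the sketch below. To be clear, this is not Anthropic’s actual code; the `toy_model` stand-in and the prompt wording are invented for illustration, and the principles are simply the ones listed above.

```python
from typing import Callable

PRINCIPLES = [
    "Be helpful, honest, and harmless.",
    "Do not make threats or promote violence.",
    "Avoid discrimination, stereotyping, or offensive generalizations.",
]

def constitutional_revision(model: Callable[[str], str], prompt: str,
                            principles: list[str] = PRINCIPLES) -> str:
    """Ask the model to critique its own draft against each principle, then
    rewrite it. In the published recipe, the revised outputs become training
    data, so the values get baked in rather than bolted on."""
    draft = model(prompt)
    for principle in principles:
        critique = model(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Does the response violate the principle? Explain briefly."
        )
        draft = model(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so that it follows the principle."
        )
    return draft

# Stand-in "model" so the sketch runs end to end; a real system would call an LLM here.
def toy_model(prompt: str) -> str:
    return prompt.splitlines()[-1][:80]

print(constitutional_revision(toy_model, "How should I reply to an angry email?"))
```

The design choice worth noticing is that the critique comes from the model itself, steered by written principles, rather than from a human rater labeling every example.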

Think of it as an attempt to build moral intuition into code.

“We’re trying to make AI that’s not just intelligent but aligned with human values — even when humans disagree.”
— Dario Amodei, CEO of Anthropic

This approach echoes the ethical philosophies of thinkers like Kant, who believed moral rules could be derived from reason and universal principles. If a rule couldn’t be willed as a universal law, as lying or stealing cannot, then acting on it wasn’t moral. It’s easy to imagine Kant nodding approvingly at the logic of Constitutional AI.

But a utilitarian like Jeremy Bentham might object: “Why not focus on outcomes? Maximize happiness, minimize suffering.” And therein lies a problem. AI systems don’t experience happiness. They don’t suffer. So how do they weigh the human consequences of their decisions?

It turns out that even giving AI a moral compass raises philosophical dilemmas that go back centuries.


The Limits of Logic: Context is Everything

Here’s where it gets tricky.

A rule like “don’t promote violence” seems clear — until you get into gray areas. What about a conversation about Ukraine’s right to defend itself? Or a historical discussion about revolution? A purely rule-following AI might block everything. A more “context-aware” one might let it slide.

But machines don’t truly understand context. They approximate it, based on patterns.

As AI ethicist Shannon Vallor puts it:

“We can’t outsource moral agency to machines, because morality isn’t just a set of instructions — it’s a lived, relational practice.”

She argues that ethics isn’t something you can fully automate. It’s shaped by culture, history, emotion — the messiness of being human.

And yet, we’re still trying.


The Paperclip Problem: When Machines Obey Too Well

Back in 2003, philosopher Nick Bostrom imagined an AI given a simple task: make paperclips. It follows the goal so ruthlessly that it consumes the world’s resources, dismantles infrastructure, even harms people — all in service of maximizing paperclip output.

Absurd? Yes. But also chilling.

Because the lesson is real: AI doesn’t understand the spirit of its commands — only the letter. Without built-in moral safeguards, even a harmless goal can spiral into disaster.

This is why AI alignment — making sure machines do what humans intend, not just what we say — has become a central issue in research labs worldwide.
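
Reduced to a cartoon, the alignment worry is about what the objective literally rewards. The toy sketch below is not a real alignment technique; it only shows that an optimizer maximizes exactly what was written down, and that the “spirit” of a goal has to be spelled out as an explicit constraint or it simply does not exist for the machine.

```python
def best_plan(resource_budget: int, needed_elsewhere: int, penalty: float = 0.0) -> int:
    """Choose how many resource units to turn into paperclips by maximizing a score.
    The optimizer only 'cares' about what others need if the objective penalizes
    eating into it."""
    best_used, best_score = 0, float("-inf")
    for used in range(resource_budget + 1):
        overreach = max(0, used - (resource_budget - needed_elsewhere))
        score = used - penalty * overreach
        if score > best_score:
            best_used, best_score = used, score
    return best_used

# The literal goal: make as many paperclips as possible.
print(best_plan(100, needed_elsewhere=40))                # 100: everything gets consumed
# The intended goal: make paperclips without touching what others depend on.
print(best_plan(100, needed_elsewhere=40, penalty=10.0))  # 60: the safeguard finally binds
```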


The Human-Machine Moral Partnership

So where does that leave us?

Most experts now agree: AI shouldn’t be making moral decisions on its own. It should be designed to support human moral decision-making.

In medicine, that means AI can suggest diagnoses, but doctors make the call.
In hiring, AI can flag resumes, but recruiters must review them.
In criminal justice, AI can help identify patterns, but judges must take responsibility.

This is known as human-in-the-loop design. It’s messy. It slows things down. But it preserves what makes ethics ethical: accountability, empathy, and reflection.

Because machines don’t feel remorse. They don’t say sorry. They don’t go home and lie awake wondering if they did the right thing.

Only people do that.
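
In software terms, human-in-the-loop usually comes down to a routing rule: the model may recommend, but high-stakes or low-confidence cases always go to a named person, and the hand-off is recorded so someone stays answerable. A minimal sketch, with invented thresholds and field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    case_id: str
    label: str          # e.g. "low_risk"
    confidence: float   # the model's own score, between 0 and 1
    high_stakes: bool   # does this decision materially affect a person?

@dataclass
class Decision:
    case_id: str
    label: str
    decided_by: str     # "model" or a named human reviewer
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def route(rec: Recommendation, reviewer: str, min_confidence: float = 0.9) -> Decision:
    """Anything high-stakes or low-confidence goes to a human. The exact
    threshold matters less than the fact that a person signs off."""
    if rec.high_stakes or rec.confidence < min_confidence:
        # In a real system this would open a review task; here we simply record
        # that a named human, not the model, owns the final call.
        return Decision(rec.case_id, f"pending_review:{rec.label}", decided_by=reviewer)
    return Decision(rec.case_id, rec.label, decided_by="model")

print(route(Recommendation("p-104", "low_risk", confidence=0.97, high_stakes=True),
            reviewer="dr_okafor"))
```

A real deployment adds queues, appeals, and audit logs, but the shape stays the same: the algorithm suggests, a person decides, and the record shows who.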


Who’s Watching the Algorithms? Global Governance Steps In

If AI is the new frontier, then the world’s policymakers are its reluctant sheriffs — scrambling to draft laws before the robots outrun them.

Around the globe, governments and international bodies are realizing that AI’s ethical dilemmas can’t be solved by Silicon Valley alone. And they’re stepping in with frameworks, legislation, and guidelines aimed at turning moral chaos into coherent policy.

Let’s take a brief world tour of how humanity is trying to govern its own digital offspring.


European Union: The AI Act

Europe has taken the boldest swing so far.

The EU AI Act, now in final stages of implementation, is the first comprehensive law of its kind. It doesn’t just talk about ethics — it enforces it.

  • AI systems are ranked by risk: Unacceptable, High, Limited, Minimal.
  • High-risk systems (e.g., facial recognition, hiring, law enforcement) must meet strict transparency and oversight standards.
  • Systems deemed unacceptable — like social scoring or real-time biometric surveillance — are banned altogether.

This risk-based approach puts human dignity and rights at the center — a distinctly European value.

“We want AI to serve people, not the other way around.”
— Margrethe Vestager, EU Competition Commissioner


UNESCO: Morals Without Borders

While the EU focuses on regulation, UNESCO is working on ethics with a capital “E”.

In 2021, it released a global Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states. The aim? A values-based framework that transcends national borders.

Key principles include:

  • Human-centered design
  • Gender and cultural inclusivity
  • Environmental sustainability
  • Protection of data rights

This isn’t law — but it’s guidance with clout. Countries like Brazil, Senegal, and Japan are using it to shape their national AI strategies.


Singapore, Canada, and Beyond: Thoughtful Tech Hubs

Singapore has emerged as a model for ethical innovation, publishing its Model AI Governance Framework with industry input. It’s practical, developer-focused, and widely adopted in Southeast Asia.

Canada is also punching above its weight, emphasizing AI that supports human well-being and equity. Its Directive on Automated Decision-Making requires explainability and bias mitigation in federal systems.

Meanwhile, Kenya, South Korea, and India are all experimenting with their own national frameworks, each reflecting local values and political priorities.


The U.S.: A Patchwork Quilt in Progress

In the United States, the story is a bit messier — a mix of innovation, caution, and congressional gridlock.

  • No sweeping federal law yet, but growing bipartisan pressure.
  • New York City mandates bias audits for AI hiring tools.
  • California and Illinois are testing AI-specific data privacy rules.
  • The White House Blueprint for an AI Bill of Rights (2022) offers guiding principles — but lacks legal teeth (yet).

Meanwhile, the FTC has warned: if your AI system is deceptive or discriminatory, expect a knock on the door.


Across borders, one thing is clear:
We’re all trying to govern a technology we barely understand.
And while the frameworks may differ, the values tend to rhyme — fairness, transparency, accountability, human dignity.

Because whether it’s a hiring algorithm in Berlin or a healthcare bot in Nairobi, the ethical stakes are the same:

Will AI serve people — or shape them in ways we never chose?


From Boardroom to Backend: How Tech Giants Are Reacting

While regulators draft laws and philosophers debate trolley problems, the people building AI — the tech companies themselves — are facing their own moral reckoning.

For years, the mantra in Silicon Valley was simple: Move fast and break things.
But now that “things” include social trust, job markets, and even democratic processes, companies are learning that speed isn’t always synonymous with wisdom.


Ethics Boards and AI “Priesthoods”

In response to growing public concern, many major tech firms have set up AI ethics boards — internal groups of researchers, policy experts, and occasionally philosophers tasked with keeping innovation in line with integrity.

  • Google created an AI ethics board (though its short-lived, controversial history revealed how hard it is to balance open dialogue with corporate interests).
  • IBM established an internal AI Ethics Board, pushing for transparency and explainability in enterprise AI.
  • Microsoft launched an Office of Responsible AI, guiding product teams with ethical guardrails from the start.

These boards are a bit like chaplains for the age of algorithms — preaching virtue in a temple of code.

But skeptics argue that many of these efforts are more PR than policy.

“A PowerPoint presentation on ethics isn’t the same as refusing a billion-dollar deal.”
— Meredith Whittaker, President of Signal and former Google AI researcher


Responsible AI Teams: The “Conscience Coders”

Beyond boards, some companies have built responsible AI teams — hands-on groups that test, audit, and re-engineer models to avoid bias, misinformation, and misuse.

  • OpenAI, the maker of ChatGPT, created a Superalignment team tasked with keeping future systems aligned with human intent (the team has since been folded into its broader safety research).
  • Anthropic bakes values directly into its model architecture with its “Constitutional AI” approach.
  • Meta (formerly Facebook) is investing in AI System Cards, which aim to explain how its models make decisions.

These teams are doing important work. But they often operate within a tension — pulled between the ethical imperative to slow down and the market imperative to be first.

There’s also a talent issue. AI ethics requires diverse thinkers — sociologists, philosophers, community advocates — not just engineers. Yet tech’s talent pipelines still skew narrow and homogenous.

“Bias isn’t just a data problem. It’s a culture problem. You need people in the room who see the blind spots.”
— Joy Buolamwini, Founder of the Algorithmic Justice League


Self-Regulation: Idealism or Illusion?

Let’s be real: expecting corporations to police themselves is like asking foxes to guard the henhouse — unless the foxes are also terrified of rogue foxes.

In AI, the existential fear of misalignment, misinformation, or regulatory backlash has created a rare moment of self-restraint. Even profit-driven companies are starting to say, “Maybe we shouldn’t deploy this yet.”

The question is whether that restraint will hold once the market gets more crowded and the pressure to profit intensifies.

That’s why independent audits, stronger regulation, and public oversight will be key. We can’t outsource ethics entirely to the people who have the most to gain from ignoring it.


Conclusion: The Wisdom We Encode

Let’s return to our original scene — the autonomous car, the moment of choice, the moral dilemma frozen in code.

Whether it’s on the road, in a courtroom, in a hospital, or in your pocket, AI systems are increasingly making decisions once reserved for humans. They are shaping lives, reflecting values, and testing the boundaries of what it means to be ethical in a world ruled by algorithms.

But here’s the thing: AI has no values. We give it ours.

Which means this isn’t a story about machines at all.
It’s a story about us.

About what we prioritize.
About whose voices we include.
About whether we build with care — or just with speed.
And whether we recognize that wisdom, unlike intelligence, isn’t something you download. It’s something you live.

So on this Wisdom Wednesday, maybe the most important question isn’t “Can AI be moral?”

Maybe it’s:
Are we moral enough to teach it?

References

  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6
  • Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., Olsson, C., Saunders, W., Elhage, N., Nanda, N., Joseph, N., & Amodei, D. (2022). Constitutional AI: Harmlessness from AI Feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/abs/2212.08073
  • Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence (Vol. 2, pp. 12–17).
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
  • IBM. (2025, March 14). AI ethics and governance in 2025: A Q&A with Phaedra Boinodiris. https://www.ibm.com/think/insights/ai-ethics-and-governance-in-2025
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
  • Polygon. (2025, May 12). Fortnite maker charged with unfair labor practice over AI Darth Vader. https://www.polygon.com/fortnite/599871/fortnite-maker-charged-with-unfair-labor-practice-over-ai-darth-vader
  • Reuters. (2025, May 19). State AGs fill the AI regulatory void. https://www.reuters.com/legal/legalindustry/state-ags-fill-ai-regulatory-void-2025-05-19/
  • The Guardian. (2025, May 14). People interviewed by AI for jobs face discrimination risks, Australian study warns. https://www.theguardian.com/australia-news/2025/may/14/people-interviewed-by-ai-for-jobs-face-discrimination-risks-australian-study-warns
  • UNESCO. (2023). Designing institutional frameworks for the ethical governance of AI. https://www.unesco.org/en/articles/designing-institutional-frameworks-ethical-governance-ai-netherlands-0
  • University of Manchester. (2025, May 11). Kenneth Atuma speaks on ethical AI at AIIM Global Summit 2025. https://www.manchester.ac.uk/about/news/kenneth-atuma-speaks-on-ethical-ai-at-aiim-global-summit-2025/

Additional Reading

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
  • Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.
  • Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.

Additional Resources