
The Algorithm Doesn’t Care About Your Flourishing

On Ethics, AI, and What Gets Optimized

Context

We’re building systems that shape human behavior at scale. Recommendation algorithms. Engagement optimization. Predictive models for everything from credit to career.

The engineers aren’t evil. The companies aren’t cartoonishly malicious. But here’s what’s sobering: the optimization targets rarely align with human flourishing.

Engagement isn’t wisdom. Clicks aren’t insight. Time-on-platform isn’t meaning.

So… what happens when we hand over increasing amounts of human decision-making to systems optimized for metrics that don’t actually care about us becoming better humans?

The Problem (Or: Why This Isn’t Just About Privacy)

Most AI ethics discourse focuses on bias, privacy, and transparency. Important issues—genuinely. But they miss something deeper.

Even a perfectly unbiased, privacy-respecting, transparent algorithm can undermine human flourishing if it’s optimized for the wrong thing.

Consider recommendation systems. They’re designed to maximize engagement (which translates to ad revenue). Not to recommend what’s good for you—what might challenge you, deepen you, help you grow—but what you’ll click on next.

The algorithm succeeds when you stay. Not when you become wise.

This creates a fundamental misalignment… between what the system is designed to do and what actually serves human good.
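
To make the misalignment concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the items, signals, and weights are invented, and no real recommender is this simple. The point is only that the two objectives rank the same items in nearly opposite order, and that the first objective contains no term for anything this essay calls essential.

```python
# A minimal, self-contained sketch of the misalignment described above.
# All items, signals, and weights are invented for illustration;
# no real platform's code or data appears here.

items = [
    {"title": "outrage clip",      "p_click": 0.9, "challenge": 0.1},
    {"title": "comfortable rerun", "p_click": 0.7, "challenge": 0.2},
    {"title": "hard long-read",    "p_click": 0.2, "challenge": 0.9},
]

def engagement_score(item):
    """What the system can measure and is paid to maximize."""
    return item["p_click"]

def flourishing_score(item):
    """What the essay argues matters. The 'challenge' signal is the
    catch: nobody knows how to estimate it from logged behavior."""
    return 0.7 * item["challenge"] + 0.3 * item["p_click"]

def rank(items, score):
    return sorted(items, key=score, reverse=True)

print([i["title"] for i in rank(items, engagement_score)])
# -> ['outrage clip', 'comfortable rerun', 'hard long-read']
print([i["title"] for i in rank(items, flourishing_score)])
# -> ['hard long-read', 'comfortable rerun', 'outrage clip']
```

Nothing in the first objective is malicious. It simply optimizes the only signal it can see.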

And it’s not limited to social media. Healthcare algorithms optimize for cost efficiency, not patient flourishing. Hiring algorithms optimize for predicted performance metrics, not human potential. Credit algorithms optimize for risk minimization, not economic opportunity.

The pattern is the same: We build systems that excel at measurable optimization while ignoring what’s most essential—precisely because the most essential things resist quantification.

The Philosophical Weight

From a virtue ethics perspective, this is particularly troubling.

Aristotle argued that human flourishing (eudaimonia) comes through the cultivation of virtues—wisdom, courage, justice, temperance. These develop through practice, habituation, and deliberate choice in complex situations.

But algorithmic systems—by design—reduce complexity. They pattern-match. They recommend the familiar. They optimize for the comfortable.

In doing so, they subtly erode the conditions necessary for virtue development.

You don’t cultivate wisdom by consuming algorithmically-curated content designed to confirm your existing beliefs. You don’t develop courage by avoiding discomfort (which algorithms help you do by filtering out the challenging). You don’t learn justice by delegating moral reasoning to predictive models.

Christian personalism adds another layer: the human person is fundamentally relational, constituted through encounters with others. But algorithmic mediation of relationships changes how we encounter others—filtering, curating, predicting.

When your experience of other people is increasingly shaped by systems optimized for engagement, you’re not encountering persons in their fullness. You’re encountering algorithmically-selected aspects of persons, chosen because they’re likely to keep you scrolling.

That’s not relationality. That’s… something else. Something thinner.

The Optimization Trap

Here’s where it gets complex (and why simple solutions don’t work):

Optimization itself isn’t the problem. Humans have always optimized. Agriculture optimizes food production. Medicine optimizes health outcomes. Education optimizes learning.

The issue is what gets optimized and who decides.

In traditional optimization, the goals were (relatively) clear and aligned with human flourishing: health, knowledge, sustenance, beauty.

But in algorithmic systems, the optimization targets are filtered through three constraints:

  1. What’s measurable
  2. What’s profitable
  3. What’s technically feasible

And those three constraints systematically exclude the most important aspects of human life—the things that matter most resist measurement, often don’t generate profit, and can’t be captured by current technical capabilities.
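
The three filters can be written down directly. A toy sketch, with every candidate target and attribute invented for illustration:

```python
# Toy sketch of the three-constraint filter described above.
# The candidate targets and their attributes are invented; the point
# is the shape of the selection, not the particular entries.

candidates = [
    {"target": "click-through rate",   "measurable": True,  "profitable": True,  "feasible": True},
    {"target": "time on platform",     "measurable": True,  "profitable": True,  "feasible": True},
    {"target": "wisdom",               "measurable": False, "profitable": False, "feasible": False},
    {"target": "patient flourishing",  "measurable": False, "profitable": True,  "feasible": False},
    {"target": "economic opportunity", "measurable": False, "profitable": False, "feasible": False},
]

deployable = [c["target"] for c in candidates
              if c["measurable"] and c["profitable"] and c["feasible"]]

print(deployable)  # ['click-through rate', 'time on platform']
# Everything the essay calls "most essential" is filtered out before
# anyone makes a single malicious decision.
```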

So we end up optimizing the wrong things, not because we’re malicious, but because we’re constrained by the logic of the systems we’re building.

What Actually Helps (Or: Beyond “Ethical AI Guidelines”)

Most ethical AI frameworks focus on making algorithms less harmful. Noble goal—but insufficient.

What we need is a more robust understanding of what algorithms should serve in the first place.

This requires asking (and honestly answering) questions like:

  1. What is this system actually for, and whose good does it serve?
  2. What would it mean to optimize for wisdom, growth, or genuine connection rather than engagement?
  3. Who decides the optimization targets, and who bears the consequences?

Practically, this might mean:

  1. Choosing optimization targets that include long-term well-being, not just immediate engagement
  2. Auditing systems for what they erode, not only for what they produce
  3. Preserving friction where friction serves growth

The Deeper Challenge

What matters most, then, is recognizing that AI ethics isn’t just about making algorithms fair or transparent.

It’s about asking what kind of humans we’re becoming as we increasingly live in algorithmically-mediated environments.

Every system shapes its users. Recommendation algorithms don’t just show you content—they shape how you attend. Predictive models don’t just make decisions—they shape what you consider possible.

And if those systems are optimized for engagement, profit, efficiency… then we’re being shaped toward becoming more engaged, more profitable, more efficient humans.

But are those the humans we want to be?

Wisdom requires slowness. Virtue requires difficulty. Authentic relationality requires risk. Meaning requires wrestling with what can’t be optimized.

The algorithm doesn’t know that. The algorithm doesn’t care.

So it’s up to us—those of us building these systems, those of us using them—to ensure that what gets optimized aligns with what actually matters.

Which means we first need to figure out what actually matters… and that’s a philosophical question, not a technical one.

Practical Application

If you work in AI/tech:

  1. Name the optimization target of the system you’re building, and ask out loud what it excludes
  2. Push for metrics that account for long-term user well-being, not just engagement
  3. Treat “what should this optimize?” as a design question, not a given

If you’re a user of algorithmic systems:

  1. Notice what the algorithm steers you toward, and what it quietly filters out
  2. Deliberately seek what challenges your existing beliefs rather than confirms them
  3. Preserve some unmediated attention: reading, conversation, and silence no feed has curated

If you’re thinking about this more broadly:

  1. Treat “what should be optimized?” as a philosophical question before a technical one
  2. Bring older vocabularies of flourishing, such as virtue ethics and personalism, into conversations about system design

Further Reflection

Questions for continued engagement:

  1. What kind of human are you becoming through the systems you use every day?
  2. Can virtue develop in environments engineered to remove difficulty?
  3. What would a recommendation system optimized for eudaimonia look like, and who would build it?

About the Author

I’m a university professor working at the intersection of AI ethics, theology, and philosophy. My research focuses on how technological systems shape moral development and what it means to cultivate virtue in algorithmically-mediated environments. This essay draws from that work—but also from my own ambivalence about the systems I simultaneously study, critique, and use daily.


Tags: AI ethics, virtue ethics, technology and humanity, human flourishing, Christian personalism, algorithmic systems
Date: 2025-09-30
