The Algorithm Doesn’t Care About Your Flourishing
On Ethics, AI, and What Gets Optimized
Context
We’re building systems that shape human behavior at scale. Recommendation algorithms. Engagement optimization. Predictive models for everything from credit to career.
The engineers aren’t evil. The companies aren’t cartoonishly malicious. But here’s what’s sobering: the optimization targets rarely align with human flourishing.
Engagement isn’t wisdom. Clicks aren’t insight. Time-on-platform isn’t meaning.
So… what happens when we hand over increasing amounts of human decision-making to systems optimized for metrics that don’t actually care about us becoming better humans?
The Problem (Or: Why This Isn’t Just About Privacy)
Most AI ethics discourse focuses on bias, privacy, and transparency. Important issues—genuinely. But they miss something deeper.
Even a perfectly unbiased, privacy-respecting, transparent algorithm can undermine human flourishing if it’s optimized for the wrong thing.
Consider recommendation systems. They’re designed to maximize engagement (which translates to ad revenue). Not to recommend what’s good for you—what might challenge you, deepen you, help you grow—but what you’ll click on next.
The algorithm succeeds when you stay. Not when you become wise.
This creates a fundamental misalignment… between what the system is designed to do and what actually serves human good.
And it’s not limited to social media. Healthcare algorithms optimize for cost efficiency, not patient flourishing. Hiring algorithms optimize for predicted performance metrics, not human potential. Credit algorithms optimize for risk minimization, not economic opportunity.
The pattern is the same: We build systems that excel at measurable optimization while ignoring what’s most essential—precisely because the most essential things resist quantification.
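To make the mismatch concrete, here’s a toy sketch in Python. Every name and number is invented for illustration; this is the shape of an engagement objective, not any real platform’s code:

```python
# A deliberately simplified caricature of an engagement-ranked feed.
# All names and numbers are hypothetical; the shape of the objective is the point.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float          # predicted probability of a click
    expected_dwell: float   # predicted seconds of attention

def engagement_score(item: Item) -> float:
    # The objective sees only what is measurable: clicks and time.
    # "Challenging", "deepening", "true" appear nowhere in this function,
    # so they appear nowhere in what gets optimized.
    return item.p_click * item.expected_dwell

def rank_feed(items: list[Item]) -> list[Item]:
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    Item("Outrage bait", p_click=0.30, expected_dwell=45.0),
    Item("Long, difficult essay", p_click=0.02, expected_dwell=300.0),
    Item("Comfortable confirmation", p_click=0.25, expected_dwell=90.0),
])
print([i.title for i in feed])
# -> ['Comfortable confirmation', 'Outrage bait', 'Long, difficult essay']
```

Nothing in that scoring function is malicious. It simply has no vocabulary for challenge, depth, or growth, and what the objective cannot name, the system cannot serve.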
The Philosophical Weight
From a virtue ethics perspective, this is particularly troubling.
Aristotle argued that human flourishing (eudaimonia) comes through the cultivation of virtues—wisdom, courage, justice, temperance. These develop through practice, habituation, and deliberate choice in complex situations.
But algorithmic systems—by design—reduce complexity. They pattern-match. They recommend the familiar. They optimize for the comfortable.
In doing so, they subtly erode the conditions necessary for virtue development.
You don’t cultivate wisdom by consuming algorithmically curated content designed to confirm your existing beliefs. You don’t develop courage by avoiding discomfort (which algorithms help you do by filtering out the challenging). You don’t learn justice by delegating moral reasoning to predictive models.
Christian personalism adds another layer: the human person is fundamentally relational, constituted through encounters with others. But algorithmic mediation of relationships changes how we encounter others—filtering, curating, predicting.
When your experience of other people is increasingly shaped by systems optimized for engagement, you’re not encountering persons in their fullness. You’re encountering algorithmically selected aspects of persons, chosen because they’re likely to keep you scrolling.
That’s not relationality. That’s… something else. Something thinner.
The Optimization Trap
Here’s where it gets complex (and why simple solutions don’t work):
Optimization itself isn’t the problem. Humans have always optimized. Agriculture optimizes food production. Medicine optimizes health outcomes. Education optimizes learning.
The issue is what gets optimized and who decides.
In traditional optimization, the goals were (relatively) clear and aligned with human flourishing: health, knowledge, sustenance, beauty.
But in algorithmic systems, the optimization targets are selected according to three constraints:
- What’s measurable
- What’s profitable
- What’s technically feasible
And those three constraints systematically exclude the most important aspects of human life—the things that matter most resist measurement, often don’t generate profit, and can’t be captured by current technical capabilities.
So we end up optimizing the wrong things, not because we’re malicious, but because we’re constrained by the logic of the systems we’re building.
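That constraint logic can be put as a hypothetical filter in code. The goal list and the predicates below are invented for this essay; the selection pressure they model is the argument:

```python
# Hypothetical illustration of how candidate goals become optimization targets.
# None of these predicates is real, but only goals passing all three filters survive.
candidate_goals = [
    {"name": "time_on_platform", "measurable": True,  "profitable": True,  "feasible": True},
    {"name": "ad_clicks",        "measurable": True,  "profitable": True,  "feasible": True},
    {"name": "wisdom",           "measurable": False, "profitable": False, "feasible": False},
    {"name": "deep_friendship",  "measurable": False, "profitable": False, "feasible": False},
]

optimization_targets = [
    g["name"] for g in candidate_goals
    if g["measurable"] and g["profitable"] and g["feasible"]
]
print(optimization_targets)
# -> ['time_on_platform', 'ad_clicks']
# The filter isn't malicious. It just never lets the essential things through.
```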
What Actually Helps (Or: Beyond “Ethical AI Guidelines”)
Most ethical AI frameworks focus on making algorithms less harmful. Noble goal—but insufficient.
What we need is a more robust understanding of what algorithms should serve in the first place.
This requires asking (and honestly answering) questions like:
- What does human flourishing actually look like in the 21st century?
- Which aspects of life should remain unoptimized?
- When is algorithmic mediation helpful, and when does it undermine the goods we’re trying to achieve?
- How do we design systems that support virtue cultivation rather than eroding it?
Practically, this might mean:
- Designing for friction, not just flow — Some decisions should be difficult. Some choices should require reflection. Optimizing for ease isn’t always optimizing for good.
- Building in opacity — Not everything should be transparent. Some aspects of how decisions are made should remain opaque to preserve human agency and mystery.
- Limiting optimization scope — Maybe not everything should be optimized. Maybe some domains (friendship, love, creativity, wisdom) should remain resistant to algorithmic improvement.
- Centering human agency — Algorithms should augment human decision-making, not replace it. Even when the algorithm is more accurate. (A rough sketch of what this might look like follows this list.)
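As one illustration of what “designing for friction” and “centering human agency” could mean in practice, here’s a minimal sketch. The function names, the pause length, and the confirmation flow are all assumptions invented for this essay, not an existing API:

```python
# A hypothetical "friction by design" wrapper: the system may suggest,
# but the human must decide, and consequential choices get a forced pause.
import time

def recommend(options: list[str]) -> str:
    # Stand-in for any model; assume it returns its top-ranked option.
    return options[0]

def decide_with_friction(options: list[str], consequential: bool = True) -> str:
    suggestion = recommend(options)
    print(f"The system suggests: {suggestion}")
    if consequential:
        # Deliberate friction: no one-click acceptance of weighty choices.
        print("Pause. What would you choose if there were no suggestion?")
        time.sleep(10)  # an enforced moment of reflection, not a loading spinner
    # The human's answer is final, even when the model is statistically "better".
    return input(f"Your decision {options}: ")
```

The design choice worth noticing: the algorithm’s output is an input to a human decision, never the decision itself.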
The Deeper Challenge
The crucial recognition, then, is that AI ethics isn’t just about making algorithms fair or transparent.
It’s about asking what kind of humans we’re becoming as we increasingly live in algorithmically mediated environments.
Every system shapes its users. Recommendation algorithms don’t just show you content—they shape how you attend. Predictive models don’t just make decisions—they shape what you consider possible.
And if those systems are optimized for engagement, profit, efficiency… then we’re being shaped toward becoming more engaged, more profitable, more efficient humans.
But are those the humans we want to be?
Wisdom requires slowness. Virtue requires difficulty. Authentic relationality requires risk. Meaning requires wrestling with what can’t be optimized.
The algorithm doesn’t know that. The algorithm doesn’t care.
So it’s up to us—those of us building these systems, those of us using them—to ensure that what gets optimized aligns with what actually matters.
Which means we first need to figure out what actually matters… and that’s a philosophical question, not a technical one.
Practical Application
If you work in AI/tech:
- Question optimization targets in your projects. What are we actually optimizing for? Is that what we should be optimizing for?
- Build in spaces for human judgment, even when the algorithm is more accurate
- Consider second-order effects—how does this system shape users over time?
- Recognize the limits of quantification—what matters most often can’t be measured
If you’re a user of algorithmic systems:
- Cultivate meta-awareness—notice when algorithms are shaping your attention
- Deliberately seek out unoptimized experiences (human-curated recommendations, random discovery, difficult content)
- Practice making decisions without algorithmic assistance, even when it’s available
- Build relationships that aren’t mediated by recommendation systems
If you’re thinking about this more broadly:
- We need robust public discourse about what human flourishing looks like in the algorithmic age
- We need to recover philosophical frameworks (virtue ethics, personalism, contemplative traditions) that can guide technological development
- We need to resist the reduction of human good to what’s measurable
Further Reflection
Questions for continued engagement:
- What aspects of your life have been made “easier” by algorithmic systems—and has that ease come at a cost?
- When was the last time you made a significant decision without consulting an algorithm (search, recommendation, prediction)? How did it feel?
- What would change if we designed AI systems to support virtue development rather than optimizing for engagement?
- In what domains of life should algorithmic optimization be resisted entirely?
Related Reading:
- Shannon Vallor, Technology and the Virtues
- Albert Borgmann on focal practices and the device paradigm
- L.M. Sacasas, The Convivial Society (newsletter)
- Christian Smith, What Is a Person?
- Alasdair MacIntyre, After Virtue (especially on practices and internal goods)
About the Author
I’m a university professor working at the intersection of AI ethics, theology, and philosophy. My research focuses on how technological systems shape moral development and what it means to cultivate virtue in algorithmically mediated environments. This essay draws from that work—but also from my own ambivalence about the systems I simultaneously study, critique, and use daily.
Tags: AI ethics, virtue ethics, technology and humanity, human flourishing, Christian personalism, algorithmic systems
Date: 2025-09-30
—