
Is it Fair for Machines to Decide Your Future?

Imagine applying for a job and being instantly rejected—not by a person, but by an algorithm. Or imagine a judge using software to predict if someone is likely to commit a crime again—and using that score to decide how long they’ll be in prison. These aren’t science fiction stories. They’re happening today, and they raise an important question: is it ethical to let machines shape such major parts of our lives?

What Is Predictive AI?

Predictive AI is a kind of artificial intelligence that uses past data to guess what might happen in the future. For example, a system could be trained on years of data about employees to predict who’s likely to succeed in a certain role. Or it could use crime statistics to estimate the risk of someone reoffending after release from jail. These systems look at patterns that humans might miss—but that doesn’t mean they’re always right, or fair.
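
To make the idea concrete, here is a minimal sketch of what "learning from past data" can look like in code. Every number, feature name, and label below is invented for illustration, and a generic logistic regression stands in for whatever model a real system might use.

```python
# A minimal, illustrative predictive model: fit on historical records,
# then output a probability for a new case. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical past employees: [years_of_experience, prior_review_score]
X_past = rng.normal(loc=[4.0, 3.5], scale=[2.0, 0.8], size=(300, 2))
# Invented outcome label: 1 = "succeeded in the role", 0 = "did not"
y_past = (0.5 * X_past[:, 0] + 1.0 * X_past[:, 1]
          + rng.normal(0.0, 1.0, 300) > 5.5).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# The model does not "know" this applicant; it only reports a probability
# based on patterns in the historical records it was shown.
new_applicant = np.array([[2.0, 4.0]])
print("Estimated chance of success:", model.predict_proba(new_applicant)[0, 1])
```

Nothing in this sketch checks whether the historical labels were fair to begin with, which is exactly where the next concern comes in.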

The Problem of Bias

One major issue with predictive AI is that it can inherit bias from the data it’s trained on. If past decisions were unfair—say, if people from certain neighborhoods were over-policed—then the AI might learn to repeat those same patterns. That means people could be punished or denied opportunities not because of who they are, but because of what others like them have done in the past.
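
As a toy illustration, the sketch below invents arrest records for two neighborhoods with identical underlying behavior, but one neighborhood is patrolled far more heavily, so more of its residents end up with recorded arrests. A model trained only on those records reproduces the skew; every number and name here is made up.

```python
# Toy illustration of bias inheritance: identical underlying behavior,
# but one neighborhood is over-policed, so its residents have more
# *recorded* arrests. A model trained on those records learns the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000

neighborhood = rng.integers(0, 2, n)        # 0 = lightly policed, 1 = heavily policed
true_behavior = rng.random(n) < 0.10        # same 10% rate in both neighborhoods
detection_rate = np.where(neighborhood == 1, 0.9, 0.3)
recorded_arrest = true_behavior & (rng.random(n) < detection_rate)

# The model only ever sees the recorded labels, never the true behavior.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded_arrest.astype(int))

for nb in (0, 1):
    score = model.predict_proba([[float(nb)]])[0, 1]
    print(f"Neighborhood {nb}: predicted risk = {score:.2f}")
# Despite identical behavior, neighborhood 1 ends up with a much higher "risk" score.
```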

From a philosophical perspective, this raises a question: can a system be considered fair if it judges people based on group data rather than individual behavior? Or to put it another way: are we comfortable living in a world where your zip code or school history says more about your future than your choices?

Responsibility Without Agency

Another ethical issue comes from the way predictive AI shifts responsibility. When a judge or employer relies on an algorithm, they might say, “It’s not my fault—it’s what the computer said.” But machines don’t have moral agency. They don’t understand right or wrong. So who is responsible when an AI system makes a harmful decision? The engineer? The company? The user?

This raises a deeper philosophical concern: can we delegate moral judgment to something that isn’t capable of moral thought? And if not, should humans always be required to review and question AI decisions, even if the machine seems more “objective”?

Can We Treat Predictions as Destiny?

One major concern with predictive AI is that people might start treating its outputs as if they were inevitable truths. We'll look at how different philosophical views offer different ways of thinking about this and other questions surrounding the use of predictive AI.

  • Determinist perspective: From a determinist view, predictions are simply reflections of cause and effect. If a system correctly models enough variables, then the future should be knowable—and acting on predictions is just being rational.

  • Existentialist or humanist view: In contrast, existentialist thinkers emphasize freedom, choice, and personal responsibility. From this view, predicting someone’s future based on past behavior risks denying the human ability to change, reinvent oneself, or defy the odds.

  • Pragmatic view: A more practical take is to treat predictions as useful tools—not truths. From this angle, predictive AI might help us allocate resources (like providing extra support to at-risk students), but it should never be used to permanently label or limit someone.

This diagram shows how predictive AI transforms personal data into real-world outcomes, illustrating how a model’s score can lead to classifications and decisions that affect an individual’s future—raising questions about whether predictions determine destiny.
Lagioia, F., Rovatti, R., & Sartor, G. (2023). Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. AI & Society, 38, 459–478. https://link.springer.com/article/10.1007/s00146-022-01441-y
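
A compressed, purely illustrative version of that score-to-classification-to-decision pipeline might look like the sketch below. The thresholds, labels, and decisions are invented, but they show how a continuous score quietly hardens into a category, and the category into an outcome.

```python
# Illustrative only: a continuous risk score becomes a discrete label,
# and the label drives a concrete decision. Thresholds and categories
# are invented for the example.
def classify(risk_score: float) -> str:
    if risk_score < 0.33:
        return "low risk"
    elif risk_score < 0.66:
        return "medium risk"
    return "high risk"

def decide(label: str) -> str:
    decisions = {
        "low risk": "approve / release with standard conditions",
        "medium risk": "refer for additional human review",
        "high risk": "deny / detain pending review",
    }
    return decisions[label]

score = 0.68          # output of some predictive model
label = classify(score)
print(score, "->", label, "->", decide(label))
```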

Who Is Responsible When AI Makes a Decision?

AI systems don’t make decisions in the way humans do—they process data, run models, and output results. But those results often guide real-world actions. So who’s to blame if the AI leads to an unfair or harmful outcome?

  • Deontological (duty-based) ethics: A duty-based thinker would argue that humans must remain morally responsible, no matter what the AI says. Delegating too much to machines would be seen as a failure of ethical responsibility.

  • Corporate or systemic perspective: Others might argue responsibility lies with institutions or companies that design and deploy these systems. In this view, we need stronger laws and accountability structures—not just individual human review.

  • Techno-optimist view: Some believe the solution lies in better technology, not less of it. With improved transparency, fairness audits (one is sketched below), and explainability, AI can become more responsible over time.
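
As one concrete example of what a "fairness audit" can mean in practice, the sketch below compares how often a model flags people in two groups. The groups, the predictions, and the choice of metric (a simple demographic-parity gap) are all assumptions made for illustration.

```python
# A minimal sketch of a fairness audit: compare how often a model flags
# people in each group. Predictions and group labels here are made up.
import numpy as np

group = np.array([0] * 100 + [1] * 100)                    # hypothetical protected attribute
flagged = np.array([1] * 20 + [0] * 80 + [1] * 45 + [0] * 55)  # hypothetical model outputs

for g in (0, 1):
    rate = flagged[group == g].mean()
    print(f"Group {g}: flagged {rate:.0%} of the time")

# "Demographic parity gap": difference between the groups' flag rates.
gap = abs(flagged[group == 0].mean() - flagged[group == 1].mean())
print(f"Parity gap: {gap:.0%}")  # 25% here; an audit would ask why
```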

In 2019, it was revealed that Amazon’s warehouse worker monitoring system could automatically fire employees for not meeting productivity metrics, without supervisor review. This raised questions about who is responsible—the AI system, the engineers, or Amazon’s management.
"Amazon’s system for tracking its warehouse workers can automatically fire them", Charlotte Jee, MIT Technology Review, April 26, 2019, https://www.technologyreview.com/2019/04/26/1021/amazons-system-for-tracking-its-warehouse-workers-can-automatically-fire-them/

Should Group Data Be Used to Judge Individuals?

Predictive AI often makes judgments based on patterns across large populations—but is that fair to the individual?

  • Utilitarian view: A utilitarian might say that if using group data leads to better outcomes on average (e.g., fewer crimes, better hiring), then it's acceptable—even if some individuals are treated unfairly.

  • Rights-based view: In contrast, a rights-based ethicist would argue that every person deserves to be judged on their own actions, not on what "people like them" have done. Group-based prediction might be efficient, but it violates individual dignity.

  • Contextualist view: A more balanced perspective might argue that it depends on how the data is used. If it informs decisions without fully determining them, and if it’s paired with human oversight (see the sketch below), it may be ethically acceptable.
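
The sketch below is one minimal way to express that contextualist idea in code: the model's score informs a recommendation, but uncertain cases are routed to a human reviewer rather than decided automatically. The thresholds and wording are invented.

```python
# Illustrative only: the score informs but does not determine the outcome.
# Confident cases still get a recommendation; everything in the uncertain
# middle goes to a human reviewer. Thresholds are invented.
def route(score: float) -> str:
    if score >= 0.85:
        return "recommend: flag, pending human confirmation"
    if score <= 0.15:
        return "recommend: clear, pending human confirmation"
    return "no recommendation: send to human reviewer with full context"

for s in (0.05, 0.50, 0.92):
    print(f"score={s:.2f} -> {route(s)}")
```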

Facial recognition systems used by law enforcement (e.g., London’s Met Police) have been criticized for disproportionately misidentifying people of color—largely due to biased training datasets. These systems work by generalizing group traits, raising the question of whether people are being treated as individuals or as members of a statistical category.
"Met Police to deploy facial recognition cameras", BBC, January 30, 2020, https://www.bbc.com/news/uk-51237665

Final Thoughts

These questions—about free will, accountability, and fairness—don’t have simple answers. But by exploring them through different philosophical lenses, we can better understand what’s at stake when we let machines help make decisions about people’s futures.

As predictive AI becomes more powerful, we’ll have to choose: do we want a world where patterns define people, or one where possibility still has room to grow?
