Articles and Essays
How can we deal with algorithmic bias and opacity?
Introduction
Algorithms and artificial intelligence (AI) systems increasingly shape decisions in areas ranging from
credit scoring and hiring to news feeds and criminal justice. While efficient, these systems can encode and amplify
unfair biases. IBM defines algorithmic bias as “systematic errors in machine learning algorithms [that] produce
unfair or discriminatory outcomes,” reflecting socio-economic, racial, and gender biases (Jonker and Rogers).
Left unchecked, such biases reinforce inequality and erode public trust by perpetuating discrimination. Compounding
this problem is algorithmic opacity – the “black box” nature of many AI systems. When companies keep recommendation
logic secret, affected individuals and regulators cannot understand why outcomes occur. As philosopher Miranda Fricker
argues, epistemic injustice occurs when someone is "given less credence than they deserve" and is thereby "treated
unjustly" (Fricker). Algorithmic opacity can thus be understood as a form of testimonial silencing, where individuals are
excluded not only from decisions but from the reasoning that leads to them.
Against this backdrop, this essay presents a strategy spanning technical, ethical, and regulatory dimensions to address algorithmic
bias and opacity.
Read full essay
Honourable Mention at the Cambridge Re:think Essay Competition 2025
What is self-deceit?
Introduction
Self-deceit – also called self-deception – is the act of forming or maintaining a belief that is false, often
against one's better knowledge, in order to avoid psychological discomfort (von Hippel et al.). Defined this way, it appears
both familiar and straightforward. Yet a theoretical contradiction lies beneath it: how can someone both know and not know
the same truth at the same time? Although psychologists point out that self-deception is a common aspect of human behavior, this
paradox has led some philosophers to question whether it is even coherent. The tension is obvious: why is self-deception so prevalent in real
life if it is illogical in theory? Are its mechanisms completely unconscious, or do we intentionally fool ourselves? More significantly,
what moral consequences result from this kind of internal deception?
To address these questions, this essay looks at self-deceit from both philosophical and psychological perspectives. First, I analyze the
logical paradox at its core. Second, I investigate the cognitive and emotional mechanisms that enable individuals to deceive themselves,
often without being fully aware of it. Drawing on real-world examples and research, I argue that self-deceit functions as a motivated false belief:
it is both a defense mechanism and a philosophical puzzle – one that helps preserve emotional stability but at the cost of autonomy, clarity, and, sometimes, truth.
Read full essay
Shortlisted at the 2025 John Locke Institute Essay Competition, Psychology Category
In an increasingly AI-driven world, how is our ability to think for ourselves changing?
Introduction
How artificial intelligence (AI) affects our independent thinking depends on how we define "thinking ability."
Psychologists emphasize that human intelligence is multifaceted. For example, Sternberg's Triarchic Theory distinguishes between analytical,
creative, and practical intelligence ("Learning and Intelligence"). Similarly, Howard Gardner's theory of Multiple Intelligences identifies linguistic, logical-mathematical, spatial, interpersonal
(social understanding), and intrapersonal (self-reflection) intelligences ("Learning and Intelligence").
Within these frameworks, "thinking" is not simply data processing; it involves moral reasoning, creativity, empathy, and flexibility. While machine learning is
improving significantly at identifying patterns, imitating human awareness and contextual flexibility remains a challenge. So the question is not whether AI can think
and process information but whether it can match human creative and ethical cognition.
With such complexities in mind, this essay will investigate how AI shapes cognitive autonomy in human beings – through memory, creativity, and moral judgment – and will emphasize the urgency of mindful engagement with technology
to preserve independent thinking.
Read full essay
Shortlisted at Horizon Academic Essay Prize 2025