Philosophy

Algorithms do not just inherit bias from society; they are built on categories that decide in advance what counts as knowledge and what gets erased.
The Myth of the Neutral Algorithm (and What Lies Behind It)
We are often told that algorithms are neutral. They process data; they find patterns; they do not discriminate. This claim has been thoroughly dismantled by scholars like Ruha Benjamin, whose concept of the New Jim Code shows how technological systems can reproduce racial hierarchy while appearing objective, and Safiya Umoja Noble, whose Algorithms of Oppression demonstrates that search engines and recommendation systems encode the biases of the social world from which their training data was drawn.
But there is a deeper problem than bias, and it is the one cognitive justice points toward.
An algorithm trained on data from an unequal world does not merely reflect that inequality; it encodes and accelerates it. But before that encoding, before the training data is even assembled, a series of epistemological decisions has already been made. What counts as a household? A livelihood? A land use? A disease? These categories seem neutral; they are the administrative vocabulary of census forms, government databases, and satellite imagery analysis. But they are not neutral. They are categories produced within dominant frameworks, and they carry within them assumptions about what counts, what is measurable, and what is real.
The knowledge of the Irula communities of Tamil Nadu, one of the world’s most sophisticated bodies of knowledge about snakebite treatment and the behaviour of serpents, cannot be entered into a medical database without being transformed. To datafy it, you must first translate it: into species names, venom classifications, treatment protocols. The translation is not neutral. It selects what is transferable within the biomedical framework and discards what is not. What is discarded is not noise; it is the relational, contextual, place-specific quality of the knowledge itself. The knowledge of which snake behaves how in which terrain at which season, held by a person who grew up in that terrain across multiple seasons, cannot survive the process of becoming a data point intact. The translation is not only a selection; it is always a diminishment.
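To make the translation concrete, here is a minimal sketch, in Python, of what a record in such a database might look like. The schema, field names, and codes are hypothetical, invented purely for illustration; the point is what a row like this can hold and what it has no column for.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnakebiteRecord:
    """One row in a hypothetical biomedical snakebite database.

    Every field must be filled with a value drawn from a fixed,
    pre-defined vocabulary before the record can be stored.
    """
    species: str                        # Linnaean binomial, e.g. "Daboia russelii"
    venom_class: str                    # e.g. "hemotoxic", "neurotoxic"
    treatment_protocol: str             # code for a standardised protocol (hypothetical label)
    region_code: Optional[str] = None   # administrative district, not terrain

# What the schema has no column for: the knowledge that a particular snake
# keeps to the rocky side of a particular hill after the first rains, and
# that the person who knows this learned it by walking that hill across
# many seasons. That knowledge is relational and place-specific; fitting it
# into the fields above either discards it or compresses it into a string
# stripped of its context.
record = SnakebiteRecord(
    species="Daboia russelii",
    venom_class="hemotoxic",
    treatment_protocol="PROTO-01",
    region_code="TN-VLR",
)
print(record)
```

Everything the schema cannot name must either be dropped or flattened into one of the existing fields; the structure of the record decides in advance what the database is able to know.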
Yuk Hui’s philosophy of cosmotechnics is useful here. Hui argues against the assumption that technology is culturally singular: that there is one trajectory of technological development, and that different societies are simply at different points along it. Against this, Hui insists that technologies are always embedded in cosmological frameworks, in particular understandings of the relationships between humans, tools, and the world. There is no technology in general; there are only technologies that emerge from and express particular ways of being in the world. The question this poses for AI is not merely how to make current AI systems fairer, but whether the very architecture of AI, its categories, its training regimes, and its evaluation metrics, could be otherwise: whether there are ways of building computational systems that are genuinely responsive to the plurality of knowledge forms they encounter, rather than assimilating them to a single epistemological standard.
This is not a question that AI ethics, in its current form, is equipped to ask. The dominant frameworks of fairness, accountability, transparency, and privacy are all necessary, but they operate within a liberal framework that takes the basic architecture of AI as given and asks how to distribute its benefits more equitably. Cognitive justice asks a prior question: whose framework defines fairness? Whose categories structure the model? Who is in the room when the values are encoded? And what knowledge is simply not available to encode because it is relational, embodied, and seasonal, and would be destroyed by the very process of encoding?