Artificial intelligence is not the panacea for an unequal society – the Panoptic Sort

“The panoptic sort is the name I have assigned to the complex technology that involves the collection, processing, and sharing of information about individuals and groups that is generated through their daily lives as citizens, employees, and consumers and is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy. The panoptic sort is a system of disciplinary surveillance that is widespread but continues to expand its reach.”


- Oscar Gandy in his book, The Panoptic Sort: A Political Economy of Personal Information (1993)


In the book, Gandy describes a discriminatory mechanism that he calls the “panoptic sort”: the practice of collecting information about individuals from their interactions and transactions with the commercial and political systems. The collected information is then stored, collated, compared, processed, and analyzed to produce intelligence about a person’s economic value and political stance. The system is discriminatory because it sorts people into categories based on the resulting estimates, models, assumptions, and strategic orientations.


Personal information serves three different functions in the panoptic sort: identification, classification, and assessment. In 1993, Gandy warned that if this discriminatory practice remained unchecked, it would substantially harm an equality-seeking society. Today we live in a society where the panoptic sort has accelerated: governments (and their law enforcement agencies) and multinational corporations rely on artificial intelligence and machine learning (“AI & ML”) to determine our access to justice, employment, banking, and more.


At the outset, one may believe that algorithmic resource allocation is fair and objective, free of human emotion and unfair dealing. However, AI & ML can make unfair biases universal and difficult to spot when the databases used as training sets are drawn from an imperfect world full of apparent and hidden biases.


To illustrate, suppose we develop an algorithm to distribute medical and healthcare resources. We may require the algorithm to identify the sickest patients and, to that end, use average healthcare cost as a proxy for sickness. However, marginalized people (including people of color) tend to have lower average healthcare expenditure than more affluent groups, often because of unequal access to care rather than better health. The algorithm will therefore prefer healthier people from the affluent portion of society over sicker patients from marginalized communities. Similar biases in medical algorithms have favored white patients over sicker patients of color in the United States.
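To see how a cost proxy can skew allocation, consider the toy simulation below. It is a minimal sketch with made-up numbers (the spending gap, group sizes, and selection rule are all assumptions for illustration, not the algorithm from the study cited above):

```python
# Toy illustration of proxy-variable bias: selecting patients for extra care
# by healthcare COST, when cost understates sickness for one group.
# All figures are illustrative assumptions.
import random

random.seed(0)

def make_patient(group):
    """Synthetic patient: true sickness on a 0-10 scale, plus a healthcare
    cost that understates sickness for the 'marginalized' group."""
    sickness = random.uniform(0, 10)
    spending_factor = 1000 if group == "affluent" else 550  # assumed access gap
    cost = sickness * spending_factor + random.uniform(-500, 500)
    return {"group": group, "sickness": sickness, "cost": max(cost, 0)}

patients = [make_patient("affluent") for _ in range(500)] + \
           [make_patient("marginalized") for _ in range(500)]

# "Algorithm": allocate extra care to the 20% of patients with the highest cost.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
selected = by_cost[: len(patients) // 5]

share = sum(p["group"] == "marginalized" for p in selected) / len(selected)
avg_sickness = {
    g: sum(p["sickness"] for p in selected if p["group"] == g)
       / max(1, sum(p["group"] == g for p in selected))
    for g in ("affluent", "marginalized")
}
print(f"Share of marginalized patients selected: {share:.0%}")
print(f"Average true sickness of selected patients, by group: {avg_sickness}")
```

In this sketch, far fewer than half of the selected patients come from the marginalized group, and those who are selected are, on average, sicker than the selected affluent patients, meaning a marginalized patient has to be sicker to receive the same resources.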


In relation to training AI & ML for automated driving, a Georgia Tech study made an alarming revelation: camera-based automated driving systems were more accurate at identifying pedestrians with lighter skin tones than those with darker skin tones. According to the study, one of the reasons for this predictive bias was that the dataset used to train the models did a poor job of representing the larger population.


In January 2020, Robert Julian-Borchak Williams, an African American man living near Detroit, was arrested on his front lawn in front of his wife and children for the theft of USD 3,800 worth of watches from a retail store. Mr. Williams was wrongfully arrested and detained by the police; the arrest resulted from a misidentification by the facial recognition technology used by the Michigan State Police. It is one of the first documented examples in the United States of a wrongful arrest based on facial recognition technology. The algorithm used by the Michigan State Police worked more accurately on white men than on other demographics, because the images used to develop it lacked the required diversity.


Therefore, we need to change how our AI & ML systems are built. We must ensure diverse representation in the field of AI & ML, and the databases used to train artificial intelligence must be heterogeneous, closely representing the target population. To this end, it is essential to record ethnicity while collecting data. Analyzing these data by ethnicity allows data scientists to study macro-level trends across ethnic groups without inadvertently masking any particular sub-group, as the sketch below suggests.
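As a rough illustration of what such ethnicity-aware analysis can surface, the following sketch (with hypothetical group labels and made-up counts) reports a model's error rate per group rather than only an aggregate figure:

```python
# Disaggregated evaluation: compare error rates per group instead of relying
# on a single overall number. Group labels and counts are hypothetical.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group_label, was_prediction_correct) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative data only: a ~3% aggregate error rate can hide a ~12% error
# rate concentrated in an under-represented group.
records = [("group_a", True)] * 900 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 70 + [("group_b", False)] * 10

overall = sum(1 for _, ok in records if not ok) / len(records)
print(f"Overall error rate: {overall:.1%}")   # about 3%
print(error_rate_by_group(records))           # group_b is markedly worse
```

An aggregate metric alone would have suggested the system works well; only the per-group breakdown reveals that the smaller group bears a disproportionate share of the errors.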