System Identities: Global and Unknowable

Image: hologram face on a wall with art (credit: Mary/Flickr)


Volume 12 | Issue 25

Individual identities and behaviors in the United States are increasingly simulated at the systemic level in numerous databases. From one’s credit scores (financial identity), buying habits (customer identity), medical records (health identity), and risk scores (criminal identity) to one’s demographic characteristics such as age, gender, income, region, and education, a person is transformed, without their knowledge, into data profiles designed to meet certain conditions of communication. During field research for my book, Neutral Accent, on India’s call centers serving global customers, I began to realize that most outbound global calls, whether for telemarketing or debt collection, were not initiated by agents in India. They were initiated by a software program called the Dialer, which targeted specific profiles of American and British clientele: a form of communication started by neither party, even though the two found themselves engaged, unwittingly, in real-time conversations. Given the rise of automation in communication, it is perhaps safe to predict that a large share of future business-customer conversations will involve AI bots dialing our digital identities.
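To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of logic a predictive dialer might follow. All field names, thresholds, and the pairing routine are hypothetical illustrations, not a description of any actual call-center software.

```python
# Hypothetical sketch of a predictive dialer: the program, not a person,
# decides which stored profiles to call and pairs them with free agents.
# Field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    phone: str
    days_past_due: int    # how overdue the account is
    best_call_hour: int   # hour at which the system predicts the person answers

def select_targets(profiles, current_hour, min_days_past_due=30):
    """Choose whom to call right now, based on the data profile alone."""
    return [p for p in profiles
            if p.days_past_due >= min_days_past_due
            and p.best_call_hour == current_hour]

def dial(targets, free_agents):
    """Pair each targeted profile with an available agent: a conversation
    initiated by neither the agent nor the customer."""
    return list(zip(targets, free_agents))

profiles = [Profile("+1-555-0100", 45, 18), Profile("+1-555-0101", 10, 18)]
print(dial(select_targets(profiles, current_hour=18), ["agent_07"]))
```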

Being constructed by institutional systems without the customer’s consent or knowledge, digital identities suffer from a paradox, however: one is unable to identify with one’s own identity, because it mostly remains unknown to the person. One could argue that even racial identity—being black, white, yellow, or brown—is not necessarily something an individual identifies with. Yet ethnic, racial, linguistic, and national identities hail individuals through publicly assigned attributes. Digital constructions, by contrast, are detached from persons, and the capture of persons in such constructions is only incidental to one’s personhood. For example, the construction of credit scores, while individualized, is independent of, and prior to, any single person’s financial identity. I have labeled such constructions “system identities” (Aneesh 2015). Being logically prior to the persons to whom they later attach, system identities indicate an emerging dimension of identity studies whose beginnings can be seen in work on consumer surveillance (Gandy 1993), the surveillant assemblage (Haggerty and Ericson 2000), and social sorting (Lyon 2003). However, the surveillance model, or more broadly the security model, while crucial to our understanding, is limited to a control perspective in which rationalities of governance—state, corporate, or individual—take center stage.

System identity adds a different dimension to this model: the functionalization of identity beyond the security perspective. A profile of one’s changing musical taste or changing musical contexts (e.g., party, running, road trip), for instance, may lead to a set of suggestions by entertainment systems like Pandora, Apple, or Spotify that owes more to the system perspective than to the security perspective. The concept of system identity allows us to locate motivations along functional lines, where systems of entertainment, health, finance, law, education, or consumption begin to inscribe the individual within their instrumental dynamics.

The notion of system identities posits a space of identity construction that has no substrate or foundation on which identity is based. System identities do not have a real person at the end who could act as the foundation or outside basis for coherent constructions. These systems subsume their own foundations, turning what was supposed to be logically prior and the basis of their operations (for instance, the individual) into a domain infiltrated by the techniques of the system. Various systems—financial, legal, medical, governmental, or any number of others—challenge and provoke big data in such a way as to make it, to borrow from Heidegger (1977), a “standing reserve,” which responds to the system according to the queries asked of it. By adapting themselves to their own results, these systems of observation afford no substances or structures (e.g., persons) on which the elements of identity can be founded. Such constructions, therefore, have to take responsibility for themselves, as they are based on, and exist for the purpose of, predictive analysis.

System identities cannot be constructed as having an identifiable substrate or foundation because the identity never actually exists until it is drawn up and observed by a system for a specific purpose. Take, for example, one’s credit score. A credit score is simply financial data scattered across various databases until the point it is drawn upon. There is no stable, identifiable point of reference, as the data used in its creation change with new purchases, payments, non-payments, shifting debt ratios, and the passage of time. At the point of request, a score is algorithmically generated, attesting to the creditworthiness of the individual; yet the mere act of calling a credit score into existence by initiating a credit inquiry can itself change the score. The individual is turned into a changing, dynamic, and contingent event.
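A minimal sketch can illustrate this point. The snippet below is not how any credit bureau actually computes scores; the fields and weights are invented solely to show an identity that exists only when queried, and whose querying feeds back into the data.

```python
# Illustrative only: invented fields and weights, not a real scoring model.
# The "score" exists only at the moment a system requests it, and the act of
# requesting it (a hard inquiry) alters the underlying data.
class FinancialRecord:
    def __init__(self):
        self.on_time_payments = 48
        self.missed_payments = 2
        self.debt_ratio = 0.35     # debt relative to available credit
        self.hard_inquiries = 0    # credit checks recorded against this record

def credit_score(record):
    """Generate a score on demand from whatever the record holds right now."""
    return (600
            + 2 * record.on_time_payments
            - 25 * record.missed_payments
            - int(100 * record.debt_ratio)
            - 5 * record.hard_inquiries)

def run_inquiry(record):
    """A lender asks for the score; asking leaves a trace in the record."""
    score = credit_score(record)
    record.hard_inquiries += 1
    return score

record = FinancialRecord()
print(run_inquiry(record))  # 611
print(run_inquiry(record))  # 606 -- the earlier inquiry changed the data
```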

In recent decades, identity studies have also focused on the collective, with gender-sexuality, race-ethnicity, and class constituting the “holy trinity” of identity formation (Cerulo 1997; Appiah and Gates 1995). We may call these socially negotiated identities. To these one can add a less discussed but well-recognized form of identity: bureaucratic identity, an identity ascribed to the individual by various institutions as a means of identifying members. Seen in passports, driver’s licenses, and other identity cards, this form of identity is a construction of fixed personhood designed to meet modern organizational needs, ensuring that the member has remained essentially the same despite various changes in personality, body, and behavior. The purpose of bureaucratic identity is to absorb the contingency of the individual human being, turning the person into an always identifiable and stable object of observation.


Unlike bureaucratic identities, system identities do not absorb contingency by constructing a stable foundation in a “real” individual, nor do they require a set of pre-existing rules and laws of membership. System identities may work independently of the subject, producing profiles that an individual may or may not actually have, may only develop in the future, or may simply be confronted with. They may initially be looser and less capable, but over time they may mold themselves around an individual and acquire the capacity to predict one’s behavior, tastes, and predilections. As sociodigitization processes continue to convert a wide variety of social realms into digital information (Aneesh 2001a, 2015; Negroponte 1995; Latham and Sassen 2005), areas of life once thought entirely separate from the virtual world are becoming impossible to understand independently of it. Diverse digital technologies capture and convert our day-to-day lives into bits of electronic data. These data, mined from a variety of sources—online social networks, mobile devices, credit card transactions, and the Internet of Things (IoT)—inform the customer relationship databases used by call centers.
 
Thus, in addition to socially negotiated and bureaucratically assigned identities, we witness the rise of mercurial, non-negotiated identities derived from an ever-growing heap of structured and unstructured digital data. For example, one’s credit history, buying habits, medical history, legal troubles, or musical taste all lend themselves to multiple constructions of personality—constructions of which one is often not aware. These digital persons have not only attained a degree of autonomy from the physical self; they have also become, in certain cases, more effective at predicting one’s life chances, as credit scores and criminal risk scores demonstrate.
 
These abstracted digital identities are ever growing as the information one divulges and the trails one leaves along systems’ webs are recorded and interpreted. The emergence of these new identities stands in sharp contrast with the still-operational social and bureaucratic identities. They represent persons as dynamically forming clouds of data. Unlike social identities, they do not rely on interaction, exchange, or reciprocity; nor are they as stable or static as one’s bureaucratic representations. The workings of system identity occur in the background, beyond an individual’s grasp. Drawn up through the algorithmic assembly of bits of data for particular purposes, including prediction and speculation by various function systems, system identities are not durable: the same database may lend itself to different identity constructions, whose observed states will differ when drawn upon by a different entity for a different functional need.
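The non-durability described here can be pictured with a toy example. In the sketch below, the same pool of recorded events yields different “identities” depending on which system draws on it and for what purpose; every category, field, and cut-off is invented for illustration.

```python
# Toy illustration: one pool of raw events, several system-specific readings.
# All categories, fields, and cut-offs are invented.
events = [
    {"type": "purchase", "item": "running shoes", "amount": 120},
    {"type": "purchase", "item": "energy drinks", "amount": 15},
    {"type": "stream",   "genre": "electronic",   "minutes": 300},
    {"type": "payment",  "on_time": False,        "amount": 80},
]

def marketing_view(evts):
    """A retail system reads the events as a consumer segment."""
    sporty = any(e.get("item") == "running shoes" for e in evts)
    return {"segment": "fitness enthusiast" if sporty else "general"}

def credit_view(evts):
    """A financial system reads the same events as a risk profile."""
    late = sum(1 for e in evts if e["type"] == "payment" and not e["on_time"])
    return {"risk": "elevated" if late else "low"}

def wellness_view(evts):
    """A health or insurance system reads them differently again."""
    minutes = sum(e.get("minutes", 0) for e in evts if e["type"] == "stream")
    return {"lifestyle_flag": "sedentary leisure" if minutes > 200 else "unremarkable"}

print(marketing_view(events))  # {'segment': 'fitness enthusiast'}
print(credit_view(events))     # {'risk': 'elevated'}
print(wellness_view(events))   # {'lifestyle_flag': 'sedentary leisure'}
```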


Thus, a system identity has no predetermined unity; it multiplies for the various purposes of economic, medical, financial, and judicial systems, setting up individuals’ behavioral records for systemic requirements and instituting changes as needed. Tucked away in databases, system identities are at a remove from everyday experience; one cannot “experience” the abstract logic of system identity or form a relationship with it. It is an identity created solely for the purpose of a system; it is functional for the system but not necessarily for the person, entailing a certain type of externally determined identity formation. Inherent in these ascriptions of identity are classifications and categorizations invented not for the social life of persons but for systems. Identity formations are not reciprocally conditioned but developed through algocratic means (Aneesh 2001b, 2009).
 
In contexts like call centers, it is likely that the human agent will eventually be removed and replaced by an AI-based agent well equipped to directly access and generate the system identities of the customers it calls. A recent demonstration of Google’s AI system, Google Duplex, was widely seen as hinting at this future: Google Assistant was shown holding conversations with restaurants and hair salons, drawing on years of investment in deep learning, natural language processing, speech recognition, and text-to-speech (Goode 2018).
 
We have learned from Arlie Hochschild (1983) about the trained management of emotions in businesses, from airlines to restaurants, and the work that goes into summoning appropriate feelings. It may take a long time before AI systems can be trained to empathize with their customers’ varying situations. Empathy could initially be programmed in a mechanical fashion, but for the conversation to sound genuine in any complex way, an AI system would need to generate the same emotion in itself to produce a truly appropriate response, a scenario difficult to imagine in the foreseeable future. It is likely, however, that a rudimentary level of programmed cheerfulness and empathy will be built into these systems sooner rather than later.

References

Aneesh, A. 2001a. “Skill Saturation: Rationalization and Post-Industrial Work.” Theory and Society 30 (3): 363-396.

--. 2001b. “Virtual Migration: Indian Programmers in the US-Based Information Industry.” Rutgers University.

--. 2009. “Global Labor: Algocratic Modes of Organization.” Sociological Theory 27 (4): 347-370.

--. 2015. Neutral Accent: How Language, Labor, and Life Become Global. Durham, NC: Duke University Press.

Appiah, Anthony and Henry Louis Gates Jr., eds. 1995. Identities. Chicago: University of Chicago Press.

Cerulo, Karen A. 1997. “Identity Construction: New Issues, New Directions.” Annual Review of Sociology 23: 385-409.

Gandy, Oscar H., Jr. 1993. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview Press.

Goode, Lauren. 2018. “How Google’s Eerie Robot Phone Calls Hint at AI’s Future.” Wired. https://www.wired.com/story/google-duplex-phone-calls-ai-future/

Haggerty, Kevin and Richard Ericson. 2000. “The Surveillant Assemblage.” British Journal of Sociology 51 (4): 605-622.

Heidegger, Martin. 1977. The Question Concerning Technology and Other Essays. Translated by William Lovitt. New York: Harper & Row.

Hochschild, Arlie Russell. 1983. The Managed Heart: Commercialization of Human Feeling. Berkeley: University of California Press.

Latham, Robert and Saskia Sassen, eds. 2005. Digital Formations: IT and New Architectures in the Global Realm. Princeton, NJ: Princeton University Press.

Lyon, David, ed. 2003. Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination. New York: Routledge.

Negroponte, Nicholas. 1995. Being Digital. New York: Knopf.


A. Aneesh is Professor of Sociology and Global Studies at the University of Wisconsin Milwaukee.
