Book Review: Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition by Wendy Hui Kyong Chun

In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, Wendy Hui Kyong Chun explores how technological advances around data amplify and automate discrimination and bias. Through conceptual innovation and historical detail, this book offers engaging and revealing insights into how data exacerbates discrimination in powerful ways, writes David Beer.

Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Wendy Hui Kyong Chun (with illustrations by Alex Barnett). MIT Press. 2021.


Going back two decades, there was a fair amount of discussion about the “digital divide”. Uneven access to networked computers meant that a line was drawn between those who were connected and those who were not. At the time, the pressing concern was with the disadvantages created by a lack of access. With the massive escalation of connectivity since then, the concept of the digital divide still has some relevance, but it has become a somewhat blunt tool for understanding today’s deeply mediated social constellations. Divisions are now not simply a product of access; they are instead a consequence of what happens to the data produced through that access.

With the escalation of data and the embedding of all kinds of analytic and algorithmic processes, the problem of unequal, unfair and harmful processing is now the focal point of a lively and urgent debate. Wendy Hui Kyong Chun’s vibrant new book Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition makes an incisive intervention. At its heart is the idea that these technological advances around data “amplify and automate – rather than acknowledge and fix – the mistakes of the discriminatory past” (2). This is, in essence, the codification and automation of bias. Any notion of technology’s liberatory potential is quickly dispelled. Rooted in a longer history of statistics and biometrics, current developments are riven by the differential targeting that big data brings.

This is not just about bits of data. Chun suggests that we need “[…] to understand how machine learning and other algorithms have been embedded with human bias and discrimination, not only at the level of data, but also at the levels of procedure, prediction and logic” (16). It is not simply, then, that there is bias within the data itself; it is also the way that division and distinction are embedded in how that data is put to use. Given the scale of these issues, Chun narrows things down by focusing on four “core concepts”, with correlation, homophily, authenticity and recognition providing the focal points for interrogating data-based discrimination.

Image Credit: Pixabay

It is the concept of correlation that does much of the gluing work within the study. The centrality of correlation is evident in Chun’s own overview of the book, which notes that “Discriminating Data reveals how correlation and eugenic understandings of nature seek to close off the future by operationalizing probabilities; how homophily naturalizes segregation; and how authenticity and recognition foster deviation in order to create agitated clusters of comforting rage” (27). As well as developing these lines of argument, the concept of correlation also allows Chun to think in deep historical terms about the trajectory and politics of correlation and modeling.

For Chun, the role of correlation is complex and performative. It is said, for example, that correlations “do not simply predict particular actions; they also shape them”. This is a fairly well-established position within critical data studies, with data both describing and producing the outcomes it is used to predict. However, Chun is able to revitalize this position by exploring how correlation fits into a broader set of discriminatory data practices. The other performative issue here is the way that people are constituted and grouped through the use of data. Chun writes that through correlations “that divide people into categories based on their being ‘similar’ to each other, the effects of historical inequalities are amplified” (58). Inequality is reinforced as the categories tighten, with data lending them a sense of apparent stability and a veneer of objectivity. Hence the claim that “correlation contains within itself the seeds of manipulation, segregation, and misrepresentation” (59).

Given this use of data to categorize, it is easy to see why Discriminating Data makes a conceptual connection between correlation and homophily – with homophily, as Chun describes it, being “the principle that similarity breeds connection” and therefore something that can lead to swarming and clustering. The aggregating work within these data structures means, for Chun, that “homophily not only mitigates conflict; it also naturalizes discrimination” (103). The use of correlations to group data feeds a type of homophily that not only produces misrepresentation and segregation; it also makes these divisions appear natural and thus fixed.

In case there are any remnants of faith in the supposedly democratic properties of these platforms, Chun argues that “homophily reveals and creates boundaries within theoretically flat and dispersed social networks; it distinguishes and differentiates between supposedly equal nodes; it is a tool for detecting and perpetuating bias and inequality in the name of ‘convenience’, ‘predictability’ and ‘common sense’” (85). As individuals are sorted into categories or groups that are assumed to be like them, based on correlations within their data, discrimination can readily occur. One of Chun’s key observations is that data can feel comforting, especially when wrapped in predictions, but this can distract from the very real harms of the discrimination it embeds. Instead, the proxies within this data can serve to support – and justify – discrimination (121). For Chun, there is a “proxy politics” at work in which data not only enacts but can also be used to legitimate discriminatory acts.

As with correlation and homophily, Chun, in a particularly novel development, explores how authenticity itself has become a mechanism within these data structures. In stark terms, it is said that “authenticity has become so central to our time because it has become an algorithm” (144). Chun is able to show how the broader cultural push toward notions of authenticity, embodied in things like reality television, becomes a part of data systems. The wider cultural trend is translated into something that can be rendered in data. Chun explains that “the term ‘algorithmic authenticity’ reveals the ways that users are validated and authenticated by network algorithms” (144). A system of validation plays out in these spaces, with actions and practices judged and verified through algorithms. Algorithmic authenticity “trains us in transparency” (241). It pushes a form of openness onto us as a kind of performative authenticity develops, especially within social media.

This focus on authenticity draws people into certain types of engagement with these systems. It shows, in Chun’s words, “how users become characters in a drama called ‘Big Data’” (145). The notion of the drama doesn’t underplay what is going on but instead tries to capture its lively, role-based nature. It also lends a strong sense of performance to the broader ideas of data governance that the book explores.

These roles are not something Chun wants us simply to accept, arguing that “if we think through our roles as actors and characters in the drama called ‘Big Data’, we do not have to accept the current terms of our deployment” (170). Examining the staging of the drama is a vehicle for challenge and transformation. To expose the drama is to expose the existing roles and scripts, enabling them to be questioned and potentially rewritten. This is not fatalistic or defeatist; instead, Chun’s point is that we are “characters, not puppets” (248).

There are some strong currents running through the discussions of the book’s four core concepts. The suggestion that big data brings a reversal of hegemony is a particularly strong argument. Chun explains that: “Power can now operate through reverse hegemony: if hegemony once meant the creation of a majority by different minorities accepting a hegemonic worldview […], now a dominant majority can emerge when angry minorities, clustered around a common stigma, come together through their mutual opposition to so-called dominant culture” (34). This line of argument is repeated in similar terms in the book’s conclusion, which states that “this is hegemony in reverse: if hegemony once entailed the creation of a majority by different minorities that accept – and conform to – a dominant worldview, the majority now emerges through the amplification of angry minorities – each attached to a particular stigma – through their opposition to ‘mainstream’ culture” (243). In this formulation, it would seem that big data is not only hegemonic but may somehow gain power by inverting any sense of a dominant ideology. Data does not lead to shared ideas but to the splintering of shared ideas into group-based clusters. It seems plausible to conclude that the practices of targeting and data modeling are unlikely to facilitate hegemony; yet here data does not merely exercise power outside of hegemony, it actively works to reverse it.

The reader may be left a little unsure by this position. Chun generally seems to depict power as something that has emerged from and is anchored in a lineage of technologies that have hardened into contemporary data infrastructures. On that account, power appears to be tied to existing structures and to operate through correlations, appeals to authenticity and means of recognition. These two positions on power – with infrastructures on one side and reverse hegemony on the other – are not necessarily incompatible, yet the argument about reverse hegemony perhaps sits a little apart from that other view of power. I am left wondering whether this reverse hegemony is an outcome of these underlying processes of power or, perhaps, some kind of facilitator of them.

Chun’s book seeks to highlight the deep divisions that data-led discrimination has already created and will continue to create. Conceptual innovation and historical detail, particularly in relation to statistics and eugenics, give the book a deep sense of context that feeds into a genuinely engaging and revealing set of ideas and arguments. In closely examining how data exacerbates discrimination in powerful ways, this is perhaps the most illuminating book yet on the subject. The digital divide may no longer be an especially useful term, but as Chun’s book makes clear, the role that data plays in animating discrimination means that the technological facilitation of divisions has never been more relevant.

Please read our comments policy before commenting.

Note: This article gives the views of the author, and not the position of USAPP – American Politics and Policy, nor of the London School of Economics.

Short URL for this post: https://bit.ly/3FPaP0d


About the author

David Beer – University of York
David Beer is Professor of Sociology at the University of York. His recent books include The Data Gaze and Georg Simmel’s Concluding Thoughts. His book Social Media and the Automatic Production of Memory, written with Ben Jacobsen, was published in 2021 by Bristol University Press.

