The Panoptic Sort: Surveillance Q&A with Oscar Gandy
With the second edition of his classic 1993 book, The Panoptic Sort, recently published, we speak to Gandy about the past, present, and future of surveillance.
Professor Emeritus Oscar H. Gandy, Jr.'s groundbreaking book The Panoptic Sort (Westview Press, 1993) led the way in directing our attention to the surveillance activities of commercial firms. Other publications at the time were focused on the privacy concerns of individuals, but Gandy's book helped us understand what was at stake when the bureaucracies of government and commerce gathered our personal information and used it to manage social, economic, and political activities.
To honor this important work, and to mark the publication of the second edition, we asked Gandy a few questions about how surveillance has changed over the last three decades, what risks it poses, and what we can expect in the future.
How has surveillance changed since the first edition of The Panoptic Sort and which of these changes are explored in the second edition?
All of us engage in surveillance of our environments. The Panoptic Sort defines surveillance as the gathering of data and its transformation, through a variety of analytic methods, into a resource for the production of influence or control over the behavior of others.
My earliest references to surveillance in the first edition focused on Jeremy Bentham's design for a prison, the panopticon, in which the actual and imagined visibility of prisoners facilitated their control: because observation radiated from a central tower, inmates had to assume they were always being watched. Michel Foucault extended this disciplinary surveillance into a variety of other environments, including schools, wherein experiments and other data-gathering procedures would facilitate the “correct training” of individuals. While surveillance had traditionally been associated with the efforts of governmental authorities, my goal in the first edition was to bring scholarly and public attention to the kinds of surveillance being developed and used by private corporations, especially those seeking to improve the quality and extent of the economic and political benefits to be derived from observational and experimental insights into the kinds of influence that might be produced through targeted communications.
Much of what has changed since the first edition stems from technologies developed to differentiate between individuals, and between the kinds of groups that can be defined through algorithmically enabled processing of the massive amounts of data generated and captured by digital technologies interconnected through the internet.
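To make that sorting process concrete, here is a minimal, hypothetical sketch in Python of the kind of algorithmic grouping Gandy describes; the feature names and numbers are invented for illustration and are not drawn from his work:

```python
# A minimal, hypothetical sketch of the "panoptic sort": clustering people
# into segments from transaction-generated data. Features and values are
# invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row describes one person: [monthly_spend, site_visits, late_payments]
tgi = np.array([
    [120.0,  4, 0],
    [980.0, 45, 0],
    [ 60.0,  2, 3],
    [870.0, 60, 1],
    [ 40.0,  1, 4],
])

# The algorithm, not the individuals, defines the groups.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tgi)

for person, segment in enumerate(segments):
    # Downstream systems can now treat each person according to a category
    # they never chose and may never learn about.
    print(f"person {person} assigned to segment {segment}")
```

The essential point is that the categories are products of the algorithm itself: the people being sorted neither choose them nor, in most cases, ever learn that they exist.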
The second edition includes a foreword, not present in the original, that serves as a preface. It calls the reader’s attention to a good number of scholarly contributions that have taken note of, and extended, my assessments of the nature of surveillance and of the actors involved in its further development and use within society. Among those improvements is the distinction between what we refer to as data and the useful information that can be derived from its analysis. We might think of this process as the transformation of data acquired through surveillance into a kind of “actionable intelligence.”
What is quite new, and is described in the afterword, is the emergence of dominant firms in what is referred to as the “platform economy.” Here we see that vast amounts of transaction-generated information (TGI) have facilitated the generation of descriptive, classificatory, interpretive, and predictive information that has increased the reach of surveillance beyond anything I could have imagined in writing the first edition.
This new economic system has been characterized by a number of observers who have offered their own versions of a label provided by Shoshana Zuboff: “surveillance capitalism.” According to Zuboff and others, wealth is being created through the surveillant capture of human experience and its transformation into a variety of techniques for influencing or controlling human behavior. At the same time, social media, such as that represented by Facebook, have expanded quite rapidly. These new media forms not only capture attention; they are being used to cultivate a kind of performative addiction to making information about oneself available to countless others for new forms of cultural production and consumption.
Also included in the afterword is the role being played by more recent advances in data analysis, facilitated through some quite remarkable, and also quite worrisome, technology. These integrated systems are characterized as having a kind of “artificial intelligence” that is continually being expanded through a variety of analytical techniques referred to as “machine learning.” Because of their complexity and inscrutability, the information and recommendations these increasingly autonomous systems generate, and the decisions they actually implement, are no longer subject to the kinds of oversight and regulatory control that I believe are essential for democratic governance.
Is it possible for individuals to protect themselves from corporate and/or government surveillance? If so, how?
This is a complicated question. The problem is that people can’t protect themselves from a threat unless they recognize that there actually is a threat, and that it is something they should take seriously. There has been considerable success in what I have seen as an effort by government and corporate users of surveillance technology to convince data subjects that all this data is being used to improve the quality of their lives. In addition, there is widespread application of what critical observers refer to as “dark patterns,” digital design strategies used to trick people into doing things they did not actually intend to do, which further weakens our ability to defend ourselves. Much of this manipulative activity has led to what Nora Draper and Joseph Turow have called “the corporate cultivation of digital resignation.”
On the other hand, there remains the possibility of an increase in the development and popular acceptance of “trusted algorithmic assistants.” The primary function of these devices is to warn us and, to the extent possible, to intervene and limit the collection of transactional, behavioral, and biometric data by cameras and environmental sensors as we make our way through public and private spaces.
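As a rough illustration of what such an assistant might do, here is a short Python sketch; the blocklist, function names, and policy options are hypothetical, not a description of any existing product:

```python
# A hypothetical sketch of a "trusted algorithmic assistant" that reviews
# outgoing requests on a user's behalf, warning about or blocking known
# data collectors. The blocklist and API are invented for illustration.
from urllib.parse import urlparse

TRACKER_BLOCKLIST = {"tracker.example.com", "ads.example.net"}  # assumed

def review_request(url: str, policy: str = "warn") -> bool:
    """Return True if the request should be allowed to proceed."""
    host = urlparse(url).hostname or ""
    if host in TRACKER_BLOCKLIST:
        if policy == "block":
            print(f"blocked request to {host}")
            return False
        print(f"warning: {host} is a known data collector")
    return True

review_request("https://tracker.example.com/pixel?uid=123")       # warns
review_request("https://ads.example.net/beacon", policy="block")  # blocks
```

In practice such an agent would sit between the user and the network (in a browser, an operating system, or a home router) and enforce a policy the user had actually chosen.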
Are there laws or policies we could (or should) enact to protect individuals' privacy?
The effective use of trusted algorithmic assistants will require the development and implementation of comprehensive regulations constraining government and corporate use of technologies that would limit the functionality of these assistants or agents when used by members of the public.
Members of the European Union have made considerable progress in developing regulatory constraints on uses of technology that harm privacy and limit our autonomy. While the General Data Protection Regulation (GDPR), which took effect in 2018, has made considerable progress along these lines, many problems still have to be addressed. These include overcoming a widespread tendency to think about privacy as an essentially individual concern, when many of the threats we face today arise from our “membership” in “groups,” many of which have been algorithmically identified and about which we are largely unaware.
There are ongoing discussions about the need for specialized regulatory agencies designed to address the kinds of problems emerging in the digital age and with the development of surveillance capitalism. It seems to me that this will need to be an international agency, well staffed, adequately funded, and empowered to address these challenges at a global level.
Does surveillance pose a threat to democracy?
While I have tended to frame my answers about privacy and surveillance in terms of commercial interests, it is important for me to make clear that the surveillance of individuals and of members of communities and groups is routinely used to influence political action in exactly the same way. Segmentation and the targeting of manipulative communications enabled through surveillance are applied to a broad range of democratic participatory activities, ranging from the election of political candidates to support for, or opposition to, public policies.
Data brokers facilitate this targeting through the sale of data and the facilitation of messaging across social media. The analysis of interactions on social media, along with millions of simultaneous online experiments, is being used to deliver the most effective versions or framings of strategically oriented political information to different segments of a relevant population. The same experiments are also used to identify segments of the population that are less persuadable, and therefore should be ignored or bypassed, consistent with the familiar adage “let sleeping dogs lie.”
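The underlying experimental logic can be sketched in a few lines of Python; the segments, framings, and response rates below are invented for illustration, not real campaign data:

```python
# A hypothetical sketch of experiment-driven targeting: for each audience
# segment, find the message framing that moves behavior most, and skip
# segments that no framing moves. All numbers are invented.
responses = {
    # responses[segment][framing] = (hits, impressions) from an online test
    "young_urban": {"fear": (50, 1000), "hope": (90, 1000)},
    "older_rural": {"fear": (22, 1000), "hope": (21, 1000)},
}

BASELINE = 0.02      # assumed response rate with no targeted message
MIN_UPLIFT = 0.01    # segments below this threshold are left alone

for segment, framings in responses.items():
    best_framing, best_rate = max(
        ((framing, hits / shown) for framing, (hits, shown) in framings.items()),
        key=lambda pair: pair[1],
    )
    if best_rate - BASELINE < MIN_UPLIFT:
        print(f"{segment}: not persuadable, let sleeping dogs lie")
    else:
        print(f"{segment}: target with '{best_framing}' framing "
              f"({best_rate:.1%} response rate)")
```

Real systems run versions of this comparison continuously and at enormous scale, which is part of what makes the resulting influence so difficult to observe or contest.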
What are the most important surveillance issues we should be paying attention to going forward?
I believe that the most important surveillance issues we will face in the coming years are what people refer to as sociotechnical developments. There is no doubt that the Internet of Things (IoT), the universe of sensors and networked devices that capture, process, and distribute data and information, represents a clear and present danger. At the same time, there is no way to ignore the fact that many of these connected technologies do serve important social, educational, and political functions that benefit us all. The problem, of course, is distinguishing the beneficial ones from those likely to harm us, or at least to harm those of us who are already the most vulnerable.
We need to come to terms with the difference between concerns about surveillance as the collection of data and concerns about the uses, and the users, of the information and strategic intelligence that can be derived from its analysis. I believe we need to pay far more attention to those uses and users, and to pursue that understanding with regard to the role that increasingly autonomous algorithmic technologies will play, as certain kinds of users develop uses of information that we have barely begun to imagine.
As I suggested earlier, massive investments are leading to the development of technologies that we simply will not understand, which will weaken the role of transparency as a facilitator of democratic, political, and regulatory accountability.
Of course, I am hoping that I will still get to say a bit more at some point about how we will need to come to terms with the threat of an Algorithmic Leviathan emerging in the foreseeable future.