Written by: Kayli Jamieson, SFU Student
Sapere aude! Translated from Latin, it means “dare to know,” and it is a featured phrase in Dr. Sun-ha Hong’s recent book, Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society. Dr. Hong, an assistant professor in SFU’s School of Communication, was the most recent author featured in the Book + Speaker Talk series. He was joined by Dr. Luke Stark, an assistant professor at Western University.
Dr. Hong’s book was released via NYU Press in the summer of 2020, and I excitedly read and reviewed it, as I’m always eager to support a professor in my program. His talk summarized the overarching points of his book perfectly.
More information does not always mean more understanding
Dr. Hong frequently refers to the concept of the ideal “good, liberal subject” in society, where citizens intentionally take the time to be informed. After all, knowing more information will surely lead to a more enlightened, rational decision, right? We may often think about this in the context of democratic elections: the availability of information to citizens, especially via a free press, aids in making decisions about our elected officials. But what about fulfilling this fantasy with “pure” data?
Dr. Hong asserted that we have a fantasy that raw data is pure and objective, which helps “rationalize our decision-making” because we see it as “untainted by human subjectivity.”
However, Dr. Hong pointed to the Snowden case as an example of this flawed logic. Edward Snowden may have revealed ‘raw’ documents to the general population so that we could ingest this new information about government surveillance and decide on the rational thing to do, but this did not guarantee transparency.
The NSA documents themselves are extremely difficult for a non-expert to understand, and Dr. Hong said this is exemplary of how “information compels speculation [ . . . ] While Snowden did generate public awareness, [the documents] also fielded a ton of misinformation and speculation.”
He also argued that if it is transparency we are seeking, whistle-blowers or organizations merely making masses of information available will not achieve it.
The availability of information “only works when we have a healthy normative information environment in place to guide new information in sensible and valid ways.” Platforms and data-driven systems have a “tendency to make data processes more opaque and disconnected from human understanding.”
Some individuals or even entire communities find fascination and pleasure in measuring and tracking every aspect of their lives. I learned from Dr. Hong’s book that a specific community of Quantified Self-ers (QS-ers) exists, flocking to wearable tech and gadgets to track everything from sexual performance to friendships. They have fallen for the ideal of wanting to “know themselves” by ‘owning’ the data and information provided by machines that claim to generate the ‘objective’ facts they would not otherwise know.
This self-tracking once again feeds into the idea of the rational “good liberal subject,” which Dr. Hong argues is framed as “empowering” through the use of “objective data and fancy tech to know yourself better.” Here is where ‘daring to know yourself’ (sapere aude) returns as the ideal of fulfilling a public duty to utilize your own personal understanding.
One popular example he referred to is Fitbit, since the company has started passing its users’ data on to insurance companies “for future recombination and use.” This is the upscaling of datafication for commercial use: organizations incorporating data-driven technology into their core business models. This potential repurposing is, of course, something to be wary of, especially with any smart device or platform we willingly hand our data to.
The reliance upon the fantasy of “pure data” is also laced with myth; Dr. Hong argues data is “always composed of choices” about what exactly is being measured. People often take incomplete data and unverifiable predictions and put them to “work in the name of technological objectivity.”
An example he highlighted is the case of Sami Osmakac, who was flagged as a potential terrorist interested in acquiring guns. An undercover FBI agent approached him, financed his purchase of weapons, taught him how to use them, and encouraged him to do so. In this process, the agent facilitated the creation of the data necessary for Osmakac’s arrest. But since this was a pre-emptive case, would Osmakac have carried out his intentions and fantasies to this extent without the FBI’s intervention?
Such cases are exemplary of how terrorism is often “characterized as a data problem.” If we gather more data, through ever more invasive forms of surveillance, we can “defeat the uncertainty.” Dr. Hong also expanded in his book on how stories like the Osmakac case show what happens with “speculative forms of fact-making,” and the consequences of filling in the gaps.
Dr. Stark, the professor responding to Dr. Hong’s presentation, highlighted a similar argument with how machine-learning techniques in scientific research don’t necessarily produce “science.” There are concerns from some institutions that Artificial Intelligence (AI) is “changing the practice of scientific research.”
This is due to machine learning’s inherently interpretive method of accumulating marginal and seemingly irrelevant details to “reveal clues” or predictive speculations. He spoke of the concern that AI attempts to “claim regularity and predictability and certainty when conceptually, this doesn’t exist.”
Technological objectivity is a myth
Dr. Hong posed some lingering questions: “who is in a position to be able to afford this new power to measure, and who is on the short end of the stick, turning their own bodies into data for the sake of these decision-making systems?”
Over time, the pervasive “smart” machines that track our data (whether via state surveillance or self-surveillance) have produced fabrications of “objective” truths pulled from our quantified selves, truths that are not as reliable as they claim to be. This process narrows down messy data to make certain kinds of truth count, deeply impacting the ways in which we can understand our own bodies, relationships, and lives.
We willingly take part in the transaction of our data to become improved individuals through smart machines that claim to know more about us than we do, using their machinic sensibility to measure data about ourselves that we cannot gather alone. When such fabrications achieve the status of knowledge, they often serve to justify the initial gathering of that data in the first place.
Dr. Hong refers to the well-known exposure of the NSA’s data collection via the Snowden affair, and draws upon multiple examples of state justification for tracking “lone wolves” in the name of their potential futures in terrorism and “what-ifs.”
A line from the opening pages of his book seems to also encapsulate this food for thought: “The moral and political question, then, is not simply whether datafication delivers better knowledge but how it transforms what counts in our society: what counts for one’s guilt and innocence, as grounds for suspicion and surveillance, as standards for health and happiness.”
Dr. Hong’s talk was eye-opening, important, and educational. Perhaps you will engage with some of these concepts in your own reflection the next time you want to participate in self-surveillance.
Sun-ha Hong’s Book + Speaker talk event was recorded and uploaded to YouTube by SFU’s School of Communication, and can be viewed online.
The first two chapters of Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society are available to read online. The book is available to purchase at NYU Press and Indiebound.