COMMENTARY: DataU: How Much Are You Worth Online?

Lily Haak, University of Florida

In 2010, before the wave of hyper-consumerism and big data engulfed digital markets, author and academic Debra Satz penned “Why Some Things Should Not Be for Sale.” Satz identifies the qualities of repugnant markets—such as those for human kidneys, for services that exploit women’s reproductive capacities, or for child labor—and characterizes them as both economically unviable and ethically compromising. Since Dr. Satz published her book, the tech sector has progressed considerably, with social media maturing into a full-fledged marketplace. But as consumers reap the benefits of remote online browsing and enhanced accessibility, they inadvertently sacrifice components of their privacy. In this new digital landscape, how do we assess the risks of selling our data—and are we ourselves being bought and sold?

A key financial concern for privacy advocates is that producers—the consumers who generate data when they engage with digital platforms—do not directly benefit from the sale and purchase of their own material; the content they produce is aggregated and sold almost exclusively by data vendors and brokers. In the name of enhancing user experience, companies that collect and distribute data have returned modern technology to a pseudo-feudal system, one sustained largely by illiteracy and, to a degree, by manipulation. Informed consent requires that an agent be fully informed of the conditions surrounding a decision before making it. Barocas and Nissenbaum identify informed consent as invaluable to respecting individuals as autonomous decision makers with rights of self-determination, including rights to make choices, take or avoid risks, express preferences, and, perhaps most importantly, resist exploitation. However, the informed-consent approach is often ineffective online because (a) terms and conditions are abstract to the average user and (b) their ambiguity disincentivizes reading them in full. Legality has devolved into the click of a button.

This may explain why the most common privacy violation we experience is often perceived as harmless: cookies. Despite the innocuous name, cookies are digitized receipts that a web server sends to your browser to track online behavior—they contain unique identifying data ranging from login details to your search history. Social media platforms, which rely on consumer engagement to collect data, are often more invasive still. In 2018, New York Times reporter Brian Chen downloaded a copy of his Facebook data. Mr. Chen soon discovered that Facebook had access to all 764 of his contacts, and that roughly 500 advertisers had his contact information, including his name, email address, and phone number. Companies leverage user-generated data to [1] construct unique data portraits—interests, demographics—that help advertisers run more effective ads, and [2] share that data business-to-business with advertisers and invite them to partake in real-time bidding, or RTB.
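The round trip described above—a server assigning the browser an identifier that the browser then replays on every later visit—can be sketched in a few lines of Python using the standard library's `http.cookies` module. The cookie name `visitor_id` and its value are invented for illustration; no real platform's naming is implied.

```python
from http.cookies import SimpleCookie

# Server side: mint a unique identifier for this visitor and serialize
# it as the Set-Cookie header the server would attach to its response.
server_cookie = SimpleCookie()
server_cookie["visitor_id"] = "a1b2c3d4"
server_cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist a year
set_cookie_header = server_cookie.output(header="Set-Cookie:")

# Browser side: parse what the server sent and replay the identifier on
# every subsequent request to the same domain, enabling tracking.
browser_jar = SimpleCookie()
browser_jar.load(server_cookie.output(header="").strip())
replayed_id = browser_jar["visitor_id"].value

print(set_cookie_header)  # the header the server emits
print(replayed_id)        # the identifier the browser sends back
```

Because the browser returns the same identifier on each visit, the server can stitch individual page loads into a longitudinal behavioral profile—which is what makes even this "harmless" mechanism a tracking instrument.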

The Electronic Frontier Foundation (EFF) defines real-time bidding as the “process by which publishers auction off ad space in their apps or on their websites. In doing so, they share sensitive user data—including geolocation, device IDs, identifying cookies, and browsing history—with dozens or hundreds of different ad tech companies.” This information can be used to sell anything, from shoes to presidencies. A 2012 ProPublica investigation found that “political campaigns used onboarding to bombard voters with ads based on their party affiliation and donor history”—a practice that foreshadowed Facebook’s 2014 data flub, which allowed Cambridge Analytica to harvest the private information of more than 50 million users. Since that investigation, X and Facebook have begun offering onboarding services that let advertisers find their customers online. Google, currently the subject of a DOJ antitrust case over its dominance of digital advertising technologies, has acquired 214 companies since its inception. Its most notable acquisition is the 2007 purchase of DoubleClick, an advertising platform capable of assembling pseudonymous user profiles and serving behaviorally targeted ads. Paired with Google’s vast trove of IP addresses, search histories, screen-time metrics, and click-through rates—and even health information after the purchase of Fitbit in 2019—the possibilities for privacy breach are virtually endless.
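The auction the EFF describes can be reduced to a toy sketch: the publisher broadcasts a visitor's profile to many ad-tech firms, each returns a bid, and the highest bidder wins the impression. The sketch below assumes a simplified second-price rule; the bidder names, prices, and profile fields are invented for illustration and do not reflect any real ad exchange.

```python
def run_auction(user_profile, bids):
    """Pick the highest bidder; under a second-price rule they pay
    the runner-up's bid (or their own, if no one else bid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# The sensitive part is the broadcast itself: every bidder receives the
# profile, win or lose, and may retain it.
profile = {"geolocation": "Gainesville, FL", "interests": ["running shoes"]}
bids = {"ShoeAdsInc": 0.42, "GenericDSP": 0.31, "RetargeterX": 0.55}

winner, price = run_auction(profile, bids)
print(winner, price)
```

Note the design point the code makes visible: the losing bidders learn just as much about the user as the winner does, which is why RTB leaks data to "dozens or hundreds" of companies per page load.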

Informed consent derives from the idea that privacy is control over information about oneself—that choosing to share our information is a right synonymous with choosing not to, and that we are unequivocally entitled to transparency when we participate in digital environments. Privacy advocates largely evaluate a privacy strategy against the principle of reachability—the ability to assert a presence, in any capacity, in a person’s life on the basis of their data, and to associate, amass, and aggregate facts on that basis. As currently practiced, informed consent undermines effective privacy protection by (a) being acquired under false pretenses and (b) failing to prevent reachability. For consent to become a more productive privacy strategy, it would require [1] applications that enhance understanding and sustain attention, as well as [2] mechanisms that better model consent and represent our rights to privacy. Data is incredibly valuable, but not nearly as important as our right to own and protect the intellectual property we produce; consent is the newest ethical right being challenged by an explosion of user-oriented applications. Even if consent is modeled appropriately, should our data be sold at all? I ask humbly: What would Satz say?