The Myth of Anonymity

The Federal Trade Commission (FTC) released its staff report yesterday on facial recognition technologies, in which it warned of potentially “significant privacy concerns” and called on companies to respect consumers’ privacy interests by implementing FTC-recommended “best practices.”

First, as I have written before, policymakers should not create technology-specific rules for facial recognition. Facial recognition belongs to a larger class of biometric technologies that should all be treated consistently. In addition, facial recognition has many benefits, from improving security to automating tasks to personalizing transactions.

That said, there is nothing wrong with the federal government working with industry and advocacy groups to develop voluntary best practices that protect privacy and spur innovation. But these best practices should be based on sound knowledge, such as a clear understanding of the technology and an accurate representation of the world. What I’d like to address here is the myth, repeated in the FTC report, that facial recognition technology “may end the ability of individuals to remain anonymous in public places.” The FTC identifies this as one of the technology’s major privacy concerns. However, contrary to the FTC’s claims, we do not live in an anonymous world, and we should not create policies based on this false assumption. While individuals do have a right to privacy, individuals do not have a right to anonymity.

I am not making the claim that anonymity is not useful or important in some situations. Certainly, anonymity in publishing, for example, has a long and important history in political discourse, from pamphlets like Common Sense to Internet blogs by Arab Spring revolutionaries. But anonymity is also not an unqualified good. Anonymity (or the sense of anonymity) has facilitated massive amounts of hate speech and bullying on the Internet. Many websites do not allow anonymous comments precisely because of the lack of civility that anonymity tends to breed. And even those who participate in seemingly anonymous forums have learned that the veil of anonymity on the Internet can be lifted. Just last week, the reviled Reddit troll “Violentacrez” was identified by the website Gawker and lost his job when his employers learned about his activities.

Similarly, the FTC report is not making an argument that anonymity is good or bad (although I suspect some of the report’s writers would say that it is good). Instead, it is arguing that individuals have anonymity in public spaces, facial recognition technology threatens this anonymity, and thus businesses should take actions to ensure that individuals can remain anonymous. But if this is not true (i.e., if individuals are not anonymous in public spaces), then the FTC’s basis for making these recommendations is unfounded. After all, why should the government urge businesses to take actions to preserve something that does not exist?

So let’s explore the privacy and anonymity implications of this technology for users in various scenarios.

Scenario 1: Someone takes my photo in a public place.

This scenario is not directly related to facial recognition technology, but it is a good starting place. It is worth remembering that individuals do not need permission to take photos in public places. Although Samuel Warren and Louis Brandeis famously worried in 1890 about the potential violation of privacy from the invention of low-cost cameras by the Eastman Kodak Company, individuals enjoy the right to take photographs in public places. Claiming that this is a privacy violation is like claiming that it is a privacy violation if someone sees another person walking around in public. The reality is that people in public are photographed, sometimes without their permission or knowledge. (If you need evidence, Google “photo bomb.”)

Scenario 2: Someone takes my photo and guesses a few attributes about me.

Facial recognition technology can be used to identify faces (i.e., people) in a photo and then make assumptions about those people. For example, this might be used to deliver in-store advertising to individuals based on demographic information. However, people routinely make assumptions about each other when they meet, such as the other person’s age, weight, and gender. A salesperson in a store, for instance, might recommend a certain product based on an assumption about the customer’s gender. The privacy risks for an individual do not differ depending on whether these assumptions are made by a computer or a human. There may be some interesting social questions about how this information is used (and, as a recent Toyota commercial reminds us, these assumptions may be wrong), but this is not a privacy violation.

Image: Stav Strashko, an androgynous male model, in a Toyota commercial.

Scenario 3: Someone takes my photo and learns my name.

The typical example of this scenario (and indeed the example cited in the FTC report) is a mobile app that allows users to identify strangers on the street. The FTC report states that “companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent.”

First, I have yet to hear a convincing argument for how simply knowing someone’s name violates his or her privacy. Second, while not everyone is routinely identified by name in public, it is far from uncommon. For example, anyone who has sat in the waiting room of a doctor’s office (or a coffee shop, restaurant, airport gate, etc.) knows that you can learn the names and faces of many individuals. Or consider someone like Zeddie Little (AKA “the Ridiculously Photogenic Guy”), who became an Internet sensation after a photo of him went viral. There are also multiple sources of public information that link photos to names. For example, names and booking photos (i.e., mug shots) are often released as public information.

In this report, the FTC recommended that all companies, regardless of whether they intend to implement facial recognition technologies, “should consider putting protections in place that would prevent unauthorized scraping of the publicly available images it stores in its online database.” This is probably an unwise recommendation for a number of reasons. First, it is unclear what technical measures could actually achieve this, since by definition these are publicly available images. Second, by discouraging data sharing, this could create a barrier to using this data for innovative applications. Third (and not surprisingly), the FTC does not follow its own recommendation. Like most government websites, the FTC’s site publishes names and photos of its senior staff, and photos of the FTC Commissioners could easily be “scraped” from it. Indeed, many websites, including those of large corporations, small businesses, non-profits, and educational institutions, contain names and photos of employees.

Photo: Zeddie Little, the “Ridiculously Photogenic Guy”

Scenario 4: Someone takes my photo and learns my name and address.

First, it is important to note that some information, such as an individual’s address, is sometimes part of the public record, so it may not be truly private information. However, if a company is using information unlawfully (e.g., if a company collects information from individuals and then uses it or releases it in a way that violates the company’s stated privacy policy), then the FTC can and should take enforcement action against the company for engaging in deceptive practices. If the information has been obtained lawfully, then the question is how the information is used. If John Doe learns that Jane Smith lives at 101 Elm Street and does nothing with that information, is there really a “privacy concern”?

There is a legitimate concern if someone does something illegal with that information, such as using it to harass the individual. But now this scenario is no longer about privacy and anonymity; it is about harm. In this case, the harm is stalking or harassment, and there are laws already on the books that address this type of criminal behavior.

The typical retort by privacy advocates to this argument is, “Well, even if facial recognition technology would not create a new risk for individuals, wouldn’t it make it easier for individuals to stalk and harass others?” Yes, it probably would, but many technologies make life easier for stalkers, whether binoculars, cameras, phones, pay phones, caller ID blocking, cars, sunglasses, trench coats, or even pencil and paper. I am not trying to trivialize the threat of stalkers, but I am pointing out the irrationality of treating facial recognition technology any differently than these other products. And not even the FTC is arguing that laws on stalking and harassment are ineffective.

As I’ve written before, “Anonymity while in public is never a certainty. Individuals never know if they will encounter somebody they know while in public (hence the expression ‘it’s a small world’).” The main problem with the FTC’s report is that it justifies its recommendations by claiming that facial recognition technology may eliminate anonymity in public while never justifying this (false) assumption about whether anonymity even exists.

As Commissioner Rosch notes in his dissenting statement, the FTC should not be using its “unfairness” authority in this area. This authority is limited to an act or practice that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”

My point here is that it is simply not true that individuals have an expectation of anonymity in public spaces. Rather than attempting to create rules based on false assumptions about anonymity, the FTC would be more effective if it promoted a robust harms-based approach to the use of biometric information that works to identify and close any potential gaps in current law. This would help ensure that individuals are fully protected against potential abuses, while not creating roadblocks to innovation for new technology.

 

About the author

Daniel Castro is a Senior Analyst with ITIF specializing in information technology (IT) policy. His research interests include health IT, data privacy, e-commerce, e-government, electronic voting, information security and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst at the Government Accountability Office (GAO) where he audited IT security and management controls at various government agencies. He contributed to GAO reports on the state of information security at a variety of federal agencies. He has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University.