Notably, Facebook has created many privacy options around this feature. These include the following:
- Users are notified when they are tagged.
- Users can untag themselves from any photo.
- Users can only tag their friends.
- Users can disable the “Tag Suggestions” feature so that their name will not be suggested automatically.
Some individuals may dislike the change, but Facebook has not done anything wrong. User privacy has not been compromised and users have not come forward to demonstrate actual harm as a result of these new features. Moreover, automatic photo tagging is not unique to Facebook. Picasa, Flickr, iPhoto and others have experimented with this feature in the past and will likely include it in the future.
In fact, tagging photos has proven to be a popular activity on Facebook. As of December 2010, users were adding tags to photos at a rate of 100 million tags per day.
So how much savings does this feature offer? If we assume that the time to tag a photo falls from about 10 seconds to 2 seconds per photo on average, a back-of-the-envelope estimate shows that we can expect to see a large gain in efficiency:
100 million tags x 10 seconds x 30% users = 83,333 hours x $30/hour = $2.5M
100 million tags x 2 seconds x 30% users = 16,667 hours x $30/hour = $500K
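The arithmetic above can be checked with a short script. The inputs (100 million tags per day, a 30 percent U.S. share of users, and a $30/hour labor cost) are the article's own assumptions, not measured values:

```python
# Back-of-the-envelope estimate of daily productivity savings from
# auto-tagging, using the article's assumed figures.
TAGS_PER_DAY = 100_000_000
US_SHARE = 0.30          # share of Facebook users in the U.S.
HOURLY_COST = 30         # assumed employee cost in dollars per hour

def daily_cost(seconds_per_tag: float) -> float:
    """Dollar cost per day of tagging at a given speed (U.S. users only)."""
    hours = TAGS_PER_DAY * seconds_per_tag * US_SHARE / 3600
    return hours * HOURLY_COST

manual = daily_cost(10)       # ~$2.5M per day at 10 seconds per tag
assisted = daily_cost(2)      # ~$0.5M per day at 2 seconds per tag
daily_savings = manual - assisted    # ~$2M per day
annual_savings = daily_savings * 365 # ~$730M per year
print(round(manual), round(assisted), round(daily_savings), round(annual_savings))
```

Note that the $500K figure implies roughly 16,667 hours of tagging time, and the difference between the two scenarios is what yields the $2 million per day estimate.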
So by this estimate we might gain as much as $2 million per day, or $730 million per year, in productivity from this new feature. (If you are interested, the employee costs are from March 2011 BLS data, the 30 percent estimate is the share of Facebook users in the U.S., and the 10-second figure is a rough estimate based on an academic paper.) The Facebook Photo Tagging feature is a classic example of how using information technology (IT) for automation can make processes more efficient. Instead of manually tagging hundreds or thousands of photos, the Facebook Tag Suggestion feature allows users to do this in a few clicks.
So with so much benefit, what explains the outrage? Most of this comes as no surprise: privacy fundamentalists have yet to “Like” a single new Facebook feature. Instead, they are stuck singing a one-note tune, opposing most technical advancements based on the claim that they reduce user privacy.
In this case they are also objecting to a specific technology: facial recognition. Facial recognition is a subset of image recognition, a challenge that computer scientists have spent countless hours trying to solve. Humans are generally very good at this type of task: show us a person in one photo and we can identify them in a second photo; show us a photo of an apple and we can pick out the apple in another photo. But teaching a computer to identify these types of visual patterns is a much more difficult problem. (For those of us who often have trouble putting names to faces, maybe this indicates that we are more machine now than man.)
Over the years, computer scientists have been getting much better at this as algorithms, processors and sensors have all improved. Facial recognition software, while still sometimes generating both false positives and false negatives, works fairly well and continues to get even better. In particular, it works well at identifying one individual out of a relatively small population. This means that Facebook, whose users are likely to be photographing others in their own social network, has a much easier task—it does not have to automatically identify individuals in a photo from the entire universe of Facebook users, but rather only from a particular list of Facebook friends.
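The point about small candidate sets can be made concrete with a toy sketch. This is not Facebook's actual system: it simply matches a query face embedding against a set of stored embeddings by nearest-neighbor distance, where the names and vectors are invented for illustration. Restricting the candidate dictionary to a friend list is what shrinks the search space:

```python
import math

def distance(a, b):
    """Euclidean distance between two face-embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, candidates):
    """Return the candidate whose stored embedding is closest to the query.

    candidates: dict mapping a name to a face-embedding vector. Passing
    only a user's friend list here, rather than all users, both speeds
    up the search and reduces the odds of a look-alike false match.
    """
    return min(candidates, key=lambda name: distance(query, candidates[name]))

# Toy two-dimensional embeddings; real systems use high-dimensional
# vectors produced by a facial recognition model.
friends = {"Alice": [0.9, 0.1], "Bob": [0.2, 0.8]}
print(best_match([0.85, 0.15], friends))  # query lies closest to Alice
```

The same nearest-neighbor logic applied to hundreds of millions of users, instead of a few hundred friends, is a far harder problem, which is the asymmetry the paragraph above describes.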
Facial recognition has many potential benefits. It can be used to improve security, for example, by ensuring that an ATM transaction is tied to a specific individual, in addition to that person’s ATM card and PIN. It can be used to automate the visual authentication of an individual against an identity document. For example, airports have begun to use facial recognition at unmanned clearance gates, allowing individuals to pass through immigration using just a passport. And, in the future, it may even be used to personalize transactions at self-service kiosks or on Minority Report-like advertisements.
Moreover, facial recognition helps advance the state of the art in image recognition. Learning how to do better image recognition could help countless applications function better by letting them understand more about their environment. In particular, augmented reality applications, which overlay metadata on a virtual display of the physical world, can benefit from better object recognition. Already we are seeing the potential of these applications in mobile apps, such as Google Goggles and the Layar browser.
There are some legitimate questions about the use of facial recognition specifically and biometric information generally. For one, it is not clear how this information can be used. Could a social network release an app that lets you discreetly photograph someone to find out his or her name? Could the FBI or DHS license the use of a large database of photographs from a private company that links faces to individual identities? Could a company use videos of users from a site like YouTube to create biometric identifiers from an individual’s gait or voice patterns?
None of these potential applications is necessarily bad, but they highlight the need for companies to establish clear privacy policies around biometric information, i.e., data about human characteristics or behaviors that can be used to uniquely identify an individual. This is not specific to Facebook or to facial recognition, but a general need for organizations to better address a broad category of information that may be used to uniquely identify individuals based on biometrics. For example, the manner in which an individual types on a keyboard has been found to be a unique behavioral biometric identifier. Organizations should be transparent about whether and when this type of information is recorded and converted into a biometric template (i.e., a reference of distinct characteristics). And, as with other personally identifiable information, organizations should protect this data according to the risk of it becoming public and make clear how they use it.
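To make the keystroke example concrete, a "biometric template" might be nothing more than summary timing statistics. The sketch below is hypothetical; the field names and the choice of features (dwell and flight times, two commonly cited keystroke-dynamics features) are illustrative, not any vendor's actual format:

```python
from statistics import mean

def build_template(events):
    """Summarize raw keystroke timing into a toy biometric template.

    events: list of (key, press_time, release_time) tuples in seconds.
    Dwell time is how long each key is held down; flight time is the
    gap between releasing one key and pressing the next.
    """
    dwells = [release - press for _key, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"mean_dwell": mean(dwells), "mean_flight": mean(flights)}

sample = [("a", 0.00, 0.10), ("b", 0.20, 0.35)]
template = build_template(sample)
```

Comparing a fresh typing sample's template against a stored one, for instance by a thresholded distance, is what turns raw keystrokes into an identifier, which is why transparency about when such data is recorded matters.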
Most of the concerns about facial recognition are about how it can be used for surveillance or suspect identification, such as the use of DMV photos by the FBI. People are wary of government or private companies tracking where they go. While some potential harms are speculative, they are not unimaginable. With enough data, for example, it may someday be possible to set up a computer outside an abortion clinic to identify who goes inside. However, this information, although harder to obtain, was still available in the pre-digital era. Yes, it took a lot more legwork, but the information and the potential for abuse still existed.
It may be that in the future we will all have a familiar face. Anonymity while in public is never a certainty. Individuals never know if they will encounter somebody they know while in public (hence the expression “it’s a small world”). In the past, most individuals would not have found anonymity even in smaller communities. And celebrities already experience this lack of anonymity today when they are recognized in public. So it may be that one day most of us are like celebrities, unable to go anywhere without someone recognizing us.
The problem is that privacy advocates often present a false choice when critiquing technology. The potential privacy harm from surveillance exists regardless of whether the technology is used or not. We cannot, and we should not, try to turn back the clock. Surveillance already exists and most of us accept it in exchange for security and convenience. I don’t mind that a grocery store uses cameras to prevent shoplifting because I know that keeps down prices for customers like me. If using technology makes the process of tracking and identifying shoplifters easier, we all benefit.
Technology, no matter how simple or complex, is just a tool and it can be used for good and bad purposes. If you give a school child a pencil, he might create a stunning drawing or he might just poke his classmate. What we want to prevent are abuses of technology.
This gets back to why the focus of privacy legislation should be to protect individuals from harm, not to tell companies how to use data. So for those who are concerned about the misuse of technology, let’s have a conversation about how to update harassment and defamation laws to ensure individuals’ rights are protected. No child should be bullied and no individual should feel threatened because they appeared in a public space. But protection against these harms should be independent of technology.