Proposed Privacy Law for Employers Using Facebook a Step in the Right Direction


Various news outlets recently published articles discussing an emerging trend in hiring practices: employers asking potential employees for their social network or email account credentials so that they can review the candidate's private online profile. Not surprisingly, many people have objected to this practice, noting the intrusiveness of the request and the inherent coerciveness of asking for this information during a job interview. In response, Sen. Blumenthal (D-CT) announced yesterday that he was planning to introduce legislation to prohibit employers from asking potential employees for their social network login credentials.

Privacy activists often argue that government should concern itself with the mechanics of how the private sector manages data rather than prevent harmful uses of data, as the Blumenthal legislation attempts to do. But this is like asking legislators to write laws that restrict how people move their arms and legs, rather than laws that prohibit assault and battery. The reality is that privacy regulations, no matter how well-intentioned, cannot guarantee privacy or prevent accidental disclosures or theft of personal data. As I have argued before, legislators should focus on restricting uses of data that harm individuals (e.g., credit discrimination), rather than restricting particular technologies or practices (e.g., behavioral targeting). Instead of fruitlessly trying to lock down data, legislators should focus on creating protections that minimize or eliminate harm to consumers if private data becomes public.

By this standard, the proposed legislation appears to fall short of the ideal, but it is definitely a step in the right direction. As with most legislation, the details matter. Without the text of the legislation in hand, it is impossible to know whether this proposal would restrict legitimate background checks or prohibit legitimate uses of social networks. The merits of this particular proposal are certainly open for debate. For example, any law should be technology neutral and protect prospective employees from any unfairly coercive disclosure of information, regardless of how it is obtained. Any law should also be necessary, and not something that better enforcement of existing law would take care of. But the fact that a policymaker has identified a specific information practice that harms users and then proposed a policy restricting a narrow and specific use of information to prevent that harm, rather than trying to impose broad new restrictions on businesses, is a positive development in the privacy debate.

The concept of protecting users from harm, rather than preventing certain behaviors that might lead to harm, applies in the real world as much as it does in the online one. For example, a law preventing an employer from asking a woman if she is pregnant is good; a law preventing an employer from refusing to hire a woman because she is pregnant is much better. Of course, the latter is more difficult to monitor and prevent, but at the end of the day, people are concerned with harmful uses of their data, not how the data is collected. This approach also affords individuals stronger and less ambiguous protection. (A law preventing employment discrimination on the basis of pregnancy would also prevent the employer from using this information if it were learned from a third party, such as Target.)

Consumers benefit most from clear laws that restrict harmful uses of their data, such as physical harm, unfair discrimination, and identity theft. While the privacy "bill of rights" released earlier this month contained useful principles, a better consumer privacy bill of rights would list which practices are prohibited, such as unfair discrimination when buying health insurance, applying for a job, renting a home, or applying for a credit card. Any gaps in current law should be identified and proposals drafted to fill them. After all, from a consumer perspective, it makes more sense to debate how information should and should not be used in a particular context than to debate how an organization should manage the data it collects. The reason is that once data becomes public, it is virtually impossible to make it private again (just ask any celebrity who has had a sex tape leaked online). Privacy regulations that only limit how certain data is collected and handled are therefore virtually useless once data becomes public.

Rather than having Congress attempt to micromanage the data practices of every website on the Internet, policymakers should follow Sen. Blumenthal's lead and pursue policies that give consumers clear protection without imposing unnecessary costs or restrictions on the Internet economy.


About the author

Daniel Castro is a Senior Analyst with ITIF specializing in information technology (IT) policy. His research interests include health IT, data privacy, e-commerce, e-government, electronic voting, information security, and accessibility. Before joining ITIF, Mr. Castro worked as an IT analyst at the Government Accountability Office (GAO), where he audited IT security and management controls at various government agencies and contributed to GAO reports on the state of information security at a variety of federal agencies. He has a B.S. in Foreign Service from Georgetown University and an M.S. in Information Security Technology and Management from Carnegie Mellon University.