This op-ed originally appeared in ComputerWorld.
Last week, Brendan Eich, the chief technology officer and senior vice president of engineering at Mozilla, announced that the organization is planning to block third-party cookies in future versions of the Firefox Web browser. In addition, the Center for Internet and Society (CIS) at Stanford Law School announced that it has created a new organization called the “Cookie Clearinghouse,” which will begin publishing blacklists for websites based on whether it believes a website’s particular usage of cookies “makes logical sense.” Mozilla will use these blacklists to decide which cookies to accept or deny.
Large-scale blocking of third-party cookies may have profound negative consequences for the future of the Internet. There are three main concerns. First, this practice will cost many websites online advertising revenue, and thus lead to less free content available to consumers. Second, it will cut off many legitimate business models for companies that collect and aggregate user data across the Internet to understand user behavior and design better websites, content, and features. Third, it will limit the functionality of websites, both
The California Senate’s Public Safety Committee passed SB 255 earlier this week to prohibit the distribution of “revenge porn”—the non-consensual distribution of intimate images of individuals. Groups like Without My Consent have been campaigning to get state laws changed to make this a crime. As the bill’s chief sponsor, State Senator Anthony Cannella, says, “People who post or text pictures that are meant to be private as a way to seek revenge are reprehensible. Right now, there is no tool for law enforcement to protect the victims.”
The legislation is fairly straightforward, so I will quote it directly:
“This bill would make it a misdemeanor for any person who, with the intent to cause substantial emotional distress or humiliation to another person, by means of an electronic communication device, and without consent of the other person, electronically distributes, publishes, emails, hyperlinks, or makes available for downloading nude images of the other person along with personal identifying information of the other person.”
This bill is a common-sense solution to an incredibly offensive activity. Indeed, the Senate’s Public Safety Committee approved the legislation unanimously. As I have argued
In the aftermath of the Boston Marathon bombing, we’ve seen a lot of discussion about the crucial role that the abundance of surveillance cameras and smartphones played in finding the suspects. The general consensus seems to be that these technologies were useful. For example, New York Mayor Michael Bloomberg said, “The Boston bombing is a terrible reminder of why we’ve made these investments—including camera technology that could help us deter an attack, or investigate and apprehend those involved.” And Chicago Mayor Rahm Emanuel similarly endorsed surveillance cameras when he said, “I will say, as I always have, because we have continued to put cameras throughout the city for security … purposes, they serve an important function for the city in providing the type of safety on a day-to-day basis—not just for big events like a marathon, but day-to-day purposes.”
Not surprisingly, privacy advocates worry that such a high-profile display of the benefits of these camera systems will lead to more public acceptance and adoption, and so they are trying to minimize the value of these systems by arguing that this was a rare event. Jeff Chester, the executive director of
I’ve written before about how well-intentioned privacy laws and regulations can come at the expense of innovation and harm consumers (see here, here, and here), but there is not a lot of data available to show the exact impact. A new feature on Netflix, however, provides a good case study of how at least one of these privacy rules affects innovation.
First, some background. Today, Netflix announced a new feature—“Netflix Social”—which lets users share the TV shows and movies they watch on Netflix with their friends on Facebook. As Cameron Johnson, the director of product innovation at Netflix, explains on the official Netflix blog:
By default, sharing will only happen on Netflix. You’ll see what titles your friends have watched in a new “Watched by your friends” row and what they have rated four or five stars in a new “Friends’ Favorites” row. Your friends will also be able to see what you watch and rate highly… You are in control of what gets shared. You can choose not to share a specific title by clicking the “Don’t Share This” button in the player.
The past two weeks have seen two important announcements come out of the Federal Trade Commission (FTC). First, Commissioner Edith Ramirez was designated to replace outgoing Commissioner Jon Leibowitz as Chairman. Second, identity theft has been reported as the top consumer complaint to the FTC for the 13th year in a row.
Why are these two announcements related? It’s simple. As Chairwoman Ramirez considers how she will lead the FTC throughout her term, it’s worth looking at where the FTC can help Americans the most, particularly in an era of limited budgets. And the most recent data from the FTC overwhelmingly shows that the Commission’s top priority should be identity theft.
The latest data on identity theft comes from the FTC’s recently released Consumer Sentinel Network Data Book for 2012. The Consumer Sentinel Network (CSN) is a database of consumer complaints received by a variety of sources including the FTC, state law enforcement agencies, state attorneys general, the FBI, the Consumer Financial Protection Bureau, the U.S. Postal Inspection Service, and the Better Business Bureau. While there are limits to how the data should be used
In my first post on the Location Privacy Protection Act of 2012, I addressed the claims that the legislation is necessary because some companies may share a user’s location data without that user’s knowledge, and some companies may share location data about children without their parents’ knowledge. In this post, I will address Sen. Franken’s argument that this legislation is needed to prevent domestic violence abusers from using “stalking apps” to track their victims.
The attempt to link honest uses of location data with domestic violence is a bit disheartening. Let’s face it—nobody is against preventing domestic violence, so this unnecessarily turns a rather technical debate into a deeply emotional one. But because this claim has gotten a lot of attention, I want to dig into it a bit and address it on its merits.
First, for all the talk about this problem, the evidence for the use of stalking apps by stalkers and harassers is somewhat thin. There are some notable cases of victims being stalked, but it is not clear how prevalent this is in practice. However, given the increased use of smartphones, I would suspect that
In many ways this is a
Over a decade ago, President Clinton ordered the Department of Defense to discontinue “Selective Availability,” the intentional degrading of the civilian Global Positioning System signal, in an effort to give all businesses and residents in America access to the numerous benefits of location-based technology. This has been an enormously successful policy decision that has unleashed a wide range of innovations for consumers and businesses that use geo-location data in sectors as diverse as transportation, agriculture, and public safety. Today, location can be determined on mobile devices, with various degrees of precision, from a variety of data including GPS, cell towers, Wi-Fi signals, and IP addresses. Unfortunately, congressional legislation that would prohibit companies from collecting or using location information from electronic devices without first obtaining consent from the user might stall many of these benefits.
The bill in question is the Location Privacy Protection Act of 2012, which passed the Senate Judiciary Committee in late December. This legislation would require any company that discloses geo-location information collected from an electronic device to another entity, including its affiliates, to identify these entities and obtain user consent. This is particularly
The Federal Trade Commission (FTC) yesterday released its staff report on facial recognition technologies, in which it warned of potentially “significant privacy concerns” and called on companies to respect the privacy interests of consumers by implementing FTC-recommended “best practices.”
First, as I have written before, policymakers should not create technology-specific rules for facial recognition. Facial recognition belongs to a larger class of biometric technologies that should all be treated the same. In addition, facial recognition has many benefits, from improving security to automating tasks to personalizing transactions.
That said, there is nothing wrong with the federal government working with industry and advocacy groups to develop voluntary best practices that protect privacy and spur innovation. But these best practices should be based on sound knowledge, such as a clear understanding of technology and an accurate representation of the world. What I’d like to address here is the myth, repeated in the FTC report, that facial recognition technology “may end the ability of individuals to remain anonymous in public places.” The FTC identifies this particular privacy risk as one of the major privacy concerns of the technology. However, contrary to the FTC’s
A survey funded by Nokia and conducted at the Berkeley Center for Law and Technology shows what has become increasingly apparent to those who follow this line of research: some of the most prominent academic researchers have ceased to retain even a veneer of objectivity in their research on privacy. The authors, Chris Hoofnagle, Jennifer Urban and Su Li, state that their survey shows that “Americans have a low level of knowledge about [Do Not Track], but prefer that it mean that websites do not collect tracking data.”
I won’t mince words here: this is shoddy research.
There are two main survey questions in their study related to Do Not Track (for more on this proposal and why it is a bad idea, see this or this). The first is a question about whether people have even heard of the Do Not Track proposal. The survey question reads, “Policymakers are considering creating a ‘do not track’ option for the internet. Have you heard of proposals for a ‘do not track’ system, or not?” Thirteen percent of respondents indicated that they had heard of the proposal; eighty-seven percent had not.