I’ve written before about how well-intentioned privacy laws and regulations can come at the expense of innovation and harm consumers (see here, here, and here), but there is not a lot of data available to show the exact impact. However, a new feature on Netflix provides a good case study of how at least one of these privacy rules affects innovation.
First, some background. Today, Netflix announced a new feature—“Netflix Social”—which lets users share the TV shows and movies they watch on Netflix with their friends on Facebook. As Cameron Johnson, the director of product innovation at Netflix, explains on the official Netflix blog:
By default, sharing will only happen on Netflix. You’ll see what titles your friends have watched in a new “Watched by your friends” row and what they have rated four or five stars in a new “Friends’ Favorites” row. Your friends will also be able to see what you watch and rate highly… You are in control of what gets shared. You can choose not to share a specific title by clicking the “Don’t Share This” button in the player.
The past two weeks have seen two important announcements come out of the Federal Trade Commission (FTC). First, Commissioner Edith Ramirez was designated as Chairwoman, replacing outgoing Chairman Jon Leibowitz. Second, identity theft has been reported as the top consumer complaint to the FTC for the 13th year in a row.
Why are these two announcements related? It’s simple. As Chairwoman Ramirez considers how she will lead the FTC throughout her term, it’s worth looking at where the FTC can help Americans the most, particularly in an era of limited budgets. And the most recent data from the FTC overwhelmingly shows that the top priority for the Commission should be identity theft.
The latest data on identity theft comes from the FTC’s recently released Consumer Sentinel Network Data Book for 2012. The Consumer Sentinel Network (CSN) is a database of consumer complaints received from a variety of sources, including the FTC, state law enforcement agencies, state attorneys general, the FBI, the Consumer Financial Protection Bureau, the U.S. Postal Inspection Service, and the Better Business Bureau. While there are limits to how the data should be used
In my first post on the Location Privacy Protection Act of 2012, I addressed the claims that the legislation is necessary because some companies may share a user’s location data without that user’s knowledge and some companies may share location data about children without their parents’ knowledge. In this post, I will address Sen. Franken’s argument that this legislation is needed to prevent domestic violence abusers from using “stalking apps” to track their victims.
The attempt to link honest uses of location data with domestic violence is a bit disheartening. Let’s face it: nobody is against preventing domestic violence, so this unnecessarily turns a rather technical debate into a deeply emotional one. But because this claim has gotten a lot of attention, I want to dig into it a bit and address it on its merits.
First, for all the talk about this problem, the evidence for the use of stalking apps by stalkers and harassers is somewhat thin. There are some notable cases of victims being stalked, but it is not clear how prevalent this is in practice. However, given the increasing use of smartphones, I would suspect that
In many ways this is a
Over a decade ago, President Clinton ordered the Department of Defense to discontinue “Selective Availability”, the intentional degrading of the civilian Global Positioning System signal, in an effort to allow all businesses and residents in America to have access to the numerous benefits of location-based technology. This has been an enormously successful policy decision that has unleashed a wide range of innovations for consumers and businesses that use geo-location data in sectors as diverse as transportation, agriculture, and public safety. Today location can be determined on mobile devices, with various degrees of precision, from a variety of data including GPS, cell towers, Wi-Fi signals, and IP addresses. Unfortunately, Congressional legislation that would prohibit companies from collecting or using location information from electronic devices without first obtaining consent from the user might stall many of these benefits.
The bill in question is the Location Privacy Protection Act of 2012, which passed the Senate Judiciary Committee in late December. This legislation would require any company that discloses geo-location information collected from an electronic device to another entity, including its affiliates, to identify these entities and obtain user consent. This is particularly
The Federal Trade Commission (FTC) released its staff report on facial recognition technologies yesterday, warning of potentially “significant privacy concerns” and calling on companies to respect the privacy interests of consumers by implementing FTC-recommended “best practices.”
First, as I have written before, policymakers should not create technology-specific rules for facial recognition. Facial recognition belongs to a larger class of biometric technologies, all of which should be treated the same. In addition, facial recognition has many benefits, from improving security to automating tasks to personalizing transactions.
That said, there is nothing wrong with the federal government working with industry and advocacy groups to develop voluntary best practices that protect privacy and spur innovation. But these best practices should be based on sound knowledge, such as a clear understanding of technology and an accurate representation of the world. What I’d like to address here is the myth, repeated in the FTC report, that facial recognition technology “may end the ability of individuals to remain anonymous in public places.” The FTC identifies this particular privacy risk as one of the major privacy concerns of the technology. However, contrary to the FTC’s
A survey funded by Nokia and conducted at the Berkeley Center for Law and Technology shows what has become increasingly apparent to those who follow this line of research: some of the most prominent academic researchers have ceased to retain even a veneer of objectivity in their research on privacy. The authors, Chris Hoofnagle, Jennifer Urban and Su Li, state that their survey shows that “Americans have a low level of knowledge about [Do Not Track], but prefer that it mean that websites do not collect tracking data.”
I won’t mince words here: this is shoddy research.
There are two main survey questions in their study related to Do Not Track (for more on this proposal and why it is a bad idea, see this or this). The first is a question about whether people have even heard of the Do Not Track proposal. The survey question reads, “Policymakers are considering creating a ‘do not track’ option for the internet. Have you heard of proposals for a ‘do not track’ system, or not?” Thirteen percent of respondents indicated that they had heard of the proposal; eighty-seven percent had not.
As this is a presidential election year, it’s not surprising that the “silly season” of politics has been extended into the baseball playoffs. A group of political extremists organized by the Competitive Enterprise Institute (CEI) has filed a complaint with the FCC over the privacy disclosures for an old consumer broadband measurement program. This isn’t the program that the Commission conducts every year with SamKnows that leads to an annual report comparing actual broadband speeds to advertised ones, but a program that was developed by the National Broadband Plan team some three years ago to provide it with a snapshot of performance.
The letter has me wondering whether the advocates (a) have just come out of a three-year coma, and (b) have any idea at all how the Internet works. There are also some distortions of law that will make attorneys wince. Please read the letter, but sit down first so you don’t hurt yourself rolling on the floor laughing at its circular logic.
Like many government programs that collect information that might be considered personal and sensitive, the FCC’s broadband measurement program
Many elected officials are in favor of more online privacy…except when it comes to how they use data to target voters and raise money. While neither presidential candidate has made online privacy issues a part of his campaign, the debate over privacy is certainly a hot topic in Washington. In addition, both the Obama and Romney campaigns have released mobile apps, and transparency of mobile apps has been the focus of the initial multistakeholder processes for privacy initiated by the NTIA. With that in mind, I decided to investigate the privacy practices of the two presidential campaign websites.
There are some clear differences between the privacy policies on the campaign websites. For example, the Obama for America website has much more detailed disclosure of its practices and uses of information. Perhaps this is not surprising since transparency is one of the key principles in the Obama Administration’s proposed Consumer Privacy Bill of Rights. The Obama for America campaign also appears to be using more services that collect and use data on its website.
In contrast, the Romney for President campaign website has fewer cookies and a shorter, less-detailed privacy
The Wall Street Journal (WSJ) reports that the Federal Trade Commission (FTC) is close to reaching a record settlement with Google over charges that it tracked Apple Safari web browser users. Google had earlier signed a 20-year consent decree in which it agreed not to misrepresent its privacy practices to consumers. Tracking Apple Safari users appears to be in violation of that agreement because Google had posted a statement in its online help center stating that these Safari users would not be tracked. The WSJ reports that Google will pay a penalty of $22.5 million, a record fine for the FTC.
When this issue first arose, I wrote a post in which I argued:
“As always, the FTC can and should investigate if it discovers legitimate concerns about the business practices of a particular company. But companies should not face punitive sanctions for actions that do not cause consumer harm and are taken in good faith. To do so would discourage the type of fast-paced innovation that has defined the remarkable progress of the Internet era.”
I stand by these comments today. It’s always good to see