The Pew Research Center released a survey last week that investigated the circumstances under which many U.S. citizens would share their personal information in return for something of perceived value. In the survey, Pew set up six hypothetical scenarios about different technologies—including office surveillance cameras, health data, retail loyalty cards, auto insurance, social media, and smart thermostats—and asked respondents whether the tradeoff they were offered for sharing their personal information was acceptable.
To be sure, some of the questions that Pew asked described one-sided tradeoffs that could have tainted the findings. Nevertheless, the overall results reveal that the Privacy Panic Cycle, the usual trajectory of public fear followed by widespread acceptance that often accompanies new technologies, is still going strong for many technologies.
The Privacy Panic Cycle explains how privacy concerns about new technologies flare up in the early years, but over time, as people use, understand, and grow accustomed to these technologies, the concerns recede. For example, when the portable Kodak camera first came out, it caused a big privacy panic, but today most people carry around phones in their pockets and do not give it a second thought.
Earlier this month, the Electronic Frontier Foundation (EFF) launched a “Spying on Students” campaign to convince parents that school-supplied electronic devices and software present significant privacy risks for their children. This campaign highlights a phenomenon known as the privacy panic cycle, where advocacy groups make increasingly alarmist claims about the privacy implications of a new technology, until these fears spread through the news media to policymakers and the public, causing a panic before cooler heads prevail, and people eventually come to understand and appreciate innovative new products and services.
When it comes to privacy, EFF has a history of such histrionics. The organization has accused desktop printers of violating human rights, spread misinformation about the effectiveness of CCTV cameras, escalated confrontations around the purported abuse of RFID, cried foul over online behavioral advertising, and much more. These claims, even if overblown and ultimately disproved by experience, generate headlines and allow EFF to spread fear, plowing the ground for harmful regulation or even technology bans.
EFF’s newly launched “Spying on Students” campaign is yet another example of this tendency to put fear ahead of fact. EFF
ITIF’s latest report—“The Privacy Panic Cycle: A Guide to Public Fears About New Technologies”—analyzes the stages of public fear that accompany new technologies. Fear begins to take hold when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on. This phenomenon has occurred many times—from the portable Kodak camera in 1888 to the commercial drones of today. And yet, even though the privacy claims made about technology routinely fail to materialize, the cycle continues to repeat itself with many new technologies.
The privacy panic cycle breaks down into four stages:
- In the “Trusting Beginnings” stage, the technology has not been widely deployed and privacy concerns are minimal. This stage ends when privacy fundamentalists, a term coined by the noted privacy researcher Alan Westin, begin raising the alarm, creating a “Point of Panic.”
- In the “Rising Panic” stage, the media, policymakers, and others join the privacy fundamentalists in exacerbating
The recent announcement that Verizon Communications Inc. intends to acquire AOL Inc. generated a surprising amount of media coverage, and unfortunately some groups are using the news as an excuse to push for expanded privacy regulations that would stifle innovation and competition in the burgeoning mobile ecosystem.
By telecom standards, this is not a huge transaction. At $4.4 billion, it is a full order of magnitude smaller than either the AT&T-DirecTV deal or the ill-fated Comcast-Time Warner Cable merger. And Verizon’s purchase of the 45% stake Vodafone had in Verizon Wireless was almost 30 times larger. Nevertheless, reporters flocked to the story, perhaps drawn by potential jokes about promotional CDs or the opportunity to poke fun at the 2 million Americans who remain AOL dial-up subscribers.
More likely, interest in the deal was driven by its implications for the business Verizon wants to become. AOL is well known for its content, such as Huffington Post and TechCrunch, but its growth is now in online ad sales—especially in video ads. The nation’s leading wireless company is looking down the road and seeing mobile video (presumably sprinkled with advertisements) as the future.
History is riddled with examples where attempts to achieve one outcome actually led to the opposite result. In May, the European Court of Justice (ECJ) ruled that Europeans have the “right to be forgotten,” the ability to request search engines to remove links from queries associated with their names if those results are irrelevant, inappropriate, or outdated. Just as Prohibition famously increased alcohol consumption, it would seem the “right to be forgotten,” while intended to increase online privacy, may actually have the opposite effect, both by cataloging shameful information and incentivizing individuals to publicize the very materials people want forgotten.
Since the decision, Google has scrambled to meet Europe’s demands by creating an online form to process removal requests and hiring new personnel to handle compliance. When individuals want information removed about themselves, they must submit verification of their identity, provide the URLs to be removed, and justify why they should be taken down. Google then verifies that the submitted information is accurate and meets the criteria for removal. Then, if the company decides to take the link down, it notifies the website where the content was posted of
The Washington Post printed a story about how the Campaign for a Commercial-Free Childhood has submitted the opinions of six experts on child development to the Federal Trade Commission in support of CCFC’s complaint against toy maker Fisher-Price for marketing its “Laugh and Learn” app for infants and small children.
One of the six experts, Herbert Ginsburg, writes, “Existing research suggests that infants and very young children are not cognitively ready to learn key abstract ideas about numbers. Although some children at the upper bounds of this age range might learn to parrot some number words they are highly unlikely to learn important concepts of numbers.”
To be sure, I am not a child development expert (although I did study child development in college), but I am the parent of a wonderful daughter. When she was 19 months old, I ran across a Fisher-Price online game, “The ABC Game,” which taught infants and toddlers their letters. (This was pre-tablet, so I used a laptop.) My daughter would press keys on my laptop and up would pop a picture of the letter, a picture of an animal whose first
Yesterday, the United States Court of Appeals for the Ninth Circuit ruled that a lawsuit against Google for illegal wiretapping could proceed.
The case involves Google’s Street View project which provides online access to panoramic views of public streets in cities around the world. To build the database of images, Google sent vehicles into cities to photograph public streets. At times, these vehicles also unintentionally recorded data that users were transmitting over unencrypted wireless networks. The central claim of the lawsuit is that this collection of unencrypted data from wireless networks is a violation of the Wiretap Act. Google argued that the case should be dismissed because the Wiretap Act exempts “electronic communications” that are “readily accessible to the general public.” In its ruling, the Court denied Google’s motion to dismiss.
The basic logic of the Wiretap Act is that if people do not take action to make their communications private, then they do not have an expectation of privacy. For example, if two individuals use unscrambled CB radios to have a conversation, then other radio users are not in violation of the Wiretap Act if they hear this conversation.
A sign of a civilization in decline is widespread fear of the future and longing for the past. While America may not yet be in decline, we are certainly fearful of the future. Case in point: California’s decision to back off its deployment of drivers’ licenses with embedded radio-frequency identification (RFID) chips. RFID chips are small electronic devices embedded with a unique code that communicates with an electronic reader, usually within a range of 1 to 2 inches.
We can thank the privacy fundamentalists for this, for they love nothing better than to spread fear based on misinformation about technology. As the supposedly pro-tech Wired magazine writes, this is “spy-friendly technology.” The article claims that “Privacy advocates worry that, if more states begin embracing RFID, the licenses could become mandatory nationwide and evolve into a government-run surveillance tool to track the public’s movements.” It goes on to say “law enforcement already monitors drivers’ whereabouts via the mass deployment of license-plate readers. But the ability to scan for identification cards in public areas could evolve into another surveillance tool.”
That sure scares me. I don’t want the
According to the latest online search trends, concerns about Internet privacy are so 2004.
If this sounds incredible to you, let me explain.
First, some background. An individual’s search queries can reveal interesting information about their interests. When this data is amassed across many individuals, it can provide insights into consumers as a whole. Google is one of the first companies to make use of its large historical database of search engine queries to try to better understand consumers. Perhaps the most famous example of this is Google Flu Trends which uses both historical and real-time data to predict the level of influenza in the population across time.
Google has also made a version of its database of search queries available to the public in a product called Google Trends. Google Trends shows how many searches have been performed on a particular search term relative to the total number of searches. Individuals can use Google Trends to discover the relative popularity of search terms for a particular period of time or for a particular location. Google Trends does not show total search volume, but rather normalizes the data to
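The kind of normalization described above can be sketched with a few lines of code. This is a hypothetical illustration of relative-volume indexing—computing each period’s share of total searches and rescaling so the peak period scores 100—not Google’s actual algorithm or data:

```python
# Hypothetical sketch of relative search-volume normalization.
# Each period's share of total queries is rescaled so that the
# busiest period for the term scores 100.

def trend_index(term_counts, total_counts):
    """Return a 0-100 relative-interest index per period."""
    shares = [t / tot for t, tot in zip(term_counts, total_counts)]
    peak = max(shares)
    return [round(100 * s / peak) for s in shares]

# Made-up query counts for a term vs. all searches, per year
term = [120, 300, 150, 60]
total = [10_000, 12_000, 15_000, 20_000]

print(trend_index(term, total))  # → [48, 100, 40, 12]
```

Note that the raw counts never appear in the output—only each period’s interest relative to the peak—which is why absolute search volumes cannot be recovered from such an index.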
This op-ed originally appeared in ComputerWorld.
Last week, Brendan Eich, the chief technology officer and senior vice president of engineering at Mozilla, announced that the organization is planning to block third-party cookies in future versions of the Firefox Web browser. In addition, the Center for Internet and Society (CIS) at Stanford Law School announced that it has created a new organization called the “Cookie Clearinghouse,” which will begin publishing blacklists for websites based on whether it believes a website’s particular usage of cookies “makes logical sense.” Mozilla will use these blacklists to decide which cookies to accept or deny.
Large-scale blocking of third-party cookies may have profound negative consequences on the future of the Internet. There are three main concerns. First, this practice will result in a loss of revenue from online advertising for many websites, and thus lead to less free content available to consumers. Second, it will cut off many legitimate business models for companies that collect and aggregate user data across the Internet to understand user behavior to design better websites, content and features. Third, it will limit the functionality of websites, both