The recent announcement that Verizon Communications Inc. intends to acquire AOL Inc. generated a surprising amount of media coverage, and unfortunately some groups are using the news as an excuse to push for expanded privacy regulations that would stifle innovation and competition in the burgeoning mobile ecosystem.
By telecom standards, this is not a huge transaction. At $4.4 billion, it is a full order of magnitude smaller than either the AT&T-DirecTV deal or the ill-fated Comcast-Time Warner Cable merger. And Verizon’s purchase of the 45% stake Vodafone had in Verizon Wireless was almost 30 times larger. Nevertheless, reporters flocked to the story, perhaps drawn by potential jokes about promotional CDs or the opportunity to poke fun at the 2 million Americans who remain AOL dial-up subscribers.
More likely, interest in the deal was driven by its implications for the business Verizon wants to become. AOL is well known for its content properties, such as the Huffington Post and TechCrunch, but its growth is now in online ad sales—especially in video ads. The nation’s leading wireless company is looking down the road and seeing mobile video (presumably sprinkled with advertisements) as the future.
History is riddled with examples where attempts to achieve one outcome actually led to the opposite result. In May, the European Court of Justice (ECJ) ruled that Europeans have the “right to be forgotten”—the ability to request that search engines remove links from queries associated with their names if those results are irrelevant, inappropriate, or outdated. Just as Prohibition famously increased alcohol consumption, it would seem the “right to be forgotten,” while intended to increase online privacy, may actually have the opposite effect, both by cataloging shameful information and by incentivizing individuals to publicize the very materials people want forgotten.
Since the decision, Google has scrambled to meet Europe’s demands by creating an online form to process removal requests and hiring new personnel to handle compliance. When individuals want information removed about themselves, they must submit verification of their identity, provide the URLs to be removed, and justify why they should be taken down. Google then verifies that the submitted information is accurate and meets the criteria for removal. Then, if the company decides to take the link down, it notifies the website where the content was posted of
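The request-handling process described above—verify the requester’s identity, collect the URLs, weigh the justification—can be sketched in code. This is a hypothetical illustration, not Google’s actual system; the class names, fields, and decision strings are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    """A hypothetical 'right to be forgotten' removal request."""
    requester_name: str
    identity_verified: bool   # e.g., a copy of a photo ID was supplied
    urls: list[str]
    justification: str

def evaluate_request(req: RemovalRequest) -> dict[str, str]:
    """Return a per-URL decision for the sketched workflow."""
    decisions = {}
    for url in req.urls:
        if not req.identity_verified:
            decisions[url] = "rejected: identity not verified"
        elif not req.justification.strip():
            decisions[url] = "rejected: no justification given"
        else:
            # In practice a human reviewer would weigh relevance,
            # accuracy, and public interest; this sketch simply
            # routes well-formed requests to manual review.
            decisions[url] = "queued for manual review"
    return decisions
```

Even in this toy form, the sketch makes the compliance burden visible: every request requires identity checks and case-by-case judgment before any link comes down.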
The Washington Post printed a story about how the Campaign for a Commercial-Free Childhood has submitted the opinions of six experts on child development to the Federal Trade Commission in support of CCFC’s complaint against toy maker Fisher-Price for marketing its “Laugh and Learn” app for infants and small children.
One of the six experts, Herbert Ginsburg, writes, “Existing research suggests that infants and very young children are not cognitively ready to learn key abstract ideas about numbers. Although some children at the upper bounds of this age range might learn to parrot some number words they are highly unlikely to learn important concepts of numbers.”
To be sure, I am not a child development expert (although I did study child development in college), but I am the parent of a wonderful daughter. When she was 19 months old, I ran across a Fisher-Price online game, “The ABC Game,” which taught infants and toddlers their letters. (This was pre-tablet, so I used a laptop.) My daughter would press keys on my laptop and up would pop a picture of the letter, a picture of an animal whose first
Yesterday the United States Court of Appeals for the Ninth Circuit ruled that a lawsuit against Google for illegal wiretapping could proceed.
The case involves Google’s Street View project, which provides online access to panoramic views of public streets in cities around the world. To build the database of images, Google sent vehicles into cities to photograph public streets. At times, these vehicles also unintentionally recorded data that users were transmitting over unencrypted wireless networks. The central claim of the lawsuit is that this collection of unencrypted data from wireless networks is a violation of the Wiretap Act. Google argued that the case should be dismissed because the Wiretap Act exempts “electronic communications” that are “readily accessible to the general public.” In its ruling, the Court denied Google’s motion to dismiss.
The basic logic of the Wiretap Act is that if people do not take action to make their communications private, then they do not have an expectation of privacy. For example, if two individuals use unscrambled CB radios to have a conversation, then other radio users are not in violation of the Wiretap Act if they hear this conversation.
The sign of a civilization in decline is widespread fear of the future and longing for the past. While America may not yet be in decline, we are certainly fearing the future. Case in point: California’s decision to back off its deployment of drivers’ licenses with embedded radio-frequency identification (RFID) chips. RFID chips are small electronic devices encoded with a unique identifier that communicates with an electronic reader, usually within a range of 1 to 2 inches.
We can thank the privacy fundamentalists for this, for they love nothing better than to spread fear based on misinformation about technology. As the supposedly pro-tech Wired magazine writes, this is “spy-friendly technology.” The article claims that “Privacy advocates worry that, if more states begin embracing RFID, the licenses could become mandatory nationwide and evolve into a government-run surveillance tool to track the public’s movements.” It goes on to say, “law enforcement already monitors drivers’ whereabouts via the mass deployment of license-plate readers. But the ability to scan for identification cards in public areas could evolve into another surveillance tool.”
That sure scares me. I don’t want the
According to the latest online search trends, concerns about Internet privacy are so 2004.
If this sounds incredible to you, let me explain.
First, some background. An individual’s search queries can reveal interesting information about his or her interests. When this data is amassed across many individuals, it can provide insights into consumers as a whole. Google is one of the first companies to make use of its large historical database of search engine queries to try to better understand consumers. Perhaps the most famous example of this is Google Flu Trends, which uses both historical and real-time data to predict the level of influenza in the population across time.
Google has also made a version of its database of search queries available to the public in a product called Google Trends. Google Trends shows how many searches have been performed on a particular search term relative to the total number of searches. Individuals can use Google Trends to discover the relative popularity of search terms for a particular period of time or for a particular location. Google Trends does not show total search volume, but rather normalizes the data to
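The normalization described above—reporting a term’s share of total searches rather than raw counts—can be sketched in a few lines. This is a rough approximation of how Trends-style indexing works, not Google’s actual method; the function name and the peak-equals-100 scaling convention are assumptions for illustration.

```python
def normalize_trends(term_counts, total_counts):
    """Convert raw query counts into a relative-popularity index:
    divide each period's count for the term by that period's total
    search volume, then scale so the peak share equals 100."""
    shares = [count / total for count, total in zip(term_counts, total_counts)]
    peak = max(shares)
    return [round(100 * share / peak) for share in shares]

# Example: a term searched 50, 100, and 25 times across three periods,
# each with 1,000 total searches, indexes to 50, 100, and 25.
index = normalize_trends([50, 100, 25], [1000, 1000, 1000])
```

Note that because the output is relative, a falling index does not necessarily mean fewer searches in absolute terms—it can simply mean total search volume grew faster, which matters when interpreting a decade-long decline in “Internet privacy” queries.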
This op-ed originally appeared in ComputerWorld.
Last week, Brendan Eich, the chief technology officer and senior vice president of engineering at Mozilla, announced that the organization is planning to block third-party cookies in future versions of the Firefox Web browser. In addition, the Center for Internet and Society (CIS) at Stanford Law School announced that it has created a new organization called the “Cookie Clearinghouse,” which will begin publishing blacklists for websites based on whether it believes a website’s particular usage of cookies “makes logical sense.” Mozilla will use these blacklists to decide which cookies to accept or deny.
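The blacklist mechanism described above—accept a site’s own cookies, but check third-party cookies against a published list—can be sketched as follows. This is a hypothetical illustration of the general approach, not Mozilla’s or the Cookie Clearinghouse’s actual implementation; the domain names and the blacklist contents are invented.

```python
# Invented blacklist of third-party domains for illustration only.
BLACKLIST = {"tracker.example", "ads.example"}

def allow_cookie(cookie_domain: str, page_domain: str) -> bool:
    """Accept first-party cookies unconditionally; block third-party
    cookies whose domain appears on the blacklist."""
    first_party = (cookie_domain == page_domain
                   or cookie_domain.endswith("." + page_domain))
    if first_party:
        return True
    return cookie_domain not in BLACKLIST
```

The policy weight here sits entirely in who curates the list and by what criteria—which is exactly why handing that judgment to a single clearinghouse raises the concerns discussed below.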
Large-scale blocking of third-party cookies may have profound negative consequences on the future of the Internet. There are three main concerns. First, this practice will result in a loss of revenue from online advertising for many websites, and thus lead to less free content available to consumers. Second, it will cut off many legitimate business models for companies that collect and aggregate user data across the Internet to understand user behavior to design better websites, content and features. Third, it will limit the functionality of websites, both
The California Senate’s Public Safety Committee passed SB 255 earlier this week to prohibit the distribution of “revenge porn”—the non-consensual distribution of intimate images of individuals. Groups like Without My Consent have been campaigning to get state laws changed to make this a crime. As the bill’s chief sponsor, State Senator Anthony Cannella, says, “People who post or text pictures that are meant to be private as a way to seek revenge are reprehensible. Right now, there is no tool for law enforcement to protect the victims.”
The legislation is fairly straightforward so I will quote it directly:
“This bill would make it a misdemeanor for any person who, with the intent to cause substantial emotional distress or humiliation to another person, by means of an electronic communication device, and without consent of the other person, electronically distributes, publishes, emails, hyperlinks, or makes available for downloading nude images of the other person along with personal identifying information of the other person.”
This bill is a common-sense solution to an incredibly offensive activity. Indeed, the Senate’s Public Safety Committee approved the legislation unanimously. As I have argued
In the aftermath of the Boston Marathon bombing, we’ve seen a lot of discussion about the crucial role that the abundance of surveillance cameras and smartphones played in finding the suspects. The general consensus seems to be that these technologies were useful. For example, New York Mayor Michael Bloomberg said, “The Boston bombing is a terrible reminder of why we’ve made these investments—including camera technology that could help us deter an attack, or investigate and apprehend those involved.” And Chicago Mayor Rahm Emanuel similarly endorsed surveillance cameras when he said, “I will say, as I always have, because we have continued to put cameras throughout the city for security … purposes, they serve an important function for the city in providing the type of safety on a day-to-day basis—not just for big events like a marathon, but day-to-day purposes.”
Not surprisingly, privacy advocates worry that such a high-profile display of the benefits of these camera systems will lead to more public acceptance and adoption, and so they are trying to minimize the value of these systems by arguing that this is a rare event. Jeff Chester, the executive director of
I’ve written before about how well-intentioned privacy laws and regulations can come at the expense of innovation and harm consumers (see here, here, and here), but there is not a lot of data available to show the exact impact. However, a new feature on Netflix provides a good case study of the impact on innovation of at least one of these privacy rules.
First, some background. Today, Netflix announced a new feature—“Netflix Social”—which lets users share the TV shows and movies they watch on Netflix with their friends on Facebook. As Cameron Johnson, the director of product innovation at Netflix, explains on the official Netflix blog:
By default, sharing will only happen on Netflix. You’ll see what titles your friends have watched in a new “Watched by your friends” row and what they have rated four or five stars in a new “Friends’ Favorites” row. Your friends will also be able to see what you watch and rate highly… You are in control of what gets shared. You can choose not to share a specific title by clicking the “Don’t Share This” button in the player.