All posts by Alan Mcquinn
In 2014, Europe’s highest court ruled that Europeans have the right to request that search engines remove links from queries associated with their names if those results are irrelevant, incorrect, or outdated. As a result of this ruling, Google agreed to delist search results from country-code domains—such as Google.fr for France—to remove offending results for European users without affecting the rest of its users worldwide. Earlier this month, Google expanded this practice: it now delists offending results from all Google search domains, including Google.com, for all European users, based on geolocation signals such as IP addresses. So a user in France would not see delisted URLs even when visiting Google.com instead of Google.fr. France now says this is insufficient and that Google must take down offending material for all users visiting any of its domains worldwide.
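The geolocation-based filtering described above can be sketched roughly as follows. This is a minimal illustration only; the country lookup, the delisting table, and all names here are hypothetical assumptions, not Google’s actual implementation:

```python
EU_COUNTRIES = {"FR", "DE", "ES", "IT"}  # abbreviated list, for illustration

# Hypothetical delisting table: URL -> countries whose users should not see it
DELISTED = {
    "https://example.com/old-story": {"FR"},
}

def country_for_ip(ip: str) -> str:
    """Toy geolocation lookup mapping IP prefixes to country codes.
    Real systems use full geolocation databases; this is illustrative only."""
    prefixes = {"90.": "FR", "85.": "DE", "8.": "US"}
    for prefix, country in prefixes.items():
        if ip.startswith(prefix):
            return country
    return "US"

def filter_results(results: list, user_ip: str) -> list:
    """Filter search results based on the user's location, not the
    domain queried: a French user sees the same filtered list whether
    visiting Google.com or Google.fr."""
    country = country_for_ip(user_ip)
    if country not in EU_COUNTRIES:
        return results  # users outside Europe see unfiltered results
    return [url for url in results if country not in DELISTED.get(url, set())]
```

Under this scheme, the filter keys entirely off the requester’s inferred location, which is why switching from Google.fr to Google.com no longer bypasses the delisting.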
Last week, the French privacy authority, the Commission Nationale de l’informatique et des Libertés (CNIL), fined Google €100,000 ($112,000) for failing to remove links associated with French right-to-be-forgotten requests from its global search index. France is trying to force its domestic policies on the rest of
Movies capture the popular imagination, mirroring society’s hopes and fears. But science fiction is exactly what the name describes: fiction. It is meant to entertain, and these wild depictions of technology run amok should not affect policy decisions. Unfortunately, they sometimes do.
For example, take concerns about Artificial Intelligence (AI). Recently, a number of prominent scientists and well-known luminaries have warned that in the not-so-distant future, humans could lose control of AI, thus creating an existential threat for humanity. This paranoia about evil machines has swirled around popular culture for more than 200 years, and these claims continue to grip the popular imagination. In fact, one 2015 study found that 22 percent of U.S. adults are afraid of AI (more than the share who say they fear death), despite no evidence that this technology is anywhere near as sophisticated as it is portrayed in movies.
But policymakers should not use science fiction films to guide their understanding of science and technology. For example, at a 2013 Senate hearing about threats from space, a senator cited the movie Armageddon—where a team of astronauts try to
The Federal Trade Commission (FTC) hosted the first annual PrivacyCon in January 2016, an event designed to highlight the latest research and trends in consumer privacy and data security. The FTC’s stated goal was to bring together “whitehat researchers, academics, industry representatives, consumer advocates, and government regulators” for a lively discussion of the most recent privacy and security research. Unfortunately, the event not only failed to reflect the diversity of perspectives on these issues, but it also seemed orchestrated to reinforce the FTC’s current regulatory strategy.
First, the “data security” side of the discussion was nearly absent from the agenda. Of the 19 presentations, only three were about security. Given that the FTC has been flexing its regulatory muscle on corporate cybersecurity practices, this was a missed opportunity to delve into important cybersecurity research that could inform future oversight and investigations.
Second, the FTC mostly selected papers that jibed with its current enforcement agenda. As Roslyn Layton, a visiting fellow at the American Enterprise Institute, noted recently, of over 80 submissions that the FTC received for PrivacyCon, it selected 19 participants to give presentations with
The Pew Research Center released a survey last week that investigated the circumstances under which many U.S. citizens would share their personal information in return for getting something of perceived value. In the survey, Pew set up six hypothetical scenarios about different technologies—including office surveillance cameras, health data, retail loyalty cards, auto insurance, social media, and smart thermostats—and asked respondents whether the tradeoff they were offered for sharing their personal information was acceptable.
To be sure, some of the questions that Pew asked described one-sided tradeoffs that could have tainted the findings. Nevertheless, the overall results reveal that the Privacy Panic Cycle, the usual trajectory of public fear followed by widespread acceptance that often accompanies new technologies, is still going strong for many technologies.
The Privacy Panic Cycle explains how privacy concerns about new technologies flare up in the early years, but recede over time as people use, understand, and grow accustomed to these technologies. For example, when the portable Kodak camera first came out, it caused a big privacy panic, but today most people carry around phones in their pockets and do not give
Earlier this month, the Electronic Frontier Foundation (EFF) launched a “Spying on Students” campaign to convince parents that school-supplied electronic devices and software present significant privacy risks for their children. This campaign highlights a phenomenon known as the privacy panic cycle, where advocacy groups make increasingly alarmist claims about the privacy implications of a new technology, until these fears spread through the news media to policymakers and the public, causing a panic before cooler heads prevail, and people eventually come to understand and appreciate innovative new products and services.
When it comes to privacy, EFF has a history of such histrionics. The organization has accused desktop printers of violating human rights, spread misinformation about the effectiveness of CCTV cameras, escalated confrontations around the purported abuse of RFID, cried foul over online behavioral advertising, and much more. These claims, even if overblown and ultimately disproved by experience, generate headlines and allow EFF to spread fear, plowing the ground for harmful regulation or even technology bans.
EFF’s newly launched “Spying on Students” campaign is yet another example of this tendency to put fear ahead of fact. EFF
ITIF’s latest report—“The Privacy Panic Cycle: A Guide to Public Fears About New Technologies”—analyzes the stages of public fear that accompany new technologies. Fear begins to take hold when privacy advocates make outsized claims about the privacy risks associated with new technologies. Those claims then filter through the news media to policymakers and the public, causing frenzies of consternation before cooler heads prevail, people come to understand and appreciate innovative new products and services, and everyone moves on. This phenomenon has occurred many times—from the portable Kodak camera in 1888 to the commercial drones of today. And yet, even though the privacy claims made about technology routinely fail to materialize, the cycle continues to repeat itself with many new technologies.
The privacy panic cycle breaks down into four stages:
- In the “Trusting Beginnings” stage, the technology has not been widely deployed and privacy concerns are minimal. This stage ends when privacy fundamentalists, a term coined by the noted privacy researcher Alan Westin, begin raising the alarm, creating a “Point of Panic.”
- In the “Rising Panic” stage, the media, policymakers, and others join the privacy fundamentalists in exacerbating
As ITIF Vice President Daniel Castro explained at the outset of a recent ITIF event on the future of artificial intelligence (AI), we have seen significant advancement in AI in the past few years, from Google’s self-driving cars to IBM’s Watson to Apple’s Siri. At the same time, several prominent tech leaders—including Elon Musk, Bill Gates, and Stephen Hawking—have expressed concern that these advances in AI will lead to supremely intelligent machines that could pose a threat to humanity. Should policymakers actually be worried, or are their concerns hyperbole?
There was general agreement among the speakers that AI has the potential to greatly improve society, including helping to alleviate poverty and cure disease. Manuela Veloso, a professor of computer science at Carnegie Mellon University, explained that most technologies present certain risks but they are outweighed by the benefits. She advocated for additional research funding to build protections into future AI.
Some panelists expressed greater concerns over the dangers, especially if the research community does not work to address them in the near term. Nate Soares, executive director of the Machine Intelligence Research Institute, explained that artificial
PricewaterhouseCoopers recently released a report that attempts to provide a “holistic view” of the so-called sharing economy, including how it is unfolding “across both business and consumer landscapes.” While an exact definition of the “sharing economy” can be hard to pin down, it generally refers to the concept of using information technology to allow consumers to rent or borrow goods, especially those that are underutilized, rather than buy and own them. Undoubtedly, the sharing economy is an important and innovative approach to commerce, one that has given rise to wildly popular services such as Airbnb and Lyft. More broadly, the sharing economy can boost economic welfare by allowing the economy to utilize goods and services more efficiently.
Given all this, we certainly need thorough analysis of the sharing economy to fully grasp its potential. Yet PwC’s report errs badly in at least one important respect: It conflates unlawful sharing of digital media with legitimate peer-to-peer activities.
With regards to the challenges of the sharing economy in digital media, PwC writes:
“The ambiguity of the sharing economy is particularly evident in entertainment and media, where consumers are open to ‘sharing’
Governor Rick Scott (R-FL) is asking the Florida legislature to cut $470 million in taxes that the state collects from residents on their cell phone, satellite, and television bills. This proposal to cut the cellphone and TV tax rate by 3.6 percentage points would not only put money back into the pockets of everyday Floridians, but it would also be a positive step toward reducing Florida’s digital divide and enabling more innovation through mobile broadband.
Florida has one of the highest tax rates for wireless services (16.59 percent), behind only Washington, Nebraska, and New York. In fact, consumers in seven states—Washington, Nebraska, Rhode Island, New York, Illinois, Missouri, and Florida—pay combined state and federal taxes in excess of 20 percent of their wireless bills.
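To make the arithmetic concrete, here is the effect of the proposed 3.6-point cut on a hypothetical $100 monthly wireless bill (the bill amount is an assumption chosen purely for illustration):

```python
FLORIDA_WIRELESS_RATE = 0.1659  # Florida's state/local wireless tax rate (16.59%)
PROPOSED_CUT = 0.036            # Governor Scott's proposed 3.6-percentage-point cut

bill = 100.00  # hypothetical monthly wireless bill

# Tax owed before and after the proposed cut
tax_before = bill * FLORIDA_WIRELESS_RATE                   # about $16.59/month
tax_after = bill * (FLORIDA_WIRELESS_RATE - PROPOSED_CUT)   # about $12.99/month

# Yearly savings for this hypothetical consumer
annual_savings = (tax_before - tax_after) * 12              # about $43.20/year
```

Even on a modest bill, the cut would save a consumer roughly $43 a year, which is the kind of everyday relief the proposal promises.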
So why have these taxes to begin with? States have traditionally taxed services that people consume in their homes because these taxes are a reliable source of revenue. Someone in Florida cannot travel to Georgia to get a lower tax rate on their cell phone bill as they could for purchases of goods that include
Advertisers often find their ads appearing alongside unlicensed content on web sites or on sites offering counterfeit goods. This taints brands and inadvertently promotes piracy, fraud, and malware. Now the advertising industry is fighting back. The Trustworthy Accountability Group (TAG), with support from a number of industry groups, such as the American Association of Advertising Agencies (4As), the Association of National Advertisers (ANA), and the Interactive Advertising Bureau (IAB), plans to create a new program to identify offending web sites and ensure that ads are no longer placed on them.
The online advertising ecosystem is highly complex, often automated, and involves a vast range of actors, including advertisers, ad networks, ad agencies, and the websites and other online properties where ads are featured. While this dynamic system is essential to funding the vast array of free and legal online content and services, it is not without its challenges. The foremost of these arises when an advertisement is placed on a website engaged in illegal activities. The advertising industry calls these bad-actor websites “ad risk entities.”
These sites are a big problem for the industry because it