Targeted advertising is nothing new. Offline advertisers have run focused campaigns for decades, and their online progeny have long used persistent cookies to track repeat visitors to their sites, geodata from IP addresses to approximate a user's location, and the data from search histories, emails, and the content of requested pages to serve "contextual adverts." Phorm, however, takes this to several new lows. Co-hosted by a partner ISP, the software monitors a user's entire browsing activity over time, building a comprehensive pattern of behavior, which Phorm attempts to screen for Personally Identifiable Information ("PII"). When a user subsequently requests a page from a participating site, Phorm automatically intercepts the request and inserts adverts based on that user's unique profile. A schematic of the multi-tiered architecture is available here.
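To make the mechanics concrete, here is a minimal sketch in Python of the kind of profile-and-insert loop described above. Everything in it is a hypothetical illustration of the general technique, not Phorm's actual code: the sensitive-term lexicon, the partner-site list, and the observe/intercept functions are all assumed names.

```python
# Toy model of ISP-level behavioral profiling and ad insertion.
# Hypothetical names throughout; a sketch of the technique, not Phorm's implementation.
from collections import Counter
from dataclasses import dataclass, field

SENSITIVE_TERMS = {"hiv", "bankruptcy", "pregnancy"}      # assumed 'sensitive term' lexicon
PARTNER_SITES = {"news.example.com", "shop.example.com"}  # sites where adverts are injected

@dataclass
class Profile:
    # Keyed by a pseudonymous cookie ID rather than name or IP -- the 'anonymised' claim.
    interests: Counter = field(default_factory=Counter)

profiles: dict[str, Profile] = {}

def observe(user_id: str, page_keywords: list[str]) -> None:
    """Called for *every* page the user requests, not just partner sites."""
    profile = profiles.setdefault(user_id, Profile())
    for kw in page_keywords:
        if kw.lower() in SENSITIVE_TERMS:
            continue                      # screening step: drop 'sensitive' terms
        profile.interests[kw.lower()] += 1

def intercept(user_id: str, host: str) -> str | None:
    """On a request to a participating site, inject an advert picked from the profile."""
    if host not in PARTNER_SITES:
        return None
    profile = profiles.get(user_id)
    if not profile or not profile.interests:
        return None
    top_interest, _ = profile.interests.most_common(1)[0]
    return f"<div class='ad'>Advert targeted at: {top_interest}</div>"

# The profile is built from browsing *anywhere*, then monetised on a partner site.
observe("cookie-123", ["golf", "clubs", "HIV"])
observe("cookie-123", ["golf", "holidays"])
print(intercept("cookie-123", "news.example.com"))
```

The point of the sketch is simply that the profiling happens upstream of every site the user visits, while the monetisation surfaces only on partner pages, which is why the practice is so hard for an ordinary user to detect.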
Proponents argue that these behavioral tracking systems provide several benefits to consumers. First, they point out that Phorm's servers provide some protection from fraud and 'phishing' by blocking access to a blacklist of sites known to be harmful. Second, targeted adverts are offered as the quid pro quo required for keeping content free at a time when revenue from more traditional advertising is drying up. Third, it is suggested, users may 'prefer' targeted to random adverts, an analogy being drawn to the referral systems employed by Netflix and Amazon to recommend DVDs based on past viewing history, and complementary or substitute products based on the shopping history of customers with similar tastes. Fourth, much of this information is already being retained or monitored by ISPs in compliance with legislation like the EU Data Retention Directive, or the Communications Assistance for Law Enforcement Act. Similarly, browser add-ons such as the Yahoo! Toolbar have been aggregating and reporting on browsing history for some time. Finally, we are told that users can opt out at any time, by downloading a simple cookie onto their machines, and that at any rate, consumer outrage—as was recently expressed over the Facebook Beacon system—should militate against any egregious conduct on their part.
However, these 'justifications' are little more than irrelevant distractions—the rhetorical equivalent of being told to "look at the monkey" before being jabbed with the business end of a blunt needle.
First, whether or not these platforms incorporate an anti-phishing layer is of no consequence. Not only is this already a standard feature of most modern browsers, but search engines like Google flag such sites with similar warnings. At any rate, given the availability of client-based solutions, it is by no means clear that filtering should be provided at the server level. Worse, this feeds into a larger complaint about the complete lack of transparency regarding which sites will appear on these lists (and the process for removing yourself from them), the lexicon of so-called 'sensitive terms' to be excluded from profiling, and the absence of detail about the 'anonymizing algorithm' or 'profile categories' to be used. Bland assurances from Phorm's auditors, Ernst & Young, that Phorm are 'basically good blokes' are an inadequate safeguard.
Second, most consumers are likely to be completely unaware that any of this is happening, even if they blithely click through an EULA. Ironically, the proposed opt-out method, accepting a cookie from faireagle.com, means that privacy-savvy users who have disabled third-party cookies (as everyone should) will not be opted out, nor will any user who has blacklisted that domain via DNS, Adblock, and so forth. Attempts by providers to assure us that these 'services' somehow inure to our benefit smack of "cigarette manufacturers telling us that their new brand is a turning point in the fight against cancer."
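The opt-out failure mode can be shown in a few lines (assumed logic and a hypothetical cookie name, not Phorm's actual code): because the system profiles by default and only a cookie it has itself set can suppress that, any browser configuration that blocks the cookie quietly leaves the user opted in.

```python
# Sketch of the cookie-based opt-out described above; names are hypothetical.
OPT_OUT_COOKIE = "OPTED_OUT"

def should_profile(cookies_sent_with_request: dict[str, str]) -> bool:
    """Profiling is the default; only the platform's own opt-out cookie suppresses it."""
    return cookies_sent_with_request.get(OPT_OUT_COOKIE) != "yes"

# A user who blocks third-party cookies, or blackholes the opt-out domain via
# DNS/Adblock, never stores the cookie, so none is ever sent back:
print(should_profile({}))                       # True  -> still profiled
print(should_profile({OPT_OUT_COOKIE: "yes"}))  # False -> opted out
```

In other words, the very defences a cautious user deploys are what defeat the opt-out, which is the opposite of how a sensible consent mechanism should fail.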
Third, the notion that the threat of consumer outrage is sufficient to prevent future abuse is absurd. Privacy statements change overnight, and failed companies have an unpleasant tendency to offer their client records, free of such encumbrances, to the highest bidder. Aggregation of information on this scale only compounds the problem: there is no way to notify consumers of an updated policy ex post, and absent reliable evidence that advertisers will value this information that much more than less invasive contextual adverts, there must be a huge temptation to expand the uses to which it is put—the oft-quoted example being insurance companies filching the search histories of people interested in expensive illnesses. The ethical integrity of a firm known to rewrite its own Wikipedia entry, and to conduct secret trials on tens of thousands of unwitting customers, is nil.
Fourth, we need to recognize the unique role of ISPs as the gatekeepers of the Internet, a role which, between application-specific bandwidth throttling and a walled-garden approach to mobile services, is increasingly being called into question. Knowingly trading a degree of privacy for gigabytes of email storage or online photo tools is one thing; unwittingly granting carte blanche to record and analyze every aspect of one's digital alter ego is a quantitatively and qualitatively different beast. Although worst-case scenarios require that 'anonymous' behavioral history be associated with PII, glibly assuming that a combination of user-based ad-blockers and internal 'security measures' will prevent this from happening is folly. As it is, Phorm's system already leaks identifiable information to secure sites, and ad-blockers are being undermined by JavaScript hacks.
Finally, it is worth noting a bill currently pending before the New York legislature. The Brodsky Bill—something similar is brewing in Connecticut—is significant because, if passed, it will amount to a de facto national standard. Supported by Microsoft (probably as "one in the eye" for Google), it tracks very closely the self-regulatory principles proposed by the Network Advertising Initiative (a body representing about a quarter of the industry) some eight years ago, and has been criticized as inadequate and out of date. However, while disappointingly opt-out based, it does represent something of a positive step, in that it would prohibit third parties from tracking information from websites with which they have no contractual relationship, and would prohibit the collection of certain sensitive information online. One to watch.
Word Count: 997 (ex. Abstract / Further Reading)
Ernst & Young Privacy Audit of Phorm
Wikipedia, Diagram illustrating how Phorm Works
Nicholas Bohm (FIPR), The Phorm 'Webwise' System - A Legal Analysis, Apr. 23, 2008
Richard Clayton (Cambridge Computer Laboratory), The Phorm 'Webwise' System, Apr. 23, 2008
Third Party Internet Advertising Consumer's Bill of Rights Act of 2008
Conn. HB05765 (2008) (somewhat narrower than the New York bill)
Text of the Dec 2007 FTC Statement
Cornell Law School, Right To Personal Information
The Register, The Phorm Files: All Yer Data Pimping News in One Place
Louise Story, A Push to Limit the Tracking of Web Surfers' Clicks, New York Times, Mar. 20, 2008
Louise Story, How Do They Track You? Let Us Count the Ways, New York Times, Mar. 9, 2008
Neil McIntosh, Letting It All Hang Out, The Guardian, Mar. 18, 2008
James Edwards, Unblocking Adblock, Feb. 5, 2008
Greg Sandoval, Failed Dot-Coms May Be Selling Your Private Information, CNET, June 29, 2000
US Companies which Meet EU Safe Harbor Provisions