understanding the importance and impact of anonymity and authentication in a networked society
.:id trail mix:.

My wish list for a few things we need in the privacy world
By: Kris Klein

October 23, 2007


Okay, okay… It’s still a few months away from the Holiday season and the New Year. Regardless, they’ve given me the pen for this spot and I’m making a list. I figure if I get my wish list in early this year, maybe I’ll get a few of the things I want!

So, here’s my wish list for a few things we need in the privacy world:

1. Laws that break through or work around the limitations imposed by our constitution (I mean, provincially regulated employees have no privacy protection in legislation unless their information is used as part of a commercial activity or unless they live in Alberta, B.C. or Quebec).

2. Speaking of commercial purposes… can we please have a better definition that doesn’t involve someone circling and circling and circling? I mean a commercial activity is something of a commercial nature. Gee, thanks for that clarification.

3. Less restriction on the publication of the federal Commissioner’s Reports.

4. A version of PIPEDA in which the French and English versions translate properly (some sections even have different paragraph numbering).

5. An Act contemplating that, if you go to court on a matter involving a violation of an individual’s privacy, the Court would be given explicit power to put controls in place to protect privacy during the Court process.

6. A recognized ability to get real compensation when your privacy is invaded. Getting a “well-founded and resolved” report is only going to motivate people for so long to stand up for their rights.

7. A recognition that we are in a surveillance state. Question is, are we going to let it get worse, tolerate it the way it is, or fight back?

8. A Privacy Act that is written based on our understanding of computing and database technology in 2007. Not 1977.

9. A recognition that the Privacy Commissioner cannot oversee ALL of government and that it’s high time the government itself takes some responsibility for privacy (yes, they should have Chief Privacy Officers in many departments).

10. Privacy Impact Assessments… oh wait, we do have those, sometimes! (But not nearly enough – and even when they’re done, nobody knows about them.)

11. One more very good conference and then an acknowledgement that we need to actually get the work done and not just talk about it.

Things we probably don’t need:

1. Another privacy lawyer… oops, well, don’t check out www.krisklein.com then.
Rewriting my Autobiography: Me, Myself, and (possibly) a Different ‘I’
By: Cynthia Aoki

October 16, 2007


I’ve always wanted to write my own autobiography. Maybe it’s narcissistic, but I thought it would be a good chance for me to think back, reflect, introspect, and remember both the good and bad things that happened to me throughout my life. I could then maybe figure out what went right, and in some cases, what went horribly wrong. But I told myself that I would save this personal task until I was older and also until I had enough stories and experiences to share and write about. Otherwise, if I wrote my autobiography today, it would be a story about a girl named Cynthia, who went to school, who then decided to go to more school.

I then came across McAdams’ “Life Story Theory” of identity [1] and realized that I didn’t have to wait until I was old and experienced to write my autobiography. I was already in the midst of writing one and in fact, I had been writing and contributing to this autobiography my whole life. According to McAdams, the individual is the primary author of his or her autobiographical narratives and the individual’s memories link together the past, the present, and the future in order to provide a sense of identity and also to provide a sense of purpose for one’s thoughts and behaviours.

This means that all the memories that I formed (both consciously and unconsciously) have helped to provide me with my sense of identity and that I’m continuously evaluating my experiences and integrating them into the larger narrative of my life.

But what would happen if I experienced something so horrifically terrible that I didn’t want it to form part of my life story? Would I have the option of ensuring that I no longer remember this event and that the memory of the event no longer forms part of my autobiography? If so, and I can start actively meddling with my autobiography, would this change who I am?

Memory and Drugs

Because of the importance of memory and its role in defining one’s identity, scientists in the fields of psychology, neurology, and neuroscience have been investigating methods of enhancing or preserving different types of memory. [2]

More recently, scientists have started to focus on developing pharmacological agents that inhibit or dampen the strength of memory formation and recall. These memory dampening agents are currently being investigated for the treatment of post-traumatic stress disorder (PTSD).

PTSD and Autobiographical Memories

PTSD is a psychiatric anxiety disorder that can develop in response to traumatic experiences. [3] One hallmark characteristic of this disorder is the alternation between re-experiencing and avoiding trauma-related memories. In some cases, the disorder can be so debilitating that the individual can no longer function in society due to the involuntary and continuous recall of the horrific event.

Currently, researchers are investigating the interaction between autobiographical memories and PTSD. According to Berntsen (2005), traumatic memories are important in that they become reference points for other experiences in one’s autobiographical memory database. More specifically, traumatic memories become significant landmarks, representing the major threat perceived by individuals with PTSD. [4]

By inhibiting the formation of certain autobiographical memories with the use of these memory dampening agents, the potential formation of these important landmarks may be circumvented.

Pharmaceutical Forgetting

Research has shown in both animal and human studies that emotionally arousing experiences are better remembered than those that are emotionally neutral. [5] Arousal is dictated by the level of adrenaline in the body; a higher level of adrenaline results in increased arousal, and therefore, stronger memory formation. Propranolol, which is already being prescribed for the treatment of hypertension, is used to block the effects of adrenaline. Scientists hypothesize that propranolol could help to dampen the recall of traumatic experiences by dampening arousal. Propranolol is currently being tested in multi-centre clinical trials for the treatment of PTSD.

More interestingly, researchers have recently shown that propranolol can also blunt previously formed memories in humans. [6] In a double blind, randomized study, persons with chronic PTSD were asked to recall their traumatic experiences. The mere recall of these previously experienced traumatic events caused adrenaline to be released and resulted in increased arousal. Upon experiencing arousal, half of the participants were administered propranolol; the other half were administered a placebo. Results showed that propranolol retroactively blunted the recall of previously formed traumatic memories.

If and when these agents are approved for the treatment of PTSD, what would be the legal implications of using them in society?

Legal Issues

Propranolol, known as a “beta-blocker,” was developed in the 1950s and has been prescribed for the treatment of hypertension since the 1970s. In both volunteer studies [7] and clinical trials [8], the use of beta-blockers was found to impair memory recall. Interestingly, a similar dose (120–160 mg/day) is being prescribed both for the treatment of hypertension and for memory dampening. [9] Results from these experiments suggest that individuals who are prescribed propranolol for the treatment of hypertension may be subject to memory impairment, perhaps without their knowledge or consent. Of concern to the legal system is that the reliability and accuracy of testimony given by individuals taking propranolol may be called into question. When deliberating future cases, it will be important for Canadian courts to be mindful of the potential effects that propranolol and similar drugs could have on a witness’s testimony.

Another legal issue arising from the use of these agents is the extent of informed consent that would be required when prescribing these memory dampening drugs. After experiencing a traumatic event, individuals will likely be rushed to the emergency room to be treated for both mental and physical distress. Upon reaching the emergency room, an attending physician may recommend treatment with propranolol in order to help minimize the chances of developing PTSD in the future. Despite being informed of the potential risks and uncertainties associated with these agents, it is questionable whether individuals taking these drugs would be in a legitimate position to give their informed consent, because 1) their decision-making skills would be significantly compromised in times of distress [10], and 2) they would not know the potential role these dampened memories would have played in their future lives and identities.

Some Final Thoughts

Currently, memory dampening agents are not available to the general public. The quickly advancing field of neuroscience, however, may be able to provide new, more specific, and safer agents to help dampen the painful memories associated with traumatic events. In the near future, some of these newer technologies could be potent enough to allow for memory deletion to occur. Recently, the drug U0126 (not yet available for use in humans) was able to selectively delete a particular fear-induced memory in rats. [11] Perhaps these memory deleting agents will become available for use in humans.

In conclusion, it will be necessary for the courts and the government to be informed of all of these new pharmacological developments so that they will be in a legitimate position to weigh both the legal and social implications of using these interventions in the future.

Some Final Final Thoughts

By the time I get around to writing an autobiography, I could have gone through some experiences that may have tempted me to take one of these memory dampening agents and artificially blunt some of my memories.
Maybe it’s just me, but if I do decide to write an autobiography, I want to be able to look back and remember both the good and bad times; the times I’ve laughed and sobbed. I want to be confident that the memories I’m recalling and writing about are genuine and that my memories aren’t pharmaceutically modified in any way, shape, or form.


[1] McAdams, D. P. (2001). The psychology of life stories. Review of General Psychology, 5(2), 100-122.
[2] Farah, M. J., Illes, J., Cook-Deegan, R., Gardner, H., Kandel, E., King, P., Parens, E., Sahakian, B., & Wolpe, P. R. (2004). Neurocognitive enhancement: what can we do and what should we do? Nat Rev Neurosci, 5(5), 421-425.
[3] Vasterling, J. J., & Brewin, C. R. (2005). Neuropsychology of PTSD. New York: Guilford Press.
[4] Berntsen, D., Willert, M., & Rubin, D. C. (2005). Splintered memories or vivid landmarks? Qualities and organization of traumatic memories with and without PTSD. Applied Cognitive Psychology, 17, 675-693.
[5] McGaugh, J. L. (2006). Make mild moments memorable: add a little arousal. Trends Cogn Sci, 10(8).
[6] Brunet, A., Orr, S. P., Tremblay, J., Robertson, K., Nader, K., & Pitman, R. K. (2007). Effect of post-retrieval propranolol on psychophysiologic responding during subsequent script-driven traumatic imagery in post-traumatic stress disorder. J Psychiatr Res (in press).
[7] Frcka, G., & Lader, M. (1988). Psychotropic effects of repeated doses of enalapril, propranolol and atenolol in normal subjects. Br J Clin Pharmacol, 25(1), 67-73.
[8] Blumenthal, J. A., Madden, D. J., Krantz, D. S., Light, K. C., McKee, D. C., Ekelund, L. G., & Simon, J. (1988). Short-term behavioral effects of beta-adrenergic medications in men with mild hypertension. Clin Pharmacol Ther, 43(4), 429-435.
[9] Pitman, R. K., Sanders, K. M., Zusman, R. M., Healy, A. R., Cheema, F., Lasko, N. B., Cahill, L., & Orr, S. P. (2002). Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biol Psychiatry, 51(2), 189-192.
[10] Hammond, K. R. (2000). Judgments under stress. New York: Oxford University Press.
[11] Doyere, V., Debiec, J., Monfils, M. H., Schafe, G. E., & LeDoux, J. E. (2007). Synapse-specific reconsolidation of distinct fear memories in the lateral amygdala. Nat Neurosci, 10(4), 414-416.

Intimate Invasions: How Far Will Internet Users Push the Realm of Acceptability? or Have You Been Facebook Stalked Yet?
By: Kayleigh Platz

October 9, 2007


I recently, for the first time in my life, set up my own wireless router in order to connect my laptop, as well as my roommate’s, to the Internet. This was not a user-friendly experience, and my stress level was heightened by my need to safeguard my wireless signal from outside intruders. I was creating a code of identity for my actions through my computer network: I had to name my signal and trust that it would safeguard my IP address, which is now, through my actions online, an extension of my self and identity.

By giving a name to my Internet network, I was sending a signal of my own personal identity out into cyberspace. This is a name that anyone in my physical world close enough to pick up my Internet signal will be able to see. The Internet, as a social system, is far less anonymous than many people still seem to think; whether consciously or unconsciously, we are constantly sending out signals of our identity online. From postings on a blog to a wireless network name, our physical-life identities seep out into the cyber world.

It is alarming to notice how oblivious people are to their cyber identities, and how careless they are with cyber information that can have a massive effect on their physical world. The online psyche is now a permanent aspect of most people’s lives.

In such a plugged-in world, people live and communicate endlessly via online routes. However, like an unguarded Internet signal, many people leave themselves open to cyber-intrusions that endanger both their cyber-identities and their physical-life identities. Two women have recently been in the news for such open intrusions into their private lives through seemingly safe online channels. Neither Jessica Coen nor Allison Stokke intended to victimize themselves through innocent online actions, yet both had their privacy invaded and their identities damaged through the very avenues they left open to the cyberworld.

Jessica Coen is an online blogger who is now deputy online editor for Vanity Fair magazine. In a previous job, however, she was a popular blogger on the snarky Manhattan-based gossip website gawker.com [I]. Coen wrote aggressive observations about people’s looks, loves and lives in New York City through the online medium. Coen wrote to receive a reaction, which she received in hordes. Emails, phone calls, letters in the mail, and false email accounts set up under her identity were just some of the reactions her caustic writing provoked. All were, of course, anonymous. All were invasions of her privacy. None would have been so easily acted upon in the physical world. What was a wake-up call to Coen and her lifestyle should be a wake-up call to us all. Just because the anonymity of online actions makes it easier for many people to act in ways they would not be comfortable acting in the physical world does not mean those actions have no effect in the physical world. Voyeuristic tendencies have fed the growing popularity of negative online actions. The Internet has increased many people’s freedom of expression, both positive and negative. In this “me” generation, where the staged reality show “The Hills” is a hit, men and women not only feel that it is alright to comment and act as they desire in the online world, but seemingly get approval for their actions through physical-world reactions such as media attention. In today’s world, it is just as common to end a relationship through online or cellular means as it is in person.

It is interesting to note that Coen is still active online. She is currently working online and still maintains a blog. A quick search on Facebook brings up a profile that appears to be hers as well. While Coen has been awakened to the online threats to her own privacy, as well as the malleability of her identity in the online arena, she has continued to traverse the online realm safely and to educate other women about both her experiences and her suggestions.

Allison Stokke is a young woman with a similar story [II]. However, Stokke’s online privacy invasion began innocently, with a sports blogger posting a picture of the young track and field athlete on his website. Rapidly, Stokke received an overwhelming number of friend requests on her Facebook profile, and YouTube montages were made in her honour. More online and even real-life harassment followed in the wake of that one posted picture. Today it is still very easy to find pictures of Stokke online, but not her cyber self. Stokke, as an individual, has all but disappeared online due to her experiences.

Online voyeurism has, I dare say, become more dangerous today than in the early days of the Internet, when adults were arrested for meeting minors they had met online. You see, online voyeurism has gone beyond something that both appals and frightens us, as it did in the past: online voyeurism has gone mainstream. While neither Coen nor Stokke was physically harmed by their attackers, not all individuals have been so lucky. Indeed, the separation between people’s physical-world actions and their cyberworld actions becomes more apparent the more vicious people become online. Indeed, many people feel comfortable acting out online in ways they never would in the physical world. As the cyberworld becomes more “real” in our daily lives, our ethics and responsibilities online must be reassessed. The separation of self and ethics must cease to exist. Verbally tearing into someone online may be exhilarating, but it has “real life” effects on people’s lives. We need to keep in mind the humanist aspects of the online world. To continue to be wired, we must keep it real.

In short, we must redefine the real to fit the new dimensions of our world. What is the real experience? How do we feel the real in the cyberworld? How do we let the cyberworld fully complement the physical world? Finally, how far do we let the two worlds go?

[I] See Jessica Coen, Online Bullies Back Off. Glamour Magazine. Oct. 2007: 227-228.
[II] See Rebecca Webber, Give This Girl Her Life Back! Glamour Magazine. Sept. 2007: 80.


Kayleigh Platz is a Master’s student in Public Issues Anthropology at the University of Waterloo, Ontario, Canada. Kayleigh’s interests include on-line communication and social networks, cyberworld culture, on-line voyeurism, tactical media, and Harry Potter. Kayleigh’s main research focuses on online social networks and user identities. Kayleigh will be speaking at the Student "I" conference at the University of Ottawa on October 25th.

Wikisurveillance: a genealogy of cooperative watching in the West
By: Michael Arntfield

October 2, 2007 


The duly elected Liberal government currently serving the Province of Ontario stands poised to infuse one of the largest revenue-collecting and fine-levying agencies in the Western hemisphere, the Ontario Provincial Police, with $2 million (Can) to fund the operation of a state-of-the-art spy plane ostensibly required to identify “racers” or “stunt” drivers using the King’s Highways (Cockburn & Greenberg 2007), while police in Britain continue to append audio-video recording equipment, or “Bobbie-Cams,” to the helmets of their patrol officers in the vein of Paul Verhoeven’s dystopic 1987 film RoboCop (Satter 2007). One is thus prompted to look back at the corpus of police surveillance devices suborned by modernity that have, in aggregate, given way to what might be called the golden age of voyeurism.

The mechanical metamorphosis from Althusser’s (1971) Ideological State Apparatus into the more palpable “technical apparatus” (Ellul 1964: 101) of the police as we know them today has been achieved in large part through a process of technological determinism, or the means by which human culture and history are simultaneously rendered and reified by our machines. In other words, the ubiquity of those police surveillance and reporting tools that have pervaded urban life for well over a century has in turn propagated a mimetic response in occidental consumer culture, whereby the general public is increasingly enamored of the “democratization of surveillance” (Staples 2000: 155) made possible by portable, affordable, and elegant devices that, through their egalitarian accessibility, make “coercion embedded, cooperative, and subtle, and therefore not experienced as coercion at all” (Ericson & Haggerty 1997: 7). As public and private interests ultimately converge through a phenomenon I call wikisurveillance, the denizens of this self-supervising panoptic state cooperatively pen the requiem for once valued tenets of privacy through the normalization, even fetishization, of corporate and private data mining, cell phone videography, security camera ubiquity, home “monitoring” systems, the proliferation of spy stores, and systemic Facebook cultism.

As such, I define wikisurveillance as the manner in which the community at large has been seduced by, or at the very least summarily acceded to, the idea of watching, recording, and reporting, and even the expectation, or exhibitionism, of being watched, as the new de facto social contract for the post-industrial age. Ergo, the computing neologism “wiki” is an appropriate prefix to denote and describe the present Zeitgeist of freelance information brokering: not unlike any open-source wiki-based text that is publicly inclusive, accessible, modifiable, and even corruptible in its design, the commercial surveillance technologies that define the new historicism of Western media have fostered an age of consensual spying and reporting perhaps best described as the Vichy state of late capitalism. As conventional law enforcement’s monopoly on surveillance has consequently been muscled out by a veritable coup d’état spearheaded by free unlimited video messaging, Dateline hidden-camera specials, and “how’s my driving?” bumper stickers, we must to some extent acquiesce to the troubling truism that Orwell was wrong: that “[t]here is no Big Brother…we are him” (Staples 2000: 153).

From the discreet distribution of “Constable keys” in the early 20th century to select citizens who could then access locked police signal-boxes and secretly report on the activities of their neighbors, illegal or otherwise; through the efforts of the Ontario Green Ribbon Task Force in the early 1990s to have affluent commuters armed with what were then nascent and comparatively costly cell-phones report on the movements and identifiers of any vehicle similar to that believed to have been driven by serial killer Paul Bernardo; to modern AMBER Alerts that function under this same basic premise; and ultimately to the use of virtual communities like YouTube to solve crimes as serious as murder in some instances (Quintino 2006), there is indeed a long-standing confederacy between hegemony and communications technology — even a co-constitutive evolution — which is being increasingly co-opted by private citizens and private enterprise as the state’s observational authority is deregulated.

As Western law enforcement continues to increasingly assert itself through largely privately owned and definitively for-profit entities whose loyalty remains to its capital interests in earnest, the “technical apparatus” of the police is diffused amongst an untrained, unaccountable, and largely anonymous civilian populace who mimic the police methodology by not only buying the compatible hardware, but also buying-in to the associated mindset that all human activities have an inherent intelligence-gathering value.

Whether it be the regular use of clandestine listening devices in Dunkin’ Donuts stores throughout the US (Staples 2000), or the Argus Digital Doorman maintaining and potentially selling off a facial recognition database containing the images of all visitors traveling to and fro any subscribing condominium or apartment building, we see that wikisurveillance allows the Western narrative on both privacy and paranoia to be scribed by a cabal of agents provocateurs who, in working for purely commercial interests, transform the thin blue line into a proverbial Maginot Line of strategic technical installations that expedite the erosion of human agency in not only the management, but also the manufacturing, of law and order.

Wikisurveillance has shown us that the rise of the dreaded police state in the West will not come with the terrifying, sweeping reforms of some new radical and totalitarian government that somehow seizes power, nor from under the boot of some fascist despot, but rather, with the efforts taken in the here and now largely to protect actuarial assets. While police agencies are generally subject to public oversight and accountability, and to archival audits and the eventual de-classification or disclosure of some information, where, when, and how the fragments of unregulated and individually mined data presently floating around will ultimately be used becomes the nagging query written into the code of wikisurveillance. As all human activities become increasingly part of a permanent and quantifiable record that is in large part privately owned and maintained, the Monday morning quarterbacking of historical surveillance data will consequently ensure that “[a] crime can always be found” (Solove 2007: 5) amongst the assorted images, as the floating definition of deviance ensures that crime becomes the last truly renewable Western resource.

Michael Arntfield is a PhD candidate at the Faculty of Information & Media Studies, University of Western Ontario.


Adlam, Robert C.A. (1981) “The Police Personality.” In: Pope, David W. & Weiner, Norman L. (eds) Modern Policing. pp. 152-162. London: Croom Helm Ltd.

Chu, Jim (2001) Law Enforcement Information Technology: A Managerial, Operational and Practitioner Guide. USA: CRC Press

Cockburn, Neco & Greenberg, Lee (2007) “Ont. to Impose $10,000 Fines for Street Racing.” National Post on-line, Aug 15, 2007. Electronic document: http://www.canada.com/nationalpost/news/story.html?id=6b7d070b-7d48-466c-96db-586d2a5f6def&k=10512. Retrieved Aug 16, 2007

Dandeker, Christopher (1990) Surveillance, Power and Modernity: Bureaucracy and Discipline from 1700 to the Present Day. Cambridge: Polity Press

Ellul, Jacques (1964) The Technological Society. New York: Knopf

Ericson, Richard V. & Haggerty, Kevin, D (1997) Policing the Risk Society. Toronto: University of Toronto Press

Lind, Laura (2007, August 18) “Hysteria Lane” The National Post, Toronto Weekend Magazine, p.14

Mann, Steve (1998) “’Reflectionism' and 'Diffusionism': New Tactics for Deconstructing the Video Surveillance Superhighway,” Leonardo, 31(2): 93-102.

Manning, Peter K. (1992) “Information Technologies and the Police” In Tonry, Michael & Morris, Norval (eds) Modern Policing. pp. 349-398. Chicago: University of Chicago Press

Marx, Leo (1964) The Machine in the Garden: The Pastoral Idea in America. New York: Oxford University Press

Maxcer, Chris (2007, March 6) “Cops Nab Crooks Using YouTube” Tech News World.com. Electronic document: http://www.technewsworld.com/story/56108.html. Retrieved July 10/07

Morgan, Rod & Newburn, Tim (1997) The Future of Policing. Oxford: Oxford University Press

North, Dick (1978) The Lost Patrol. Anchorage: Alaska Northwest Publishing Co.

ODMP (2006) Officer Down Memorial Page. Fallen officer directory. Electronic document: http://www.odmp.org/agency.php?agencyid=2758. Retrieved June 14/06

Packer, Jeremy (2002) “Mobile Communications and Governing the Mobile: CBs and Truckers,” Communication Review, 5(1) pp. 39-58

Phillips, Alberta (2005, March 17) “After Club Fire Police Comments Still Smolder” Statesman.com. Electronic document: http://www.statesman.com/opinion/content/editorial/stories/03/17phillips_edit.html. Retrieved May 2/06

Quintino, Anne-Marie (2006, December 15) “Police Discovering Power of YouTube” Globe and Mail.com. Electronic document: http://www.theglobeandmail.com/servlet/story/RTGAM.20061215.gtcopsyoutube1215/BNStory/Technology/home. Retrieved July 17/07

Richardson, Mark (2005) On the Beat: 150 Years of Policing in London Ontario. Canada: Aylmer Express Ltd.

Rubinstein, Jonathan (1973) City Police. USA: Hill & Wang

Satter, Raphael G. (2007, July 13) “Britain’s surveillance to new levels with video cameras strapped to police helmets.” CBC Newsworld. Electronic document: http://www.cbc.ca/cp/world/070713/w071347A.html. Retrieved July 14/07

Seltzer, Mark (1992) Bodies & Machines. New York: Routledge

Smith, Merritt Roe (1994) “Technological Determinism in American Culture.” In Smith, Merritt Roe & Marx, Leo (eds) Does Technology Drive History? The Dilemma of Technological Determinism. pp. 1-36. Cambridge, Mass: MIT Press

Solove, Daniel J. (2007) “I’ve Got Nothing to Hide and Other Misunderstandings of Privacy,” The San Diego Law Review (44), pp. 1-23

Staples, William G. (2000) Everyday Surveillance: Vigilance and Visibility in Postmodern Life. Lanham, MD: Rowman & Littlefield

Stewart, Robert W. (1994) The Police Signal Box: A 100 Year History. Glasgow: University of Strathclyde. Electronic document: http://www.eee.strath.ac.uk/r.w.stewart/boxes.pdf. Retrieved April 25/06

Vanderburg, Willem H. (2000) The Labyrinth of Technology. Toronto: University of Toronto Press

Wade, John (1829) A Treatise on the Police and Crimes of the Metropolis. London: Longman, Rees, Orme, Brown & Green
A Canadian Privacy Heritage Minute: Surveillance, Discipline, and Nursing Education
By: James Wishart

September 25, 2007


In this particular historical moment of fetishized “security” and state-sponsored surveillance carried out “for our own good,” it is tempting for some of us to think that we are reaching some low point in the history of privacy, where new technologies already allow the deployment of an Orwellian omniscience by states and corporations. This may indeed be so, but some research I did some years ago on the history of nursing education (of all things) has inclined me (a privacy advocacy neophyte) to wonder if the drive for total surveillance is neither novel nor dependent upon new technologies. In the spirit of Heritage Canada’s iconic television spots, I offer my own “Privacy Heritage Minute,” with all the skeletal theoretical framework, carefully-selected facts and simplistic moral that such an approach implies.

Prior to the 1950s, most Canadian nurses (who were predominantly young, white, unmarried women) were trained through an apprenticeship system, learning their craft by working for three years unpaid on hospital wards. This training was extremely arduous and strictly regimented, and was overseen by a limited number of paid nurse overseers and by senior nurse apprentices. The vast bulk of nursing labour in hospitals was completed by students, who lived on the hospital campus and seldom left the site until their training was complete.

Beginning in the late 19th century, it was understood that moral rectitude (read virginity) and feminine deference (read unquestioning obedience) were key characteristics of the ideal nurse. In part this was because prevailing models of health contained an unmistakably moral component (as arguably they still do – see the rhetoric around obesity, heart disease, HIV, etc.). Likewise hospitals, which were in competition for the dollars of wealthy patients and donors, used the image of the physically and morally clean (female) student nurse as advertising to convince the well-to-do of the safety and efficacy of institutional health care. [1]

Hospitals posted extensive lists of rules intended to ensure the proper behaviour of their student nurses. Obedience was far too important to be entrusted simply to sets of rules, however. As was explained in one nurses’ orientation manual, each individual would be “carefully watched to ensure strict obedience.” Surveillance, embodied in the policies, procedures, and the very architecture of the training school and Nurses’ Home, provided the disciplinary backbone for nursing training. Michel Foucault described similar developments with respect to 18th-century reform schools and prisons in Discipline and Punish: “We have here a sketch of an institution ... in which three procedures are integrated into a single mechanism: teaching proper, the acquisition of knowledge by the very practice of the pedagogical activity, and a reciprocal, hierarchised observation.”

Surveillance of student nurses began from the moment they applied to their training. Candidates underwent gynecological screening tests, which allowed hospital management to determine whether the candidates showed signs of sexually transmitted diseases, previous pregnancy, or loss of virginity. Applicants who showed evidence of such indiscretions were likely to be rejected as “not suitable to become a nurse.” This managerial anxiety over sexuality permeated the apprenticeship program. Of particular concern in these all-female spaces was homosexuality, a “vice” that dared not speak its name but that nevertheless attracted careful scrutiny by managers and hospital trustees. As one former nurse explained to me,

A rule was posted that ‘only one may bathe at a time’. We didn’t have time to wait in the mornings, so we often shared showers and tubs. The bathrooms were patrolled [by matrons] and so if a matronly voice said ‘is there only one of you in the tub,’ our rule was that only the one in the middle would call out ‘Yes, miss!’. I realized later that they were scared stiff of lesbianism.

In some residences, bath doors were designed like the swinging doors of saloons with spaces above and below, a technology of observation noted by Foucault at Paris-Duverney's Ecole Militaire. [2]

Surveillance was also trained upon the movements of apprentice nurses in their leisure time and private spaces. Purpose-built Nurses’ Homes were designed along panoptic principles, situating the Matron’s quarters adjacent to the main exit, an arrangement that gave the impression that the foyer was under constant supervision. Anyone entering or exiting the residence was required to sign a log, and bedrooms were checked for absent (or extra) bodies every evening. Strict curfews were enforced with the threat of dismissal, and reinforced with the possibility of character assassination for young women seen “out on the town” after curfew. In this latter area, the hospital enlisted the aid of the surrounding community as observers and judges of nurses’ conduct, and upright citizens regularly informed managers of suspected infractions by students.

On the hospital wards, surveillance took its shape via the ideology of scientific management. By the 1910s, hospital managers had joined the cult of efficiency, and strongly believed that minute regulation of workers’ time and motion would lead to increased production and lower costs, concepts which fit awkwardly into the provision of health care but which nevertheless persist in hospital management to this day. [3] To this end, nurses were monitored carefully as they learned nursing tasks in a deskilled [4], routinized manner, with harsh discipline as the reward for lapses of technique or behaviour. A fundamental goal of this system was that students would internalize the observing eye and, like Jeremy Bentham’s panopticized prisoners, govern their behaviour according to the priorities of the institution.

Although there were obvious functional reasons for hospitals to maintain strict control over their unpaid labour force, the diligence with which such controls were implemented cannot be explained without attention to the larger discursive webs in which hospitals and nurses were caught. Rapid urbanisation and economic change in Canada, with the attendant increases in single women's urban employment and public visibility, fostered in the imaginations of civic leaders the spectre of the 'woman adrift', the young working girl living in unsupervised residences in an urban environment, untended by patriarchal authority. Promoting women's chaperoned boarding houses, the Toronto Star-Weekly proclaimed in 1917: "It would seem to be but our duty, from an economic as well as a humanitarian stand-point, to see that [the working girl] lives under conditions which tend to make her more efficient, as well as a worthy citizen. It is not too much to say that the future of our country lies in the hands of these girls.” This disingenuous language reflects (in part) anxieties about “degeneracy” that brought us such historical highlights as eugenic sterilization and the Chinese head tax. Regulation of the young female student nurses was thereby elevated to the level of a patriotic duty. Hospitals, as major Canadian institutions, bought into this wholesale, boasting that their system of discipline and training worked to produce “the best type of Canadian womanhood.”

With the future of the nation apparently at stake, there was little or no concern expressed about the privacy or autonomy of student nurses. [5] No privacy laws governed the surveillance of these young women – there were compelling moral, economic, political, medical, and other reasons to watch them, and so they were watched.

Without overstating the case, I wonder whether this Heritage Minute tells us a couple of things about reasonable expectations of privacy. To me it says that where fear and prejudice coalesce into social panic, surveillance is a ready tool for the identification and punishment of deviance, and privacy rights will be among the first in a long line of casualties. It also implies that surveillance technology takes the form of whatever is at hand. Hospitals used architectural techniques, documents, holes in walls, and human eyes to watch nurses, and socialized their students to watch themselves and each other. So although resisting the development of new methods of surveillance is important, it’s maybe just as important to keep our eyes on the core reasons why our privacy comes under constant assault. The longevity of the hospital system of nursing training suggests that where serious abrogations of privacy rights have apparent social or economic utility, or where they support the societal status quo, they may persist invisibly or unremarkably for decades.

Thank you. This has been a Canadian Privacy Heritage Minute brought to you by the idTrail.

[1] Even until the 1920’s, most hospital health care was “charitable,” reserved for persons who could not afford home visits by doctors and nurses. Hospitals had poor reputations as charnel-houses until they became the centralized repositories of expensive medical technologies like X-Rays, antiseptic operating theatres, and professional nursing care. This is a long story, for which there is not room here.
[2] Discipline and Punish (NY: Random House Vintage Books, 1979) at 172-173.
[3] Recently some RFID manufacturers and hospital administrators have proposed that increased efficiency could be achieved by attaching RFID tags to the bodies of hospital workers and patients, thus facilitating a constant surveillance of their motions through real-time monitoring from a central site.
[4] The “skill” level of the tasks taught to nurses is the subject of a healthy historical debate which has the “professional” status of nursing at stake in its outcome.
[5] Student nurses themselves expressed such concerns, and acted on them in important and effective ways, but that is a story for another time.
The Wrong Kind of Privacy
By: Julie Shugarman

September 18, 2007


I recently received news that my friend Kelly was found dead in her single room occupancy [1] hotel in Vancouver, several days after she had died. [2]

I knew Kelly as a great force working to improve the lives of street level sex workers in Vancouver’s Downtown Eastside (DTES). Feeling far away and alone in my grief, I googled her to see whether anything had been written about her death. To my surprise, I found a handful of references to her (full name included) as a participant in a free heroin trial program, and identifying her as a woman living out of a shopping cart in Canada’s poorest postal code. I was frustrated and angry that this one-dimensional sketch of Kelly, involving incredibly private details about her life, was so accessible. My first instinct was to wonder whether she had consented to having her name published in these articles. But then a different, and rather more pressing set of questions struck me.

Why, when so few people took notice of her daily existence and suffering, when she was allowed to die almost invisibly – was it possible for me to access information about her health, [3] her poverty and her homelessness on the World Wide Web? I couldn’t shake the idea that Kelly had too much of the wrong kind of privacy.

Kelly didn’t need the state to be kept “out”. [4] She needed the state and society more broadly to be let “in”, to actively participate in her existence by recognizing her humanity and not remaining indifferent to her poverty. The privacy she needed is that which comes from access to private property and adequate housing. The privacy she needed was that which would have enabled her to develop her identity and sense of self outside of the apathetic public scrutiny that happens on the street where the privileged are indifferent voyeurs of suffering.

What is privacy, anyways?
I write this with the qualification that it is not entirely clear to me what privacy is. I am puzzled about what it means for something to be “private”, what it means for someone, or some identifiable group, to have a right or an interest in “privacy”, or what exactly happens when this peculiar thing known as “privacy” is lost.

Warren and Brandeis famously quoted Judge Cooley’s definition, describing privacy as a right “to be let alone”. [5] Westin is most frequently credited with informing us that privacy is about a right to control information about ourselves. [6] Judith Jarvis Thomson said privacy is a reductive concept that essentially consists of clustered property rights and rights to one’s own person. [7] Ruth Gavison and Anita Allen have identified privacy as a limitation of access to individuals. [8] Edward Bloustein outlined privacy as integral to human dignity. [9] Jeffrey Reiman offered a notion of privacy as critical for personhood formation. [10] Many other wise theorists have offered still more accounts of privacy, more attempts to define what remains, in many senses, opaque.

Legally, the concept of privacy has largely developed in the context of rights of the individual accused as against the state. The Supreme Court of Canada has ruled that privacy is an instrumental right – integral to the realization of fundamental entitlements such as liberty, security of the person, and equality. [11] Section 8 Charter jurisprudence instructs that there is a distinction to be drawn between public and private space – fostering the notion that we are, at least in some ways, entitled to less privacy in public. [12]

So what’s the problem?
Almost all of this theorizing and analysis seems to take for granted that everyone has access to private space. It assumes a means to limit or control access to oneself. It further assumes that while privacy may not be a fundamental right in and of itself, it is an intrinsic aspect of human life that must be vigilantly protected from theft by the state, the corporate world, or other actors. The reality is that this access and these means are far from universal and that sometimes state intervention and support is necessary in order to foster privacy and/or the ends that privacy aims to achieve (like dignity, autonomous decision-making, the ability to exercise even constrained ‘choice’ with respect to decisions of a private nature, etc.). [13]

The notion of an obligation on the state to protect vulnerable people, even from activities that occur in otherwise private settings, is not new. Largely as a result of feminist activism, the idea of a man’s home as his impenetrable castle – a sacrosanct space that should be fiercely guarded from the hands of the law no matter what occurs within – has been challenged and discredited. It is not okay for the state to remain passive when a person is beaten up or raped by her spouse. The legacy, however, of the historical role of privacy in protecting male domination of women in the marital home is significant and enduring. Martha Nussbaum, for example, warns: “anyone who takes up the weapon of privacy in the cause of women’s equality must be aware that it is a double edged weapon, long used to defend the killers of women.” [14]

Suspicious of privacy, and at the risk of being perceived as taking it up as a “weapon”, I am becoming increasingly interested in arguments that call on the state to facilitate the privacy of historically marginalized groups – like women living and working on the streets. If the law has deemed it inappropriate for the state to ignore abuses suffered by women in their homes, it should not be permissible for the law – and for individuals more generally – to ignore the poverty of women working and living on Canada’s streets. It is their poverty that forces them into public space, and robs them of the privileges of privacy.

Elizabeth Paton-Simpson has pointed out that, “contrary to a widely held assumption in privacy law, reasonable people do not intend to waive all rights to privacy by appearing in public places.” [15] However, Paton-Simpson does not discuss the reality that many Canadians do not have the option to choose whether to appear in public or whether to leave the relative security of their homes – because they have no homes. [16] Unlike the people Paton-Simpson discusses, homeless and precariously housed Canadians have no option to “trust” that they will not be made objects of media excesses and advances in surveillance technology. [17] And yet, while they are infinitely accessible and have no adequate private space within which to develop – they are simultaneously scorned, ignored, and turned into ghosts counted only in studies and statistics. [18]

Final thoughts
Privacy comes in degrees. [19] A person or group of people can conceivably have too much privacy – or not enough. Indeed, without regular access to private property or the capacity to ensure that personal information is not made publicly available, a person’s existence can be completely lived in the presence of others.

It is understandable why legal and philosophical concern about privacy has been focused on protecting against loss of privacy. I think, however, that we need to refocus our attention on whether in some cases positive action is required to facilitate privacy and the goods associated with it (like dignity, security of the person, and liberty). We need to begin addressing the role of the state, the corporate world, and communities in facilitating conditions conducive to the “privacy” that continues to be erroneously assumed as the starting point for all.

Many of my friend Kelly’s daily rituals, no matter how intimate, were performed in “public” – they were accessible to all who passed by, and yet the three-dimensionality of her life and eventually her death remain invisible to most. We are repulsed, we simply don’t give a damn, or we actively disengage and explain away our responsibility to pay attention, to do something, and not to leave people in need of assistance alone. Perhaps until we learn better when it is okay to look away, we should take a positive obligation to facilitate privacy as our starting point – so that women do not go missing or die unnoticed.

[1] Single room occupancy (SRO) residential hotel units represent the most basic shelter provided for low-income individuals living in Vancouver’s Downtown Eastside (DTES). The people who live in SRO buildings are low-income singles at high risk of homelessness.
[2] This is not her real name.
[3] I am writing from a perspective that treats drug use as a health issue.
[4] This is intended as a reference to privacy as involving an entitlement to keep the antagonistic state out of the lives of individuals.
[5] Samuel Warren and Louis Brandeis, “The Right to Privacy” (1890) 4 Harv.L.Rev. 193. at p. 195.
[6] Alan F. Westin, Privacy and Freedom (New York: Atheneum, 1967) at p. 7.
[7] Judith Jarvis Thomson, “The Right to Privacy” (1975) 4 Philosophy and Public Affairs 295-314
[8] Ruth Gavison, “Privacy and the Limits of Law,” (1980) 89 Yale Law Journal at p. 428; Anita Allen, Uneasy Access (New Jersey: Rowman and Littlefield, 1988).
[9] Bloustein, E.J., “Privacy as an aspect of human dignity: An answer to Dean Prosser,” (1964) 39 N.Y.U. L. Rev. 963. It is worth noting that Bloustein is referencing “dignity” in what some might call the liberty sense, and not the equality sense. He writes of privacy as dignity offending by explaining: “an intrusion of our privacy threatens our liberty as individuals to do as we will, just as an assault, a battery or imprisonment of our person does.” at p. 1002.
[10] Jeffrey Reiman “Privacy, Intimacy, and Personhood” (1976) 6 Philosophy and Public Affairs at p. 26
[11] See for example: R. v. Dyment, [1988] 2 S.C.R. 417 at paras. 17, 21-22; R v. O’Conner [1995] 4 S.C.R. 411 at paras. 110-113, 115; R. v. Mills, [1999] S.C.J. No. 68 at 91.
[12] Section 8 of the Charter provides that “[e]veryone has the right to be secure against unreasonable search and seizure.” In R. v. Silveira, [1995] 2 S.C.R. 297, at para. 140, Cory J, found: “[t]here is no place on earth where persons can have a greater expectation of privacy than within their 'dwelling-house'”. See also: R. v. Tessling, [2004] S.C.J. No. 63, in which the SCC indicated that expectations of privacy are less reasonable when one moves outside of the sphere of the home, at para 22.
[13] On privacy’s functional role in facilitating dignity, integrity and autonomy see: R. v. Mills, [1999] S.C.J. No. 68 at para 81.
[14] Martha Nussbaum, “What’s Privacy Got to Do With It: A Comparative Approach to the Feminist Critique” in Women and the United States Constitution: History, Interpretation, and Practice ed. Sibyl A. Schwarzenbach and Patricia Smith (New York: Columbia University Press, 2003) at 164.
[15] Elizabeth Paton-Simpson, “Privacy and the Reasonable Paranoid: The Protection of Privacy in Public Places,” (Summer, 2000) 50 Univ. of Toronto L.J. 305.
[16] Canada has no official data on homelessness – an omission which has attracted critique from the United Nations Committee on Economic, Social and Cultural Rights. For a somewhat dated discussion of this, see: Patricia Begin, Lyne Casavant, Nancy Miller Chenier, & Jean Dupuis, “Homelessness,” Political and Social Affairs Division, Parliamentary Research Branch, 1999. Online: http://dsp-psd.pwgsc.gc.ca/Collection-R/LoPBdP/modules/prb99-1-homelessness/index-e.htm
[17] Elizabeth Paton-Simpson, supra note 15: “To the extent that they have any choice in the matter, [reasonable people] generally refuse to be governed by suspicion and paranoia, preferring to trust that their privacy will be respected. They leave the relative security of their homes in order to survive and participate in society, and their experience and expectation is that public places do afford varying degrees of privacy.”
[18] In using the term “ghosts,” I am mindful of Jeffrey Reiman’s theory that there would be no person, or moral agent, to whom moral rights could be ascribed if it weren’t for the boundary drawing, person creating, “social rituals” we call privacy. According to Reiman, privacy “protects the individual’s interest in becoming, being, and remaining a person”: Jeffrey Reiman, supra note 10 at p. 25, 43-44. Charles Fried has similarly made the point that privacy is integral “to regarding ourselves as the objects of love, trust and affection” to understanding ourselves “as persons among persons”: Charles Fried, “Privacy” (1967-68), 77 Yale L.J. 475, at p. 477-78.
[19] I am not speaking here about what courts sometime refer to as “degrees of privacy” in the Charter s. 8 context - as dependent on the type of search (the degree of rights, for example, yielded by a search of a person, as opposed to a search of a person’s home or vehicle). See, for example, Roback v. Chiang, [2003] B.C.J. No. 3127 at para 14.

For Better, For Worse, or Until I Decide to Spy on You
By: Dina Mashayekhi

September 11, 2007


Being recently married, I still haven’t quite adjusted to the idea that you can’t change certain traits in your spouse. For example, my other half tends to view cell phones as a leash, and he regularly “forgets” to call me when he’s going to be late, or going out after class or work. As a result, I end up panicking, thinking he has been in a terrible accident and is unconscious somewhere, and I promptly begin my routine of repeatedly calling his cellphone (which is usually off or at the bottom of his bag on silent mode). By the time he finally gets to the phone and sees 18 missed calls from me, I’m usually anxiety-ridden and he calls me laughing, telling me I’m crazy, and that he’s on his way home. This conversation is usually followed by certain expletives and ends with my threat that I’m going to implant him with a GPS tracking device.

Of course, when I raised this idea, I was completely joking. For the sake of fantasy, my ideal device would be a microchip and to my knowledge, the Verichip doesn’t operate as a GPS device for commercial use (yet). Such a use would also run contrary to my convictions as a privacy advocate, but at times, I feel as though my sanity is at stake. I decided to inquire further into the practical aspects of my GPS threat (after all, there’s no point in a threat without any substance), and to examine the idea of spousal surveillance in general. [i]

The Newly Married or Soon-to-be-Married

I first looked to an online forum that is geared towards wedding planning and is frequented by brides-to-be and newer brides. I visited this forum quite a bit back in the wedding-planning days. I posted a simple 3-question poll. My questions weren’t intended to examine the moral implications of surveillance; rather, I was just trying to get a basic overview of what people would do.

My first question was “Have you ever used any type of surveillance on your spouse?” Out of 154 responses, 10 people (6%) answered Yes, with the remaining 144 (94%) answering No. The types of surveillance, whether electronic or not, were not specified. My second question was “Have you ever read your spouse’s email without him knowing?” Of 155 replies, 92 (59%) answered Yes and 63 (40%) answered No. A few people, however, chose to comment on this question, stating that they have their spouse’s implicit consent to check their email. Finally, my third question was “If given the opportunity, would you use GPS tracking or an RFID chip to track your spouse?” Out of 155 replies, 21 (13%) answered Yes, and 134 (86%) answered No. Some people who chose “Yes” commented that they did so only because they would want the option in an emergency situation, not because of a lack of trust. Others confirmed that they would not so much want to “track” their spouse as to be able to “find” them when necessary. And, of course, some users pointed out that if you got to the point where you needed to resort to tracking your spouse, your relationship was in serious trouble. One user relayed a story of a past relationship where reading her boyfriend’s emails, and trying to find out what he was doing, confirmed that he was cheating on her.
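For readers who like to double-check, the poll arithmetic above can be verified in a few lines of Python. This is just a throwaway sketch; the question labels are my own shorthand, not the forum’s wording:

```python
def pct(count, total):
    """Percentage of responses, rounded to one decimal place."""
    return round(100 * count / total, 1)

# Poll results as reported: question -> (yes_count, total_responses)
polls = {
    "used surveillance on spouse": (10, 154),
    "read spouse's email unknowingly": (92, 155),
    "would use GPS/RFID tracking": (21, 155),
}

for question, (yes, total) in polls.items():
    no = total - yes
    print(f"{question}: {pct(yes, total)}% yes, {pct(no, total)}% no")
```

Run to one decimal place, the first poll splits 6.5% Yes to 93.5% No, which is where the whole-number figures quoted above come from.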

From this small poll I learned that (a) I’m not the only one who has little fantasies about wanting to know where her spouse is and (b) More spouses than I’d expected have read their partner’s emails.

Marriage, Surveillance, and Privacy

This led to my next finding -- a major target audience for surveillance software, surveillance devices and GPS products is married spouses. As I searched for various products, it seemed that they were geared towards tracking and catching that “wayward” spouse. More often than not, website visitors were invited to catch their “cheating wife” in the act. I did not find a single product marketed towards safety for worriers (my initial purpose). I was impressed by the array of technologies available, saddened by the distrust existing in marriages, and concerned about the questionable lawfulness of many of these technologies.

In her article “Spy vs. Spouse: Regulating Surveillance Software on Shared Marital Computers”, [ii] Camille Calman raises arguments in favour of the regulation of surveillance software on shared computers between spouses as a basis of bringing consistency to the law of communications privacy and reinforcing the social perception of marriage as a partnership of autonomous individuals characterized by mutual trust. Calman examines laws governing the protection of information and the concept of the reasonable expectation of privacy. She reasons that the use of surveillance technology for “spying on a spouse cannot be justified by the rationale that spouses have a lower expectation of privacy within marriage than they do with outsiders.” She traces the lack of recognized privacy rights between spouses to the lack of legal rights given to women upon marriage until the nineteenth century. Married women were, after all, considered to be subordinate to their husbands and the couple was seen as a single legal entity. She explains:

Changes in privacy law and in social constructs of marriage converge in the area of communications privacy. One of the most important aspects of personal autonomy is freedom to communicate with other persons. The law does not require married couples to tell each other everything; such a requirement could not be practically enforced. Entry into marriage does not entail signing away the right to communicate privately with persons outside the marital relationship. Some writers have described spheres or zones of privacy, with an innermost zone open to no one, and the next zone open only to spouses, close friends, and relatives. Even within those inner spheres, the law does—and should recognize a right of personal privacy.
Certainly individuals within a marriage have far more access to each other’s private information than strangers would. Spouses can behave in many ways that are intrusive but not legally actionable: They can read letters or e-mails or credit card bills that their spouses have already opened; they can eavesdrop on live conversations; they can rummage through filing cabinets; they can read diaries. But the use of electronic devices to spy at times and in places where live eavesdropping is impossible—to eavesdrop in a way that evades the likelihood of detection— seems to cross a line.
A person’s right to privacy is not absolute and must be weighed against countervailing rights and social interests. Clearly the expectation of privacy is lower within a marriage than in other less intimate relationships. Some reasonable expectation of privacy remains, however, and spousal spying by surveillance software violates that expectation. [iii]

While it is true that spouses have access to aspects of each other’s lives, which are essentially off-limits to others, it doesn’t seem that this grants one spouse an unencumbered right to spy on the other.

The Law and Spousal Surveillance

As far as I know, laws governing communications privacy do not make exemptions for spouses or family members. Section 184(1) of the Criminal Code [iv] makes it an offence to intercept a private communication except in limited enumerated circumstances.

184. (1) Every one who, by means of any electro-magnetic, acoustic, mechanical or other device, wilfully intercepts a private communication is guilty of an indictable offence and liable to imprisonment for a term not exceeding five years.

It is clear, then, that this law would prohibit one spouse from surreptitiously recording the telephone conversations of the other. A spouse would fall under “every one”. Additionally, the Canada Post Corporation Act [v] prohibits the opening of mail by anyone other than the addressee:

48. Every person commits an offence who, except where expressly authorized by or under this Act, the Customs Act or the Proceeds of Crime (Money Laundering) and Terrorist Financing Act, knowingly opens, keeps, secretes, delays or detains, or permits to be opened, kept, secreted, delayed or detained, any mail bag or mail or any receptacle or device authorized by the Corporation for the posting of mail.

Again, “every person” would include a spouse. It is understood that this applies to postal mail only; however, it raises the question of why the same guarantees of privacy aren’t afforded to electronic mail. There are clear laws prohibiting wiretapping, prohibiting the opening of postal mail addressed to somebody else, and regulating electronic surveillance in certain situations; however, the law appears to turn a blind eye to spousal spying and the technologies that enable it.

In the United States, the laws governing communications privacy similarly refer to “whoever” opens the mail or “any” unauthorized person recording telephone calls. American jurisprudence is rife with examples of spouses attempting to use electronic surveillance to the detriment of the other. Calman points to two cases in the 1970s where federal appellate courts carved out a marital exemption. In Simpson v. Simpson [vi], the Fifth Circuit held that although the “naked language” of the Wiretap Act seemed to prohibit all wiretapping, Congress could not have intended to intrude into the marital relationship. The court also did not wish to interfere with the interspousal tort immunity that then existed in a majority of states.

The Second Circuit reached a similar result in Anonymous v. Anonymous [vii], in which a husband recorded his wife’s telephone conversations with their eight-year-old daughter, hoping to use the tapes in a custody fight. While holding that Congress had not meant to create a blanket exemption for all spousal wiretapping, the court declined to apply the Wiretap Act. It held that this was a domestic conflict, which did not involve the privacy rights of anyone outside the family, and which would be better handled by state courts. Both decisions have been widely criticized and Simpson was overruled in 2003 in Glazner v. Glazner [viii], explicitly on grounds that the plain language of the statute precluded the spousal exemption.

One notable case comes from New Jersey. In M.G. v. J.C. [ix] a husband surreptitiously recorded his wife’s telephone conversations in the marital home. The conversations disclosed that the wife was having a non-heterosexual affair. The husband confronted the wife and threatened to use the tapes in a custody battle, as well as to disclose them to friends and family. As a direct result, the wife suffered extreme emotional distress and required extensive psychological care. The husband went one step further and played the tapes for the wife’s sister and offered to play them for other family members and friends. The wife sued for damages and obtained $10,000.00 in compensatory damages; in consideration of the husband’s willful and wanton disregard of the wife’s right to privacy, he was also assessed $50,000.00 in punitive damages. In Florida, an appellate court affirmed the trial court’s refusal to admit evidence obtained by a wife using the Spector surveillance software. The Court ruled that by installing the Spector spyware on her husband’s computer, and reading the logs, the wife had in fact broken the Florida wiretapping law, which provides that anyone who intentionally intercepts any electronic communication without appropriate authority commits a criminal act. [x]

Canadian jurisprudence does not appear to have considered spousal surveillance to the same extent as American case law. A case from the early 1990s, Seddon v. Seddon, [xi] considered surreptitious recordings obtained by a voice-activated device. The court was faced with an application to vary interim custody, and the 20 hours of recordings were supposed to demonstrate the mother’s shortcomings in dealing with her children. The court refused to vary custody and deferred the issue of admitting the recordings to the trial judge. The trial judge did not admit the recordings but did not explain his reasons. [xii]

The dearth of Canadian case law and statutory protections for individuals in a marriage may become problematic as these technologies become increasingly affordable. In some cases, the technologies directly break the law, [xiii] while in others they occupy a grey area. Although divorce laws are applied on a “no fault” basis, the product of surreptitious surveillance and recordings could readily be used in custody cases when determining the best interests of the children. The surveillance and recordings could also be used by one spouse against the other to leverage a more favourable property settlement where the recordings would be damaging or embarrassing. In the absolute worst cases, these technologies can be used by abusive spouses to further their ability to control and terrorize their partners. [xiv]


In the end, I decided that it would probably be healthier for my relationship to hold off on the GPS and to try to communicate the virtues of calling when you’re not coming home and keeping your cellphone turned on. Spouses are in a legally vulnerable position. The mutual trust and respect that forms the basis of these relationships can easily be exploited by one spouse in a climate where there are few repercussions.

Dina is a 2005 graduate of the University of Ottawa Common Law Program and a former student member of the idtrail project. She is currently practising labour and employment law in Ottawa and has a special interest in employee privacy issues.

[i] For those who don’t know me, I wouldn’t ever plant a GPS device on my husband. My postulation remains in jest.
[ii] (2005) 105 Colum. L. Rev. 2097.
[iii] Ibid. at 2113-14.
[iv] R.S., 1985, c. C-46, s. 184.
[v] R.S., 1985, c. C-10, s. 48.
[vi] 490 F.2d 803 (5th Cir. 1974).
[vii] 558 F.2d 677 (2d Cir. 1977).
[viii] 347 F.3d 1212 (11th Cir. 2003).
[ix] 254 N.J. Super 470 (Ch. Div. 1991).
[x] O’Brien v. O’Brien, 899 So. 2d 1133 (Fla. Dist. Ct. App. 2005).
[xi] 1993 CanLII 2597 (BC S.C.).
[xii] 1994 CanLII 3335 (BC S.C.).
[xiii] See http://www.usdoj.gov/criminal/cybercrime/perezIndict.htm “Creator and Four Users of Loverspy Spyware Program Indicted”.
[xiv] See http://redtape.msnbc.com/2007/08/leah-lived-for-.html “High-Tech Abuse Worse Than Ever”.

Cash(less) on the Road
By: Byron Thom

September 4, 2007


Credit cards meet databases, data mining and data aggregation. How is the database nation affected by a cashless society?

I recently had the opportunity to dwell upon the loss of anonymity as we continue the path to cashless-ness. It was on one of those west coast road trips that seem like the perfect way to cap off a summer.

Driving to South Bay

This August, a couple of friends and I drove down to the Bay Area of California from Vancouver to visit with friends working there. An interesting exercise we got caught up in was to see how difficult it would be to “stay off the radar”. Although we realized that giving out personal information is not itself dangerous, and merely creates a possibility for misuse, the recent discourse on domestic spying and the Patriot Act in the US got us thinking more deeply about sharing our spending habits with US businesses and the US government.

Like any good conspiracy theorist, travel begins by taking large wads of cash out from under the mattress - or a Canadian bank, if your mattress is rather thin. Minimizing our use of credit cards was the obvious step. This was also facilitated (others say caused) by the midsummer drop in the Canadian dollar and our desire not to be gouged by Visa’s exchange/conversion rate. [1]

So we used cash, and lots of it. All of our food, hotel rooms, and activities were anonymous transactions. When we stopped for gas, we prepaid the attendant in $20s. As Canadians, we had never seen so many green bills. Realistically, although not quite at the level of a wheelbarrow or a duffel bag, carrying enough money for three guys on an 11-day trip is a significant task in itself and more than a little inane.

For the most part, our experiment was successful. Although we were frustrated by the inefficiency of their single-colour bills, our system seemed to work: cash equalled anonymity in most situations we encountered. The one time it didn’t was when we came up against the dreaded loyalty card.

Safeway and the Loyalty Card

Loyalty cards are a common occurrence in today's consumer-driven world. It seems like everything from airline tickets to cups of coffee has a mode of tracking your purchases and collecting detailed information regarding your personal shopping habits. [2]

But loyalty systems also seem to “work”. The collection of points almost seems like a North American sport. Canadians seem to do anything for their points. [3] And sometimes using the loyalty system is almost forced upon you.

While at the local Safeway in California trying to buy some supplies, we encountered an insidious ploy to force shoppers to self-identify. It has always been part of the loyalty system to offer discounts to those who sign on; discounts of 5% to 10% are not uncommon. But at this particular Safeway, oranges were over $1/lb cheaper for those showing a Safeway card. $1/lb, or more than 30%!

With this kind of price differential, how can you resist? How can you weigh the intangible benefit of remaining anonymous against the prospect of saving money on fresh fruit? Although I knew about the privacy implications and why Safeway was operating in such a manner, my biggest concern wasn't about data mining but rather that I didn't have an American Safeway account and couldn't take advantage of the offer!

Luckily, or scarily, depending upon your point of view, the Safeway databases in the United States and Canada are linked, and my Canadian account worked just fine. On top of that, I didn't even need my physical card. Supplying my phone number was enough for the clerk to identify me by name and recite my home address. I'm sure in some way it is useful for Safeway to know that while on vacation in California I enjoy oranges, bananas and croissants for breakfast.

But data collection can go far beyond that. Demographic shopping information is big business in today's always-on marketing environment. Companies like Choicepoint and Acxiom aggregate and sell personal information to government and businesses on everything from health and insurance records to consumer purchasing information. [4] The US government even claims that these aggregators fill a necessary role in the “war on terror” by allowing the government to search for specific purchasing trends and monitor suspicious activity. [5] Vast databases are being filled and very few seem to mind that there are numerous instances of databases being hacked or leaked due to shoddy security practices and inadequate protections.

Adam Greenfield says in his book Everyware [6] that

We may have to accept that privacy as we have understood it may become a thing of the past: that we will be presented the option of trading away access to the most intimate details of our lives in return for increased convenience, and that many of us will accept this possibility.

But, seriously? Identity or oranges. The red pill or the blue. They were good oranges.

Final Thoughts

The beauty of technology is its ability to make life easier. A GPS system and a cell phone were lifelines in trying to navigate the complicated mass of streets and highways of California's Bay Area. But, there are always trade-offs. Simson Garfinkel's Database Nation [7] draws a picture of a frightening dystopia where identifiers such as credit and debit cards, cell phones and surveillance records link to vast databases of personal information that can track you from dawn to dusk and from birth to grave. It is already a reality. There are billions to be made. [8]

But it doesn’t have to be this way. Besides better laws to control the transfer of personal information, there are electronic alternatives to large wads of money. E-cash and smartcard systems are making the rounds, and they can be programmed with privacy in mind.

An example of an effective privacy-respecting system is the Octopus Card, implemented in Hong Kong. The Octopus Card, in one of its selectable iterations, allows its users to anonymously access the transit system as well as purchase items from a wide variety of stores. All this is done with a contactless RFID chip embedded in a card that boasts a 95% penetration rate. [9]

By not requiring any information to purchase, the Octopus Card has many of the same privacy benefits as cash. But not all implementations of this ubiquitous technology are so benign. [10] When done without sufficiently respecting privacy concerns, electronic cash is an effective form of surveillance allowing marketers to tie purchase and travel history to other demographic information.

Even more effective would be comprehensive legislation protecting consumer privacy, but it is difficult for legislatures to keep up with advancing technology. Safeguards need to be put in place so that the convenience of a cashless system benefits consumers rather than serving as a tool for marketers and data aggregators. Without that framework, and penalties to compel adherence, corporations will continue with policies that are in their own best interests, in an environment where the majority of consumers are unaware of and uninterested in personal data protection.

By the end of our trip, a little bit sunburned and a little bit poorer with cash supplies depleted, we broke down and resorted to credit. We were pretty good, though. Over an 11 day trip and 4000km, 10 days went by without using credit – although there were numerous instances where we had to self-identify. The fact of the matter is that credit is just too easy, and that's how they like it.

[1] Joe Paraskevas, “Credit Cards No Bargain Abroad” Winnipeg Free Press (August 22, 2007) http://www.winnipegfreepress.com/local/story/4025999p-4637816c.html
[2] CBC Marketplace, “Loyalty cards: Getting to know you” (October 24, 2004) http://www.cbc.ca/consumers/market/files/services/privacy/loyalty.html
[3] ACNielsen, “Loyalty Program Participation Rate on the Rise According to new ACNielsen Study” (September 16, 2005) http://www.acnielsen.ca/news/20050916.shtml
[4] EPIC, Choicepoint, online: http://www.epic.org/privacy/choicepoint/
[5] Richard Behar. “Never Heard of Acxiom?” (February 23, 2004) http://money.cnn.com/magazines/fortune/fortune_archive/2004/02/23/362182/index.htm
[6] Greenfield, Adam. Everyware: The Dawning Age of Ubiquitous Computing, (Berkeley: Peachpit Press, 2006).
[7] Garfinkel, Simson. Database Nation: The Death of Privacy in the 21st Century, (Cambridge: O’Reilly, 2000).
[8] Choicepoint alone reported revenue of $1.05 billion in 2006. See Google Finance, online: http://finance.google.com/finance?q=NYSE%3ACPS
[9] Opening Remarks by Mr. Alfred Ng, Assistant Government Chief Information Officer, at the NFC Conference 2007 of the ICT Expo (April 17, 2007) http://www.ogcio.gov.hk/eng/pubpress/esp070417.htm
[10] The Oyster Card in London is used to track customer transit movements. See Aaron Scullion. “Smart Cards Track Commuters” (September 25, 2003) http://news.bbc.co.uk/1/hi/technology/3121652.stm
Existing and Emerging Privacy-based Limits In Litigation and Electronic Discovery
By: Alex Cameron

August 28, 2007


Privacy law is increasingly important in litigation in Canada. Contemporary litigants routinely file requests for access to their personal information under PIPEDA and its provincial counterparts. Such requests can give a party a partial head-start on litigation discovery, or aid a party in rooting out information held by an opponent or potential opponent.

That said, with some possible room for improvement (at least in the case of PIPEDA), [1] data protection law in Canada takes a relatively hands-off approach when it comes to legal proceedings. Parties in legal proceedings are generally required to disclose information in accordance with long-standing litigation rules and are largely exempted from restrictions that might otherwise be applicable under data protection laws in other contexts. Yet, this does not mean that privacy considerations are not relevant or applicable to discovery in legal proceedings. This short article identifies some existing and emerging privacy-based limits in litigation discovery at the intersection between privacy interests and the need for full disclosure in litigation.

I. The Implied Undertaking Rule

As a starting point, it is important to note that privacy protections are built into discovery at a fundamental level. Information obtained through discovery is generally subject to an implied undertaking of confidentiality. This prohibits parties from using or disclosing information obtained during discovery for purposes outside of the litigation. The implied undertaking rule is based on a recognition by Canadian courts of the general right of privacy that a person has with respect to his or her own documents. [2] Many Canadian decisions cite the English text Discovery by Matthews & Malek for the principle behind the rule:

The primary rationale for the imposition of the implied undertaking is the protection of privacy. Discovery is an invasion of the right of the individual to keep his own documents to himself. It is a matter of public interest to safeguard that right. The purpose of the undertaking is to protect, so far as is consistent with the proper conduct of the action, the confidentiality of a party’s documents. [3]

A party may apply for relief from the implied undertaking rule where its interest in using the information outweighs the privacy interest protected, or where the document is otherwise available. [4] However, the courts do not take the principle of privacy behind the rule lightly, and such applications for relief are frequently denied, for example, on the basis that relief would be “an unwarranted intrusion on [the party’s] privacy rights”. [5]

Privacy has similarly been invoked as a limitation in defining what is and is not reasonable in discovery. For example, in Fraser v. Houston, the court declined to order production of the plaintiff’s financial documents on the basis of privacy concerns, despite concluding that the documents had “at least marginal probative value” to an allegation of economic duress:

I am satisfied that this line of questioning, […] could result in a detailed exploration of a man’s state of wealth or state of non-wealth as the case may be, and that that is a major invasion into a man's privacy which is generally only allowed in matters of execution on judgments that are not paid and perhaps, in some other circumstances. However, in the present case I am of the view that to allow an exploration of the nature that is requested by the defendants has a potential prejudicial effect upon Mr. Fraser's privacy which well outweighs any apparent probative value that there may be. [6]

Information potentially subject to disclosure in legal proceedings could be held directly by a party to the litigation or by a third party, such as an Internet service provider (ISP). In each of these categories, discussed in turn below, courts have balanced privacy considerations against the interests of full disclosure in litigation.

II. Information Held by a Party

A. Motions for Production

In Park v. Mullin, [7] a party applied for discovery of its opponent’s computer. Relying on earlier Supreme Court of Canada jurisprudence, Dorgan J. expressly drew on privacy considerations in refusing to order disclosure:

That the issue of privacy is a robust and real issue should be taken into account on an application such as this. In [A.M. v. Ryan, 1997 CanLII 403 (S.C.C.)], McLachlin J. commented on a party’s privacy interests in the context of an application for third party clinical records under Rule 26(11). […]:
... I accept that a litigant must accept such intrusions upon her privacy as are necessary to enable the judge or jury to get to the truth and render a just verdict. But I do not accept that by claiming such damages as the law allows, a litigant grants her opponent a licence to delve into private aspects of her life which need not be probed for the proper disposition of the litigation.
In my view, similar privacy concerns should be considered in a determination under Rule 26(10) where the order sought is so broad it has the potential to unnecessarily “delve into private aspects” of the opposing party’s life. [8]

Privacy also played an integral role in the leading case Desgagne v. Yuen, [9] where the court balanced the relevance of the information sought against other considerations, including privacy. The plaintiff had been injured in an accident, and the defendant sought production of her hard drive, Palm Pilot, video game unit, and photographs (both electronic and hard copies) taken since the accident. The defendant argued that the information was relevant since it would shed light on the plaintiff’s post-accident cognitive abilities and quality of life. Myers J. refused to order production of the plaintiff’s photographs because of privacy considerations:

In my opinion, the vacation photographs (and other photographs relating to the plaintiff’s family, friends and hobbies) sought have limited - if any - probative value on this matter. Production of these photographs, however, is invasive of the plaintiff’s personal life, because the photographs are largely of moments spent with her family and friends. The limited probative value considered against the invasiveness of production leads me to conclude that production of the photographs should not be ordered. [10]

Access to the plaintiff’s video game unit, Palm Pilot, and Internet browsing history was also denied, on the basis that their probative value was outweighed by the plaintiff’s privacy interest and the invasiveness of ordering their production. Similar reasoning was applied in Goldman, Sachs & Co. v. Sessions, [11] Ireland v. Low, [12] and Baldwin Janzen Insurance Services (2004) Ltd. v. Janzen. [13]

B. Motions for Preservation

In the context of preserving evidence for discovery, ex parte orders for the seizure of evidence (such as Anton Piller orders) allow litigation opponents access to documents that may contain personal or confidential information. Although such orders relate to the preservation of evidence, they form part of the overall process of document discovery. Given the invasiveness of such orders, privacy considerations can play an important role in Anton Piller cases. Courts urged a cautious approach to Anton Piller orders as early as 1981. In the words of Browne-Wilkinson J. (as he then was) in Thermax Ltd v. Schott Industrial Glass Ltd: [14]

As time goes on and the granting of Anton Pillar [sic] orders becomes more and more frequent, there is a tendency to forget how serious an intervention they are in the privacy and rights of defendants. One is also inclined to forget the stringency of the requirements as laid down by the Court of Appeal. [15]

In Harris Scientific Products Ltd. v. Araujo, [16] the Court found that an Anton Piller order had been improperly obtained and improperly executed. The plaintiff had misrepresented a material fact in its application for the order, and the court found numerous and serious breaches of the order’s execution by the plaintiff. Two of the more serious breaches included the seizure of material subject to solicitor-client privilege and the seizure of an audio cassette that clearly had no relation to the proceedings (“a state-assisted major invasion of Mr. Araujo’s privacy on an unrelated matter”) [17]. When considering the quantum of damages to be awarded, the court reiterated how seriously such breaches of privacy are taken:

Damages for trespass resulting from a defective Anton Piller order should not be so low as to condone the wrongdoing; the use of state powers to breach an individual’s privacy must be jealously guarded. Even where the target of the order has suffered no, or little, in the way of pecuniary damage, the level of damages awarded can be more than nominal and can reflect mental distress. [18]

Finally, in CIBC World Markets v. Genuity Capital Markets, [19] an order in the nature of an Anton Piller order was made for full preservation of “computers, Blackberries and other types of similar electronic devices of every nature and kind” including all devices “owned or used by others including spouses, children or other relatives”. [20] An order for a seizure of this magnitude obviously has a broad privacy impact. However, the order provided that a technical consultant would perform the imaging and indexing of information and that the imaged drives and information would not initially be shared with the plaintiffs. [21] The court addressed the matters of relevance and confidentiality in a subsequent order, holding that if there were confidential or irrelevant documents contained in the devices imaged, then the defendants could apply to have the full index of documents sealed and one made public that only contained relevant material. [22]

III. Information Held by a Non-Party

Privacy also plays an important role in contouring the limits of discovery from non-parties in litigation. A great deal of personal information is held by non-parties such as ISPs and banks, and it is increasingly sought out by parties in litigation.

In BMG v. Doe, [23] the Federal Court of Appeal considered an appeal by music providers who were seeking disclosure of the identities of customers alleged to have infringed copyrights by sharing music on peer-to-peer networks. Sexton JA, for the court, held that plaintiffs must conduct their initial investigations in a way that minimized privacy invasion; failure to do so could justify a court refusing to order ISPs to identify potential defendant customers as requested by the plaintiffs:

If private information irrelevant to the copyright issues is extracted, and disclosure of the user’s identity is made, the recipient of the information may then be in possession of highly confidential information about the user. If this information is unrelated to copyright infringement, this would be an unjustified intrusion into the rights of the user and might well amount to a breach of PIPEDA by the ISPs, leaving them open to prosecution. Thus in situations where the plaintiffs have failed in their investigation to limit the acquisition of information to the copyright infringement issues, a court might well be justified in declining to grant an order for disclosure of the user's identity. [24]

In other similar cases of discovery from non-parties, courts have relied on privacy as one of the key considerations factoring into whether production should be granted. For example, in Irwin Toy Ltd. v. Doe, [25] Wilkins J. provided the following view of privacy considerations: “some degree of privacy or confidentiality with respect to the identity of the internet protocol address of the originator of a message has significant safety value and is in keeping with what should be perceived as being good public policy.” [26] Although the court ordered the ISP to disclose the identity of the targeted ISP customer, it required the plaintiffs to meet a privacy-informed threshold test before disclosure would be granted.

Finally, discovery limits based on privacy considerations may also be developed after the fact, in the form of sanctions for wrongful behaviour. Where ex parte orders for evidence seizure (such as Anton Piller orders) are obtained or executed improperly in a way that has an impact on privacy, the courts may step in. This may result in the removal of the offending party’s counsel, or possibly even a stay of proceedings. For example, Grenzservice Speditions Ges.m.b.H. v. Jans [27] concerned an order in the nature of an Anton Piller order. The Court found that the plaintiff’s solicitor allowed flagrant abuses of privacy in the execution of that order, including questioning of the occupants of the home and videotaping of the proceedings surrounding the search. Because of the egregious nature of the infringement on the individual’s right to privacy, Huddart J. (as she then was) disqualified the plaintiff's counsel from further involvement in the case, in order to “assure the defendants and members of the public, all of whom are potential subjects of search and seizure orders, that their rights will be protected.” [28]


This article has briefly reviewed some of the rules and jurisprudence at the intersection between privacy and litigation discovery. Although data protection legislation has an impact on discovery, it generally leaves established litigation rules untouched. However, as seen in the cases reviewed here, there are a number of existing and emerging privacy-based limits on discovery in litigation. Conflicts between the need for full disclosure in litigation and privacy interests will certainly arise more frequently in light of the increasing prominence of electronic discovery and the increasing role that electronic devices play in the creation, processing and storage of personal information.

[1] Statutory Review of the Personal Information Protection and Electronic Documents Act (PIPEDA), Fourth Report of the Standing Committee on Access to Information, Privacy and Ethics, Tom Wappel, MP, Chairman, May 2007, 39th Parliament, 1st Session, online: Standing Committee on Access to Information, Privacy and Ethics
(Recommendation 9: “The Committee recommends that PIPEDA be amended to create an exception to the consent requirement for information legally available to a party to a legal proceeding, in a manner similar to the provisions of the Alberta and British Columbia Personal Information Protection Acts.”)
[2] See Lac d'Amiante du Québec Ltée v. 2858-0702 Québec Inc., 2001 SCC 51 (CanLII) at para. 61.
[3] Paul Matthews and Hodge M. Malek, Discovery (London: Sweet & Maxwell, 1992) at 253, cited in Goodman v. Rossi, [1995] O.J. No. 1906 (C.A.) (QL) at para. 29. See also Tanner v. Clark, 2003 CanLII 41640 (ON C.A.); Royal Bank of Canada v. Bacon (1999), 218 N.B.R. (2d) 98 (Q.B.); Vitapharm Canada Ltd. v. F. Hoffmann-La Roche Ltd., [2002] O.J. No. 1400 (S.C.) (QL).
[4] Letourneau v. Clearbrook Iron Works Ltd., 2003 FC 949 (CanLII) at para. 5.
[5] Kunz v. Kunz Estate, 2004 SKQB 410 (CanLII) at para. 17. See also Letourneau v. Clearbrook Iron Works Ltd., ibid.; L. H. v. Caughell, [1996] O.J. No. 3331 (Ont. Gen. Div.); Sezerman v. Youle, 1996 CanLII 5610 (NS C.A.).
[6] Fraser v. Houston, 1997 CanLII 3227 (BC S.C.) at para. 21.
[7] Park v. Mullin, 2005 BCSC 1813 (CanLII).
[8] Ibid. at para 21.
[9] Desgagne v. Yuen, 2006 BCSC 955 (CanLII).
[10] Ibid. at para. 49.
[11] Goldman, Sachs & Co. v. Sessions, 2000 BCSC 67 (CanLII).
[12] Ireland v Low, 2006 BCSC 393 (CanLII).
[13] Baldwin Janzen Insurance Services (2004) Ltd. v. Janzen, 2006 BCSC 554 (CanLII).
[14] Thermax Ltd v. Schott Industrial Glass Ltd, [1981] F.S.R. 289 (Ch. D.).
[15] Ibid. at 294.
[16] Harris Scientific Products Ltd. v. Araujo, 2005 ABQB 603 (CanLII).
[17] Ibid. at para. 103.
[18] Ibid. at para. 105.
[19] CIBC World Markets Inc. v. Genuity Capital Markets, 2005 CanLII 3944 (ON S.C.).
[20] Ibid. at para. 3.
[21] Persons connected to the defendants were entitled to review the information in order to assess whether to advance claims of privilege.
[22] CIBC World Markets v. Genuity Capital Markets, 2006 CanLII 11908 at para. 5.
[23] BMG Canada Inc. v. Doe, 2005 FCA 193 (CanLII).
[24] Ibid. at para. 44.
[25] Irwin Toy Ltd. v. Doe, [2000] O.J. No. 3318 (S.C.) (QL).
[26] Ibid. at para. 11.
[27] Grenzservice Speditions Ges.m.b.H. v. Jans 1995 CanLII 2507 (BC S.C.).
[28] Ibid. at para. 116.
Blogging While Female, Online Inequality and the Law
By: Louisa Garib

August 21, 2007


“Those who worry about the perils women face behind closed doors in the real world will also find analogous perils facing women in cyberspace. Rape, sexual harassment, prying, eavesdropping, emotional injury, and accidents happen in cyberspace and as a consequence of interaction that commences in cyberspace.”

- Anita Allen, “Gender and Privacy” (2000) 52 Stan. L Rev. at 1184.

In 2006, the University of Maryland’s Clark School of Engineering released a study assessing the threat of attacks associated with the chat medium IRC (Internet Relay Chat). The authors observed that users with female identifiers were “far more likely” to receive malicious private messages and slightly more likely to receive files and links. [1] Users with ambiguous names were less likely to receive malicious private messages than female users, but more likely to receive them than male users. [2] The results of the study indicated that the attacks came from human chat-users who selected their targets, rather than automated scripts programmed to send attacks to everyone on the channel.

The findings of this study highlight the realities that many women face when they are online. From the early days of cyberspace, women who identify as female have frequently been subjected to hostility and harassment in gendered and sexually threatening terms. [3] These attacks typically come from anonymous users.

Recent news articles from around the world have chronicled the latest spate of online misogyny. [4] Not only have the women bloggers in these cases been personally threatened and their images distorted and disseminated; in some cases their blogs and websites have also been subject to denial of service (DoS) attacks. Feminists [5] and women who blog about contentious political or social issues are not the only women singled out for abuse. Similar patterns of violent threats have also been directed toward women who blog about the daily life of a single mother, [6] computer programming, [7] and a variety of ordinary interests on sites with a female following but no feminist content or agenda.

Repeated online harassment has profound consequences for women’s equality online and in the real world. Online threats and attacks can have a chilling effect on women’s expression. [8] Some women may either stop participating in open online forums, except under the cloak of anonymity or pseudonymity, or self-censor their speech rather than risk being the subject of violent threats or DoS attacks. These choices reduce a woman’s online identity to that of an invisible woman, or a quieter, edited version of herself. Fortunately, women continue to blog and participate in cyber-life in the face of threats and harassment, with the support of both women and men in online communities.

Women’s retreat from the Internet can also have an economic impact on those seeking entry into technology-based labour markets. One prominent technology blogger observed: “If women aren’t willing to show up for networking events [because of harassment], either offline or online, then they’re never going to be included in the industry.” [9] Women’s absence from the creative process also has implications for equality in terms of influencing what kinds of technology are made, and what societal interests those innovations ultimately serve. [10]

To date, the law has provided a limited response to harms directed against women online. Traditional torts such as defamation are available, but are difficult to pursue against multiple, anonymous individuals who could be anywhere in the world. In light of the uncertainty in Canadian case law, [11] a claim for invasion of privacy would be very challenging to make in the absence of an appellate-level decision recognizing the right to privacy. An action for intentional or negligent infliction of emotional distress may also be possible, although plaintiffs must meet stringent standards to succeed. [12] Complainants may have difficulty overcoming the view that in the absence of physical contact, no real harm can be inflicted in the virtual world, particularly within the context of fantasy/gaming environments.

Without a more complete and critical examination of actions that target women in cyberspace, there is the danger of reinforcing substantive inequality by dismissing the individual and social harm experienced as a “natural” part of online life. Although tort actions represent some avenues for redress, they are individual, private law remedies that do not speak to the public nature of harms against women. While criminal sanctions for assault, obscenity, hate speech and uttering threats are possible, they would only apply if actions could be proved to fall within Criminal Code [13] definitions and precedents. It should not be forgotten that women continue to face difficulties with the law in seeking protection from, and compensation for, violence, harassment, discrimination and exploitation experienced in the real world. [14]

Given the market drive for more intense and realistic sensory experiences in the virtual world, it is not far-fetched to foresee online acts that more closely reflect conventional legal and social notions of physical and sexual violence in the future. [15] As “[t]he courts will increasingly be confronted with issues that are ‘lying in wait’ as virtual worlds expand,” [16] so too will feminists, lawyers, and policy makers be faced with opportunities to think about how to expand the law in favour of greater equality.

[1] Robert Meyer and Michel Cukier, “Assessing the Attack Threat due to IRC Channels,” (2006) University of Maryland School of Engineering, at 5-6 http://www.enre.umd.edu/content/rmeyer-assessing.pdf
[2] Ibid.
[3] See Rebecca K. Lee, “Romantic and Electronic Stalking in a College Context,” (1998) 4 Wm. & Mary J. Women & L. 373 at 404, 405-6, which discusses sexual harassment in e-mail messages, chat rooms, and Usenet newsgroups. A well-known account of sexualized threats towards female and androgynous virtual personas, and of the emotional harm experienced by the real-life participants, is in Julian Dibbell, “A Rape in Cyberspace,” My Tiny Life (1998), ch. 1 http://www.juliandibbell.com/texts/bungle.html.
[4] Jessica Valenti, “How the web became a sexists’ paradise” The UK Guardian (April 6, 2007) http://www.guardian.co.uk/g2/story/0,,2051394,00.html; Anna Greer, “Misogyny bares its teeth on Internet,” Sydney Morning Herald (August 21, 2007) http://www.smh.com.au/news/opinion/misogyny-bares-its-teeth-on-internet/2007/08/20/1187462171087.html;
Ellen Nakashima, “Sexual Threats Stifle Some Female Bloggers,” Washington Post (April 30, 2007)
[5] See Posts on “Greatest Hits: The Public Woman” and “What do we do about Online Harassment?” on Feministe http://feministe.powweb.com/blog/archives/2007/08/09/what-do-we-do-about-online-harassment/?s=online+harassment&submit=Search
[6] Ellen Nakashima, Washington Post, supra note 4.
[7] BBC News, “Blog Death Threat Sparks Debate” (27 March 2007) http://news.bbc.co.uk/1/hi/technology/6499095.stm
[8] Deborah Fallows, “How Women and Men Use the Internet,” Pew Internet & American Life Project (December 28, 2005), at 14 <http://www.pewinternet.org/pdfs/PIP_Women_and_Men_online.pdf>. The report states: “The proportion of internet users who have participated in online chats and discussion groups dropped from 28% in 2000 to as low as 17% in 2005, entirely because of women’s fall off in participation. The drop off, which occurred during the last few years, coincided with increased awareness of and sensitivity to worrisome behavior in chat rooms.”
[9] Nakashima, Washington Post, supra note 4.
[10] For a study on women, technology and power, see Judy Wajcman, TechnoFeminism (Polity Press: Cambridge, UK, 2004).
[11] Recently, lower courts in Ontario have found that complainants are free to make a case for invasion of privacy: Somwar v. McDonald’s Restaurant of Canada Ltd., [2006] O.J. No. 64 (Ont. S.C.J.) and Re: Shred-Tech Corp. v. Viveen, [2006] O.J. No. 4893. However, the Ontario Court of Appeal has explicitly found that there is no right to privacy in Euteneier v. Lee, [2000] O.J. No. 4533 (SCJ); rev’d [2003] O.J. No. 4239 (SCJ, Div Ct); rev’d (2005) 77 O.R. (2d) 621 (CA) at para 22.
[12] Jennifer McPhee, “New and Novel Torts for Problems in Cyberspace,” Law Times (30 July-August 6 2007) at 13.
[13] Criminal Code, R.S.C. 1985, c. C-46.
[14] Just two examples are: Jane Doe, The Story of Jane Doe: A Book About Rape (Random House: Toronto, 2003) and Patricia Monture-Angus, Thunder in my Soul: A Mohawk Woman Speaks. (Halifax: Fernwood Publishing, 1995). For an analysis of the limitations of the Supreme Court’s privacy analysis in obscenity, hate propaganda and child pornography cases, see Jane Bailey, Privacy as a Social Value - ID Trail Mix: http://www.anonequity.org/weblog/archives/2007/04/privacy_as_a_social_value_by_j.php
[15] Lydia Dotto, “Real lawsuits set to materialize from virtual worlds; Harm, theft in online gaming may land players in the courts: Precedents few, but Vancouver lawyer thinks cases coming” Toronto Star (2 May 2005) at D 04 (ProQuest).
[16] Ibid.

PETS are Dead; Long Live PETs!
By: A Privacy Advocate

August 14, 2007


In this Google Era of unlimited information creation and availability, it is becoming an increasingly quixotic task to advocate for limits on the collection, use, disclosure and retention of personally identifiable information ("PII"), or for meaningful direct roles for individuals to play regarding the disposition of their PII "out there" in the Networked Cloud. Information has become the currency of the Modern Era, and there is no going back to practical obscurity. Regarding personal privacy, the basic choices seem to be engagement or abstinence, so overwhelming are the imperatives of the Information Age, so unstoppable the technologies that promise new services, conveniences and efficiencies. Privacy, as we knew it, is dying.

Privacy advocates are starting to play the role of reactive luddites: suspicious of motives, they criticize, they raise alarm bells; they oppose big IT projects like data-mining and profiling, electronic health records and national ID cards; and they incite others to join in their concerns and opposition. Privacy advocates tend to react to information privacy excesses by seeking stronger oversight and enforcement controls, and calling for better education and awareness. Some are more proactive, however, and seek to encourage the development and adoption of privacy-enhancing technologies (PETs). If information and communication technologies (ICTs) are partly the cause of the information privacy problem, the thinking goes, then perhaps ICTs should also be part of the privacy solution.

In May the European Commission endorsed the development and deployment of PETs(1), in order to help “ensure that certain breaches of data protection rules, resulting in invasions of fundamental rights including privacy, could be avoided because they would become technologically more difficult to carry out.” The UK Information Commissioner issued similar guidance on PETs in November 2006(2). Other international and European authorities have released studies and reports discussing and supporting PETs in recent years. (see references and links below)

PETs as a Personal Tool/Application

Are PETs the answer to information privacy concerns? A closer look at the European and UK communiqués suggests otherwise - for all their timeliness and prominence, they reflect thinking about PETs that is becoming outdated. The reports cite, as examples of PETs, technologies such as personal encryption tools for files and communications, cookie cutters, anonymous proxies and P3P (a privacy negotiation protocol). Not a single new privacy-enhancing technology category appears here in seven years. Other web pages dedicated to promoting PETs list more technologies, such as password managers, file scrubbers, and firewalls, but otherwise don’t appear to have significantly new categories of tools.(3,4)

The general intent of the PETs endorsements seems clear and laudable enough: publicize and promote technologies that place more control into the hands of individuals over the disclosure and use of their personal information and online activities. PETs should directly enable informational self-determination. Empowered by PETs, online users can mitigate the privacy risks arising from the observability, identifiability and linkability of their online personal data and behaviours by others.

Unfortunately, few of the privacy-enhancing tools cited by advocates have enjoyed widespread public adoption or viability (unless installed and activated by default on users’ computers, e.g. SSL and Windows firewalls). The reasons are several and varied: PETs are too complicated, too unreliable, untrusted, expensive or simply not feasible to use. The threat model they respond to, and the benefits they offer, are not always clear or measurable to users. PETs may interfere with the normal operation of computer applications and communications; for example, they can render web pages non-functional. In the case of P3P, a privacy negotiation protocol, viable user agents were simply never developed (except for a modest but largely incomprehensible cookie implementation in IE6 and IE7). PETs simply haven't taken off in the marketplace, and the bottom-line reason seems to be that there are few incentives for organizations to develop them and make them available. (Where there has been a congruence of interests between users and organizations, some PETs have thrived, for example, SSL for encrypted secure web traffic and e-commerce. Perhaps the same is happening for anti-spam and anti-phishing tools, since deployment of these technologies helps to promote confidence and trust in online transactions.)

Perhaps the underlying difficulty is a conceptualization of PETs as a technology, tool or application exclusively for use by individuals, complete in itself, expressed perhaps in its purest form by David Chaum’s digital cash and Stefan Brands' private credentials. As brilliant as those ideas are, they have had limited deployment and viability to date. It seems that, to be viable, PETs must also meet specific, recognizable needs of organizations. Secure Sockets Layer (SSL) is a good example, responding as it did to well-understood problems of interception, surveillance and consumer trust online. SSL succeeded because organizations had a mutual interest in seeing that it was baked into the cake of all browsers and that its use was largely transparent to users.

Meanwhile, technology marches on. Many PETs weren't very practical to use. Sure, you can surf anonymously, if you don't mind a little latency and the need to tweak or disable browser functionality. But as soon as you want to carry out an online transaction, sign on to a site, make a purchase, or otherwise become engaged online in a sustained way, you have to identify yourself via a credit card, login credential, registration form, mailing address, etc. Privacy suffered from the 100th window syndrome: your house, just like your privacy, could be Fort Knox secure, but all it took was one open window and the security (privacy) was compromised. Privacy required too much knowledge, effort and responsibility on the part of individuals to sustain in an ongoing way. Online privacy was just too much work.

And, anyway, the benefits of online privacy tended to pale in the face of immediate gratification needs, and greater conveniences, personalization, efficiency, and essential connectedness afforded by consent and trust. The privacy emphasis slides inexorably towards holding others accountable for the personal information they must inevitably collect about us, not PETs. The only effective privacy option for most people in the online world is disengagement and abstinence.

PETs as a Security Technology

Certain consumer PETs have thrived, such as SSL, firewalls, anti-virus/anti-spyware tools, and secure authentication tools. Perhaps anti-phishing tools and whole-disk encryption will follow, if incorporated and activated by default in users’ hardware/software. But note: these are all largely information security tools. PETs have tended to become equated with information security. Safeguards are certainly an important component of privacy. We may not be able to stifle the global information explosion, but with appropriate deployment of PETs we can help ensure that our data stays where it belongs, and is not accessed inappropriately, tampered with, or otherwise subject to breaches of confidentiality, integrity and availability.

Personal security tools like firewalls, virus/spyware detection, encryption are available to individuals. To the extent that PETs have been adopted by organizations public and private, rather than users, they have been security technologies. Legal and regulatory compliance for managing sensitive information in accountable ways, and for notifying individuals of data breaches, as well as the desire to build brand and promote consumer trust, have helped drive innovation and growth in the data security technology products market. Organizations, both public and private, today are deploying information security technologies throughout their operations, from web SSL to encrypted backup tapes to data ingress and egress filtering, to strong authentication and access controls, to privacy policy enforcement tools such as intrusion detection/prevention systems, transaction logging and audit trails, and so forth. When it comes to organizational PET deployments in practice, security is the name of the game.

But are these technologies really PETs? They may be technologies that are deployed with the end user in mind (it is their data, after all), but they don't really involve the user in a meaningful way in the life-cycle management of the information. The security measures listed above are put in place mainly to protect the interests of the organization. Of course, some organizations do go further and put in place technologies that help express important principles of fair information practices, such as technologies that promote openness and accountability in organizational practices, that capture user consent and preferences, and that allow clients a measure of direct access and correction rights to the data and preferences stored about them - but this is still the exception rather than the norm.

PETs as Data Minimization Tools

More critically, security-enhancing and access/accountability technologies really miss out on the final ingredient of a true PET: data minimization. Information privacy is nothing if not about data minimization. The best way to ensure data privacy is not to disclose, use or retain the data at all. The minimization impulse is well captured by the fair information practices that require purposes to be specified and limited, and which seek to place limits on all data collected, used, disclosed and retained pursuant to those purposes. But such limitations run contrary to the impulse of most information-intensive organizations today, which is to collect and stockpile as much data as possible (and then to secure it as best as possible) because it may be useful later. More data, not less, is the trend. Why voluntarily limit a potential competitive advantage?

Apart from being a legal requirement, the arguments for data minimization should be compelling, beginning with fewer costs and liabilities associated with maintaining and securing the data against leaks and misuse, or with bad decisions based upon old, stale and inaccurate data, as well as reputation and brand issues. (Faced with growing public concerns about excessive data collection, use and retention, major search engines and transportation agencies alike are now adopting more limited data usage policies and practices, but of course these policy-level decisions are not PETs.)

The problem is that there are few benchmarks against which to judge whether data minimization is being observed via the use of technologies. How much less is enough to qualify as a PET? Is a networked, real-time passenger/terrorist screening program that flashes only a red, yellow or green light to front-line border security personnel a PET, because the program design minimized unnecessary transmission and display of sensitive passenger PII? Similarly, is an information technology that automatically aggregates data after analysis, or which mines data and computes assessments on individuals for decision-making, or which is capable of delivering targeted but pseudonymous ads, a true PET because the actual personal information used in the process was minimized so as not to be revealed to a human being? If a specific technology’s purpose for collecting, using, disclosing, and retaining customer or citizen data is limited only to "providing better services" and "for security purposes," can these technologies properly be considered PETs?
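The "aggregates data after analysis" pattern mentioned above can be made concrete with a small sketch. The following Python fragment is purely illustrative (the field name and suppression threshold are invented, not drawn from any real screening or advertising system): individual records are reduced to group counts, and groups too small to hide an individual are suppressed before anything is shown to a human being.

```python
# Hypothetical illustration of aggregation-as-minimization: replace
# individual records with per-group counts, suppressing small groups
# so that no single person's data is revealed.

from collections import Counter

MIN_GROUP_SIZE = 5  # invented threshold: smaller groups are suppressed


def aggregate(records, field):
    """Reduce a list of per-person records to counts per value of `field`,
    dropping any group too small to conceal an individual."""
    counts = Counter(r[field] for r in records)
    return {value: n for value, n in counts.items() if n >= MIN_GROUP_SIZE}


records = [{"city": "Ottawa"}] * 7 + [{"city": "Kanata"}] * 2
print(aggregate(records, "city"))  # {'Ottawa': 7} -- Kanata suppressed
```

Even here, the benchmark question remains: whether a threshold of 5 (or 50, or 500) is "enough" minimization to call the system a PET is exactly the judgment the text says we lack standards for.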

PETs as expressing the Fair Information Principles (FIPs)

PETs minimize data, but not all technologies that minimize data are PETs. Data minimization is a necessary but insufficient requirement to become a PET. Enhanced information security is a necessary but insufficient requirement to become a PET. User empowerment is a necessary but insufficient requirement to become a PET. Together, all these impulses are expressed in the ten principles of (CSA) fair information practices, all of which must be substantially satisfied, within a defined context, in order for a given technology to be judged a PET worthy of the name, and of public support and adoption:

To enable user empowerment, we find the (CSA) fair information practices of:
1. Accountability; 2. Informed Consent; 3. Openness; 4. Access; and 5. Challenging Compliance. These principles and practices should be substantially operationalized by PETs.

To enable data minimization, we find the CSA fair information principles requiring 1. Identifying Purposes; 2. Limiting Collection; and 3. Limiting Use, Disclosure, and Retention.

Finally, the CSA Privacy Code calls for Security (Safeguards) appropriate to the sensitivity of the information.

[Comment: The CSA principle ‘Accuracy’ can fit under all three categories, since it implies a right for users to inspect and correct errors, as well as an obligation upon organizations to discard stale and/or inaccurate data, as well as a security obligation to assure integrity of data against unauthorized tampering and modification.]
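The grouping above can be sketched as a simple checklist. The following Python fragment is a hypothetical illustration, not an official CSA assessment tool: it encodes the ten principles under the three impulses, treats Accuracy as a standalone check per the comment above, and judges a technology a PET only when every principle is substantially satisfied.

```python
# Hypothetical sketch (not an official CSA tool) of judging a technology
# against the ten CSA fair information principles, grouped into the three
# impulses described above.

CSA_PRINCIPLES = {
    "user_empowerment": [
        "Accountability", "Informed Consent", "Openness",
        "Access", "Challenging Compliance",
    ],
    "data_minimization": [
        "Identifying Purposes", "Limiting Collection",
        "Limiting Use, Disclosure, and Retention",
    ],
    "security": ["Safeguards"],
}

# "Accuracy" spans all three categories, so it is checked on its own.
ALL_PRINCIPLES = [p for group in CSA_PRINCIPLES.values() for p in group]
ALL_PRINCIPLES.append("Accuracy")


def is_pet(assessment):
    """`assessment` maps a principle name to True when substantially
    satisfied. Every principle must hold for the technology to qualify."""
    return all(assessment.get(p, False) for p in ALL_PRINCIPLES)


# A security-only tool fails: it satisfies Safeguards (and even Accuracy)
# but none of the empowerment or minimization principles.
security_only = {"Safeguards": True, "Accuracy": True}
print(is_pet(security_only))  # False
```

The point of the sketch is the `all(...)`: satisfying a subset of principles, however well, is not enough on this view.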

A more comprehensive approach to defining and using PETs is required - one that clearly accommodates the interests and rights of individuals in a substantial way, yet which can be adopted or at least accommodated by organizations with whom individuals must inevitably deal. This requires a more systemic, process-oriented, life-cycle, and architectural approach to engineering privacy into information technologies and systems.

PETs as we know them are effectively dead, reduced to a niche market for paranoids and criminals, claimed by some security products (e.g., two-factor authentication dongles) or else deployed by organizations as a public relations exercise to assuage specific customer fears and to build brand confidence (e.g. banks' anti-phishing tools, web seals).

PETs as Information Architecture?

The future of PETs is architecture, not applications. Large-scale IT-intensive transformations are underway across public and private sector organizations, from real-time passenger screening programs and background/fraud checking, to the creation of networked electronic health records and eGovernment portals, to national identity systems for use across physical and logical domains. What is needed is a comprehensive, systematic process for ensuring that PETs are fully enabled and embedded in the design and operation of these complex data systems. If code is law, as Lawrence Lessig posited, then systems architecture will be the rightful domain for privacy technologies to flourish in the current Google era.

The time has come to speak of privacy-enabling technologies and systems that help create favorable conditions for privacy-enhancing technologies to flourish and to express the three essential privacy impulses: user empowerment, data minimization, and enhanced security. Objective and auditable standards are essential preconditions.

Examples abound: privacy-embedded "Laws of Identity" can enable privacy-enhanced identity systems and technologies to emerge, as can the development of 'smart' data that carries with it enforceable conditions of its use, in a manner similar to digital rights management technologies. Another example is intelligent software agents that can negotiate and express the preferences of individuals, and take action on their behalf, with respect to the disposition of their personal data held by others. Yet another promising development is new and innovative technologies that enable secure but pseudonymous user authentication and access to remote resources. These and other new information technologies may be the true future of PETs in the Google Era of petabytes squared, and worthy of public support and encouragement.
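The 'smart data' idea can be sketched in a few lines. This is a minimal, hypothetical sticky-policy wrapper (the class and field names are invented for illustration, not taken from any real system): the data travels with machine-readable conditions, and access is granted only when the stated purpose is permitted and the data has not expired.

```python
# Hypothetical "sticky policy" sketch: personal data carries enforceable
# conditions of use, checked before the payload is ever released.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class StickyPolicy:
    allowed_purposes: set   # e.g. {"billing"}
    expires: datetime       # data must be unusable after this time


@dataclass
class SmartData:
    payload: str
    policy: StickyPolicy

    def access(self, purpose: str, now: datetime) -> str:
        # Enforce the attached conditions before releasing the payload.
        if purpose not in self.policy.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        if now > self.policy.expires:
            raise PermissionError("data has expired and must be discarded")
        return self.payload


record = SmartData(
    payload="alice@example.com",
    policy=StickyPolicy({"billing"}, datetime(2008, 1, 1)),
)
print(record.access("billing", datetime(2007, 8, 14)))  # released
```

A real deployment would need the policy to be cryptographically bound to the data so it cannot simply be stripped off, which is where the analogy to digital rights management comes in.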


So, to summarize: the essential messages of this think piece are:
* PETs are attracting renewed interest and support, after several years of neglect and failure
* PETs are an essential ingredient for protecting and promoting privacy in the Information Age (along with regulation and awareness/education), but their conception and execution in practice is highly variable and still rooted in last-century thinking.
* True PETs should incorporate into information technologies ALL of the principles of fair information practices, rather than any subset of them.
* In today's Information Age, true PETs must be comprehensive, and involve all actors and processes. Evaluating PETs will increasingly be a function of whole systems and information architectures, not standalone products.
* It may be more useful to think of privacy-enabling technologies and architectures, which enable and make possible specific PETs.


(1) European Commission Supports PETs
Promoting Data Protection by Privacy Enhancing Technologies (2 May 2007)
Background Memo (2 May 2007): http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/07/159&format=HTML&aged=0&language=EN&guiLanguage=en

(2) Office of the UK Information Commissioner
Data Protection Technical Guidance Note: Privacy enhancing technologies (Nov 2006)

(3) Center for Democracy and Technology
Page on Privacy Enhancing Technologies

(4) EPIC Online Guide to Practical Privacy Tools

Other Useful Resources:

Dutch Ministry of the Interior and Kingdom Relations, the Netherlands
—Privacy-Enhancing Technologies. White paper for decision-makers (December 2004)

OECD Directorate For Science, Technology And Industry
—Committee For Information, Computer And Communications Policy
Inventory Of Privacy-Enhancing Technologies (January 2002)

Danish Ministry of Science, Technology and Innovation
—Privacy Enhancing Technologies
Report prepared by the META Group v1.1 (March 2005)

Office of the UK Information Commissioner
—Data protection best practice guidance (May 2002)
Report prepared by UMIST

—Privacy enhancing technologies state of the art review (Feb 2002) www.hispec.org.uk/public_documents/7_1PETreview3.pdf

EU PRIME Project
—White paper v2 (June 2007)

Andreas Pfitzmann & Marit Hansen,
TU Dresden, Department of Computer Science, Institute For System Architecture
—Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and Identity Management - A Consolidated Proposal for Terminology (Version v0.29 - July 2007)

EU FIDIS Project
—Identity and impact of privacy enhancing technologies (2006)

Roger Clarke
—Introducing PITs and PETS Technologies: technologies affecting privacy (Feb 2001)

Office of the Ontario Information and Privacy Commissioner & Dutch Registratiekamer
—Privacy-Enhancing Technologies: The Path to Anonymity (Volume I - August 1995)

George Danezis, University of Cambridge Computer Lab (Date Unknown)
—An Introduction to Privacy-Enhancing Technologies
By: Jeremy Hessing-Lewis

August 7, 2007


A short story on the ID Trail


Incorrect username or password. Please try again.

He tried again.


Incorrect username or password. Please try again.

He tried again.

Incorrect username or password. Your ID is now locked. Please proceed to the nearest SECURE ID Validation Center for formal authentication. The nearest location can be found using the GoogleFED Search Tool.

After sitting stunned for a couple moments, Ross began to appreciate the full gravity of the situation. His ID was frozen. Everything was frozen. He just couldn't remember his damn PIN and that was the end of it. No PIN. No renewal. No ID. No authentication. No anything.

Since the government had launched the Single Enhanced Certification Using Reviewed Examination [SECURE] initiative, he really hadn't thought too much about it. Aside from a couple of headlines describing massive budget overruns and the usual privacy geeks heralding the end of the world, the New Government had pushed everything through without much fanfare.

That was four years ago. Since Ross already had a passport, the conversion to SECURE ID was pretty painless. He vaguely remembered something to do with a strand of hair and that they didn't even give him a card or anything, just read him his reauthorization PIN, thanked him for his time, and took his passport.

Since the carbon rationing system came into place in 2012, Ross really hadn't traveled anywhere off-line. There was no way he was going to save up carbon credits just to take a damn flight to some 45° cesspool. Plus, Google Travel could put him anywhere in the world in two clicks. A couple weeks ago he made some sangria and hit up all the top clubs in Spain. He even bought a t-shirt at one, which arrived in the mail two days later. That's why the SECURE ID renewal caught him off guard – it just rarely came up for someone in his position.

Ross was just trying to buy a new snowboard for his Third Life avatar when things went wrong. He was notified that the transaction could not be processed because his GoogleCash account had been frozen pending authorization of his SECURE ID. Like just about everything else on or off-line, his identity was always confirmed back to this single source. While his ID Keychain supported a Federated identity management system in which he currently had 47 profiles (male, female, and gecko), they were all meaningless without reference to the master ID.

The SECURE system required multiple layers of redundancy. The PIN component would be required in addition to variable biometric authenticators. He had specifically written his 10 digit reauthentication PIN on a piece of paper and put it somewhere “safe.” So much for high-tech. That was four years ago and now, “safe” could be anywhere. The idea behind the routine expiry of SECURE IDs was to prevent identity theft from the deceased using stolen biometrics. Grave-robbing had been rampant for the first couple years of the program.

Ross grabbed his jacket and headed off to the SECURE ID Validation Center downtown knowing full well that he was as good as useless until he could authenticate himself.


The SECURE ID Validation Center was run by Veritas-SECURE, a public-private-partnership born of the New Deal 3.0. The idea was to exploit private-sector efficiencies while delivering top-notch public services. This P3 mantra had been something of an ongoing joke for years now but the government was unlikely to admit the error of its ways any time soon. Interestingly, the company that won the contract also ran the municipal waste disposal system. The critics couldn't stop talking about “synergies” and “leveraging technical expertise” when the winning bid was announced.

Ross arrived at the blue-glassed Veritas facility just after noon. He couldn't even buy lunch because the digital wallet in his phone had been deactivated when his SECURE ID was frozen. The day before, Ross had been mired in expense reports, cursing his multiple digital cash accounts associated with different profiles, devices, and credit sources.

Today, he had been thwarted by the keystone ID, the one that held everything else together and couldn’t be separated from his DNA.

The line for Formal Authentication zigzagged around two corners of the building against a cold marble wall. The only consolation was a nice big overhang covering the identity refugees from a light rain. He stepped into line behind a professional looking man with a brown leather briefcase and gray sports jacket.

Normally, he would've passed the time by watching movies on his iPod. Along with everything else, the DRM on his iPod was frozen pending authentication. The days of watching movies, or doing much of anything without authentication had evaporated long ago.

After a couple minutes of preliminary boredom, he tapped the gentleman with the briefcase on the shoulder asking with generalized ennui “Is this line even moving?”

“It depends how you define moving,” the man replied. “If you're talking physics, then the answer is not for at least an hour. If you mean the decay of civil rights, then I guess you might say that we’re racing straight to the bottom.”

Somewhat surprised by the unprovoked disapproval, Ross was just happy to have a conversation to pass the time. He nodded his head enthusiastically. “This new ID system is only moderately infuriating though” he said. “I just hate these queues and the way they always try to make you feel like you're just another number.”

“Are you kidding? I would love nothing more than to be a number. Instead, I'm cursed with Jihad!” the man spat the final words.

Ross glanced up anxiously looking for the nearest Proxycam. Those things all had microphones and speakers these days and he was sure that the unit would ask the two of them to step out of line for questioning. Nothing happened.

The man quickly realized his error and extended his right hand, saying, “I’m very sorry if I shocked you. My name is Jihad Azim, but everyone calls me Azi. I’m a university professor.”

Ross relaxed immediately, shaking the man’s hand as Azi continued “It’s just that my name brings me no end of grief. Jihad is actually a somewhat common name, but that sure isn't what you find with a Google search. The reason I'm stuck in this forsaken line is that they've red flagged my SECURE ID again! It happens every couple of weeks. I'm supposed to fly to Scottsdale for a conference tomorrow, but I'm pretty much grounded until I get this cleared up. The minions at the airport could neither confirm nor deny that the sky was blue, so I had to come down here. That's why I'd like nothing more than to be identified as a number. Then at least some fool with a grade 9 education wouldn't be fighting a holy war against my parents’ choice of name.”

“But couldn't you just change your name?” Ross asked, without giving it much thought.

“I could, but then I'd have a yellow flag on my ID noting that there'd been a change to my identity profile. That could be even worse. A colleague of mine has retinal implants and had to have her SECURE ID changed accordingly. Now she can't do anything without being questioned about the changes.” Azi said.

“I couldn't help but hear you two,” said a woman who had approached behind Ross and was pushing a stroller. “I know that this new system has been hard on some people, but you've gotta admit that this whole country is safer for it.”

Ross could see that this logic was going to make Azi angry, so he intervened first, questioning “But don't you think that sacrificing anonymity and privacy in the name of security is something of a false dichotomy?” Ross wasn’t entirely sure what he’d said, but he'd heard the line before and was satisfied that it sounded smart.

“Well, there might have been a better way,” she replied, “but I don't mind sacrificing a little privacy. I don't have anything to hide. And my daughter here, I'd gladly sacrifice my privacy for the security of my daughter. I can't bear to think of all those sickos out there. We’re here today for her first formal authentication so that they can confirm the samples they took at birth. Did you know that the SECURE ID is issued at birth now? I feel better knowing that she's already in the system.”

“You people are so out of it,” a new voice chimed in, “haven't you ever stopped to ask what an ID really is? It's not a number or name.” It was a young woman sitting crosslegged in front of Azi and wearing a pair of yoga jeans.

She continued “Identity doesn't come from some guy behind a computer representing the Government. Identity is how you tell the world who you are. My identity changes all the time. Like when I get a new job, or new friends, or a new hook-up. It seems like the older you get, the more attached you get to who you are. I don’t really care, for the last two weeks my avatar was a gecko.”

“No kidding.” Ross nostalgically remembered going through his gecko days.

The young woman cleared her throat and continued “The point is, you can't let The Man tell you who you are. It should be the other way around. We should control our identities.”

“So why are you here then?” the new mother retorted sarcastically. “Shouldn't you be busy launching DoS attacks against the ‘corporate agenda’ and all the complicit government agencies that hold it together?”

“I want to go volunteer at a monastery in New Burma, but The Man won't let me leave the country without a valid SECURE ID.”

Ross jumped in, noting “Hey, I was at a New Burmese monastery a couple of weeks ago with Google Travel. Because of the time change, prayers don’t begin until four in the afternoon our time. It’s perfect.”

The young woman was clearly not impressed. “No, like a REAL monastery with air and things you can touch.”

Ross had this debate all the time. “But…”

Azi was clearly not impressed by where this was going and interrupted “Well, I appreciate your helpful commentary. On the way to Scottsdale, maybe I’ll try ‘I am whoever I say I am and I choose to fly anonymously. If you absolutely must be provided with an ID, I happen to enjoy green tea, string theory, and the colour orange. Now please let me board the plane.’”

As Azi was dismissing the young woman, a man in a gray suit neared Ross and stared blankly into the horizon of the queue. The man's pale face looked like he’d seen a ghost.

“Hey, so what's your story?” Ross couldn't help but ask.

“Ummm, I don’t know” the man replied.

“You don’t know? How can you not know?” Ross said.

“I just don’t know who I am anymore,” the man stuttered. “My identity has been stolen.”

The others gasped.

“Well, it's not that I don't know who I am, it’s just that the system has canceled my identity file as a result of concurrent use. There’s no way to verify that I am who I say I am because all my biometrics have been compromised.”

The others remained silent. The SECURE ID system had been designed to be unbreakable. The authentication routine was so strong, and identity theft so difficult, that victim recovery was nearly impossible. Everybody knew this. The only option was to create a new ID and start from scratch. The media labeled these victims “Born Agains.” Ross hadn't actually met one, but he’d read a couple of blogs describing depressing encounters with these unfortunate souls. It was like being killed, but with the body left to rot.

The young woman stood up, approached the identityless man, gave him a hug and gently requested: “Please, go in front of me.” The others tried not to make eye contact.

Out of sight and far down the line came a call for: “NEXT!” The line moved forward one meter.

Jeremy Hessing-Lewis is a law student at the University of Ottawa. He is writing a travel guide entitled “101 Must See Hikes in Google Maps” as well as his first novel “Things That are Square” (2009).
Haste Makes Waste: Attending to the Possible Consequences of Genetic Testing
By: Kenna Miskelly

July 31, 2007


Technological advances are making genetic testing and screening easier and more accessible. My concern is that this ease and accessibility mask the fact that these are not straightforward decisions to be made quickly. Such decisions may include whether or not to terminate a pregnancy if your fetus has Down syndrome, whether to have prophylactic surgery if you test positive for breast cancer genes, whether to be tested for a late-onset disease that may have no treatment or cure, and whether or not to submit to genome testing without knowing what the future will hold in terms of discrimination and possible privacy threats. The reasons for genetic testing have real-world consequences that are often not spelled out before the testing takes place.

A recent article in the Globe and Mail discusses new recommendations that pregnant women over the age of 35, but under the age of 40, should no longer undergo routine amniocentesis. It has been standard practice that amniocentesis be available to women over the age of 35 because the probability of conceiving a child with a disability or genetic condition increases with maternal age. New non-invasive screening tests such as maternal blood tests and the nuchal translucency test (a detailed ultrasound taken at 11-13 weeks gestation that measures the fluid levels behind the fetus’s neck) can now indicate whether further testing is indicated or whether the risk of abnormalities is low. This development is very positive as amniocentesis is invasive and carries with it a risk of miscarriage.

However, the article states, “40 is the new 35 when it comes to being labelled a high-risk pregnancy.” [1] The implication here, repeated several times throughout the article, is that pregnant women who are over 35 no longer face the same risks associated with this maternal age; it seems that somehow their risks have decreased, which is not true.

The article also quotes a physician as stating,

“Even if you’re over 40, your risk may be that of a 20-year-old. Screening is making you different from your age.” [2]

Obviously the screening tests are a positive medical advance. Yet coupled with the misleading implication that risks have somehow decreased, what we see here is often the case: the language of genetic discoveries and genetic technologies seems to support a “wait and see” attitude – find out what the testing tells you, then decide what to do. It sometimes appears a bit like a lottery.

Francis Collins, director of the National Human Genome Research Institute, has remarked that genetic technologies are much like new drugs – we must see what the general reactions are to them after they are first introduced. And many authors advocate that we should address concerns as they appear, rather than limiting technological advances with unnecessary policies. The “wait and see” attitude of the researchers developing the technology should not be confused with the “wait and see” attitude of the doctor performing the testing, though the two seem to lie on a continuum.

Sonia Mateu Suter notes from her research as a genetic counsellor for prospective parents, “little emphasis is placed on the many emotional and psychological ramifications of undergoing such testing, leaving patients unprepared for certain choices and emotional reactions.” [3] She feels that this has “impoverished the informed consent process”. [4] Likewise, a “wait and see” attitude ultimately diminishes autonomy because we are not able to make choices we might have made if we had a comprehensive understanding of all the options and consequences.

Much is unclear as new technologies emerge. What we do know is that the vast majority of those individuals at risk for Huntington’s disease choose not to be tested for the HD gene. A child whose parent has had Huntington’s has a 50% chance of inheriting the gene and developing the disease. There are no cures or preventative measures. Yet at-risk individuals also have a 50% chance of not inheriting the gene and never developing Huntington’s disease. The choice not to be tested struck me as surprising until I read the stories of those at risk and those living with the knowledge that they are carriers. Some of the stories such as Katharine Moser’s (http://www.hdfoundation.org/news/NYTimes3-18-07.php) really put in perspective what it must be like to live with the end of your life before you. She had prepared herself with the requisite six months of counselling when she decided to be tested at age 23, yet admitted she never really believed the test result would be positive. Is it fair for certain people to live this way when no one’s future is certain?

Many would say that genetic testing for other conditions such as Alzheimer’s disease or Multiple Sclerosis, which may become reality in the near future, is not on par with testing for the HD gene. Likely such testing will be in terms of probabilities rather than certainties, as with the current testing for the breast cancer genes – a positive test translates into an increased risk of developing breast, uterine, and ovarian cancer, but does not mean a woman will certainly get any of these. Nor does it mean that a woman without these genes is immune to these illnesses. Most likely this difference is part of the reason that intensive counselling is often not part of the testing process, though many acknowledge that the system would be improved if it were. Yet I wonder what the idea of an “increased risk” will mean to people and their families, especially for diseases with no known cure. What will the consequences be for them? Will it be easily accepted as a “probability” – something to think about or watch out for – or will they feel that the die is cast and they cannot escape their fate? It seems that the outcome will depend on each situation and individual, which underlines the inappropriateness of the “wait and see” attitude.

As testing advances, home testing, where an individual sends a sample away and waits for results, may become more commonplace. Such scenarios have serious implications for privacy and ethics. I read a story of a man who did a home paternity test behind his wife’s back (this is actually encouraged on one paternity website as a way to gain initial information before proceeding with overt testing). The man confronted his wife with the test results that showed he was not the biological father of their children. She flew into a rage and told him he would never see the kids again. While he still has rights as a father, even if he is not a biological one, he now has to battle for these in court. He confessed that he had never fully thought through the consequence of a negative result and deeply regretted doing the test. He was unsure what relationship to have with his kids now, how to think of them, whether he was really their “daddy”. My point here is not to begin a commentary on paternal rights – I mean merely to highlight that this man felt he had acted without fully considering how the test results would affect him.

As genetic testing becomes easier and more commonplace, concerns over emotions, psychological states and privacy may be easily overlooked to the point that they are seen as unimportant. Yet to promote autonomous choices we must attend to genetic decision-making in context and encourage individuals to think about what test results will mean to them, their families, and their future. This is not to decry genetic testing; it is to open a dialogue about choices before decisions need to be made. Let’s not “wait and see” what the future holds if diminished autonomy becomes an accepted part of our medical system.

[1] Pearce, Tralee. 2007, July 10. Amniocentesis: New guidelines. 40 is the new 35 for test. Globe and Mail, L1 and L3; p.L1.
[2] Ibid, at p.L3.
[3] Mateu Suter, Sonia. 2002. The routinization of prenatal testing. American Journal of Law & Medicine, 28: 233-270; p.234.
[4] Ibid.
Collision Course? Privacy, Genetic Technologies and Fast-tracking Electronic Medical Information
By: Marsha Hanen

July 24, 2007


Andre Picard, writing in the Globe and Mail on June 14, made a poignant plea for speeding up the move to electronic health records for all Canadians. He says:

It’s not enough to create health records; it must be done right. That means including information on visits to physicians, hospital stays, prescription drugs, laboratory and radiology tests, immunization, allergies, family history and so on. It also means integrating all these records and making them compatible in every jurisdiction…

Picard points out that medical records should be accessible to all health professionals we consult, from the pharmacist close to home through the emergency room at the other end of the country. And then he adds, in parentheses: “With the requisite protection of privacy, of course.”

And there’s the rub. Just what is the requisite protection of privacy, and how should it be implemented? For example, in British Columbia a few years ago there was a huge and quite public to-do about the contracting out of the Medical Services Plan databases to a U.S. company, and the need to protect the information from unwarranted access through the Patriot Act. The B.C. Privacy Commissioner, David Loukidelis, played a very visible role in helping to achieve a reasonable understanding of what would be appropriate in this case. But it turned out that, a year after contracting out the information collection and management to EDS Advanced Solutions, an employee of the company spent several months improperly and repeatedly surfing the files of sixty-four individuals, including the file of a woman whose ex-husband had claimed he could find out where she lived, despite her efforts to keep her location secret. And the source of that information, apparently, was to be the employee who had been doing the surfing. As it happened, none of this had anything to do with access through the Patriot Act.

EDS performed an audit that revealed “some unexplained accesses”, and then claimed there had been no privacy violations because they found no evidence that the information had actually been disclosed to anyone! Furthermore, it took nine months before the woman who had complained received notification about what had actually happened and what lay behind her ex-husband’s claims that he could find her. Various safeguards were subsequently put in place, but one can’t help wondering how much “snooping” of electronic health records might take place without being detected, especially considering the access that vast numbers of employees of pharmacies, hospitals and physicians’ offices would have to such information.

Meanwhile, British Columbia has embarked on a major effort to digitize all medical records, including providing electronic medical records technology to groups of doctors’ offices, much along the lines advocated by Picard. Indeed, B.C. plans to be a leader in Canada in moving from paper records to electronic ones. It is clear that such a project could improve medical care enormously by integrating records so that each physician or nurse or pharmacist with whom we interact has access to an overview of our medical histories and records. Advantages may include the fact that tests don’t need to be repeated endlessly, that many errors can be avoided, and that some diagnoses can be made without requiring patients to travel long distances. All good. But since many people are quite concerned about preserving their medical privacy, there is a remaining worry revolving around how we are to ensure the protection of that privacy within the system, and the related autonomy and dignity of patients.

So the first questions are about who needs to have access to all this information, and how we can ensure that access is not granted beyond those groups, except under carefully monitored conditions. Secondly, we need to devise ways to ensure that the information is never used to the detriment of patients, that patients are fully informed at all stages, and that they are involved to whatever degree they wish to be in all decisions about their testing, their results and their treatment. All of these are standard issues in designing good medical care plans – it is just that some of them are more likely to lead to problems when medical records are computerized and networked.

The situation becomes more complicated when we add the more recent developments in genetic and genomic technologies, which will, if they haven’t already, expand not just the amount of information available about individuals, but also the kind of information that is gathered. Individuals who agree to the collection of information are usually assured that their privacy will be protected by secure coding of the information and other means. But to what extent are these measures monitored, and how easy or difficult is it for the codes to be cracked? Even if the coding is secure now, it may well be easy to decipher with new information technology methods.

To be sure, not everyone worries about the privacy implications of these technologies. There has been much discussion surrounding the sequencing of individual genomes, two of the most recent highly publicized examples being J. Craig Venter, former president of the Celera Corporation and James D. Watson, one of the scientists who formulated the double helix model for DNA. And amidst the excitement about these developments the likelihood increases that certain genetic information pertaining to individuals will become part of their medical records and, in due course, so will their entire genomes. No doubt for some purposes this is all to the good in the sense that more information about an individual may well make it possible to provide better care.

But what if making this information available leads to refusal of treatment for people with certain “genetic diseases” or various other forms of discrimination such as denial of insurance or employment? Or what if the individual simply wishes to keep certain matters about his genetic make-up private? Or what if he does not wish to know that he is at risk for a disease such as Alzheimer’s, which manifests itself later in life? Or what if someone’s records are retained and used at a later time in a non-secure environment? We must also remember that genetic information about a given individual tells us quite a bit about his or her family, which may expose many people to having their genetic information widely known, whether or not they have consented to such exposure.

In discussions about information technology and medicine, one commonly heard complaint is that privacy advocates are holding up progress by making it difficult to implement the obviously necessary computerization and integration of medical records. On the other side, one might argue that the focus on technology in this area carries with it the danger that privacy considerations will be relegated to the sidelines and may even come to be seen as insignificant. Unfortunately, a consequence of failing to respect privacy is that the dignity and autonomy of individuals is likely to be impaired. In that case, we will all pay the price.

By: Meghan Murtha

July 17, 2007


Planning to litter, hang around looking intimidating, or just generally be a public nuisance in England? Careful where you do it.

This past spring, Britain, already host to more video surveillance cameras than any other country in the world [2], rolled out a new crime prevention measure: ‘Talking CCTV’ (closed-circuit television). Government officials describe the new development as “enhanced CCTV cameras with speaker systems [that] allow workers in control rooms to speak directly to people on the street.” The ‘Talking CCTV’ initiative is just one component of the British Home Office’s Respect Action Plan, a domestic program designed to tackle anti-social behaviour and its causes. [3]

What this means in practice is that when staff, operating from an unseen central control room, observe an individual engaged in anti-social behaviour, they can publicly challenge the person using the speakers. At the moment the one-sided conversation is relatively unscripted, although workers are expected to be polite. The first time a member of the public is spoken to about her behaviour, she hears a polite request. If she complies, she is thanked. If not, she can expect to hear a command. If she fails to correct her behaviour, she may find surveillance footage of her alleged infraction splashed across the evening news.

While ‘Talking CCTV’ may be novel, video surveillance is nothing new in Britain. It is estimated that a person living and working in London is photographed an average of 300 times a day. [4] One commonly quoted figure is that there is one surveillance camera for every 14 people in Britain. [5] This year the government is spending half a million pounds to set up ‘Talking CCTV’ in twenty communities and it is likely that the program will be expanded in future funding cycles.

Critics of the program argue that the money spent adding speakers to existing surveillance cameras is being wasted. The human rights organization Liberty contends that 78% of the national crime prevention budget in the past decade has been spent on CCTV equipment without proper studies conducted to assess whether or not the expenditure is effective. The organization argues that spending the same percentage of the budget to increase the number of law enforcement officers on patrol would go a lot further to improving public safety. [6]

‘Talking CCTV’ supporters, on the other hand, cite statistics that would please any elected official. In Middlesbrough, where the pilot program took place, officials claim that the system adds an “additional layer of security.”

But measured against what? In their 1999 study of CCTV in Britain, Clive Norris and Gary Armstrong demonstrated how government and law enforcement officials often present CCTV as a panacea without proving it provides the dramatic results attributed to it. Their review of the numbers suggested that, throughout the 1990s, publicly-quoted figures about the benefits of CCTV were often inaccurate or did not tell the whole story, yet they were used to convince taxpayers to buy into the surveillance system. [7] This is not to say that Middlesbrough is faking its numbers. It is quite likely that 100% of individuals exhibiting the anti-social behaviour of littering, who were publicly reprimanded when caught on camera, put their garbage in the bin as directed.

The ‘talking’ modification to the existing CCTV system is being sold to the public as a way to clean up the streets and create a safe, law-abiding community. The Home Secretary, John Reid, states that the new measure is aimed at “the tiny minority who make life a misery for the decent majority.” Safe, clean streets sound great but one academic has noted that public debate about CCTV tends to be shaped more by the government’s focus on how technology can improve law and order and far less on other, more complex, issues about the appropriateness of using the technology. [8]

Government employees now have a powerful tool to single out and shame an individual in public. The fact that “100%” of litterbugs in Middlesbrough obeyed the authoritative, disembodied voice ought not to be underestimated. They likely did so out of shame and embarrassment. Before signing on to such a program, it is worth noting that video surveillance operators, no matter how well-intentioned they may be, are human, and they bring their very human biases to their jobs. Norris and Armstrong’s 1999 study showed that the workers watching the monitors disproportionately targeted males, youths, and black people as surveillance subjects. [9] Biases may change depending on the era and the community. The past few years, for example, have seen an aggressive crackdown on panhandling in Liverpool, along with laws designed to minimize youth loitering in urban shopping districts. [10]

Will young people, the urban poor, and members of visible minority communities be disproportionately targeted by ‘Talking CCTV’? Officially, the answer is likely to be “no” but it has been observed that:

Unequal relations between rich/poor, men/women, gay/straight and young/old are precisely relations that have been managed and negotiated through state activities via combinations of welfare, moral education, and censure and exclusion from public space. For some who inhabit our cities, their identity, through the eyes of a surveillance camera, is constructed in wholly negative terms and without the presence of negotiation and choice that middle class consumers may enjoy. [11]

Public shaming of individuals engaged in so-called anti-social behaviour may result in British cities ‘designing away’ social problems as those who are targeted too often by authorities will find other spaces in which to spend their time. [12] The rest of the community may find itself enjoying litter-free streets and ‘Talking CCTV’ will be given credit. But it will all have happened without the benefit of serious public debate about whose behaviour is anti-social behaviour and why that makes people uncomfortable. Britain has been trying to rid itself of anti-social behaviour for a long time now and it seems unlikely that a few talking cameras will get to the root of the problem.

[1] http://www.forbes.com/2007/06/11/urban-surveillance-security-biz-21cities_cx_cd_0611futurecity.html
[2] Clive Norris et al., “The Growth of CCTV: a global perspective on the international diffusion of video surveillance in publicly accessible space.” Surveillance & Society 2:2/3 (2004).
[3] Anti-social behaviour has been seen as such a problem in Britain for the past few decades that the Crime and Disorder Act 1998 gave it a legal definition and criminalized it. That was followed by the Anti-Social Behaviour Act 2003. Legally defining the problem doesn’t appear to have helped much as the government continues to struggle with anti-social behaviour across Britain.
[4] Clive Norris and Gary Armstrong, The Maximum Surveillance Society: The Rise of CCTV (Oxford: Oxford University Press, 1999): 3. (Note that this was a 1999 study. While this continues to be the figure quoted it is possible the number has increased in the past eight years.)
[5] Clive Norris et al., “The Growth of CCTV”.
[6] Norris and Armstrong also quote the ‘78% of the budget’ figure in their 1999 work. It is unclear if this continues to be the expenditure or if Liberty is quoting their work. See Norris and Armstrong, The Maximum Surveillance Society: 54.
[7] Norris and Armstrong, The Maximum Surveillance Society, 60-7.
[8] William R. Webster, “The Diffusion, Regulation and Governance of Closed-Circuit Television in the UK,” Surveillance & Society 2:2/3 (2004): 237.
[9] Norris and Armstrong, The Maximum Surveillance Society: 109-10.
[10] Roy Coleman, “Reclaiming the Streets: Closed Circuit Television, Neoliberalism and the Mystification of Social Divisions in Liverpool, UK,” Surveillance & Society 2:2/3 (2004).
[11] Coleman, “Reclaiming the Streets”: 304.
[12] Bilge Yesil, “Watching Ourselves: Video surveillance, urban space and self-responsibilization,” Cultural Studies 20:4 (2006).

Calibrating Public Access to Personal Information in Legal Databases: Anonymity and 6 Degrees of Google Clicking
By: Alana Maurushat

July 10, 2007


Hi, I’m Alana. I’m a techno-luddite who confesses to rarely participating (well, writing at least) in weblists, chatrooms or blogs. In the fall of 2006 I felt compelled, however, to respond to a posting on the closed list server, cyberprof. The posting in question concerned public access to personal information found in a legal database known as projectposner. Projectposner is a database developed by Tim Wu and Stuart Sierra containing many influential judgments of the American judge Richard A. Posner. One such judgment was a sexual harassment case in which the plaintiff was fired for allegedly refusing to have sex with her boss. The plaintiff (who shall remain anonymous) requested the removal of her name (or of the entire case) from a judgment found in projectposner. This request for removal triggered a long debate amongst cyberprof colleagues as to the scope of anonymity (and pseudonymity) with regards to online public access to court records.

Privacy was seen as important, but absolute privacy was seen as neither desirable nor possible. Some argued that there was already an appropriate mechanism in place, namely a protective order to remove all references to a party’s name during the course of litigation. The ability to remain anonymous in court proceedings is at the discretion of the judge presiding over the matter (at least it is in the United States). It was argued that protective orders are better made as a matter of public policy by judges than disclosure decisions made on an ad hoc (or post hoc) basis by individual website owners. Some further argued that there was no objectively significant invasion of privacy in the case at hand. There were references to star chambers, decreasing access to case reports, and the social utility of online searching.

Others, including myself, expressed concerns about the personal, psychological and social effects of public access to sensitive personal information. We noted the lack of education with regards to the accessibility of online judicial opinions and court files. We noted the absence of any legal obligations requiring website operators to edit and censor information. We even looked at the psychological motivations of those who access and stalk former victims of sex crimes, as well as those of employers wishing to gain information about potential employees.

As lawyers we did a good job debating the legal and policy elements of the situation. As moral agents or ethicists we failed badly. We failed to consider those most vulnerable to the consequences of access to court records – women and children. We failed to consider the privacy invasion from a subjective perspective. And we failed to consider the consequences of 6 degrees of Google clicking.

This situation is not about appropriate court-issued protective orders and the ability to access court records online. It is about the ability, with a single “I’m Feeling Lucky” click, to obtain unfettered and unnecessary access to personal information outside the scope of the original intended search. It is about using Google ethically (I like the term “Googlethics”). It is about what I call 6 degrees of Google clicking.

Similar to our dilemma, consider the following hypotheticals:

1) You are a university student taking a literature course from Professor Woolengala. You wish to see a list of some of her publications and you are, in general, a bit of a nosy parker. In short, you google your professor. The first result produced is a link to a legal database with a judgment where your professor was the victim of a sexual harassment suit which occurred 12 years ago. Within two clicks, you have retrieved and are reading this personal and sensitive information.
2) You are a partner at the law firm McQuarey Nightrum. You wish to hire a new associate. You ask your assistant to conduct a personal background check of all candidates. This includes a search on Google. Your Google search indicates that a candidate was a plaintiff in a workplace harassment suit, as well as a plaintiff in an insurance suit to obtain additional refunds for radiology treatment (3 clicks). Based on this information, you do not shortlist the candidate.

There is an appalling lack of education amongst Google users and website owners on the extent of Google searchability. There are all too many online privacy blunders illustrating this point: sensitive files on corrupt Hong Kong police officers finding their way into subdirectories on the Internet (many linked to organized crime); ongoing police investigation files in Japan likewise finding their way onto the Internet. All searchable through Google. All avoidable with the use of the FTP protocol, or of the robot exclusion protocol, which tells Google’s webspiders not to retrieve information from a website – yet none of these measures were used, even by professional IT security experts.

What if FTP or the robots exclusion protocol had been used on projectposner? It would still be possible to retrieve a decision from the website itself, but the judgment would not be searchable with Google. This would, theoretically, better limit the ability to find and use personal information in an unnecessary and unfettered manner (Google search and click for online legal databases, click on the selected database, type in a party name and click, then click on the judgment(s): at least 4 degrees of Google clicking). For this reason, many free online legal databases, such as those found on worldlii.org, are not searchable with Google. Of course, this also hinders legitimate and efficient searching; Google is popular because it works well. There is a middle ground. The same robots.txt file can be used to allow access to a website but not to its deep links. In other words, you may be directed to projectposner but then have to perform an internal search once within the website. More beneficial still would be the ability to adjust result ranking so that a page containing personal information would not appear on the first page of results. These small technical measures could have reduced some of the ethical (and legal) dilemmas of online access to court information, but they could not, of course, have avoided the issues altogether.
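
To make the robots-exclusion idea concrete, here is a minimal sketch using Python’s standard-library robots.txt parser. The /judgments/ path and the example URLs are hypothetical, standing in for wherever a legal database stores its decisions; the point is only that a one-line exclusion leaves the front page crawlable while keeping deep links out of Google’s index.

```python
import urllib.robotparser

# A hypothetical robots.txt: the site's front page stays crawlable,
# but the deep-linked judgment files are excluded, so visitors must
# arrive at the site and search internally.
ROBOTS_TXT = """\
User-agent: *
Disallow: /judgments/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Front page: allowed. Deep link to a judgment: disallowed.
print(parser.can_fetch("*", "http://example.org/"))                               # True
print(parser.can_fetch("*", "http://example.org/judgments/smith-v-jones.html"))   # False
```

A well-behaved crawler consults this file before fetching; the judgment remains publicly retrievable on the site itself, just not through a one-click search result.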

There is no quick answer to this issue, but I, for one, would like to see a policy of 6 degrees of Google clicking. In the game of six degrees, players try to link actors to Kevin Bacon through the movies they have starred in. The object of the game is to make the link in as few degrees as possible, with a maximum of six. The reverse may be good policy for the online searching of personal information found in legal databases. Requiring 6 degrees of Google clicking would impose enough effort to deter casual snooping while still accommodating those with a genuine vested interest in obtaining personal information, thereby reducing unnecessary and unfettered access.
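
The “degrees of clicking” measure is just the shortest path through a site’s link structure. A hypothetical sketch, assuming the hyperlink graph is represented as a simple adjacency map: a breadth-first search counts the minimum clicks from a search-results page to a judgment, and a 6-degrees policy would flag any document reachable in fewer.

```python
from collections import deque

def click_distance(links, start, target):
    """Minimum number of clicks from `start` to `target` through the
    hyperlink graph `links` (page -> list of linked pages), computed
    by breadth-first search. Returns None if unreachable."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, clicks = queue.popleft()
        if page == target:
            return clicks
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, clicks + 1))
    return None

# Hypothetical graphs. With a deep link, the judgment is one click
# from the results page.
deep_linked = {"results": ["judgment"]}

# Without it, the reader must go through the database's own search.
internal_only = {
    "results": ["database"],
    "database": ["internal-search"],
    "internal-search": ["judgment"],
}

print(click_distance(deep_linked, "results", "judgment"))    # 1
print(click_distance(internal_only, "results", "judgment"))  # 3
```

Raising the minimum distance, as the proposed policy would, amounts to requiring that every path in this graph from a general search to a judgment be at least six clicks long.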

I have barely begun to explore the many important and deserving ethical issues raised by access to online information in legal databases. It is an exercise requiring fine calibration. I invite your input.

Alana Maurushat, B.A. (University of Calgary), B.C.L.(McGill), LL.B. (McGill), LL.M. with Concentration in Law and Technology (University of Ottawa), PhD Candidate (University of New South Wales). The author is Acting Academic Director of the Cyberspace Law and Policy Centre, sessional lecturer, and PhD candidate at the Faculty of Law at the University of New South Wales, Australia. Prior to moving to Sydney, she was an Assistant Professor and Deputy Director of the LLM in Information Technology and Intellectual Property at the University of Hong Kong’s Faculty of Law. She has taught in summer programs for the University of Santa Clara, Duke University, and has been invited to teach at the Université de Nantes this coming year. Her current research is focused on technical, ethical and legal dimensions of computer malware building on past research projects which addressed the impact of surveillance technologies on free expression and privacy. She currently teaches Advanced Legal Research.
Where the Heart is: Dignity, Privacy and Equality under the Charter
By: Daphne Gilbert

July 3, 2007


A country’s constitution can be described as a mirror of the national soul. A constitution is a foundational instrument, certainly reflective of its country as it exists, but also aspirational in nature. In countries like Canada, where the constitution protects individual rights and freedoms, citizens are empowered by the values that shape the legal guarantees. This is, at least, the hope behind Canada’s Charter of Rights and Freedoms. What, then, to make of the fact that an interest or value in ‘privacy’ is not expressly protected by our constitution?

The question of the role privacy plays as a foundational constitutional value has been addressed by the Supreme Court of Canada on numerous occasions. It is well-settled law that sections 7 and 8 of our Charter do contain protections for some aspects of a privacy interest. What is less clear is whether a robust concept of privacy, and of privacy-related interests, is adequately and wholly protected in Canada’s Charter. Given the constraints of the privacy protections recognized in sections 7 and 8, finding another home for privacy in the Charter might open up new potential. In my view, it would be both helpful and appropriate to consider privacy in the context of the section 15 equality guarantee.

I stress here that I am proposing “another” and not a “new” home for constitutional recognition of privacy interests, because I agree that sections 7 and 8 offer important and necessary protections for certain privacy interests. These two sections are, however, limited in their scope. They appear in a part of the Charter labeled “Legal Rights”, a heading that has been interpreted as placing boundaries on the application of sections 7 and 8. In Gosselin v. Quebec (Attorney General), [1] a majority of the Supreme Court of Canada affirmed that the guarantees under the “Legal Rights” section of the Charter are triggered by state action involving the administration of justice. In most situations, the “Legal Rights” guarantees are triggered in the criminal law context, though these protections can be used in administrative contexts too (as they were, for example, in New Brunswick (Minister of Health and Community Services) v. G.(J.), [2] involving challenges to child protection processes). While Gosselin left open the question of whether an adjudicative context was required for “Legal Rights” to apply, the majority insisted that it was appropriate to restrict the applicability of the “Legal Rights” protections to the administration of justice. [3] In Gosselin, this meant the section 7 guarantee of life, liberty and security of the person was useless in challenging an inadequate welfare regime. If privacy protections are housed only in sections 7 and 8 of the Charter, the nature of the interests protected is necessarily limited. These limitations mean that only certain kinds of privacy interests are protected by the Charter, and that a “right” to privacy only comes into play in situations captured by section 7 and/or 8. In my view, this is an impoverished interpretation of what privacy could offer as a constitutional value.

Since the Canadian Charter does not recognize the same sort of “penumbral effects” as the Americans see in their Bill of Rights, we are required to locate our constitutional values within specific Charter guarantees. If there is potential for constitutional recognition of privacy outside of the “Legal Rights” context, privacy must find another resting place. In my view, section 15 offers significant hope and advantages as another home for privacy. Chief Justice McLachlin of the Supreme Court of Canada describes “equality” as perhaps the most difficult of the Charter rights to interpret and define, and indeed, section 15 has had a tumultuous history since it came into force in 1985. In the 1990s, the Court was particularly divided on the proper interpretive approach to section 15, until in 1999 the Court reached a tentative consensus on a “test” for equality violations in Law v. Canada (Minister of Employment and Immigration). [4] [Most section 15 scholars agree the Law test is problematic and that the Court has in any event fractured into differing views on equality rights in recent years, however, Law remains in theory and in practice at least, the prevailing structure for section 15.] In Law, the Supreme Court decided to make “human dignity” the central focus of the equality guarantee, explaining the purpose of section 15 as:

to prevent the violation of essential human dignity and freedom through the imposition of disadvantage, stereotyping, or political or social prejudice, and to promote a society in which all persons enjoy equal recognition at law as human beings or as members of Canadian society, equally capable and equally deserving of concern, respect and consideration. [5]

Section 15 claimants must show, as one of the three required steps in the Law test, that the legislative provision they contest violates or demeans their human dignity. [6] Justice Iacobucci, writing for the Court in Law, outlined his version of “human dignity” in the equality context, intending his approach to be comprehensive but non-exhaustive:

What is human dignity? There can be different conceptions of what human dignity means… [T]he equality guarantee in s.15(1) is concerned with the realization of personal autonomy and self-determination. Human dignity means that an individual or group feels self-respect and self-worth. It is concerned with physical and psychological integrity and empowerment. Human dignity is harmed by unfair treatment premised upon personal traits or circumstances which do not relate to individual needs, capacities, or merits. It is enhanced by laws which are sensitive to the needs, capacities, and merits of different individuals, taking into account the context underlying their differences. Human dignity is harmed when individuals and groups are marginalized, ignored, or devalued, and is enhanced when laws recognize the full place of all individuals and groups within Canadian society. [7]

Connections between privacy and human dignity have long been acknowledged and explored by theorists [8] and the Supreme Court of Canada has declared, “a fair legal system requires respect at all times for the complainant’s personal dignity, and in particular his or her right to privacy, equality, and security of the person.” [9] It seems almost natural, then, that privacy should find a new home outside of the “Legal Rights” portion of the Charter, within human dignity, as it is understood and protected under section 15.

There are many benefits to interpreting section 15 to include a privacy interest, broadly captured by two significant features. First, protecting privacy as part of the Charter’s equality guarantee provides opportunities for a set of privacy-related claims that do not fall within the boundaries of the “Legal Rights” section to be brought forward. A claimant whose privacy interests have been violated outside of the Legal Rights context (meaning sections 7 and 8 are not triggered), may now have an avenue under section 15 to bring forward the claim, expanding the Charter’s spectrum of privacy protections. For example, in contexts including (dis)ability discrimination, social welfare or employment regimes, access and funding for abortion or contraceptive services, poverty and homelessness, government relationships with aboriginal peoples, as well as other pressing equality concerns, arguments around privacy interests might be helpful in unpacking and explaining the human dignity step of the Law framework.

Second, an understanding of privacy embedded within the Charter’s equality framework could open up more expansive possibilities for protecting a range of privacy interests beyond those that fall within sections 7 and 8. Section 8 has been interpreted as protecting three specific ‘classes’ of privacy interests: personal, territorial and informational privacy. Section 7’s protection for security of the person, which includes bodily integrity, includes decisional privacy interests. A number of theorists, however, including feminists Allen, Roberts, Gavison, McClain and others, have argued that a robust understanding of privacy includes more than simply protecting these manifestations of recognized privacy interests, and may include such features as positive obligations on the state to provide the conditions necessary for true private choice to be exercised. It is possible that interpreting privacy within section 15 could lead to the legal recognition of new or different ‘kinds’ of privacy, over and above those protected by sections 7 and 8.

Whatever the content of privacy is understood to include, there is general agreement in law and society that privacy is worth protecting, as a “core value of a civilized society,” [10] and as a requirement both of “inviolate personality” [11] and of human dignity. Expanding the possibilities for protecting privacy by including it within the ambit of the section 15 equality guarantee is further, and uniquely Canadian, recognition of the foundational role that privacy plays in our society. Equality, and by necessity a constitutional right to equality, is at the heart of a compassionate democracy. While the Charter protects and advances many of our most cherished values, section 15 is at the heart of the Charter’s vision for Canada. Finding a home for a privacy interest in our understanding of human dignity not only promotes a fuller understanding of the many facets of privacy as a core value, but also opens up new equality arguments for vulnerable and marginalized groups.

[1] 2002 SCC 84.
[2] [1999] 3 S.C.R. 46.
[3] Then Justice Arbour took a different and radical approach to section 7, and would have removed it from the limitations of its placement in the “Legal Rights” section of the Charter. She left the Court soon after the Gosselin decision and her views have not gained traction at the Court so far.
[4] [1999] 1 S.C.R. 497.
[5] Ibid. at para. 59.
[6] The first two steps in the Law test are that the claimant establish that he or she is a member of one of the enumerated or analogous grounds listed in section 15 and that the impugned legislative provision imposes a burden or denies a benefit to the claimant on the basis of the ground.
[7] Ibid. at para. 53.
[8] A number of philosophers have connected privacy to human dignity, and explained the relationship between the two as harmonious and even symbiotic in nature. Edward J. Bloustein reasoned:

The man [or woman] who is compelled to live every minute of his [or her] life among others and whose every need, thought, desire, fancy or gratification is subject to public scrutiny, has been deprived of his [or her] individuality and human dignity. Such an individual merges with the mass. His [or her] opinions, being public, tend never to be different; his [or her] aspirations, being known, tend always to be conventionally accepted ones; his [or her] feelings, being openly exhibited, tend to lose their quality of unique personal warmth and become the feelings of every man [or woman]. Such a being, although sentient, is fungible; he [or she] is not an individual.

See: Edward J. Bloustein, “Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser” in Ferdinand Schoeman, ed., Philosophical Dimensions of Privacy: An Anthology (Cambridge: Cambridge University Press, 1984) at page 188. See also: Jeffrey H. Reiman, “Privacy, Intimacy and Personhood” in ibid. at page 305; Helen Nissenbaum, “Privacy as Contextual Integrity” (2004) 79 Wash. L. Rev. 119.
[9] R. v. O’Connor [1995] 4 SCR 411 at para 154.
[10] See Olmstead v. United States, 277 U.S. 438 (1928) (Brandeis J., dissenting).
[11] Warren & Brandeis, “The Right to Privacy” 4 Harv. L. Rev. 193, 194 (1890).

Excuse me, are you a threat to aviation security? Canada’s no-fly list
By: Katie Black

June 26, 2007


Picture this: you are traveling to an important conference in Ottawa, titled the Revealed “I”. While you are getting your boarding pass, the airline attendant asks for a piece of government-issued photo ID. You provide it and wait for him to smile and print your boarding card. He doesn’t smile. In fact, he looks concerned, makes a phone call and tells you to step aside. You are prohibited from boarding your flight because, in that moment, you were silently labeled “an immediate threat to civil aviation”. [1]

While this hypothetical will remain an implausible story for most Canadians, it will become reality for some over the course of the next year. [2] If your name, age and gender match those of an individual on Canada’s Specified Persons List, implemented on June 18th, 2007 as part of Transport Canada’s Passenger Protection Program, you might be barred from boarding an aircraft. The regulation [3] responsible for the program requires all airline carriers in Canada to screen passengers over the age of twelve [4] on domestic and international flights against the individuals described on the List. Once a match is made, the airline carrier is obligated to contact the Minister of Transport or his authorized official, who verifies the individual’s identity and decides whether or not to permit boarding. Individuals who find themselves on the list can have their case independently reviewed by applying to Transport Canada’s Office of Reconsideration (OoR). [5] If they remain unsatisfied, they can appeal the OoR decision to the Federal Court, the Security Intelligence Review Committee, the Commission for Public Complaints Against the RCMP or the Canadian Human Rights Commission.
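
The screening step described above, comparing each passenger’s name, age and gender against the list’s entries, can be sketched as follows. This is a hypothetical illustration, not Transport Canada’s actual implementation: the field names, the exact-match rule and the example entries are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Person:
    name: str
    age: int
    gender: str

def requires_review(passenger, specified_persons):
    """Return True if the passenger matches a listed entry on all three
    attributes, in which case the carrier must contact the Minister's
    authorized official before permitting boarding. Passengers aged
    twelve or under are not screened."""
    if passenger.age <= 12:
        return False
    return any(
        passenger.name == p.name
        and passenger.age == p.age
        and passenger.gender == p.gender
        for p in specified_persons
    )

# Hypothetical watchlist entry.
watchlist = [Person("J. Doe", 34, "M")]

print(requires_review(Person("J. Doe", 34, "M"), watchlist))  # True: flagged
print(requires_review(Person("J. Doe", 41, "M"), watchlist))  # False: age differs
```

Even in this toy form, the matching rule shows why false positives are built in: any traveler who happens to share a listed person’s name, age and gender is flagged, which is precisely the concern raised later about common names.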

While this program superficially appears to further Canada’s goal of increasing aviation security, many concerns have been raised regarding the impact of the program’s design and implementation on privacy and anonymity in Canada. This ID Trail Mix will briefly survey the main concerns raised by such public interest groups as the BC Civil Liberties Association (BCCLA) and the Council for American Islamic Relations (CAIR-Canada). It will explore: i) the potential inadequacy of the Passenger Protection Program in light of forgery techniques, ii) concerns regarding how the list is compiled, iii) the potential for violations of Canadians’ privacy rights through the sharing of personal information with foreign governments, iv) the possibility for mistaken inclusion on the list and v) the potential that Canada’s no-fly list could lead to the targeting and profiling of racialized groups.

Forged Documents

It remains unclear how the Passenger Protection Program will get around the practical problem of forged documents. With ID cards so easily forged, how does asking for one reduce the threat of on-board terror? Moreover, are terrorists or other threatening individuals likely to fly under their own name? Speaking to this concern in an interview with CBC News, Barry Prentice, Director of the Transport Institute at the University of Manitoba in Winnipeg, commented, “I don’t think it’s going to help one bit. What terrorist is going to travel with their own name and passport? These people are going to steal or create a forged passport and identification if they’re going to do anything, anyway”. [6]

Also pertaining to the program’s efficacy: in 2005, the Privacy Commissioner submitted the following question to Transport Canada: “what studies, if any, has the department carried out to demonstrate that advance passenger information will be useful in identifying high-risk travelers”? Transport Canada provided the following response on its website: “the Passenger Protect program proposes to use a watchlist to prevent specified individuals from boarding flights based on practical global experience and risk assessment rather than specific studies”. According to Allen Kagedan, Chief of Aviation Security Policy for Transport Canada, such lists increase air travel safety: “they do work”. However, when asked by reporters, he could not cite any specific instance of the list working. “The problem with giving examples”, he said, “is that they defeat security and also, ironically, defeat the privacy rights of those individuals”. [7]

How is the list compiled?

Does notification of one’s inclusion on the Specified Persons List also defeat security? It may, because the list is not available to the public. [8] People can find out they are on the no-fly list only once they are prevented from boarding a flight. [9] The wording of the regulation [10] is such that anyone who i) poses a threat to aviation security, ii) could endanger the security of any aircraft or aerodrome, or iii) could endanger the safety of the public, passengers or crew members may be placed on the list by the Passenger Protect Advisory Group. [11] This will result in a “dynamic” list, according to Mr. Kagedan, as intelligence agencies must re-assess their “reliable and vetted” security information every 30 days. [12] While the list would clearly include “an individual who has been involved in a terrorist group [or] has been convicted of one or more serious and life-threatening crimes against aviation security”, [13] it is unclear whether it would also include someone like Andrew Speaker, the Atlanta lawyer who was placed on the American no-fly list because he had a rare form of tuberculosis. In the Canadian context, would a communicable disease constitute a threat to aviation security?

Will Canada’s no-fly list be shared with foreign governments?

The extent to which the regulation allows Canada to share information contained on its no-fly list with foreign governments is also unclear. According to the Privacy Impact Assessment (PIA) Executive Summary of the Passenger Protection Program, “law enforcement and intelligence information on Specified Persons received from Canadian, or foreign or multilateral, law enforcement or security intelligence agencies” will be gathered and kept under the Passenger Protection Program, and used for the sole purpose of increasing transportation security. [14] Moreover, comments made by Brian Brant, Director of Security Policy for Transport Canada, during the Air India Inquiry presided over by former Supreme Court Justice Major, indicated that “names of Canadians on the forthcoming federal list could end up in the hands of foreign governments, whether or not Ottawa gives its official consent to sharing the information”. [15] While the list of names will initially be released only to commercial airlines, foreign governments could access the names without the consent of the Canadian government by going to airlines based in their own countries. “Should their national government require that information of them”, Brant testified at the inquiry, “that's up to them to decide what they want to do with that information. We recognize that possibility exists”. [16] As such, information sharing between Canada and foreign governments, whether voluntary or involuntary, is likely.

It wasn’t me: the possibility for mistaken inclusion on the list

While the new no-fly list may add the kind of excitement to one’s travel plans experienced by Conservative MP John Williams, who was temporarily grounded because his name appeared on the American no-fly list, it also means that many innocent people are going to be swept up in the list’s identity net. One need only look at how the American no-fly list ballooned out of control: at one point, it contained more than 70,000 names, including those of civil libertarians, peace activists and, most notably, Senator Ted Kennedy. [17]

Although individuals who have been wrongfully identified on the Canadian list retain the right to reconsideration through the OoR process (see above), Canada’s Privacy Commissioner, Jennifer Stoddart, warned that the list could become “a nightmare for ordinary Canadians”. [18]

On the bright side of things, one retains a statistically smaller chance of being on Canada’s no-fly list than on America’s. This is because fewer than 1,000 names are thought to be on Transport Canada’s Specified Persons list at the moment. [19] Advocates for CAIR-Canada, however, argue that this statistical good news will disproportionately apply to non-racialized groups. CAIR-Canada fears that Canada’s no-fly list has the potential to lead to the targeting and profiling of Muslims and Arabs in Canada.

The chill sets in: fears of racial profiling

People within Canadian Muslim and Arab communities already report that they disproportionately experience the effects of social and technological changes aimed at ensuring “national security”. In his article “The Chill Sets In: National Security and the Decline of Equality Rights in Canada”, Faisal Babha writes that in a post-9/11 era “ensuring ‘national security’ has become a euphemism for ethnic and religious profiling, and that the Anti-Terrorism Act (ATA) has become a guise for the systematic targeting and demonization of Muslims and Arabs”. [20] While hard data indicating that Muslims are being systematically profiled by government agencies is difficult to acquire, [21] it is clear that “Muslims and Arabs in Canada have been thrust involuntarily into the spotlight of the national consciousness”. [22] The effects of the no-fly list are likely to intensify that spotlight, as “Muslims are already subject to increased scrutiny at airports” [23] and “among Muslims, there’s a great similarity in names and it’s very easy for names to be the same or similar”. [24] In practice this will translate into Muslims and Arabs being disproportionately mistaken for those on the list, and it might also have the corollary effect of increasing the general sense of insecurity and the incidents of discrimination experienced by these populations. [25] As Babha wrote, “profiling is a simplistic response to a complex problem; it involves highlighting a specific characteristic about a person, unrelated to that person’s actual deeds, and extrapolating to reach a presumptive conclusion about the person’s intentions and probable conduct”. [26]

While fears of racial profiling are voiced in relation to racialized members of society, Jennifer Stoddart has framed the same concern about the use of one’s identity more generally. As she sees it, the problem is that the list exemplifies “the increasingly intrusive use of your identity in order to make decisions about you as an individual, [decisions] that are pretty drastic… Every time we go to the airport, are we going to expect to be challenged?” [27]


[1] A threat to aviation security is explained in the section 4.72(2)b of the Aeronautics Act, as threat to “any aircraft or aerodrome or other aviation facility, or to the safety of the public, passengers or crew members”.
[2] According to section 4.72(3)(b)(i) of the Aeronautics Act, the Act that provides the Minister of Transportation with the statutory authority to create the new Passenger Protection Program as a “security measure”, the Minister must repeal the security measure before the day that is one year after the notice of the measure was published. Notice of the Identity Screening Regulation was published on April 26th, 2007.
[3] Section 3.2 of the Identity Screening Regulation outlines the screening protocol that airline carriers must follow: they are required to obtain either one piece of valid government-issued photo ID or two pieces of valid government-issued ID prior to boarding. The Identity Screening Regulation, created by the Department of Transport, Infrastructure and Communities on April 26th, 2007 in order to establish the Passenger Protection Program, falls under the statutory authority of sections 4.71 and 4.9 of the Aeronautics Act, which give the Governor in Council the authority to make regulations with respect to aviation security. The Public Safety Act, 2002, which received Royal Assent on May 6, 2004, made these changes to the Aeronautics Act as part of Canada's National Security Policy.
[4] An exception to the identification requirement is currently being granted to children between the ages of 12 and 17: they need only present one piece of government-issued ID until mid-September.
[5] Transport Canada, Office of Reconsideration, available online: http://www.tc.gc.ca/reconsideration/menu.htm.
[6] Barry Prentice, in an interview with CBC News reporters on Monday, June 18th, 2007. [CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html]
[7] Allen Kagedan in an interview with CBC reporters on Monday, June 18th, 2007. [CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html].
[8] During the question period on Monday, June 18th, 2007, Liberal MP Joseph Volpe demanded that the government release the names of those on the no-fly list. Meanwhile, NDP MP Joe Comartin proposed that while the government should not get rid of the list, it should at least set up an ombudsman to handle cases where innocent people find themselves on the list. [CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html]
[9] CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html.
[10] Section 50.(4)(b) of the Canadian Aviation Security Regulation of the Aeronautics Act.
[11] The advisory group, led by Transport Canada, is comprised of a senior officer from the Canadian Security Intelligence Service (CSIS), a senior officer of the Royal Canadian Mounted Police (RCMP) and a Transport Canada representative. Once on the list, membership is reevaluated every 30 days. [Transport Canada, (June 8th, 2007) Passenger Protects: Privacy Impact Assessment (PIA) Executive Summary, available online: < http://www.tc.gc.ca/vigilance/sep/passenger_protect/executive_summary.htm >]
[12] Allen Kagedan told CBC reporters on Monday, June 18th, 2007 from CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html.
[13] Cited by Transport Canada as possible instances where a person would be placed on the list in the article by CBC News, titled Critics alarmed by Canada's no-fly list.[CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html]
[14] Transport Canada, (June 8th, 2007) Privacy Impact Assessment (PIA) Executive Summary, available online: < http://www.tc.gc.ca/vigilance/sep/passenger_protect/executive_summary.htm>.
[15] CBC News, (June 5th, 2007) No-fly list could end up in foreign hands, Air India probe is told, available online: http://www.cbc.ca/cp/national/070605/n0605112A.html.
[16] CBC News, (June 5th, 2007) No-fly list could end up in foreign hands, Air India probe is told, available online: http://www.cbc.ca/cp/national/070605/n0605112A.html.
[17] CBC News, (June 5th, 2007) No-fly list could end up in foreign hands, Air India probe is told, available online: < http://www.cbc.ca/cp/national/070605/n0605112A.html >.
[18] CBC News, (June 13th, 2007) Privacy commissioner ordered to testify at Air India inquiry, available online: http://www.cbc.ca/canada/british-columbia/story/2007/06/13/airindia.html; Barry Prentice, Director of the Transport Institute at the University of Manitoba in Winnipeg, told CBC reporters that some travelers are going to be wrongly identified as security risks under the Passenger Protection Program. [CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html]
[19] CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html.
[20] Faisal Babha, (2005) The Chill Sets In: National Security and the Decline of Equality Rights in Canada, 54 U.N.B.L.J. 191 at 192.
[21] A report by the International Civil Liberties Monitoring Group, In the Shadows of the Law: A report by the International Civil Liberties Monitoring Group (ICLMG) in response to Justice Canada’s 1st annual report on the application of the Anti-Terrorism Act (Bill C-36) (14th May, 2003), online: Development and Peace www.devp.org/pdf/shadow.pdf, argues that the ATA’s reporting process is too narrow in scope. Consequently, it does not accurately indicate and reflect the ATA’s effect on Muslims and Arabs, or on aboriginal-rights and anti-globalization activists.
[22] Faisal Babha, (2005) The Chill Sets In: National Security and the Decline of Equality Rights in Canada, 54 U.N.B.L.J. 191 at 195.
[23] CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html.
[24] Larry Shaben, former Alberta MLA and current president of the Edmonton Council for Muslim Communities, cited in CBC News, (Monday, June 18, 2007) Critics alarmed by Canada's no-fly list, online: http://www.cbc.ca/canada/story/2007/06/18/no-fly-list.html.
[25] Canadian Arab Foundation, Arabs in Canada: Proudly Canadian and Marginalized, (Toronto: Canadian Arab Federation, 2002).
[26] Faisal Babha, (2005) The Chill Sets In: National Security and the Decline of Equality Rights in Canada, 54 U.N.B.L.J. 191 at 197.
[27] Don Butler, (June 8th, 2007) “No-fly list curbs privacy rights: commissioner ‘Quite a nightmare’ ahead for some; Stoddart urges updated privacy act”, The Ottawa Citizen.

Who Needs Your Name?
By: Jason Millar

June 19, 2007


Every now and again I Google my own name. If you’ve never Googled your own name, try it. It’s a strange way to spend fifteen minutes—there’s not much to be found, in my case—but every time I do it something different pops up in the search results. Sometimes I check to see whether a new piece of information associated with me has trumped the usual results; other times, for reasons still not clear to me, I simply want to make sure that my stuff is on the first page of hits.

I know there are other individuals out there who share my first and last names. I met one once. During a recent security check for some work I was doing, I wasn’t cleared until I had provided my fingerprints and middle name. I can only surmise the existence of another Jason X Millar (maybe the one I once met) who is less trustworthy than I am, according to those who know and care.

One thing I have noticed over the years of Googling my name is that more and more pieces of information associated with various Jason Millars pop up in the results. Many of those pieces of information are associated with me. But there are other individuals named Jason Millar out there—artists, soccer players and a host of other random individuals with random interests and opinions have posted information about themselves. I can only imagine that anyone interested in compiling the stuff exclusively associated with me would have some fancy guesswork to perform in the filtering, because it isn’t at all clear which of the information belongs to a single Jason Millar.

The same problem occurs when trying to piece together random information collected about random individuals. When trying to aggregate it under a name, complications arise due to the problems associated with authenticating the data.

This assumes, of course, that someone would be interested in stitching together what are ostensibly disparate chunks of information into an aggregated whole that would describe various aspects of a single individual’s life in a more holistic manner. To be sure, one could imagine data mining projects that involve this type of aggregation, such as the kind that could be used for psychological profiling. But for a great many applications—perhaps profiling for marketing purposes—the kind of complete data mining that involves stitching together information under the heading of a name might not be as important as it first seems.

Stitching a person’s information together based on first and last names is complicated. Authentication can be a tricky business where privacy laws are in effect, and the fact that there are so many “Jason Millar”s in the search results makes one wonder how useful names really are to those who know and care to authenticate information as mine.

In fact the more I do these searches the more I’m convinced that, in the information age, traditional identifiers that tend to make us want to associate complete sets of information with a “me”, or “her”, or any “particular individual” in the first place, are becoming obsolete. The type of association that seeks an identifiable individual at the focal point of the relevant information may soon be replaced by newer means of association and identification, which will allow individuals to aggregate information about other individuals through the various proxies indirectly associated with them.

I can only imagine that my name, address, phone number and other personal information traditionally used as a starting point when aggregating information about me will cease to be of primary relevance to the vast majority of individuals interested in accessing me for, say, marketing purposes. In their places, sets of numbers uniquely associated with the things I wear and carry with me on a daily basis will provide a highly reliable, and oddly descriptive, means for identifying {me}.

Here’s why this is plausible…

Consider the fact that in the near future every item that rolls off an assembly line will have an Electronic Product Code (EPC) associated with it, and often embedded in it. Simply put, an EPC is a unique number, or identifier, for every product; every shoe, can of pop, bag and watch will have one—Wal-Mart says so. EPCs will be readable by any compatible reader, operated by anybody who owns one, and the tags will be very cheap. Now consider the fact that every communication device already has a unique identifier associated with it; every cell phone, Wi-Fi device, laptop, Bluetooth device, PSP and Nintendo DS has some hardware identifier associated with it per the relevant communication protocol—international telecommunication standards say so. Our future includes visions of wirelessly (ad-hoc) networked municipalities in which individuals are perpetually connected by means of their portable communications devices.

Any one of those numbers can function as a proxy in identifying an individual, though any single number would be relatively unreliable if the task were ensuring that the same individual is carrying it at any given time. But with these two pieces in place it is easy to imagine networks of EPC readers constantly logging the information associated with the products I carry, and computer networks constantly logging the presence of the communications that my wireless devices transmit by virtue of their perpetual connectedness.

Let’s focus on EPCs for a moment, and imagine that consumer profiling is the application of the day (though it could easily be employee profiling). Every day I get dressed and leave the house carrying various products with me. Every set of numbers that is read at a given time will represent the set of EPCs I am carrying. On any given day that set will be different, owing to the various combinations of products I might possess at the time. Over time, however, the complete set can be built up by whatever network is logging the EPCs, because EPCs will begin to associate themselves with one another in the database. For example, my shoes will form a common link between many of the shirts and pants I wear, such that my EPCs will allow complex inventories to be built of my possessions. After a given time, by reading a subset of EPCs, a relatively unintelligent system could be extremely confident about which complete set of EPCs it was dealing with, meaning that any future subset sharing even relatively few common EPCs with the known set could be deemed part of the same larger set. Of course, every reader is associated with a location, such that a smart network of readers would be able to track the movement of the EPCs through space.
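The merging logic described above can be sketched in a few lines: treat each day’s reading as a set of tags, and merge any two readings that share a tag (a simple union-find pass). Everything here is hypothetical; the tag strings are invented placeholders, not real EPC formats.

```python
def merge_readings(readings):
    """Group EPC readings into larger sets via shared tags (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for tags in readings:
        for t in tags[1:]:
            union(tags[0], t)  # co-occurrence in one reading links tags

    groups = {}
    for t in parent:
        groups.setdefault(find(t), set()).add(t)
    return list(groups.values())

# Three days of readings: the shoes link two otherwise disjoint outfits,
# while the watch and bag belong to a different carrier entirely.
days = [
    ["epc:shoe", "epc:shirt1", "epc:pants1"],
    ["epc:shoe", "epc:shirt2"],
    ["epc:watch", "epc:bag"],
]
sets_ = merge_readings(days)
```

The shoes’ tag appears in two readings, so the system folds both outfits into one carrier-set without ever needing a name for the carrier.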

If you add the known locations of wireless ad-hoc network routers into the mix, sets of EPCs moving through space can be associated with particular communications devices. This means that information flowing to and from those devices on privately owned networks could be associated with the sets of EPCs. Anonymous blog postings, emails etc. could all potentially be associated with the set of EPCs and wireless devices.
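The co-location step can be sketched the same way: pair an EPC-set sighting with a device sighting whenever both were logged at the same reader location within a short time window. The locations, timestamps and identifiers below are invented for illustration.

```python
def co_locate(epc_sightings, device_sightings, window=60):
    """Pair an EPC-set sighting with any device sighting logged at the
    same reader location within `window` seconds."""
    links = set()
    for loc_e, t_e, epc_set in epc_sightings:
        for loc_d, t_d, device in device_sightings:
            if loc_e == loc_d and abs(t_e - t_d) <= window:
                links.add((frozenset(epc_set), device))
    return links

epc_sightings = [
    ("cafe", 1000, {"epc:shoe", "epc:bag"}),
    ("office", 5000, {"epc:shoe", "epc:bag"}),
]
device_sightings = [
    ("cafe", 1030, "mac:aa:bb"),    # same spot, 30 s apart: linked
    ("office", 9000, "mac:cc:dd"),  # same spot, hours apart: not linked
]
links = co_locate(epc_sightings, device_sightings)
```

Once an EPC set is linked to a device identifier, anything the device transmits on a logged network can be tied back to the set.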

Anyone interested in understanding a set’s purchasing patterns, its eating habits, its daily movements, etc. need not know anything about credit card transactions, names, phone numbers, addresses or any of the other traditional pieces of personal information deemed sensitive. In fact, the particular individual at the locus of the set of numbers simply disappears, replaced by the things that matter most to marketers: information about an inventory of products and a means of communicating with whoever is associated with them. Access to whatever is at the locus of buying power, or at the locus of influencing buying power, is all that counts in profiling for marketing.

Speculating about the kinds of information that can be gleaned about the sets in this kind of environment could run pages. The point I want to make is that there will be the ability to identify clouds of numbers that self-associate through the indirect association they have with the individuals carrying them. The other point is that aggregating the associated sets does not involve directly identifying the individuals carrying the items.

I am not a lawyer, but I have heard a lot of mention of emanations lately (search the ID Trail blog for “Tessling”). Given the sketch provided here the questions I would raise are these:

a) Are the emanations coming from an individual’s possessions personal information or not, especially where identifying the individual in the traditional sense becomes unnecessary?
b) Does an individual have a reasonable expectation of privacy with respect to these kinds of data?

It seems we should gather opinions before the readers hit the streets. I’ll let the lawyers comment.
it’s different for girls: the importance of recognizing and incorporating equality in discussions of Internet speech
By: jennifer barrigar

June 12, 2007


Kathy Sierra used to run her own blog, one that had attained No. 11 on the Technorati.com Top 100 list of blogs (as measured by the number of blogs that linked to her site). These days, however, when one logs on to Kathy Sierra’s blog Creating Passionate Users one is presented with a post from April 6, 2007 where she writes:

As for the future of this blog, I know I cannot just return to business as usual -- whatever absurd reasons have led to this much hatred for me (and for what I write here) will continue, so there is no reason to think the same things wouldn't happen again... and probably soon. That includes anything that raises (or maintains) my visibility, so I will not be doing speaking engagements--especially at public events.

Sierra first went public in March 2007 about threats she had received on her own and other sites that included: photos of her with a noose around her neck; photos of her with a muzzle over her mouth apparently smothering her; and violent and sexual messages that included her home address. She cancelled public appearances and has ceased blogging (at least for the time being).

Nor is this issue confined to the so-called blogosphere, as the recent controversy around AutoAdmit shows. (Anonymous) posters on AutoAdmit, which bills itself as “the most prestigious college discussion board in the world”, and an allegedly related web-based contest rating the “Most Appealing Women at Top Law Schools”, featured photographs, personally identifiable information, and sexually explicit and derogatory comments about a number of womyn. Some of these womyn spoke to Ellen Nakashima of the Washington Post about the situation, alleging that the postings were not only personally but also professionally damaging.

As these incidents have garnered more attention, debates have primarily focused on the question of censorship versus free speech, with such attacks glossed over as an unfortunate side effect of (important) anonymous Internet participation but ultimately unrepresentative of the majority of Internet readers/speakers. Where the issue of gender is put in the forefront, discussions have tended towards what Joan Walsh, writing at Salon.com, characterized as “…telling them to stop wearing such provocative outfits online, lest they get what they deserve.” Dahlia Lithwick, at Slate.com, suggests that discussions about the issue have too often been framed in terms of “are women tough enough?” or “are women playing victim?” Such approaches have the unfortunate effect of seeming to focus on gender without ever truly examining the underlying equality implications of such actions.

Lithwick claims, in her article Fear of Blogging: why women shouldn’t apologize for being afraid of threats on the Web that “…the Internet has blurred the distinction between a new mom’s whimsical blog about the new baby and Malkin or Ann Althouse blogging about politics. The intent of these writers is totally different, but on the Internet, that difference evaporates.” Although Lithwick is arguing that not all womyn bloggers are public figures, in doing so she seems to accept that at least some bloggers are public in such a way that such attention(s) may not be entirely unexpected. In a similar vein, the operators of AutoAdmit commented in the Nakashima article in the Washington Post that “…some of the women who complain of being ridiculed on AutoAdmit invite attention by, for example, posting their photographs on other social networking sites, such as Facebook or MySpace.” In fact, it seems that the mere presence of a womyn in online spaces may be enough to attract unwanted attention -- a University of Maryland study of IRC chatrooms in 2006 found that female usernames received 25 times more threatening and sexually explicit messages than did those with male or ambiguously-gendered usernames – an average of 163 messages a day.

Existing remedies for these problems seem either non-existent or ineffective. A panel discussion, convened at Harvard University to discuss the issue of Internet speech, focused extensively on the AutoAdmit controversy. Much of the discussion revolved around what remedies, if any, might be available to the affected womyn and against whom they could be asserted. Various panelists suggested that the students might seek redress through suits against the ISP and/or the website operators, through claims against the individual posters themselves, through claims against the individual universities (on the theory that the posts constituted sexual harassment that the universities were obliged under Title IX to act against), and through defamation or privacy torts.

The womyn affected have taken various forms of action already. Kathy Sierra reported her harassment to the police as well as going public about it online. Some of the womyn in the AutoAdmit conflict have hired Reputation Defender to try to address the issue. Joan Walsh admits that pervasive misogyny on the Web has impacted her own voice, but still concludes that “[a]nd yet, mostly, women on the Web just have to ignore it. If you show it bothers you, you’ve given them pleasure.” A 2005 Pew Internet & American Life Project report suggests that other womyn have internalized this lesson and are simply avoiding participation – the report, entitled How Women and Men Use the Internet, shows that participation in chat and discussion groups dropped by 11% between 2000 and 2005 due to womyn choosing not to participate.

I am concerned about these remedies, concerned that womyn’s options seem to be to fight an isolated and individual battle, to just “deal with it” or to walk away, silenced. I am concerned that the remedies offered all seem to be focused on individual situations and harm. By focusing on individuals and individual remedies, we may lose sight of the larger issue.

Dahlia Lithwick’s article examined the differences between offline and online communication and argued that there are quantitative differences at work when it comes to these kinds of attacks and threats. She concludes:

No woman should have to choose between writing – either personally or professionally – and being told that her family will be raped. Sadly, that appears to be the current choice. But the important inquiry isn’t whether she should drop out or not. Nor is it whether she should stop whining or keep screaming. Those questions are personal and subjective, and the answers will be as different as the writers who consider them. The better questions are: Are these threats serious? Why do they feel so serious? How often do they result in something serious? And what might we do about it? Gender differences are only the beginning of the important discussions – not the end of them.

With all respect to Ms. Lithwick, gender differences may be only the beginning of the discussions, but they are a beginning that has been neither fully explored nor fully weighted in these debates. Gendered, sexualized threats are inherently serious, not only because of the violence and danger they convey, but because of their impact on equality.

Another Washington Post article from April 2007 suggests that:

As women gain visibility in the blogosphere, they are targets of sexual harassment and threats. Men are harassed too, and lack of civility is an abiding problem on the Web. But women, who make up about half the online community, are singled out in more starkly sexually threatening terms.

The problem with looking at this issue through individual lenses is that while individual redress (of some limited kind and in some limited cases) may be available, in doing so we leave in place the existing norms that created the situation in the first place. When womyn are being singled out more and being subjected to greater and more sexualized violent harassment, we must continue to explore this issue. Not, as so many writers have done of late, to ask “how should womyn respond” but rather to question “where does this come from and what are its overarching effects?” In examining this issue, we become aware that the online environment has become a new, broader environment for these things to emerge, be expressed, proliferate and to some degree become accepted.

I must confess – I have no answers. Many issues come up in this discussion – free speech, fear of censorship, the importance of anonymity, and the problem of whether we can or should regulate the Internet. As we seek to weigh all the issues and arrive at some understanding – ideally some solution – it is imperative that we not forget to add to the mix and weight appropriately our social commitment(s) to equality and the recognition of the communal benefits of equality. Any solution that is arrived at without taking this into account will hinder the transformative potential of these new spaces just as the current gendered, sexualized violence and harassment is now doing.

Are Biometrics Race-Neutral?
By: Shoshana Magnet

June 5, 2007


Biometrics are regularly described as technologies able to provide both "mechanical objectivity" [1] and race-neutrality. The suggestion is that biometrics can automate identity inspection and verification, replacing the subjective eye of the inspector with the neutral eye of the scanner. In this way, biometric technologies are represented as able to circumvent racism: they are held up as bias-free technologies that will objectively and equally scan everyone's bodily identity. Frances Zelazny, the director of corporate communications for Visionics (a leading US manufacturer of biometric systems), asserted that the corporation's newly patented iris-scanning technology "is neutral to race and color, as it is based on facial features recognized by the software" (2002). In an online discussion on the use of iris scanners at the US-Canada border, one discussant claimed he would prefer "race-neutral" biometric technologies to racist customs border officials:

If I was a member of one of the oft-"profiled" minorities, I'd sign up for sure. Upside--you can walk right past the bonehead looking for the bomb under your shirt just because of your tan and beard. . . . In short, I'd rather leave it up to a device that can distinguish my iris from a terrorist's, than some bigoted lout who can't distinguish my skin, clothing or accent from same (Airport Starts Using Iris Screener, 2005).

Biometrics are central to the attempt to make suspect bodies newly visible. This is a complicated task, and one that is regularly tied to problematic assumptions around race, class and gender identity. It is not surprising, therefore, that when biometric technologies are enlisted in this task they fail easily and often. What is most interesting about biometric malfunctions is the specific ways in which they fail. As biometrics are deployed to make othered bodies visible, they regularly break down at the intersection of the body's class, race, gender and dis/abled identity. In this way, biometrics fail precisely at the task that they have been set.

As biometric technologies are developed in a climate of increased anxiety concerning suspect bodies, stereotypes around "inscrutable" racialized bodies are technologized. For example, biometric technologies are significantly less able to distinguish the individual bodies of people of colour. Research on the use of biometric fingerprint scanners has regularly found that it is difficult to fingerprint "Asian women . . . [as they] had skin so fine it couldn't reliably be used to record or verify a fingerprint" (Sturgeon, 2004). Arguably, stereotypes concerning the inscrutability of orientalized bodies are thus codified in the biometric fingerprint scanner.

These biometric failures result in part from the technologies' reliance on the outdated and erroneous assumption that race is biological. These assumptions can be noted partially from the titles of the studies that describe the biometric identification technologies. For example, one paper is titled "Facial Pose Estimation Based on the Mongolian Race's Feature Characteristic" (Li et al., 2004). Other titles include "Towards Race-Related Face Identification" (Yin et al., 2004) and "A Real Time Race Classification System" (Ou et al., 2005).

This image is taken from A Real Time Race Classification System. Its caption in the original article reads: Two detected faces and the associated race estimates.

The suggestion that race is a stable biological entity that reliably yields common measurable characteristics is deeply problematic. Such conclusions are repeated in a number of articles that claim to classify "faces on the basis of high-level attributes, such as sex, 'race' and expression" (Lyons et al., 2000). Although the quotes around the word "race" suggest that the authors acknowledge that race is not biological, they still proceed to train their computers to identify both gender and race as if it were. This is accomplished by scanning a facial image and labelling the gender and race identity of the image, until the computer is claimed to be able to classify the faces itself. Unsurprisingly, error rates remain high. Neither gender nor race is a stable category that can consistently be identified by the human eye, let alone by computer imaging processes.

The assumptions concerning the dependence of biometric performance on racial and ethnic identity can also be noted in the locational differences in hypotheses around race and biometrics that are specific to each site of the study. In the US, biometric technologies have failed to distinguish "Asian" bodies. In the UK, biometric technologies have difficulty distinguishing "Black" bodies. In Japan, one study posited that it would be most difficult for biometrics to identify "non-Japanese" faces (Tanaka et al, 2004).

Nor do the failures of biometrics end with the errors that result from the codification of a biological understanding of race. Biometric technologies consistently are unable to identify those who deviate from the norm of young, able-bodied persons. In general, studies have shown that "one size fits all" biometric technologies do not work. For example, biometric facial recognition technology works poorly with elderly persons and failed more than half the time in identifying those who were disabled (Black Eye for ID Cards, 2005; Woolf et al, 2005). Other studies on biometric iris scanners have shown that the technologies are particularly bad at identifying those with visual impairments and those who are wheelchair users (Gomm, 2005).

Class is also a factor that affects the functioning of biometric technologies. Those persons with occupations within the categories "clerical, manual, [and] maintenance" are found to be difficult to biometrically fingerprint (UK Biometrics Working Group, 2001). Biometric iris scanners failed to work with very tall persons (Gomm, 2005) and biometric fingerprint scanners couldn't identify 20% of those who have non-normative fingers: "One out of five people failed the fingerprint test because the scanner was 'too small to scan a sufficient area of fingerprint from participants with large fingers'" (Black Eye for ID Cards, 2005). Many kinds of bodily breakdown give rise to biometric failure. "Worn down or sticky fingertips for fingerprints, medicine intake in iris identification (atropine), hoarseness in voice recognition, or a broken arm for signature" all gave rise to temporary biometric failures while "[w]ell-known permanent failures are, for example, cataracts, which makes retina identification impossible or [as we saw] rare skin diseases, which permanently destroy a fingerprint" (Bioidentification, 2007).

In addition to technologizing problematic notions around the comprehensibility of difference, biometrics are discursively deployed in ways that continue to target the specific demographics of suspect bodies. For example, biometric facial recognition technology requires Muslim women to completely remove their veils in order to receive new forms of ID cards, while older forms of identification, such as the photos on driver's licenses, required only their partial removal. In this way, biometric technologies are literally deployed to further the state's invasion of the bodily privacy of Muslim women – an application that surely is not "race-neutral."

The examples cited above demonstrate that the objectivity and race-neutrality of biometrics needs to be called into question.

[1] I take this phrase from Daston and Galison (1992).


(2005). "Airport Starts Using Iris Screener." Available at http://www.vivelecanada.ca/article.php/20050715193518919. April 27, 2007.

(2005). "Black Eye for ID Cards." Available at http://www.blink.org.uk/pdescription.asp?key=7477&grp=21&cat=99. April 27, 2007.

Bioidentification. (2007). "Biometrics: Frequently asked questions." Available at http://www.bromba.com/faq/biofaqe.htm. April 27, 2007.

Daston, L. and P. Galison. 1992. "The image of objectivity." Representations 40, Fall.

Gomm, K. 2005. "U.K. agency: Iris recognition needs work". News.com, October 20.

Li, H., M. Zhou, et al. 2004. "Facial Pose Estimation Based on the Mongolian Race’s Feature Characteristic from a Monocular Image". In S. Z. Li, Z. Sun, T. Tan et al. (eds.) Advances in Biometric Person Authentication.

Lyons, M. J., J. Budynek, et al. 2000. Classifying Facial Attributes using a 2-D Gabor Wavelet Representation and Discriminant Analysis. Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000. Proceedings, Grenoble, France.

Ou, Y., X. Wu, et al. 2005. A Real Time Race Classification System. Proceedings of the 2005 IEEE International Conference on Information Acquisition, Hong Kong and Macau, China.

Roy, S. "Biometrics: Security boon or busting privacy?" PC World.

Sturgeon, W. (2004). "Law & Policy Cheat Sheet: Biometrics." Available at http://management.silicon.com/government/0,39024677,39120120,00.htm. April 27, 2007.

Tanaka, K., K. Machida, et al. 2004. Comparison of racial effect in face identification systems based on Eigenface and GaborJet. SICE 2004 Annual Conference.

UK Biometrics Working Group. (2001). "Biometrics for Identification and Authentication - Advice on Product Selection." Available at http://www.idsysgroup.com/ftp/Biometrics%20Advice.pdf. April 27, 2007.

Woolf, M., F. Elliott, et al. 2005. "ID Card Scanning System Riddled with Errors". The Independent, October 16.

Yin, L., J. Jia, et al. 2004. Towards Race-related Face Identification: Research on skin color transfer. Sixth IEEE International Conference on Automatic Face and Gesture Recognition.
Privacy and Surveillance in Web 2.0: Unintended Consequences and the Rise of “Netaveillance”
By: Michael Zimmer

May 29, 2007


This post is an attempt to collect and organize some thoughts on how the rise of so-called Web 2.0 technologies bears on privacy and surveillance studies. After presenting a few examples of unintended consequences of Web 2.0 for privacy and surveillance, I will introduce the term “netaveillance,” which might provide a useful concept around which a more robust theory of surveillance of the Web 2.0 phenomenon might be built.

The rhetoric surrounding the Web 2.0 movement presents certain cultural claims about media, identity, and technology. It suggests that everyone can and should use new Internet technologies to organize and share information, to interact within communities, and to express oneself. It promises to empower creativity, to democratize media production, and to celebrate the individual while also relishing the power of collaboration and social networks. Websites such as Flickr, Wikipedia, del.icio.us, MySpace, and YouTube are all part of this apparent second-generation Internet phenomenon, which has spurred a variety of new services and communities – and venture capitalist dollars.

This cartoon of a room full of people arguing at a cocktail party after someone mentions the provocative theories of Marshall McLuhan reminds me of today’s emotional debates over the relative impact – and even the very existence – of Web 2.0. Many hail Web 2.0 as the “new wisdom of the web” and “a new cultural force based on mass collaboration,” while others deride it as mere marketing jargon, as “amoral,” and even as an extension of Marxist ideology.

This last notion, the relationship between Web 2.0 and Marxism, was suggested by Andrew Keen, one of the loudest provocateurs of the Web 2.0 ideology. Keen has received considerable criticism for making comparisons between the Web 2.0 meme and Marxism, but, between the vitriol, he does make some valid points about the utopianism and solipsism that seems to underlie much of the Web 2.0 discourse. In particular, he criticizes the fervent commitment to technological progress:

The ideology of the Web 2.0 movement was perfectly summarized at the Technology Education and Design (TED) show in Monterey, last year, when Kevin Kelly, Silicon Valley’s über-idealist and author of the Web 1.0 Internet utopia Ten Rules for The New Economy, said:

“Imagine Mozart before the technology of the piano. Imagine Van Gogh before the technology of affordable oil paints. Imagine Hitchcock before the technology of film. We have a moral obligation to develop technology.”

But where Kelly sees a moral obligation to develop technology, we should actually have–if we really care about Mozart, Van Gogh and Hitchcock–a moral obligation to question the development of technology. [emphasis added]

This moral obligation to question the development of technology compels Keen to identify some of the unintended consequences of the emergence of Web 2.0 infrastructures, including the flattening of culture, the overabundance of amateur authors and producers, and narcissism run wild.

As I begin to study the Web 2.0 meme from the perspective of privacy and surveillance theory, a different set of unintended consequences emerges, including shifts in the flow of personal information that might threaten personal privacy in ways much more damaging than Keen’s concern that content is now made and distributed by mere amateurs instead of honed professionals.

For example, Web 2.0 applications often rely on rich metadata to create value in information, such as the geotagging of images uploaded to Flickr. While it might be useful and fun to have locational data automatically associated with your images, considerable privacy concerns emerge as an externality. For instance, law enforcement officials can simply search for all photos online matching the location and timing of a certain political rally in order to broaden their ability to keep records of who was present. Or, as facial recognition technologies are combined with shared online photos, stalkers (or other annoying folks) might soon be able to search for a certain person’s face and discover the GPS coordinates of the coffee shop they seem to be pictured in every Tuesday morning. Someone even developed a tool, FlickerInspector, to facilitate this kind of mining of the datastreams users leave behind on Flickr.
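The privacy risk here hinges on the fact that a geotag travels with the photo as machine-readable metadata. As a rough illustration (this is not Flickr’s actual API, and the coordinates are invented), the degrees/minutes/seconds GPS values typically stored in a photo’s EXIF headers convert into the searchable decimal coordinates like so:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds GPS values to
    decimal degrees. ref is 'N'/'S' for latitude, 'E'/'W' for
    longitude; southern and western values are negative."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    if ref in ('S', 'W'):
        decimal = -decimal
    return decimal

# Hypothetical EXIF GPS fields read from an uploaded photo
lat = dms_to_decimal(45, 25, 16.0, 'N')   # 45 deg 25' 16" N
lon = dms_to_decimal(75, 41, 24.0, 'W')   # 75 deg 41' 24" W
print(round(lat, 4), round(lon, 4))
```

Once the coordinates are in this decimal form, indexing every photo by place and time (and thus searching for everyone pictured near a given rally) is a trivial database query.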

Of course, one doesn’t need a fancy application like FlickerInspector to reap the benefits of the new datastreams facilitated by Web 2.0 applications. Inherent in Web 2.0 evangelism is an overall faith in the network to be the processing platform: users are encouraged to put as much of their lives as possible online, to divulge and share their personal lives, their professional development, their favorite websites, their music, their friendships, their appointments, and even where they’ve connected to wi-fi. If you know a person’s “handle” on one Web 2.0 site (“michaelzimmer” at del.icio.us), you probably can find them on many more (Plazes, LibraryThing).

The prevalence of sharing so many details of one’s life through various Web 2.0 and social networking sites, and the relative ease of finding users across these services, leads to a second key externality: the rise of amateur data-mining. Fueled by the power and reach of Web search engines, it seems anyone can now engage in the kind of tracking and data-mining of users’ online activities that was once possible only for the most powerful of computer systems.

An interesting case of amateur data mining made possible through Web 2.0 involves “Don, the camera thief.” The blog BoingBoing posted a story of a woman who lost her camera while on vacation, but was contacted by the family who happened to find it. Unfortunately – and oddly – the family who found it refused to return the camera because their child liked it so much. BoingBoing thought the actions by the finders of the camera were “shameful.” A few days after posting this, BoingBoing received an e-mail from someone who claimed his name was “Don Deveny,” purportedly a Canadian lawyer, who implied that the post was illegal and that BoingBoing was liable for making it. The folks at BoingBoing doubted the legitimacy of the email (the word “lawyer” was misspelled, for example), and decided to see what they could find out about “Don.”

They first contacted many of the law societies in Canada, none of whom had any record of a “Don Deveny” licensed to practice law in Canada. (By the way, it is illegal to pretend to be a lawyer.) From their e-mail exchange, they were able to isolate the writer’s real e-mail address from the message headers, and through a Google search, located other pages that contained that address. That led them to a profile page for a user of the website called “Canada Kick A**” who shared the very same e-mail address. That profile page had a different person’s name (perhaps “Don’s” real name?), and also listed a location and profession for the user (he’s not a lawyer). It didn’t take much to figure out (or at least get a better clue) as to who this e-mailer was, and his profile page on a Web 2.0-inspired discussion board made it much easier.
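The header step in this detective work is mundane enough that Python’s standard email library can reproduce it. The message below is entirely invented for illustration (names and addresses are hypothetical), but it shows the general principle: the display name in a From header is free text the sender types, while the address in angle brackets is the actual mailbox that can be searched for elsewhere:

```python
from email import message_from_string
from email.utils import parseaddr

# An entirely invented raw message: the display name claims one
# identity (complete with the story's tell-tale misspelling), but
# the angle brackets carry the sender's actual mailbox.
raw = """\
From: "Don Deveny, Laywer" <realperson@example.net>
To: editors@example.org
Subject: Remove that post

That post is illegal.
"""

msg = message_from_string(raw)
display_name, address = parseaddr(msg["From"])
print(display_name)  # Don Deveny, Laywer
print(address)       # realperson@example.net
# A web search on the bare address is then all it takes to find
# other pages, and profiles, where the same mailbox appears.
```

Nothing here requires special tooling beyond what every mail client already does to render a From header, which is rather the point about amateur data mining.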

Readers of BoingBoing did some amateur data mining of their own: a commenter at the original camera owner’s blog seemed to share many of the same sentiments of “Don,” along with many of the same spelling errors. This commenter used a different screen name, but when asked to identify himself, also said he was a lawyer. Another reader then discovered that a user with that same screen name recently bid on memory cards at eBay that would have been used in the stolen camera. More amateur data mining ensued, and discovered another user profile at a different discussion forum with the same user name and same “favorite sites” listed in the signature file. And this page included a photo of the user: Is this “Don” our camera thief?

Another example of the ease of amateur data mining with the help of Web 2.0 services is the outing of Lonelygirl15. Lonelygirl15 was the mysterious girl leaving video confessions on YouTube, garnering a huge following of devoted fans, yet no one knew who she was, or whether the videos were really just a kid’s video diary or perhaps an elaborate hoax or advertising campaign. After some amateur data mining, the truth came out:
A reader was surfing an article on Lonelygirl15 at a random website when he came across a comment that linked to a private MySpace page that was allegedly that of the actress who plays Lonelygirl15. Since the profile was set to “private,” very little information could be gleaned from the page. However, when he queried Google for that particular MySpace user name, “jeessss426,” he was able to access Google’s cache of the page from a few months earlier, when it was still public. A lot of the details of the girl’s background quickly emerged: she was an actress from a small city in New Zealand who had recently moved to Burbank to act. The name on the profile was “Jessica Rose.” When he then queried Google image search for “Jessica Rose New Zealand,” he was instantly rewarded with two cached thumbnail photos of Lonelygirl15, a.k.a. Jessica Rose, from a New Zealand talent agency that had since removed the full-size versions. A search on Yahoo for “jeessss426” also turned up various pictures from her (probably forgotten) ImageShack photo-sharing account. Lonelygirl15 was revealed.

Little effort was needed to link up the various e-mails, user names, personal data flows, and photos shared across blogs, discussion forums and other Web 2.0-style sites to track down “Don the camera thief” or “Lonelygirl15.” Moving more and more of our activities to Web 2.0 makes it harder to remain anonymous, and the myth of “security through obscurity” seems to be disappearing as various crumbs of our true identity are scattered across the Web 2.0 landscape.

A final externality of Web 2.0 relates to a new form of informational voyeurism that these platforms enable. While Web 2.0 sites have enjoyed incredible growth and heavy viral participation, only a small fraction of overall users actually use the services to upload content; the vast majority just like to lurk and watch. According to one report, only 0.16 percent of YouTube’s total traffic is made up of users who upload videos. Similarly, only 0.2 percent of Flickr’s regular users are there to upload photos. And slick new tools emerge daily to facilitate the surveillance and voyeurism of people’s daily activities. For example, “feeds” on Facebook allow users to be notified immediately when a friend updates their profile (changing their mood, their friend list, their relationship status, etc.), dodgeball helps users find friends (and unknown friends of friends) within a 10-block radius of their present location, DiggSpy allows real-time monitoring of users’ activities on the popular news-ranking site Digg, and Twitter has quickly emerged as the hottest new voyeuristic service, allowing users to share text snippets of their day-to-day activities and monitor others’ streams of the mundane details of their lives (such as “a whole gang of women with dogs just walked past my window”).

What seems to be emerging is a new form of voyeuristic surveillance of people’s everyday lives, fueled by Web 2.0. This has been referred to variously as “peer-to-peer surveillance” or even as a new kind of “participatory panopticon.” Yet these terms – and the theories embedded within them – seem insufficient to fully grasp the significance of the emergence of this new voyeurism of the mundane. Surveillance, of course, implies the “watching over” of subjects from above, with an explicit power relationship between the watchers and those placed under their gaze. Trying to describe surveillance as “peer-to-peer” suggests a flattening of the power relationship that is counter to its very definition. Similarly, the notion of a “participatory panopticon” is at the same time redundant and contradictory. Foucault revealed how panoptic power becomes internalized by the subjects, thus, they necessarily “participate” in their own subjugation. Yet the top-down power relationship within the panoptic structure remains. The participation by the subjects does not make them equal with the watchers. Yet the informational voyeurism associated with Web 2.0 seems to imply a balance between the users: one shares their data streams in order to improve the overall worth of the network, coupled with the presumption that they’ll be able to observe and leverage others’ streams as well.

This notion resembles that of “equiveillance,” a state of equilibrium between the top-down power of surveillance, and the resistant bottom-up watching of sousveillance. Yet, this notion implies merely a balance in access to surveillance information, and is focused more on how to reach some kind of harmonious relationship with our rising surveillance society. With the informational voyeurism of Web 2.0, however, the goal isn’t to resist or come to terms with the power wielded by traditional surveillance, but rather to participate in a widespread and open sharing of the mundane details of one’s daily life. To give one’s peers a glimpse into one’s own personal universe. These snapshots of the minutia of people’s lives have been compared to the Japanese concept of “neta”, the tidbits of people’s lives that are shared with family and friends as a kind of social currency. The Japan Media Review (an affiliate of Annenberg’s Online Journalism Review) recently made an insightful connection between “neta” and Web 2.0 voyeurism:

In Japanese, "material" for news and stories is called "neta." The term has strong journalistic associations, but also gets used to describe material that can become the topic of conversation among friends or family: a new store seen on the way to work; a cousin who just dropped out of high school; a funny story heard on the radio. Camera phones provide a new tool for making these everyday neta not just verbally but also visually shareable.

As the mundane is elevated to a photographic object, the everyday is now the site of potential news and visual archiving. Sending camera-phone photos to major news outlets and moblogging are one end of a broad spectrum of everyday and mass photojournalism using camera phones. What counts as newsworthy, noteworthy and photo-worthy spans a broad spectrum from personally noteworthy moments that are never shared (a scene from an escalator) to intimately newsworthy moments to be shared with a spouse or lover (a new haircut, a child riding a bike). It also includes neta to be shared among family or peers (a friend captured in an embarrassing moment, a cute pet shot) and microcontent uploaded to blogs and online journals. The transformation of journalism through camera phones is as much about these everyday exchanges as it is about the latest headline.

Building on this Japanese concept of “neta,” I propose a new kind of “veillance” has emerged with Web 2.0 infrastructures: “netaveillance”. Netaveillance can be defined as the process of openly and purposefully providing an almost continual stream of the details of one’s daily life – the mundane, the profane, and the vain – through Web-based technologies, coupled with the ability to capture similar data streams from one’s peers. Netaveillance constitutes an emerging ecosystem of personal data flows – not the exceptional information meant to be protected from state or commercial surveillance, but the free and open sharing of the minutiae of our lives.

My conceptualization of netaveillance is, to be sure, in its most nascent of stages. Much work needs to be done to contemplate how it relates to existing theories of privacy and surveillance, how power relations between and among participants might still exist, how such data flows could be captured by state or commercial interests, and so on. Theorizing and understanding netaveillance is no small task, but it might provide a new language and framework from which to understand the informational voyeurism and related unintended consequences of the Web 2.0 phenomenon.

Whether you want to bring it up at a cocktail party is up to you.

Michael Zimmer is completing his Ph.D. in the Department of Culture and Communication at New York University, and will be a Fellow at the Information Society Project at Yale Law School. He is looking forward to developing his theory of netaveillance at the upcoming Surveillance Summer Seminar. He can be reached via his website, michaelzimmer.org.
“All about us” – personal identity and identification systems
By: Jason Pridmore

May 22, 2007


A few weeks ago I watched the 1950 movie “All About Eve.” It is a classic, I am told, nominated for 14 Academy Awards and winner of the award for best picture. Mind you, in an age that emphasizes the role of experts, I do not claim to be a film critic, novice or otherwise, so I’ll leave it at that. I can say that I found the performances in the film to be compelling, something confirmed both by the DVD extras and a cursory web search, which suggest this to be, specifically, Bette Davis’s best performance. The film has its interesting plot twists and turns, clearly a film set against the backdrop of a bygone era, but with several themes that persist in our lives today, namely the intricacies of social relationships, how much others know about us, and the potential for this knowledge to turn into manipulation.

In the film, the character “Eve” (whom we are to learn all about) sets out seemingly innocently to bathe in the glow of Davis’ character, the actress Margo Channing, but ultimately subverts this glow into her own personal limelight. The film begins at the end, as it were, with Eve Harrington receiving an award for an exceptional performance in a role we soon learn was taken from Channing. In the midst of this ceremony, a narrative voiceover mentions Eve directly:

Eve. Eve, the Golden Girl. The cover girl, the girl next door, the girl on the moon... Time has been good to Eve, Life goes where she goes – she's been profiled, covered, revealed, reported, what she eats and when and where, whom she knows and where she was and when and where she's going... ... Eve. You all know all about Eve... what can there be to know that you don't know?

Plenty, apparently, and the next hour and a half is a journey into the history of intricate relations between Eve, Margo and their group of friends. Despite the newfound knowledge of Eve’s character in these relational histories, there is something to be said about Eve playing a part, following a scripted role. If in fact we had been able to read the accounts of her life mentioned in the voiceover, to see the profiles and her coverage in the media, we would know something about who she was and what she was like that the revelations of the remainder of the movie, however stark the contrast with mediated reports, would not have shown us. In the end, these would only augment to some extent our expectations of how Eve is to be understood.

I realise that by now I may have lost any number of you who have not seen nor care to see the film. But I use it here to suggest something about which I can claim at least some expertise – the relationship between our sense of identity and its inherent relationship to how we are identified by others. As Richard Jenkins (2000) points out, “we know who we are because, in the first place, others tell us.” Yet in our society, our understanding of self, our identity, is increasingly related to how we exist under socially and technically created systems of identification that seemingly know “all about us.” To put it in the terms of the film, the way in which we are profiled, covered, revealed and reported affects our sense of who we are.

I wish I could say that my watching of classic film was inspired by a maturation of my entertainment tastes: an increasing desire to read classic literature and watch the great films of our age. I am afraid this would be less than honest. In fact, the motivation to watch this film was driven by my personal academic research. Andrew Smith and Leigh Sparks, British marketing researchers at the Universities of Nottingham and Stirling (respectively), entitled a 2004 article in the Journal of Marketing Management “All about Eve?” In the article they describe the purchasing habits of a woman they give the pseudonym “Eve.” Smith and Sparks were given access to two years’ worth of purchase data based on a particular retail store’s loyalty card program. With this data, they surmise the following things about Eve:

• She is overweight and very concerned about her appearance, especially her poor complexion
• She has long hair, usually wears contacts but wears glasses occasionally, and has numerous problems with her feet
• She has hay fever and struggles to overcome a common cold several times a year
• She has a boyfriend or partner she occasionally buys items for
• She is someone who plans holiday gifts and cards well in advance

These could be intimate details about a person’s life, and the authors readily admit that they could be wrong about any and all of these descriptions. However, they are (as am I) reasonably sure that they know more than Eve herself would be comfortable with. They further recognize that without personally identifiable data or even aggregate sets of data that pertain to her (like geodemographic profiles), they know far less than what the retailer may in fact “know” about Eve.

What I want to suggest is that in a world in which, in the words of Zygmunt Bauman (1992), consumption has become the “cognitive and moral focus of life, the integrative bond of the society, and the focus of systematic management,” marketers do know much about us. In the midst of the increasingly desperate situation with Eve, Margo Channing states “so many people know me. I wish I did. I wish someone would tell me about me.” Ms. Channing can be assured that today marketers are keen to tell her exactly who she is. Based on her affinities with certain products, her past purchasing behaviours, the neighbourhood in which she lives, the relations she has with others, and far more information which is increasingly knowable, known and quantified, Channing could be situated as a consumer quite readily. We have become statistically significant sets of data (see Zwick and Dholakia 2004), something which affects both how we understand ourselves and how we are understood by consumer systems.

In many cases, we may be seen to “sort ourselves out” as Roger Burrows and Nicholas Gane’s recent article on geodemographics suggests (2006), specifically as a form of “commercial sociology” aids us in deciding the type of people we would like to live with – splitting up neighbourhoods into lifestyle clusters and reengineered class constituencies. On the other hand, loyalty programs, such as the ones Smith and Sparks discuss, are keen to use the data we have given over to “help us solve our problems.” These problems are of course indicative of who you are, your life stage, your income and career, your family, your personal appearance, your diet, etc. In return, they only ask and hope for more patronage, and of course, more data. How else would they be able to know who we are and meet our needs?

After several years of studying the means by which corporations monitor their current and potential customers, and after several interviews with executives of loyalty programs, I am convinced that corporations know much about us. Ironically, though the film “All About Eve” suggests we will know all about her, it is the character Eve who in fact seems to know all about us. While we learn all about Eve’s rise to stardom, she does so by means of clever and subtle manipulation. I am reminded quite succinctly of the ways in which marketing practices remain covert and subtle. In one interview I conducted, it was suggested to me that the loyalty program (read: data collection program) was meant to know all about you, not in a “big brother” like way, rather in a “best friend” sort of way – to target advertisements meant specifically for your situation, your context. This is never overt of course, both for fear of “getting it wrong” and for fear of appearing as a form of ominous surveillance, but these are clearly and specifically meant to connect with your personal life and I am convinced this has an effect on one’s self-concept.

In the end, despite a concern for appearing ominous, it is consumer surveillance and it is ubiquitous. The personal knowledge surmised from the collection of consumer data may not always be right, but based on that information one may begin to experience life differently because of the way it serves to distribute certain resources and penalties (Jenkins 2000). Increasingly, our personal identity – our conception of self – is produced and reproduced in institutionalized contexts, and as corporations gather and integrate more and more personal data, the potential for the expectations of this data to become lived out in the experiences of the people to whom it correlates is high. While this may prove a particular advantage for upwardly mobile consumers, it likewise leaves a rather dismal future for those who may be seen as “collateral damage” for an economic system focused on particular types of consumers (Bauman 2007). Which is to say, knowing all about “us” applies only to certain categories of people, like Eve, but even for her, what is known about her inevitably affects how she understands herself in the context of a society in which consumption is both a focus and a social bond…

Jason Pridmore is a Ph.D. Candidate in the Sociology Department at Queen's University.


Bauman, Zygmunt. 1992. Intimations of Postmodernity. New York: Routledge.
—. 2007. “Collateral Casualties of Consumerism.” Journal of Consumer Culture 7 (1):25-56.
Burrows, Roger, and Nicholas Gane. 2006. “Geodemographics, Software and Class.” Sociology 40 (5):793-812.
Jenkins, Richard. 2000. “Categorization: Identity, Social Process and Epistemology.” Current Sociology 48 (3):7-25.
Smith, Andrew, and Leigh Sparks. 2004. “All about Eve?” Journal of Marketing Management 20 (3-4):363-385.
Zwick, Detlev, and Nikhilesh Dholakia. 2004. "Whose Identity Is It Anyway? Consumer Representation in the Age of Database Marketing." Journal of Macromarketing 24 (1):31-43.
Is Anything Private in the Age of Internet Social Networking?
By: Robynn Arnold

May 15, 2007


In recent weeks, the popular social networking website facebook.com has found itself at the centre of much discussion. From government and employer bans on the use of the website in workplaces, to sanctions and expulsions against students and employees stemming from information posted on facebook accounts, it seems of late that the site has never been far from media attention. Ironically, this has all come at a time when I have faced increasing pressure from friends to finally get with the program and join the network, being that I am one of the few people I know not already connected. I admit that the above-mentioned issues surrounding the website are not the reason I have yet to become a member; I am more simply concerned with the time that would be lost in my schedule to keeping up with this phenomenon, having witnessed it firsthand with friends. However, being a virgin to the social networking game, its recent newsworthy attention does give me reason to pause before logging in and signing on, but not for the reasons most would think. In fact, it shocks me that what I see as the most concerning aspect of this new way of sharing and communicating seems to be flying somewhat under the radar, overshadowed by the predominant concerns surrounding lost productivity. The bigger issue that seems to have been lost in the recent wave of attention is privacy, or the lack thereof, surrounding information posted in such a forum.

Facebook was started in 2004 by a sophomore student at Harvard University keen on bringing the idea of university paper ‘facebooks’ into the technological age. Since then the site has developed and grown tremendously. It now boasts more than 19 million registered users and is among the top ten most-trafficked websites in the United States. But it is Canada that can currently lay claim to the title of the nation with the fastest-growing membership, with Canadians estimated at 11% of users, up from 5% last year. Canadians, in fact, surpass both the United Kingdom and the United States in rates of new membership. The site works by allowing registered users to create a profile and link into numerous networks based on interests, geography, etc. Each member’s profile acts like a personalized website, and can include a list of friends as well as showcase photos. The page also features a message board that each member can choose to make public. However, gaining access to a friend’s page that is not publicly available is as simple as placing a request that is granted. Once a member grants access to another user, she loses all control over what that user can post. It is easy to see how concerns over posted content and the lost productivity of employee and student users have arisen, with members utilizing the site to post thoughts and keep up with relationships. But what of the matter of privacy in regards to information posted on member profiles?

At first glance, there appear to be numerous issues surrounding anonymity and privacy with regards to social networking websites. The obvious ones that arise with all web pages, such as data mining and information sharing with third parties, are arguably possible and occurring. But the concerns that are specific to sites like facebook.com are conceivably more intrusive. For example, since a member who grants access to another user has no control over what that user posts on her message board, even personal information not divulged by the member could end up posted on her own page; nor is there anything to stop such information from being posted on the other user’s page. Even in a private profile, this information becomes instantly accessible to all those with access, and where the profile is public, the information automatically spreads further. Another privacy concern surrounds ‘RSS feeds,’ which allow ongoing updates to be posted, even from your Blackberry. Such minute details of daily life and location could prove dangerous in the hands of a stalker. While these issues are concerning enough, they lead to the broader question of who exactly may be interested in accessing your information. Colleges, universities and police have all utilized facebook in investigations, and recently it has been suggested that employers may be interested in looking up potential employees’ profiles as part of their hiring processes. For a site specifying itself as being available “for your personal, noncommercial use only,” many users are naively being misled. Beyond the issue of maintaining control over, and some semblance of privacy in, the information posted, the question of who should be examining posted information is important.
While it is arguable that police and school intervention is a good thing, possibly solving crimes and stopping hateful or derogatory postings, should job appointments really be determined partially on the basis of what someone has posted on their facebook account?

The question to be answered, then, is how do we classify such social networking forums? Are they simply open public spaces where members lose any claim to their privacy and anonymity once becoming a user? Or should such venues be seen as the modern version of private conversation, with technology simply providing the global link, and thus off limits to those not knowingly in the circle? One thing is for sure: at the present rate of growth of over 1 million new users each week, online social networking sites like facebook.com are not going away anytime soon. Simply avoiding such forums may not provide a feasible solution when trying to maintain modern relations. Perhaps then it is time to think hard about the privacy problems these forums raise and develop a strategy to handle these concerns without stunting access. I have managed to hold out joining until now, but the temptation to connect and reconnect with friends and acquaintances grows ever stronger. With member friends already displaying my picture and information on their pages, can avoidance really be seen as a means of maintaining my anonymity and privacy?

Robynn Arnold is an LL.M. Candidate at the Faculty of Law, University of Ottawa.
You and Your Avatar: Having Second Life Thoughts on Anonymity and Identity
By: Bert-Jaap Koops

May 8, 2007


My first thought was that a website called On the Identity Trail, with a research stream on Constitutional, Legal and Policy Aspects, would feature a lively debate on a right to anonymity. Yet a search on 'right to anonymity' on this website offers only one hit: a December 2003 piece announcing that lawyers in the ID Trail project will study a right to anonymity. Since then, the term as such does not recur, and the anonymity focus webpage - although covering a fascinating range of subjects - does not offer much for the reader who wants to know whether or not she has a right to anonymity.

This, of course, was only to be expected. A right to anonymity does not exist, has never existed, and will never exist. At some point, there will always be someone with a right to know your identity. In certain contexts, it is eminently possible that you remain anonymous, to your hairdresser, reader, or (sperm-donated) child, and you may even claim a certain right to this. But there is always a conflicting right to identification that may outweigh your claim to anonymity, for your hairdresser (if you leave without paying), for your reader (who feels slandered), for your child (looking for his father), and, ultimately, for the police (looking for a serial killer). If a right to anonymity were established as a generic right, it would be so relative as to become meaningless.

My second thought was that things may be different in cyberspace, that illusive but oh so attractive space where no-one knows you're a dog. Or in Second Life, where you can be a dog and where no-one knows who you really are. What is more, where you yourself may not know who you really are. Isn't Second Life - today's hyped epitome of cybercommunities and massively multiplayer online role-playing games - a space where we can start from scratch and build a parallel universe where a right to anonymity is the most normal thing in the world? Where anonymity is available to anyone desiring some privacy, some fun, some room for weird statements that won't be held against her tomorrow?

If only life, even Second Life, were so simple. Ever since John Perry Barlow's Cyberspace Declaration of Independence and the subsequent tsunami of laws and regulations that refuted Barlow's rhetoric, centering on the one-liner "What holds off-line, also holds on-line" [1], we know that cyberspace and real space are inextricably intertwined. You and your avatar are two of a kind: they're different, but linked. You may want your avatar to be anonymous, or to have a famous avatar without anyone knowing it's really you who pushes the buttons, but how do your avatar friends, the avatar cops, the game providers, and the other players feel about that?

The evolution of virtual game spaces mirrors the evolution of the Internet: no sooner does it reach a wider audience, than it becomes commercialised, criminalised, regulated, normalised. The thrill of novelty disappears. Real life enters. In Second Life and its next-generation clones, avatars will use foul language, slander, commit vandalism, abuse children, rape dogs, offer drugs and crackz, discuss Al-Qaeda, launder money, and infringe trademarks. Politicians are shocked and will criminalise animal abuse in on-line games. Trademark holders will sue Internet and game providers to give the log-in data of infringing players. You yourself will want to know who assaulted your daughter's avatar and stole the dragon sword on which she spent half-a-year's pocket money. Registering the identity of game players will become routine practice, and at some point, there will always be someone with a right to know your identity.

This is a missed opportunity, since virtual spaces offer a unique occasion to experiment. In their second lives, people dare take risks they would never dream of taking in their first life. In particular, people can develop parts of their identity that they dare not develop in real life. How does it feel to be a boy? I never knew I had this tender streak in my character. How exciting to experiment with same-sex sex. How good it feels to tell this black guy that if he doesn't get out of the way, I'll chop up his ghettoblaster! As your avatar experiments, grows, and develops, in some way, you yourself grow and develop too.

This unique, identity-fostering potential of virtual space is at risk if anonymity is not a given in games. The risk of being recognised will prevent not a few experiments with roles and identities. Yet tragically, anonymity cannot be a given in virtual space, because virtual space is never absolutely virtual. Real people live in virtual spaces, and real people can be hurt. If legal protection is taken seriously, absolute anonymity - of avatars and of players - is impossible. A virtual and strong right to anonymity is an attractive idea, but we must have second thoughts about this.

The bright side of this is that the resulting need for identity and identification in cyberspace raises a whole range of fascinating issues that beg to be researched. How do we identify the people behind the avatar, when millions of people from around the world are living in a single cyberworld, when multiple users share an avatar, and when the first people who can give identifying information - ISPs, game providers - are likely to be in foreign jurisdictions? Do people identify themselves with their avatar? Is someone's ipse identity (her sense of self) affected by the way her avatar is treated in virtual space, or by her being identified - by her idem identity (her sameness) - as the person behind the avatar [2]? Since most virtual games seem to decree that in case of conflicts, the law of California applies, do I want my identity to be governed by a law-maker who used to be a terminating cyborg? And while we are on the topic of cyborgs, when will avatars become semi-autonomous and remain active when you log out, thus acquiring some sort of identity of their own? When will they start talking back, asking you who you are, this guy that is playing around with them?

A right to anonymity is perhaps not such an interesting issue to research after all, not even in virtual spaces. At some point, there will always be someone with a right to know your identity. You yourself, for instance. Or your avatar.

Bert-Jaap Koops is Professor of Regulation & Technology at TILT - Tilburg Institute for Law, Technology, and Society, the Netherlands.

[1] M.H.M. Schellekens (2006), 'What Holds Off-Line, Also Holds On-Line?', in: B.J. Koops et al. (eds.), Starting Points for ICT Regulation. Deconstructing Prevalent Policy One-Liners, The Hague: TMC Asser Press, pp. 51-75.

[2] This is one of the many identity questions that will be addressed in the coming year by the EU FIDIS network.

The Game Theory of Phishing
By: Jeremy Clark

May 1, 2007


By all measures, the amount of internet fraud is rising. Morgan Keegan reports that the number of new phishing sites increased by nearly an order of magnitude, from 4,367 in October 2005 to 37,444 in October 2006. And phishing is not the only source of online fraud: the number of victims of identity theft is growing as well.
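The scale of that jump is easy to sanity-check. A back-of-the-envelope calculation in Python, using only the Morgan Keegan figures quoted above, confirms the growth is just shy of a full order of magnitude:

```python
# Quick check of the growth in new phishing sites, using the
# Morgan Keegan figures quoted above.
from math import log10

oct_2005 = 4_367   # new phishing sites, October 2005
oct_2006 = 37_444  # new phishing sites, October 2006

growth = oct_2006 / oct_2005
print(f"growth factor: {growth:.1f}x")               # 8.6x
print(f"orders of magnitude: {log10(growth):.2f}")   # 0.93
```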

In response to the escalation of phishing attacks, a plethora of anti-phishing tools have been unleashed—Firefox extensions, IE toolbars, and psychedelic colour-shifting borders for your browser, as well as, perhaps more sensibly, blacklists of known phishing sites including a list maintained by web titan Google. Of course, these tools only work in so far as users take the time to install them and learn how to use them. On the latter point, news on the usability-of-security front is equally dispiriting. A user study conducted by Rachna Dhamija (Harvard), J. D. Tygar (Berkeley), and Marti Hearst (Berkeley), presented last year at the Conference on Human Factors in Computing Systems, had participants evaluate 20 websites—7 legitimate, 13 fraudulent—and differentiate between them. The best phishing site fooled over 90% of the participants, with many users reasoning that the page's nice layout and animated graphics were a sure sign of its legitimacy. Numerous other usability studies have examined the effectiveness of various anti-phishing technologies, and it's typical to hear them described as unintuitive at best and unusable at worst (not to mention an eyesore).

All of this brings us to the magnificent architecture of some of Ottawa’s oldest banks. With their tall pillars, imposing lobbies, marble floors, and brass railings, bank architecture showcases impressive work by great architects like John M. Lyle. (Okay, pardon the non sequitur. I assure you I am going somewhere with this). What is perhaps most intriguing about bank architecture is the reason for the notable buildings. Why exactly were banks so impressive and what happened? There is an easy answer: the magnificent designs were a consequence of competition (an answer easy enough to be articulated in The Canadian Encyclopedia). The problem with this answer is that it does not adequately explain why bank buildings have become less and less impressive over the past century while there is still substantial competition, nor does it explain why there was not a similar architectural arms race in hardware stores, feed mills, or other competitive industries.

A better answer comes from the work of economist Michael Spence on asymmetric information and signaling theory (for which he shared the 2001 Nobel Prize). Before the days of governmental oversight and a banking oligopoly, there existed the threat that the new bank that opened up down the street might be a fraud with crooks planning to run off with your money. By building impressive buildings, legitimate banks sent a signal of quality to customers that fraudulent banks could not afford to send. An expensive building assured potential customers that the bank intended a long-term establishment and was committed to high standards of service.

These types of scenarios are called signaling games in game theory. A basic signaling game has two participants, a sender and a receiver. The sender knows something about herself (called her type) that is not observable to the receiver. The sender’s objective is to signify her type in a signal that differentiates her from other senders of different types, and to provoke an appropriate response from the recipient. Examples of signals include the education level of a job applicant, a full-page advertisement in the New York Times, or the striking blue-green plumage of a peacock.
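The separating logic of such a game can be sketched in a few lines. This is a toy model with invented payoff numbers, not anything from a specific source: a signal separates the types exactly when its cost is bearable for the honest sender but ruinous for the dishonest one.

```python
# Toy signaling game: a costly signal separates sender types when the
# honest type can profitably send it but the dishonest type cannot.
# All payoff figures below are illustrative assumptions.

def separates(signal_cost: float, honest_gain: float, fraud_gain: float) -> bool:
    """True if sending the signal is worthwhile only for the honest type."""
    return fraud_gain < signal_cost <= honest_gain

# An imposing bank building: affordable for a bank expecting decades of
# business, ruinous for crooks planning to run off with the deposits.
print(separates(signal_cost=100, honest_gain=500, fraud_gain=20))  # True

# A flashy website: so cheap to copy that it separates nothing.
print(separates(signal_cost=5, honest_gain=500, fraud_gain=200))   # False
```

The second call previews the argument below: when fraudsters can imitate the signal at negligible cost, the signal conveys no information about type.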

The problem of phishing and fraudulent websites is also a signaling game, where legitimate websites need to find the online equivalent of an impressive building to signal their type to users. The problem is that the most obvious parallel to the offline world—an impressive website—is completely inadequate. Whether or not the bank customers of lore worked out the game theory of their situation, the signal worked because customers naturally gravitated towards banks with nice buildings. Once the signal became common, most customers did not need an education campaign in how to differentiate between legitimate and fraudulent banks to make the correct choice. In other words, their instincts led them to the right decision. As the user study mentioned above indicates, this natural instinct is still instilled in modern internet users. When presented with an impressive website with fancy graphics and a cutting-edge layout, a significant proportion of users conclude that this is a signal of its legitimacy. While designing the kind of full-featured websites banks commonly use does cost a small fortune, the problem lies in the fact that all this hard work can be copied effortlessly. Phishing is thus a twofold problem: (1) we do not have a good signal, and (2) the signal that users naturally look for is not good.

It may be possible to address the second through user education if only we could solve the first. One potential signal might be website seals offered by watchdog organizations like TRUSTe and BBBOnLine. Benjamin Edelman of Harvard empirically studied websites bearing these seals. He found that while a BBBOnLine seal slightly increased the probability of the site being trustworthy (but not enough to be an adequate signal), a TRUSTe seal actually decreased the probability that it was trustworthy. That is to say, a site with no seal at all is more likely to be trustworthy than one with a TRUSTe seal. Thus the seal not only fails as an adequate signal, it actually results in adverse selection. In the same paper, presented last year at the Workshop on the Economics of Information Security, Edelman also found that search engine advertisements are more than twice as likely to be untrustworthy as the accompanying search results—another display of adverse selection.

Perhaps a more promising avenue for third-party accreditation is website certificate authorities. The largest certificate issuers, in order of market share, are Verisign, GeoTrust, Comodo, GoDaddy, and Entrust. Until recently, a certificate from any of these authorities evoked the same response in browsers—a padlock being displayed—despite the fact that the verification process varies radically from authority to authority. Recently, however, Microsoft has agreed to implement a new, tiered approach to displaying certificate indicators. In new versions of Internet Explorer, the address bar will display a red toolbar if the site is a suspected phishing site, yellow if the site has a traditional certificate, and green if it has an extended validation (EV) certificate (and as always, white for no certificate). Receiving an EV certificate requires an extensive investigation process that will likely catch any fraudulent attempts at certification.
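The tiered display described above boils down to a simple mapping from certificate status to a toolbar colour. A minimal sketch follows; the function and status names are mine for illustration, not anything from Internet Explorer's actual implementation:

```python
# Sketch of the tiered certificate indicator described above.
# Status labels and the function name are illustrative, not IE's API.

def address_bar_colour(status: str) -> str:
    colours = {
        "suspected_phishing": "red",     # blacklisted site
        "traditional_cert": "yellow",    # ordinary SSL certificate
        "extended_validation": "green",  # EV certificate
    }
    return colours.get(status, "white")  # no certificate: plain white

print(address_bar_colour("extended_validation"))  # green
print(address_bar_colour("no_certificate"))       # white
```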

EV certificates have the potential to be an adequate signal. However, this is only half of the problem; the other half is getting users to recognize the signal and act accordingly. Time will tell if the EV process is extensive enough to demarcate legitimate companies from fraudulent ones, and if users will adapt to recognizing and understanding the implications of the signal. In the meantime, economic game theory still dictates that one way a company can signal its legitimacy is by spending more money than a fraudulent one could afford. In my opinion, nothing would say quality like an SSL certificate that costs a million dollars, turns the IE address bar sparkling gold, and puts a dollar sign over the lock. Anyone want to help me start MilliSign?
Privacy as a Social Value
By: Jane Bailey

April 24, 2007


The Canadian case law on hate propaganda, obscenity and child pornography features numerous analyses and discussions of the right to privacy, almost exclusively in the context of the privacy claims of those accused of related offences. Shaped as they are by the contexts in which they are raised, these analyses tend to mirror the negative, individualistic, control-over-access-to-information paradigm that has dominated thinking on the issue for several centuries. Notwithstanding that the vast bulk of Canadian legal analysis focuses on the right of an accused individual to be free from state intrusion on a “private” sphere of activity, to the exclusion of the privacy-related rights of the targets of hate propaganda and obscenity, Canadian courts have recognized that child pornography intrudes upon the privacy-related interests of the individual children abused in its production. The failure to recognize that hate propaganda and obscenity trigger similar intrusions for the members of the groups they target does not necessarily mean that no such intrusions are in fact triggered. Instead, that failure might be understood as the result of the selection of an individualistic privacy paradigm that, by and large, is conceptually inadequate to capture the collective nature of the privacy-related harms that can be occasioned by all three of these forms of expression.

Individuals targeted directly within hate propaganda and obscenity could muster arguments to squeeze the related privacy intrusions they experience as a result of that targeting into the individualistic paradigm, as has been the case with the analysis of the privacy-related intrusions on the children abused in production of child pornography. In the case of hate propaganda, however, the typical modus operandi of hate purveyors avoids attacks on individuals, generally focusing on broad categories. In the case of obscenity, the individualistic control-over-information paradigm, combined with patriarchal presumptions that women can be assumed to have consented to sexual activity and abuse, is likely to impose a preliminary threshold of proof of non-waiver. Re-making what are essentially collectively-based claims into individual claims for the purpose of fitting the paradigmatic mould is unlikely, however, to form the basis of a meaningful long-term strategy for equality-seeking groups and their members.

Just as the analyses of privacy in the contexts of abortion and the counseling records and sexual histories of complainants in sexual assault cases have tended to re-personalize political issues, undermine calls for affirmative state action and reinscribe gendered and raced notions of privacy, so too may privacy-based arguments crafted by the direct targets of hate propaganda and obscenity to fit the paradigm. The privacy-related harms of hate propaganda, obscenity and child pornography need also to be understood in the context of social inequalities that allow empowered narratives to constrain the autonomy of otherized individuals by limiting their opportunities for self-definition and replacing them with presumed, imposed characteristics attributed to the equality-seeking groups with which individual targets are identified. The personal intrusion is integrally and intrinsically related to systemic, group-based power imbalances. Claims framed within the individualistic privacy paradigm are more likely to bury that dynamic than to make it understood. Without that recognition, the potential role for state action to address those imbalances – or at least a call for state action reflecting a conscious choice not to reinforce those imbalances – is likely to be ignored.

Rather than trying to fit collectively-based harms into an individualistic paradigm, it may be preferable to re-think the paradigm to encompass collective, social considerations. The seeds for this idea were originally sown within aspects of work by authors such as Westin that were largely sidelined in the wake of an individualistic, libertarian drive against state intrusion. They have since been replanted in the work of authors such as Allen and Gavison who have advocated privacy as a producer of social goods such as better social contributions and relationships. However, the drive to articulate privacy as a social value can be found more directly in the work of authors such as Gandy, Regan and Cohen in the context of rising concern as to the broad-ranging privacy implications of digital data collection and use. As fragmented individual data collected for one purpose is aggregated and re-used in other contexts as the basis for labeling and making judgments affecting individuals’ lives with little or no opportunity for reciprocity, the adequacy of individualistic models that focus on control over access to information has increasingly come under scrutiny.

The push, in the context of digital data collection and use, for recognition of privacy as a public value, a common value and a collective value – and the potentially invidious collective forms of discrimination to which its non-observance can give way – offers both threats and opportunities for members of equality-seeking groups. To the extent that those accused of offences relating to hate propaganda, obscenity and child pornography would then be positioned to bootstrap their individualistic privacy argument with one premised on societal interests, the competing equality-based interests of the members of target groups may be undermined. On the other hand, thinking collectively about the value of privacy opens up the opportunity to better articulate a more group-based conception of the privacy violation occasioned by perpetuation of group-based stereotypes prevalent in hate propaganda, obscenity and child pornography. It suggests an opening to argue that privacy shouldn’t simply be conceived of as a producer of individualistic goods like free expression, freedom of conscience and liberty, but also of the equally important, but too frequently unmentioned, democratic right to substantive equality.

The parameters of a collectively-based privacy argument might work from accounts of authors such as Delgado, Crenshaw, Tsesis and MacKinnon on how hate propaganda, obscenity and child pornography can work to impose social constructions of inhumanity on targeted groups that are both externally reinforced and sometimes internalized in a way that undermines their abilities to self-define. To the extent that these effects lead individuals to choose to dissociate or to attempt dissociation from the groups so targeted, both the groups themselves and society as a whole stand to lose - our aspirations for diversity, plurality and mutual respect are undermined.

If hate, obscenity and child pornography are understood in this way, certain aspects of the current push for a social conception of privacy within the context of digital data collection might be usefully analogized. Simplistic data derived from these forms of “expression” are used to render social profiles of targeted groups that become a basis for imposed definitions not only on those groups, but their members as well. These socially constructed definitions then form the basis and justification for discriminatory action and treatment of individual members of those groups that can, in some cases, be internalized within their own processes of self-definition.

The fragments of identity misrepresented in hate propaganda, obscenity and child pornography are used to form the bases for social composites that intrude both upon the definition of self and the understanding of self in relation to group. The social constructions produced authorize privacy intrusions that both reflect and reinforce substantive inequality. For equality-seeking communities, privacy understood entirely as a producer of purely individualistic goods like free expression and liberty has too often been an empty proposition. Privacy understood as a social value and producer of collective goods like substantive equality seems like something worth talking about.
A Self-narrative Approach to the Deeply Personal
By: David Matheson

April 17, 2007


In less than a couple of weeks, I’ll be attending the Computers, Freedom, and Privacy Conference in Montreal to participate in a workshop presentation with other members of the project. The theme of the discussion is the reasonable expectation of privacy. This morning I’d like to give a snapshot of what I’ll be contributing.

Let me start off by noting what seem to be two very general conditions on the reasonable expectation of privacy in informational contexts. First, it seems obvious that in order for someone to have a reasonable expectation of privacy with respect to a piece of information, she can’t have voluntarily exposed it in a general manner. When I walk across the quad on my university’s campus in broad daylight during a busy term weekday, there’s an obvious sense in which I’m voluntarily exposing lots of information about myself: I know that if I walk across the quad, various people are likely to cast an occasional glance in my direction and thereby acquire visual information about my present appearance, location, activity, etc.; and I’m okay with that, so I walk. But no one would say that I have a reasonable expectation of privacy with respect to it, since I’ve voluntarily exposed it – made it known or at least easily knowable – to whomever happens to be in the area.

Second, in order for an individual to have a reasonable expectation of privacy with respect to a bit of information, it must be personal information of a certain sort about her. To say that information is personal is to say, at the very least, that it is about persons. The information that lightning is a rapid discharge of electrons, say, or that the average annual rainfall in Montevideo is 1100mm, is not personal because it’s not about persons – at all. Moreover, personal information, in the usual sense, must be personal information about specific persons. Consider, for example, the following pieces of information, all of which are about persons: that Canada has a population of over 30 million, that all people have certain inalienable rights, and that recent polls show that a majority of Americans favor national anti-obesity programs. Despite being about persons, these bits of information are not about specific persons, and hence don’t count as pieces of personal information in the usual sense.

But not just any personal information counts. In order for an individual to have a reasonable expectation of privacy with respect to a bit of personal information, it must be personal information of the right sort. For consider the following examples of personal information about me: that I am self-identical (to borrow an example from earlier exchanges on this blog with Steven Davis), that it is logically impossible for me to be a circle, and that my rate of free-fall is the same as that of a small pebble. Even if we admit these as examples of personal information, because they are about specific individuals, no one would be inclined to say that they are of the right sort of personal information to be covered by the reasonable expectation of privacy. They can be rationally inferred about specific individuals merely on the basis of nonpersonal pieces of information such as logical or scientific laws.

Let’s call personal information of the right sort – of the sort with respect to which one can have a reasonable expectation of privacy – “deeply personal information.” Accordingly, we can say that in order for an individual to have a reasonable expectation of privacy with respect to a bit of information, she must not have voluntarily exposed it and it must be deeply personal information about her.

I want to resist the suggestion that deeply personal information is to be distinguished by means of its sensitivity. The basic idea of this suggestion is that deeply personal information is sensitive personal information, i.e. personal information that individuals don’t want widely known by others. Sensitivity in this sense, according to certain privacy theorists, might come in one of two basic forms. The personal information in question might be sensitive because the person it is specifically about does not want it widely known by others. It might also be sensitive because it is the sort of information that most members of her society don’t want widely known about themselves.

The reason I want to resist this suggestion is two-fold. First, consider the problem of hypersensitivity. This has to do with the fact that some people can be excessively sensitive about information, including personal information that is not deeply personal. Suppose, to illustrate, that for one bizarre reason or another I happen to be very sensitive about the information that I am self-identical, that it is logically impossible for me to be a circle, or that my rate of free-fall is the same as that of a small pebble. It’s quite silly of me to be sensitive about this sort of rationally inferable information, but, nonetheless, let's suppose, I am. And since it’s sensitive information specifically about me, it turns out to be deeply personal information on the sensitivity approach. But that seems wrong. Whether personal information about me is deeply personal in the relevant sense can’t surely depend simply on my sensitivities, which may stray quite wildly away from the realm of where they ought to be.

There’s also the problem of hyposensitivity. This arises because some people can be excessively insensitive about information, even deeply personal information about themselves. We all know that sort of person who opens up at the drop of a hat and shares all sorts of intimate details about themselves with anyone with open ears. Encountering that sort of person is disconcerting, because we want to say that they shouldn’t be sharing so much deeply personal information with us, total strangers.

Of course, an advocate of the sensitivity approach could agree with us here, and point out that the reason the information such a person shares is deeply personal is that it’s the sort of personal information that most members of their society don’t normally want widely known by others. It may not be sensitive personal information for them, but it is for most of their society, and so it is in fact deeply personal.

But it’s not too hard to think of cases in which even the sensitivities of most members of society are deficient. Suppose that the government, or even a large corporation – call it Big Brother – embarks on a propaganda campaign, for one bad reason or another, to convince most members of society not to be sensitive about the intimate details of their sexual and romantic lives, their medical statuses, their on-line activities, etc. Suppose further that the campaign is very successful. We get the result that virtually no one in society cares how widely such personal information about themselves is known by others. Does the very success of the propaganda campaign absolve Big Brother, who then goes on to get his hands on such details about many members of society, from the charge that he’s inappropriately gotten his epistemic hands on deeply personal information of many members of society? Surely not. The right thing to say of this sort of scenario seems to be that Big Brother has, wrongly and sadly, convinced most members of society not to care about large swaths of what remains their deeply personal information.

So if we don’t characterize the nature of deeply personal information along the lines of the sensitivity approach, what’s the alternative? It seems to me that one plausible alternative, at any rate, can be gleaned from paying careful attention to the language that the Supreme Court has employed in such well-known cases as R. v. Plant (1993) and R. v. Tessling (2004). Deeply personal information, the Court says, is what lies at the “biographical core” of personal information, and information whose disclosure may affect the “dignity, integrity, and autonomy” of the individual it is about.

This suggests two very important points about the nature of deeply personal information. First, deeply personal information has something to do with what might be described as the telling of a story about an individual’s life – that’s the “biographical” bit. Second, it also has to do with the individual’s telling her own story, for herself and on her own terms – with “dignity, integrity and autonomy.”

The narrative language of “biography” and the “telling of one’s own story” may be largely metaphorical, but I believe it captures a very familiar element of our day-to-day experience. We are all, everyday, telling stories about ourselves to others in the sense of revealing to (and concealing from) others different pieces of information about ourselves in different contexts. And the capacity to do so in accord with our own considered convictions about who should know what about us in which context is crucial, I think, to our dignity, integrity and autonomy as persons.

We can bring these points together into something like the following (call it) “self-narrative” approach to the nature of deeply personal information. On this approach, deeply personal information is personal information open access to which would seriously undermine the individual’s ability to tell her own unique story. (When I talk about “open access” here, I mean more or less unrestricted access for the public at large, i.e. access for pretty much any member of society who cares to learn the relevant information, regardless of whether the individual that the information is about has voluntarily exposed it.)

To evaluate the plausibility of the self-narrative approach, consider its application to cases already mentioned. The rationally inferable information that I am self-identical, that it is logically impossible for me to be a circle, or that my rate of free-fall is the same as that of a small pebble, despite being about a specific individual, is not deeply personal information. Does the self-narrative approach give us that result? It would seem so. It is very difficult to see how open access to any of these pieces of personal information about me would seriously undermine my ability to tell my own unique story. After all, none of these pieces of information could itself be used to distinguish me from others in any significant way. That it is logically impossible for me to be a circle is certainly about me in particular, but exactly the same sort of information can be known to apply to every other individual in society, simply by rational inference from non-personal information. That’s also true of the information that I am myself or that my rate of free-fall is the same as that of a small pebble. Everyone is self-identical. Everyone’s rate of free-fall is the same as that of a small pebble.

Recall now the Big Brother example. On the sensitivity approach, the very success of Big Brother’s campaign absolves him from the charge of wrongfully getting his epistemic hands on loads of deeply personal information about members of his society. But, as we noted, that seems wrong. On the self-narrative approach, however, we get a more intuitively sound verdict. Big Brother can properly be charged with inappropriately getting his hands on deeply personal information, because the mere success of his propaganda campaign – the mere fact that he’s convinced most members of society not to be sensitive about intimate details of their sexual and romantic lives, medical statuses, on-line activities, etc. – does not suffice to render those details non-deeply personal. Open access to such details would seriously undermine the ability of the individuals concerned to tell their own unique stories: where there is open access, individuals lack control over those details, which constitute precisely the sort of personal information whereby they could significantly distinguish themselves from others. And the fact that open access would seriously undermine their ability in this way remains regardless of whether they are sensitive about the details.
Don't have an account? I'll use a shared one.
By: Stefan Popoveniuc


It is generally believed that you have to take extra steps to protect your privacy: look for the SSL lock in your browser, shred your old bank statements, scan your computer for key loggers, etc. Convenience and ease of use are often regarded as antagonists of security and privacy. I recently discovered a useful website that seems to contradict this paradigm.

Remember all those popular websites that force you to register just to read the rest of an article, see user comments or download some piece of free software? They all claim that the registration process is simple, but you often find yourself entering your email address, gender, full or partial postal address and phone number, and at the end they ask you to fill out a survey about how many hours you spend on the Internet each month, your income level, age, education and so on. Probably most important, you tend to pick your password from the two or three passwords you reuse across dozens of websites. Clearly an exposure of what you consider to be private information.

www.bugmenot.com has a collection of public usernames and passwords for some of the most popular sites that require free registration to access their free content, among them www.nytimes.com, www.washingtonpost.com and www.imdb.com. A Firefox extension makes logging in to these websites a breeze: right-click -> Login with BugMeNot. Click-clack, you’re in.

Don’t get me wrong, customizing your account and leaving comments under your reserved username is always good, but most of the time you just want to read the end of an article. And you simply don’t want yet another site to know one of your “secret” passwords :)

*The author has absolutely no affiliation with BugMeNot.com, except for sharing the same Internet.

Implanting Dignity: Considering the Use of RFID for Tracking Human Beings
By: Angela Long

March 27, 2007


* This piece is a summary of the arguments contained in a longer paper that is currently a work-in-progress.

Debate is currently raging over the use of radio frequency identification devices (RFIDs) as a method of identifying unique entities. However, this debate has centered on the general privacy concerns raised by the use of RFIDs. [1] While the privacy implications of RFID use are important, equally important are the unique implications of RFID for human dignity. Concerns about human dignity are especially relevant now that implantable RFIDs have been approved for medical use in the United States. [2] The VeriChip, an implantable RFID manufactured by Applied Digital Solutions, is being marketed to hospitals and doctors as a method of quickly identifying unconscious patients in the emergency room setting. Implantable RFIDs have also been used, or proposed, for a variety of non-medical purposes, such as tracking English football players and migrant workers in the US. [3] In the non-implantable context, RFIDs are currently being used to monitor patient compliance in pharmaceutical trials, i.e., to ensure that patients are taking their drugs properly. [4] This could easily be implemented to verify that patients with mental illnesses who are subject to a community treatment order are taking their medication.

It seems likely, then, that the potential uses for implantable RFIDs will only increase in the future. Indeed, as the examples above illustrate, the use of RFIDs, both external and implantable, could shift from a voluntary and consensual model to one that is neither voluntary nor consensual, which is of considerable concern to those interested not only in privacy, but in ethics more generally. It is thus imperative to examine in more detail the ethical concerns, concerns about how we treat other human beings, surrounding the use of implantable RFIDs.

Many of the same privacy arguments made in the context of non-implantable RFIDs apply equally to implantable RFIDs. However, there is an additional factor with implantable RFIDs that raises our moral antennae, something more than the typical informational privacy and anonymity concerns articulated by those writing on RFIDs generally, something unique to RFIDs that are implanted in human beings or otherwise used to track the actions and movements of human beings, and that has not yet been accounted for in the existing literature. [5] This additional factor in the implantable RFID context has been casually described in the popular media as a concern for ‘human dignity’. Thomas C. Greene articulates it like this:

Unique RF identity chips and concealed RF readers everywhere: madmen have been complaining about this since the earliest days of radio. That’s how we knew they were madmen. Only an IT industry divorced from any sense of good taste and human dignity, in which technology becomes an end in itself, could strive to make the nightmares of the insane a common reality. And yet, here we are. [6]

And, as stated by Cédric Laurant, Policy Counsel at the Electronic Privacy Information Center:

Monitoring children with RFID tags is a very bad idea. It treats children like livestock or shipment pallets, thereby breaching their right to dignity and privacy they have as human beings. [7]

While this concern for ‘human dignity’ has been raised, it has not been explored in any philosophical or legal depth within the academic literature. As such, it remains, to some, mere rhetoric. Such an exploration, however, is necessary in order to properly articulate the concerns these writers have raised. It is also important to look at how such an analysis relates to, or even encompasses, our concerns about privacy and anonymity in the implantable RFID context, allowing for a new discourse on the myriad concerns surrounding RFIDs that track the movements and actions of human beings. Such a discourse is important in the legal context because human dignity, unlike privacy, has been continually recognized as one of the underlying principles of the Canadian legal system, as enshrined in the Charter of Rights and Freedoms. By viewing the tracking of human activity through RFIDs as an infringement of human dignity, an argument against the legality of such uses could be greatly bolstered by the infusion of one of the most fundamental values in Canadian law, making any legal argument against their use much stronger and likely more effective.

Human dignity is a concept with longstanding meaning both within philosophy and within the law, most notably as the basis for modern human rights law, although it is not a particularly well-defined concept and often carries very different meanings in different contexts. [8] Most recently, the concept of human dignity has received renewed attention in the field of bioethics, with experts striving to get to the root of the concept, to determine how it is being used by law and policy makers, and to identify the ‘correct’ conception of the term. The most widely accepted theory of human dignity is based on Kantian deontological philosophy, where dignity is viewed as the “essence of humanity” [9] that provides each human being with intrinsic worth by virtue of possessing a certain quality or qualities (usually agency or autonomy). On the basis of this intrinsic worth, all human beings are to be accorded respect and are to be treated as ends in themselves and never merely as a means to an end. The use of both implantable and external RFIDs to track the actions and movements of human beings clearly betrays this imperative: it uses human beings to achieve ends unrelated to the well-being of the subjects themselves, ends usually related to the accumulation of information, information which may in fact be used against the person about whom it is collected.

Given that Canadian law aims to protect people from violations of their human dignity, at the very least from intrusion by the state under the Charter, any attempt by the state to use RFID in a non-consensual and non-voluntary manner may indeed be considered contrary to Canadian legal values and could run the risk of being declared of no force and effect under s. 52(1) of the Constitution Act, 1982.

[1] See e.g. Katherine Albrecht & Liz McIntyre, Spychips: How Major Corporations and Government Plan to Track Your Every Move with RFID (Nashville: Nelson Current, 2005); Laura Hildner, “Defusing the Threat of RFID: Protecting Consumer Privacy Through Technology-Specific Legislation at the State Level” (2006) 41 Harv. Civil Rights-Civil Liberties L. Rev. 133.
[2] U.S. Department of Health and Human Services, Food and Drug Administration, 21 CFR Part 880 [Docket No. 2004N-0477] “Medical Devices; Classification of Implantable Radiofrequency Transponder System for Patient Identification and Health Information” (10 December 2004), online: <http://www.fda.gov/ohrms/dockets/98fr/04-27077.htm>. Although most apparently relevant to implantable RFIDs, human dignity concerns are also equally implicated in the external use of RFIDs where the specific use is to track the human beings to which they are linked. One example of such a use where human dignity concerns were raised is that in the case of Brittan Elementary School in Sutter, CA, where students were outfitted with RFID tags around their necks. Their movements inside the school were tracked by hand-held computers kept by the teachers. See e.g. Garry Boulard, “RFID: Promise or Peril?” State Legislatures (December, 2005) 22 at 22.
[3] With respect to tracking migrant workers in the US, see online: LiveScience <http://www.livescience.com/scienceoffiction/060531_rfid_chips.html>. It has also been suggested for use in soccer players to track their on field movements, see online: Manchester Evening News <http://www.manchestereveningnews.co.uk/news/s/217/217056_man_utd_plan_to_chip_players.html>.
[4] See online: Med-IC Digital Package <http://www.med-ic.biz/certiscan.shtml>.
[5] For example, while Dr. John Halamka discusses the privacy implications of the VeriChip, he appears to do so only within a strict informational privacy analysis, which in the context of something being implanted into the body, seems somewhat lacking. John Halamka, “Straight from the Shoulder” (2005) 353 New Engl. J. Med 331.
[6] Thomas C. Greene, “Feds Approve Human RFID Implants” The Register 14 October 2004, online: The Register <www.theregister.co.uk/2004/10/14/human_rfid_implants/>.
[7] Mark David, “Implantable RFID May Be Easy, But That Doesn’t Mean It’s Ethical”, online: Electronic Design <http://www.elecdesign.com/Articles/Index.cfm?AD=1&ArticleID=14794>.
[8] In the bioethical context, see e.g. James F. Childress, “Human Cloning and Human Dignity: The Report of the President’s Council on Bioethics” (2003) 33:3 Hastings Center Report 15 at 16 and Timothy Caulfield, “Human Cloning Laws, Human Dignity and the Poverty of Policy Making Dialogue” (2003) 4:3 BMC Medical Ethics 2.
[9] Deryck Beyleveld & Roger Brownsword, Human Dignity in Bioethics and BioLaw (Oxford: Oxford University Press, 2001) at 64.
Where’s Waldo? Spotting the Terrorist using Data Broker Information
By: Louisa Garib

March 6, 2007


In the fall of 2006, the Ottawa Citizen broke a leading news story based, in part, on work done by the Canadian Internet Policy and Public Interest Clinic (CIPPIC). Pursuant to an access to information request, CIPPIC learned that the Royal Canadian Mounted Police (RCMP) had purchased consumer information from Canadian data brokers for law enforcement purposes. The information that the RCMP obtained from data brokers included individuals’ telephone numbers and addresses, as well as personal information available from public records (On the Data Trail: A Report on the Canadian Data Brokerage Industry, April 2006).

Commercial data brokers on both sides of the border collect personal information from various sources such as public registries, contest ballots, product warranty forms, newspaper and magazine subscriptions, travel bookings, charitable donation records and from companies that track credit-card use. In its coverage of the issue, the Ottawa Citizen reported that since September 2001, the RCMP has been buying and retaining this kind of personal information from data brokers, and in some instances may have forwarded that information to U.S. law enforcement.

Shortly after the story broke, the Canadian Association for Security and Intelligence Studies (CASIS) held its Annual Conference in Ottawa. At the conference, Canadian and U.S. policy officials, judges, academics, and defence analysts met to discuss intelligence gathering and surveillance in the current security environment. One of the conference panels debated the role and relevance of using “open sources” versus secret intelligence and information during law enforcement investigations. “Open source” information can be information freely available on the Internet, data contained in public records such as land title registries, or information collected and sold by the private sector. While the panel discussion focused on using information from press reports and websites, conference participants spoke of making “better” or more “effective use” of open sources, and the need for systems that could analyze open source information. Data brokers could certainly serve that purpose, by collecting, categorizing and conducting a preliminary assessment of open source information for law enforcement. By performing a “first cut” of massive amounts of information, the commercial data brokers can help the state to “spot the terrorist” or identify any other type of criminal.

Also in the fall of 2006, the Ontario Superior Court struck down the definition of “terrorist activity” in the federal Anti-terrorism Act, [S.C. 2001, c. 41] (ATA) in the case of R. v. Khawaja, [2006] O.J. No. 4245 (Ont. S.C.J.) (QL). The court found that the “motive clause” contained in the act infringed Mr. Khawaja’s rights to freedom of conscience and religion, and freedom of expression and association guaranteed by sections 2(a), (b) and (d) of the Canadian Charter of Rights and Freedoms.

The statutory definition linked terrorism to criminal activity motivated by religion, ideology or political belief. Justice Rutherford reasoned at para. 58 that the “inevitable impact” of making motivation part of anti-terror investigations would be that a “shadow of suspicion and anger” would fall over certain groups in Canada, raising concerns about racial and ethnic profiling. In his decision, Justice Rutherford severed the invalid motive clause in the definition of terrorist activities from the rest of the anti-terrorism legislation, leaving the remainder of the provisions in force. To date, Mr. Khawaja has not proceeded to trial, as aspects of his case are currently before the courts.

While Khawaja, for now, stands as a bar to using motive as evidence of terrorist activity under the ATA, law enforcement’s potential use of personal information collected by data brokers raises the same concerns about racial profiling and creating groups of suspects that Justice Rutherford mentioned in his decision.

Information supplied by data brokers is unreliable. Brokers gather information from a variety of sources and have few incentives to determine and ensure the veracity of the information they collect and sell to law enforcement. Compounding this problem is the lack of transparency for consumers. It is virtually impossible for individuals to be aware of all of the organizations that have collected and retained their personal information over time. Consequently, consumers have minimal recourse to access, challenge and correct the myriad of what Professor Daniel Solove calls “digital dossiers” that often contain inaccurate personal information. The absence of recourse and access rights to ensure the reliability of information sold to law enforcement without consumers’ knowledge or consent also raises concerns about due process.

Nor is it clear what criteria law enforcement would use to assess the relevance, accuracy and reliability of information provided by commercial data brokers. What type of information is being purchased? How would the information be interpreted and contextualized? What valid conclusions or predictions, if any, can be drawn from such information?

The inaccuracy or misinterpretation of information supplied by data brokers to law enforcement combined with the lack of transparency and oversight surrounding the use of that data can have dire consequences for targeted individuals and identifiable groups.

Identifying an individual as a security threat, terrorist, or terrorist sympathizer based on questionable information provided by data brokers can destroy a person’s livelihood, family life, reputation, and in some cases their physical security. Although it is not established that information from data brokers played a role in the “extraordinary rendition,” detention and torture of Canadian citizen Maher Arar, based on what we know about Mr. Arar's terrifying ordeal it is not difficult to contemplate the worst case scenario for an individual who is profiled according to information provided by data brokers. Identifying an entire group as suspect using information compiled by data brokers could result in criminalization, stigmatization and marginalization, violating equality provisions as well as freedom of religion, thought, expression and association rights contained in the Charter.

Law enforcement’s potential practice of using information compiled by commercial data brokers isn’t only problematic for certain racialized groups or suspicious individuals; the practice implicates all of us. The private sector collects and uses personal information about nearly everyone. A criminal profile could be pieced together on any individual from various purchase records compiled by data brokers. That data could be used to establish a motive and identify individuals as suspects or potential suspects for any crime – including those not yet committed.

We could all, then, be profiled based on fragments of information about us that may be wrong, outdated, distorted, and removed from context. If information collected by the private sector is purchased and used by our government and law enforcement agencies without transparency, oversight and safeguards, it can be dangerously misinterpreted in ways that could prejudice people’s lives.

Privacy as Modesty and the Uninterrogated Equality Rights of LE
By: Jane Bailey

February 27, 2007


On August 25, 1995, LE, a 42-year-old single mother of two, attempted to pay for a cab with an invalid credit card. [1] The cab driver refused LE’s subsequent offer to pay with cash she had quickly arranged to borrow from another tenant in her building. Instead, the driver notified the police. After a CPIC search, the officer called to the scene found evidence of an outstanding warrant for failing to appear at trial relating to charges of obtaining credit by false pretences. In the 18 hours that followed, LE was strip-searched, confined to a cell under video surveillance, and denied a blanket despite the cold temperature in the cell (since apparently no blankets were available at the time). After she was observed pretending to hang herself from the cell bars with her bra strap, she was forcibly stripped of her clothing when she refused to remove it, told not to position herself in the cell so as to escape video surveillance (which she refused to do), and ultimately handcuffed naked to the cell bars, visible to all those passing by, for at least 20 minutes until blankets (ironically) were taped to the outside of the bars, according to the trial judge, “in order to give [her] some privacy” [para. 41].

LE’s civil action alleging, amongst other things, negligence, assault and breach of her ss. 7 and 12 Charter rights was dismissed. Almost as disturbing as the facts of the case itself are the motifs of privacy’s gendered legacy present in the trial and Court of Appeal decisions. Even more fundamentally, what emerges from the case is a transparent example of what Lise Gotell has referred to as the “nothingness” of privacy as it is currently framed in law and the seeming futility of purely privacy-based claims for members of many equality-seeking communities [(2006) 43 Alta. L.R. 743].

The trial judge found that the authorities’ forcible removal of LE’s clothing was consistent with an established policy of removing the clothing of both male and female prisoners who have attempted suicide or who, as in LE’s case, have pretended to attempt suicide. The judge further found that the policy was reasonable and noted that LE was left “without the blankets protecting her modesty for a period not exceeding 20 minutes”[para. 42]. LE’s “modesty” is referred to four more times in the reasons of the Court of Appeal – generally in the context of the Court’s conclusion that the trial judge adequately considered LE’s privacy and dignity claims. As Anita Allen and Erin Mack have carefully demonstrated, the gendered legacy of privacy has frequently meant that privacy claims are afforded different content, depending upon the gender of the person asserting them [(1990) 10 N. Ill. U. Rev. 441]. The privacy of male claimants has typically been understood in the case law as necessary for independence and autonomy of choice, while for women “privacy” has too often been analysed as necessary for maintaining “modesty” – a term simply serving as code for a classed and raced analysis that saw women’s forced seclusion in the “privacy” of the home as the preferable means to protect their most highly prized possession – their “virtue”. To understand what happened to LE as primarily an affront to her “modesty” is to ignore both its impact on her status as a thinking, independent, autonomous human being, as well as the way in which that affront depended for its dehumanizing impact on the stereotypical shaming associated with public exposure of women’s bodies.

Apart from the unnamed, but gendered characterization of privacy in the judgments, the Court of Appeal’s perhaps most jarring line states: “[LE] properly conceded in oral argument before this court that there is no free-standing right to dignity or privacy under the Charter or at common law” [para. 63]. In the absence of a s. 8 claim relating to unreasonable search and seizure or a claim premised on some other specific statutory authority (like that provided, for example, to convicted sex offenders whose information or DNA is sought for inclusion in a government-run registry or databank), as far as the law is concerned, it seems women in the position of LE can really only talk about whether the conduct of authorities is consistent with Charter values – with privacy being one of them. Unless they can wedge their claims into one of these other pigeon-holes, they have no independent legal grounds for asserting a claim that being handcuffed naked to cell bars in full view of passersby, while also under video surveillance, constitutes a violation of their privacy. (And presumably, similarly, no independent basis for asserting a claim that a policy that automatically requires stripping prisoners of their clothing after they have attempted suicide or feigned such an attempt, violates the “right” to privacy – since no such independent right exists.) Interestingly, the Court of Appeal’s jarring statement was more recently relied upon by a court as the basis for striking out a privacy claim asserted by a Black woman lawyer in relation to alleged racist epithets by another lawyer [[2006] OJ No. 4134].

It is striking to so directly confront the idea that for Canadians privacy is little more than an interpretive principle for assessing the conduct of the authorities unless the claim arises in the context of a “search and seizure” or under a specific statute that adverts to a right of privacy, when so many of us (particularly in socially disadvantaged communities) are so regularly exposed to exercises of authority that have little or nothing to do with these situations. In the context of claims such as LE’s, where the gendered and raced legacy of privacy and dignity are so evident, I cannot help but revert again to the need for an understanding of privacy and dignity premised upon and framed within the “free-standing right” to substantive equality. Under that rubric, we might interrogate some different questions. While the policy of stripping all prisoners who attempt or feign an attempted suicide is facially written to apply equally to men and women, we must ask against persons of which race and gender is it statistically more likely to be applied? And how might such a policy’s meaning and effect be interpreted differently if it were considered in the context of gender and race inequality and the discriminatory sexualized stereotypes of Aboriginal and Black women that Gotell, and Allen and Mack have shown to be the basis for denying some women even the minimalist patriarchal protection of “modesty” historically afforded middle class white women? How are we to understand the meaning of privacy and dignity for those of us in equality-seeking communities unless the law is required to interrogate them in context?

It seems the best hope for privacy and dignity is equality.


[1] The following discussion is based on: LE v. Lee, [2000] O.J. No. 4533 (SCJ) ; rev’d [2003] O.J. No. 4239 (SCJ, Div Ct); rev’d (2005) 77 O.R. (2d) 621 (CA); leave to appeal refused, [2005] SCCA No. 516. Prior to dismissing LE’s application for leave to appeal, the SCC had dismissed a motion by Aboriginal Legal Services of Toronto, Inc. to intervene on the application for leave to appeal.

Wherever You Go, There You Are: Inserting Privacy Into Our Everyday Space
By: Anne Uteck

February 20, 2007


Note: this posting essentially represents snippets of my current research in progress.

Anyone familiar with J.K. Rowling’s world of Harry Potter cannot help but be struck by its devices of wizardry. These devices provide some idea of what it might mean to embody awareness in the physical world, precisely the shift we will experience as computational power moves beyond the desktop into everyday objects. Much of the charm of this popular series comes from the quirky magic objects that surround Harry and his friends. Rather than being solid and static, these objects embody initiative and activity (read: surveillance capability). Take, for example, the Pensieve, which stores thoughts and memories for later retrieval: think cameras, chips and tags that capture ever-bigger parts of our experience, especially as they are integrated with devices that know our agenda, the places we visit and the people we are meeting. Or the Weasleys’ clock, completely useless if you wanted to know the time, but able to pinpoint where each family member might be: work, school, home, or even travelling, lost or in the hospital. Or the Marauder’s Map, with icons that represent people as they move around Hogwarts Castle: think geo-spatial technologies that bring the same feature to open spaces. Next generation magic or next generation technology? By whatever label, they prompt us to start thinking more about space, the space of our everyday lives, how it is being transformed and made increasingly vulnerable to a new wave of technologies that render us more visible and more exposed. This, in turn, raises questions about spatial privacy, its nature and scope, and its viability for legal protection.

Emerging location, or geo-spatial, technologies, such as Global Positioning Systems (GPS), Radio Frequency Identification (RFID) and advanced wireless devices, are being introduced into all facets of everyday life. This new wave of powerful technologies is finding its way into our homes, cars, cellular phones, identification documents and even into our clothing and bodies. Within the context of growing technological convergence, they have the unique ability to locate and track people and things anywhere, anytime and in real time. There is nothing new, nor necessarily sinister, about wanting to locate people and objects and track their movement from one place to another. Clearly, there are some compelling advantages to such enhanced capability. For example, emergency services are better able to find accident victims; commercial organizations can improve the way they do business through fleet, product and employee tracking; parents may want to be sure their children are safe; and retailers, stadiums and other service-oriented facilities can adjust staffing levels and product inventory to best accommodate consumer patterns. For government intelligence and law enforcement, serving the public interest includes managing risk, which translates into increased security applications for monitoring people and things, especially given the shift towards a safety and security state. Overcoming many of the limitations inherent in passive mainstream technologies, this generation of location-based technologies makes all of these things possible, automatically, remotely, accurately, continuously and in real time.

The obvious privacy and surveillance implications, however, are staggering, and these concerns are rendered more pressing and more complex as the technologies are combined, integrated and connected, invisibly and remotely, to networks, forming part of a wider movement towards a society characterized by ubiquitous computing (UBICOMP). In the ubiquitous networked society, computing devices are embedded in everyday objects and places, with the potential for comprehensive monitoring and surveillance that is not contained by space or time, thus crossing both physical and social boundaries. This, in my view, is deeply problematic because the core privacy interests individuals have in sustaining personal, physical or even psychological space are potentially diminished, particularly over the long term, as networked location technologies destabilize personal spheres and challenge our fundamental ideas about personal space and boundaries and the privacy expectations that go with them.

Canadian law, principally s. 8 of the Charter, recognizes a reasonable expectation of spatial privacy, and purportedly its protection, at least in theory, extends to people. However, the parameters have been confined to ownership or, at least, to the physicality of the place. In other words, the territorial spectrum of protection has been narrowly constructed by the Supreme Court of Canada. On the current spatial assessment of privacy interests, you can point to barriers that sustain its protection. In most cases it is a tangible barrier that clearly delineates the boundary whose crossing triggers section 8. Even where no actual physical boundary has been crossed (trespassed), the intrusion has been assessed as an expectation of privacy in the place under surveillance. In other words, the context engaging section 8 protection is not the capacity in which the person is acting, but where the person physically is and whether a tangible boundary can be identified as having been crossed. As more of our lives in private places, our personal spaces and our movement across all spaces are potentially caught within a web of constant accessibility, the current spatial privacy construct fails to take account of the nature of changing technologies: the protections afforded by the traditional analysis become irrelevant because no tangible boundary is crossed and the surveillance is capable of moving with people as they leave their homes and travel from place to place. The current spatial privacy protection does not get at the core of what is ultimately objectionable: our desire to limit intrusions into our space, affairs, bodily sphere and the attention paid to us, and to enjoy freedom from observation and freedom of movement without the threat of being watched, visible and exposed. Thus, there is a need for a new conceptual apparatus for spatial privacy capable of sustaining legal protection for the entire array of privacy interests articulated by the Supreme Court of Canada.

Should we be concerned? Yes. Rhetoric and over-reaction? Perhaps. However, identifying the need for a renewed consideration of spatial privacy interests in response to location-based technologies is compounded by an on-going concern, namely, that the discourse on privacy and privacy protection has centered on assessing interests principally in informational terms. I would go so far as to suggest that the predominant theoretical, analytical and practical emphasis in policy, legal and scholarly discourse has been on the data protection model of informational privacy.

Spatial privacy interests have long been marginalized and largely overlooked in the context of technology and surveillance. While protecting information was a reasonable focus forty years ago, when the primary concerns related to the growth of information technologies and the creation of large databases to store personal information, today the privacy implications of new technologies are not just about data processing or informational privacy interests. Moreover, data protection laws and constitutional analysis of informational privacy do not address the central threats to spatial privacy arising from location-based technologies. Aside from the nature and quality of information that may be gathered through the use of these technologies, their embeddedness everywhere in the physical world calls for a privacy assessment that more broadly considers people and their space. In fact, the language of data protection and the focus on an informational analysis constrain a more robust discussion of privacy and risk collapsing spatial privacy interests into the informational paradigm. This is not to suggest that the baby be thrown out with the bathwater, but it does reinforce the need to construct a more effective means by which to bridge spatial, informational and personal privacy protection.

i want you to want me: the effect of reputation systems in online dating sites
By: jennifer barrigar

February 13, 2007 


This piece is abstracted from a longer paper that is currently seeking a publication venue.

By now it is almost trite to point out that the scale and breadth of the internet opens up the possibility of reaching large numbers of people quickly and easily, facilitating social and commercial matching on a scale hitherto unimaginable. At the same time, however, the internet is fraught with ambiguity. Text communications are denuded of gesture, tone and the million nuances that inform our interpretation of meaning. Even in visual arenas such as YouTube, recent events show conclusively that the lines between vlogging, fiction and commerce are fluid and difficult to discern. [1]

Reputation systems have been developed as a technological means to harness the potential of the Internet by making trust possible in online environments. This technology is used on many well-known sites. eBay’s feedback system, for instance, allows both the buyer and seller in a transaction to rate each other, and the cumulative ratings are available for perusal by any eBay user attempting to determine whether to enter into a transaction with a particular individual. Amazon also uses a variation of a reputation system, allowing users of the site to submit their reviews of materials. A reviewer may rise to the rank of “top reviewer” based on the feedback of other users, while all users come to understand that a reviewer’s status is predictive of the helpfulness of her review. Slashdot.org has a similarly dynamic reputation system in place, where site users submit and review news items as well as actively reviewing the contributions of others. Users of the site are able to modify their settings to show only top-rated items, and top-rated authors acquire “karma points” which increase the weight of their reviews and ratings. In each of these systems, the “reputation” of an individual is established by meeting the needs/expectations of other users, whether for trustworthy buyer/seller behaviour, reliable reviews, or a good eye for interesting and newsworthy items.

The use of reputation systems in online dating is somewhat less intuitive than their use in other arenas, because the “product” being judged is less clear. On eBay, the performance of a particular contract is rated. Although there is no originating contract in the Amazon context, ratings of a particular reviewer are based on how well her product has met the desires/needs of the user. Slashdot.org’s reputation rankings are similarly performance-based, with status incrementally built through accurately representing and satisfying the desires of users of the site. Michele White has noted how “Amazon’s personalization options seem to allow spectators, who are depicted as active users, to write into the system and program it according to their desires.” [2] In the recent introduction of reputation systems to online dating sites we see even more clearly the encoding of desire and the consequent regulation of performance.

The Manifesto for the Reputation Society claims that “when, in colloquial language, we speak of a person’s ‘good reputation’ we are implicitly claiming that the person fulfills many of his or her local society’s expectations of good social behavior – typically including qualities like honesty, reliability, ‘good moral character’, and competence.” [3]

As Lees recognizes, while ‘reputation’ for a man invokes social and cultural qualities, for a womyn ‘reputation’ has always denoted sexual behaviour. [4] This particularly gendered implication of ‘reputation’ in the arenas of sexuality and dating is further exacerbated by the context of the online dating environment. Although both men and womyn use online dating sites, research indicates that compared to Internet users in general, online daters are more likely to be male. [5] In addition, all users of these sites are products of our inherently sexist culture, which necessarily informs their responses to the world and to each other. Sexism exerts a constituting force on our identity, as it is “continually endorsed and celebrated by the dominant culture. The mass media, the daily press, pornographic magazines and videos all reinforce the objectification of women’s bodies and celebrate a form of macho, aggressive masculinity.” [6] Accordingly, I would argue that the standards encoded into the online dating system are inherently gendered.

A negative reputation, then, is the result of failure to conform to the group standards of the dominant culture. When users of these sites fail to perform and present the gendered identities expected of them, this transgression is seen as a failure on their part to uphold expected moral codes, and reputation is thus formed and assigned within the system. Accordingly, if “those who defy the dominant position will incur a form of disapproval that will lead them to be less trusted, liked, and respected in the future,” [7] then s/he who seeks to avoid a bad reputation must necessarily come to both understand and perform the expectations of the dominant position.

Reputation is not simply about purchaser choice and assisting purchasers to make choices that will best satisfy their needs – indeed, it depends for its power on a resulting regulatory force. Looked at in its full social context, reputation functions as a form of surveillance and, “like surveillance, may induce people to police themselves.” [8] The normative effect of reputation systems in online dating environments leads to a situation where “the culturally constructed ways that women express their femininity (emotional, shy, weak and nurturant) and men express their masculinity (unemotional, aggressive, strong and potent) are deemed to be natural.” [9] As such, womyn subject to these expectations do not experience themselves as deviating from individual expectations, but rather as transgressing normative standards. Similarly, men who are “disappointed” in these transactions do not experience their expectations as problematic, but rather are encouraged by the reputation system to enforce conformity with expectations rather than re-consider the expectations.

This analysis suggests that reputation systems in online dating environments function as a form of self-regulating surveillance – they set the standards of expected gendered behaviour, they act to enforce adherence to those standards by stigmatizing those who fail to conform to them, and they normativize those standards, resulting in internalization of the standards and self-policing of behaviour. Far from the transformative tool of cooperation that reputation systems purport to be, in this environment at least they act to perpetuate a particular gendered and sexualized inequality.

It might be suggested that this is an isolated and site-specific issue, relevant only to online dating. I note, however, that of late there have been suggestions that reputation systems move from their current site-specific assessment status to become anchored on the individual identity instead. This would create a mobility of reputation, where individuals could build an amalgamated reputation that would be accessible to any/all persons or organizations interested in entering into a relationship with a particular individual. Before we implement any kind of mobile reputation system (or even before we increase our reliance on existing reputation systems) we must recognize their regulating power and problematize what is being regulated in order to ensure that the enforcement of stereotyped norms of behaviour and performance does not become part of this matrix.

[1] For examples, see the recent “lonelygirl15” (http://www.nytimes.com/2006/09/13/technology/13lonely.html?ex=1315800000&en=7eae0c5f86be8939&ei=5090) and Sunsilk embedded ad (http://www.cbc.ca/arts/media/story/2007/02/04/bridezilla-campaign.html) controversies.
[2] Michele White, The Body and The Screen: Theories of Internet Spectatorship (Cambridge: MIT Press, 2006) at 24 [White 2006].
[3] Hassan Masum & Yi-Chang Zheng, “Manifesto for the Reputation Society” (2004) 9:7 First Monday, online: First Monday http://www.firstmonday.org/issues/issue9_7/masum/index.html at 4.
[4] Sue Lees, Ruling Passions: Sexual Violence, Reputation and the Law (UK: Open University Press, 1997) at 17.
[5] See for example Robert Brym & Rhonda Lenton, Love Online: A Report on Digital Dating in Canada, Toronto 6 February 2001; Canadians and Online Dating, Leger Marketing Report, 9 August 2004.
[6] Lees, supra note 4 at 48.
[7] Cass Sunstein, “Group Judgments: Statistical Means, Deliberation, and Information Markets” (2005) 80 N.Y.U.L. Rev. 962 at 986.
[8] Howard Rheingold, Smart MOBs: The Next Social Revolution (Cambridge: Perseus, 2003) at 126.
[9] White 2006, supra note 2 at 286.

Contested Identities or Controversial Medium? Authentication and YouTube.com
By: Patrick Derby

February 6, 2007


I step outside of my comfort zone, and my identity as a criminologist, to provide the following commentary on authentication and ‘new media’ technologies, specifically in the context of the popular video sharing website YouTube.com. I call the text that follows a commentary because the thoughts and ideas presented herein require further development. That being said, I look forward to your challenges and comments, so that I can further develop this piece into an article.

Authenticity and the Authentication of Identity
I believe it is important to define how I understand and use the concepts of authenticity and authentication. In order to be authentic the object in question must be genuine and reliable or trustworthy. The authenticity of an object is often determined through a process for gaining confidence that the object is what it appears to be; this process is referred to as authentication, and such processes may vary in their formality. By no stretch is authentication new, nor does it emerge with the rise of a networked society. Whether it is ancient artefacts, video statements allegedly released by terrorist organizations, or individual identities, all undergo a process of authentication. As described by Stephan Brands, “[i]n communication and transaction settings, authentication is typically understood as the process of confirming a claimed identity” (Brands, 2005: 1, emphasis in original).

Stranger Society: Authenticity in the City and Virtual World
As I have indicated above, authentication is not new to social life. While individuals once lived their lives in the absence of anonymity, industrialization and the rise of the city significantly altered the dynamics of social living. The emergence of the city facilitated the growth of individualism, privacy, and anonymity, leading some to suggest that we have become a society of strangers (Lofland, 1973). The ‘stranger society’ thesis simply suggests that most of our interactions in everyday life occur with strangers who cannot vouch for our reputation based on first-hand personal knowledge. The unknown reputations and motives of others are a source of uncertainty and insecurity, and various institutions began using surveillance technologies, such as photo identification, to authenticate valid clients.

In the early 1990s, we began to see the emergence of the World Wide Web. Early proponents of the internet promised an anonymous playground, impossible to regulate. However, the more popular the internet became, the more incentive dominant institutions had to establish themselves online. In less than a decade, the vast expansion of information technology made it possible to engage in urban social life without actually being present. Shopping and banking can now conveniently be done online from the comfort of home, while professional and personal relationships (local and global) may be mediated through the internet without any actual (physical) meeting. David Lyon (2001) refers to this declining requirement for co-presence in our day-to-day interactions as the disappearance of bodies.

As internet usage has become more mainstream, so too have new social fears, which have had an impact on settings that allow for online transactions and communications. These fears include, but are not limited to, fears of identity theft and cyber-predators. First, it was quickly realized that for the majority, the internet did not make good on its promises of privacy and anonymity. Most of our online interactions require that we divulge information about ourselves, which may later be pieced back together to reveal a better picture of our real identities. As most of us are aware by now, our personal information has been commodified, and may be used for both lawful and illicit purposes. Second, fears have emerged around the threat of cyber-predators, whether paedophiles, child pornography rings, or even callous men hunting vulnerable women to date for financial gain.

Not surprisingly, institutions have responded to these new fears in an attempt to protect the online economy, spawning an entire industry around online privacy protection, surveillance, and authentication. Parallel to the budding online security industry emerged an ethos of online responsibilization. While I will not go into any further detail on the subject, I will acknowledge (whether I agree with them or not) that great strides have been made by institutions to authenticate the identities of individuals engaging in financial transactions online. What I would like to discuss in more detail for the remainder of my commentary is the authentication that occurs in online communication settings.

Many of us have had the experience of establishing an email account of some sort. Whether we choose Yahoo or Hotmail as our email service provider, or whether we open an account on Blogspot or MySpace, the process is usually similar. Each of these typically requires the user to create a self-generated username and password, which is usually verified using some form of cryptographic technology. But again, as anyone who has created such an account is aware, the information we often provide to establish such accounts is rarely, if ever, accurate.

A quick cruise through the user profiles of YouTube members confirms that I am not alone in providing inaccurate profile information. Given the above, allow me to suggest that, unlike their counterparts responsible for transactional settings, the creators of online social and communication spaces are not preoccupied with authenticating the true identities of their users. Does authentication not occur in social spaces online? This is a question I began to explore within the confines of the YouTube community.

Video Sharing and the YouTube Community
For those who have been hiding under a rock, or simply have not been paying much attention to the media hype enjoyed by the video sharing website YouTube.com: this website had its official debut in November 2005, and by summer 2006 it was the fastest growing website on the internet. In November 2006, the start-up was purchased by Google Inc. for a purported $1.65 billion US. In addition to sharing music videos and movie/television clips, YouTube allows amateurs to post videos or share their experiences and/or opinions via vlogs. Consequently, YouTube has created several internet celebrities, some of whom have gone on to experience fame beyond the YouTube community. While some of these YouTube celebrities have achieved fame as a result of their film making talents, others have done so as a result of contested online identities.

This past week a viral video posted on YouTube entitled Bride Has Massive Hair Wig Out made national headlines after receiving over 2 million hits. The clip appears to be an amateur recording of a twenty-something woman chopping her hair off during a tantrum an hour before her wedding. Debate immediately emerged regarding the authenticity of the video. As it turns out, the clip was an initiative launched by hair product company Sunsilk Canada, and the individuals in the video are aspiring Canadian actresses.

Another contested YouTube identity was that of Bree, more popularly referred to by her username lonelygirl15. Lonelygirl15 debuted on YouTube in June 2006, as a coming of age story through which the audience shares in Bree’s life experiences. In addition to her video postings on YouTube, lonelygirl15 also established a MySpace site to facilitate communications with fans. Despite these efforts to make Bree’s identity as believable as possible, in just over one month several fans began to question the authenticity of the lonelygirl15 video blogs, and by September it was revealed that Bree, a.k.a. lonelygirl15, was actually an actress named Jessica Rose. The YouTube community was divided as several members responded to the lonelygirl15 controversy. While some YouTubers became upset when Bree’s true identity was revealed, others voiced their support for the series’ creative efforts.

While I am not necessarily concerned with which side individuals took in this controversy, I am struck by how YouTubers, and even members of wider society (including popular media), have demanded authentication of the identities portrayed within this virtual social space. Whereas in online financial transactions authentication is top-down, from institutions to users, the examples provided from YouTube indicate that demands for authentication in communication transactions are more likely to be lateral.

Further, after examining the user profile information of selective YouTube participants, I have also come to question whether the lonelygirl15 controversy is really about Bree’s contested identity, given that it is not uncommon for YouTubers to mask their real-life identities. Ironically, even some of those who have rebuked lonelygirl15 are not forthcoming with their true identities, often providing inaccurate user profile information. Rather than these controversies being about authentic identities, I believe the controversy is more rooted in the authentication of the medium used to present video clips such as the lonelygirl15 storyline (vlogging) or the wig out bride. While the traditional medium of movie and/or film may be understood as fictional, the vlogs and viral videos presented on YouTube, for the most part, are conceptualized as authentic, and surely the creators of lonelygirl15 and the executives at Sunsilk Canada have intentionally exploited the authenticity of the YouTube medium.

Recently, a reporter asked an advertising executive whether ‘net seed’ clips such as wig out bride are going to become the ‘new normal’ of advertising. If they are, and if the post-9/11 ‘new normal’ is any indication of the events to come, I conclude my commentary with the following questions: Will YouTube.com become a virtual battleground, and will YouTubers become the foot soldiers in a ‘war on authenticity’?


BRANDS, S. (2005) Authentication. Available online at: http://www.idtrail.org/files/Authentication_Brands.pdf

LOFLAND, L. (1973) A world of strangers: Order and action in urban public space. New York: Basic Books.

LYON, D. (2001) Surveillance society: Monitoring everyday life. Philadelphia: Open University Press.

Patrick M. Derby is an MA Candidate with the Department of Criminology, University of Ottawa.
“Citizen Journalism” and Privacy
By: Teresa Scassa

January 30, 2007


It is increasingly commonplace for video of events, captured by ordinary individuals, to make the news. With the ubiquity of camera phones, the likelihood that someone will be on hand to record incidents otherwise lost to the news media increases significantly. To give an illustration, in the first week of January, a Nova Scotia cabinet minister was forced to resign when the media broadcast images from a cell phone video which showed him leaving the scene of an accident. The video was captured by a witness to the accident.

Examples like this are only one variety of so-called citizen journalism, which can take many forms. In some cases, citizens capture video, or provide commentary on news stories to major media outlets which report and communicate these contributions alongside their professionally prepared content. In other instances, individuals or collectives become the news intermediaries by creating alternative web sites to disseminate news or information on the theme or topic of their choice. Individuals may also dispense with intermediaries entirely, and create their own blogs, or post video footage or verbal commentary on their own website or on a content-sharing forum such as YouTube. These phenomena have given rise to a lively debate about the very nature of journalism.

Citizen journalism raises interesting privacy issues. Online video footage, photographs and even written commentary can feel extremely invasive of one’s private sphere. This is particularly the case where one has no expectation that one’s activities are being recorded. Yet in Canada, for example, statutes such as the federal Personal Information Protection and Electronic Documents Act (PIPEDA), the Personal Information Protection Act (PIPA) in each of B.C. and Alberta, and the B.C. Privacy Act (to give a few examples) contain exceptions for information collected, used or disclosed for journalistic purposes. These exceptions from basic privacy norms recognize that the public interest in news events will tend to outweigh individual privacy interests.

What is news, then? And what is journalism? Is it anything that takes place that someone considers worth reporting or worth reading about? Or is news defined in terms of either who gathers it (journalists) or who reports it (established media)? To a large extent, the legislated exceptions from privacy legislation mentioned above seem premised on a particular understanding of journalism – one that involves an executive editorial control that acts as a filter for inappropriate content, and that follows accepted norms for news reporting. Yet there is a push in some quarters to recognize ordinary citizens acting as news intermediaries as being engaged in journalism. (See, for example, the discussion by Michael Geist in “We are all Journalists Now”, http://www.michaelgeist.ca/index.php?option=com_content&task=view&id=1280)

Where citizens send their cell phone videos to news outlets to be broadcast as part of television news programs, the result can be characterized as traditional media outlets expanding the scope of sources on which they rely for news footage. The screening mechanisms, quality control, verification measures and so forth, presumably remain in effect. Thus it is likely that cell phone footage broadcast over television networks will benefit from journalism exceptions in privacy legislation.

The situation is less straightforward, however, when so-called citizen journalists avoid the intermediation of professional news outlets and offer their footage online by posting it on private, non-professional news sites, on content-sharing sites such as YouTube, or on their own personal websites. Absent the formal infrastructure, do their activities constitute journalism? To put it another way, do the exceptions protect an industry, or a particular kind of activity? And if it is the activity, then is there a basis for distinguishing between activity that merits the label ‘journalism’ and that which falls below the unarticulated standard? (And here again, a journalist might be defined in terms of the acceptance of their work by an established media industry.) It is interesting to note that in a recent decision from the U.S. District Court of South Carolina, a judge, in considering whether a blogger’s comments were ‘journalism’, proposed a functional analysis “which examines the content of the material, not the format, to determine whether it is journalism.” (BidZirk, LLC v. Smith, April 10, 2006)

Of course, with a statute such as PIPEDA, which only applies to the collection, use or disclosure of personal information in the course of commercial activity, making one’s cell phone video footage freely available to all interested parties does not trigger the application of the Act in the first place. B.C.’s PIPA does not apply to a person acting in a “personal capacity”, whatever that might mean. (If someone is not acting in a “personal capacity” when they post video footage of events online, then in what capacity are they acting? Is it necessarily journalistic?) It also does not apply where the collection, use or disclosure is for journalistic purposes “and for no other purpose”. (Query: what is a journalistic purpose? Is it just to see something in print, or does it include a desire to right a wrong, see justice done, fight crime, fight pollution, etc.? If these goals are part of the purpose for posting footage, for example, then is this a journalistic purpose alone, or a journalistic purpose combined with some other purpose?) B.C.’s Privacy Act, which creates a cause of action for a violation of an individual’s privacy rights, provides that a publication of material does not violate privacy if “the matter published was of public interest”.

The wording of these various exceptions raises interesting questions about the scope and purpose of journalism exceptions in privacy legislation. Is the goal to allow an industry to continue to operate in its customary manner? Or do the exceptions serve a broader public interest objective? The B.C. Privacy Act (to use an example) focuses on the issue of the “public interest” in determining whether a publication is a violation of privacy rights. With cell phone footage posted online, therefore, the issue might be whether disclosure of the footage served a “public interest”. One may wonder whether the choice by the drafters of such statutes as PIPEDA or PIPA to use “journalism” as the basis for the exception aims to capture more than simply the public interest. In other words, is it possible that those statutes focus on a more traditional concept of journalism which assumes the added protective layer of editorial choice and unwritten norms or conventions?

Some say citizen journalism will ultimately make politicians, police, public figures and corporations more accountable, as they can no longer assume that their conduct will remain largely insulated from public view. However, others raise concerns about the impact of some forms of citizen journalism on personal privacy. They note that the targets of such journalism may not just be public figures and institutions, but may be private citizens captured committing minor infractions in the course of their daily lives. For example, if municipal by-laws say that trash cannot be put on the curb until the morning of pick-up day to prevent animals from getting into the trash and making a nasty mess, does a person who puts their trash out the night before deserve to have their photograph posted on a website which denounces those who contribute to urban pollution? Perhaps they do. But the level of exposure may be more than is warranted by the public interest. It might expose that individual to a backlash that is out of proportion to the offence. It is also not particularly nuanced; it does not allow for a consideration of the “other side”. Is there a difference between journalism and vigilantism? In Oklahoma City, one man decided to post on his web site video footage of johns soliciting sex from prostitutes in an effort to combat prostitution in his neighborhood. (http://showmenews.com/2006/Aug/20060817News023.asp) Is this citizen journalism or vigilantism? Or a bit of both?

To digress for a moment, it is interesting to consider the debates that have arisen regarding the online publication of court decisions. The publication of court decisions has always been an important part of an open and transparent system of justice. However, the impact on individuals of the internet publication of sensitive personal information has required some modification of this general principle of openness. The Canadian Judicial Council (CJC) has developed a protocol for the drafting of reasons for judgment by judges which is intended to balance the principle of openness with the reasonable privacy interests of litigants. (http://www.cjc-ccm.gc.ca/article.asp?id=2814) Yet in the absence of a court-ordered publication ban, the CJC would only restrict the publication of personal information in court decisions in the most extreme circumstances:

. . . there may be exceptional cases where the presence of egregious or sensational facts justifies the omission of certain identifying information from reasons for judgment. However, such protection should only be resorted to where there may be harm to minor children or innocent third parties, or where the ends of justice may be subverted by disclosure or the information might be used for an improper purpose. (CJC, Recommended Protocol for the Use of Personal Information in Judgments, para 31)

Of course, the publication of judicial decisions is not citizen journalism. The motivation towards openness in the reporting of judicial decision-making is supported by both a strong sense of an underlying public interest that is being served, and confidence in a professional and accountable judiciary. To return again to the journalism exceptions in privacy legislation, perhaps it is a sense of the public interest served by the professional news media combined with a certain confidence (whether warranted or not) in the professionalism and accountability of the established news media that lies behind the legislated exceptions to privacy norms in the collection, use and disclosure of personal information. If this is the case, then citizen journalists should be wary.

Teresa Scassa is Associate Professor and Director of the Law and Technology Institute at the Dalhousie University Law School.
When Less is More: Privacy, Security and Civil Liberties from Johannesburg to Washington
By: Jena McGill

January 23, 2007


Events deemed “national emergencies” have long provided justification for infringing civil liberties. In some instances, “security concerns” have led to the complete revocation of even basic rights, as was the case during the World War II internment of more than 22,000 Japanese Canadians on the basis of an alleged security “threat.” As we are well aware, “security” against the “terrorist emergency” has become the unofficial trump card of the post-9/11 world.

As a result of ballooning security issues and the threats that security “solutions” often pose to privacy interests and civil liberties, understanding the tension between privacy and security has grown both increasingly important and progressively more troublesome. In response to escalating levels of unwelcome surveillance and the scores of other unsolicited, privacy-invasive practices that pepper our day-to-day lives in the name of security, privacy advocates continue to call for appropriate limits on privacy-eroding laws and technologies that threaten to eat away at our privacy interests and civil liberties.

In the quest to define and promote these limits, one of the greatest challenges for the privacy community is answering the “how to” question when it comes to balancing privacy-related values with other, equally important but sometimes competing interests and rights. The privacy versus security contest is perhaps the most topical and certainly one of the most difficult tensions with which we must currently come to grips. The two ideals are often pitted against one another as rivals in an “either/or” dichotomy. An increase in security will necessarily come at a cost to our privacy and civil liberties – a cost that the privacy community generally deems too great to pay… or is it?

Earlier this month, news headlines hailed the success of a massive 350-camera closed-circuit television surveillance system installed throughout downtown Johannesburg, South Africa in 2001 [1]. Branded as one of the most dangerous cities in the world, Johannesburg credits the downtown cameras with drastically reducing the city’s crime rate – generous estimates cheer an 80% decrease in crime following the installation of the surveillance system. Prior to the introduction of downtown surveillance, Johannesburg’s high level of crime was blamed for stifling the social and economic life of the city, and virtually paralyzing its population. With crime now on the decline, Johannesburg officials anticipate that the city’s economic and social life will rebound and it will become a thriving metropolis and business centre. Extensive, privacy- and anonymity-eroding surveillance has, ostensibly, saved the city.

Contrast Johannesburg with the latest round of U.S. law-making “in the name of national security.” The federal government is currently finalizing a plan to add to the FBI’s system of federal and state DNA databases the genetic codes of tens of thousands of illegal immigrants, captives in the “war on terrorism” and others accused but not convicted of federal offenses [2]. In most states, a person must be convicted of a crime before his or her DNA is added to the national system. The new plan, however, would apply to any U.S. citizen arrested under federal authority and to all non-U.S. persons who are detained for any reason at all. (The majority of the latter group will inevitably be illegal immigrants caught at the border or rounded up by law enforcement after entering the country.) This plan strikes a balance that has become typical of U.S. policy-making post-9/11: less privacy in the name of more security. Predictably, proponents allege that increasing the pool of DNA profiles available to law enforcement officials will assist in solving crimes and will make it easier to identify and track potential “terrorists.” Opponents of the plan, including the privacy community and the American Civil Liberties Union (ACLU), allege that mass seizures of biometric information are a gross violation of individual privacy and erode basic civil liberties.

The impetus behind both the Johannesburg surveillance system and the U.S.’ DNA collection plan is not dissimilar – to prevent crime and increase the efficiency of law enforcement [3]. In the latter example, as the ACLU points out, there is a very high risk that the collection and retention of DNA by government agencies will have a seriously detrimental impact upon individual privacy and civil liberties. The former case, however, is less certain. The privacy-invasive surveillance network appears to have impacted positively upon the rights of Johannesburg’s citizens by ensuring a higher degree of safety in the city’s downtown. Individuals are now able to participate in their communities and more fully enjoy their rights and freedoms. While the dialogue of the privacy community often focuses upon the negative effects that privacy-invasive technologies can have upon rights and liberties, the Johannesburg example asks us to consider how such technologies and practices may in fact work to further civil liberties and enhance the enjoyment of rights.

When we talk about privacy, it is always necessary to ask whose privacy is at stake and under what kinds of circumstances. These questions may yield very different answers depending on the context and the relative weight of privacy as against other relevant values and interests in a given situation. In the clash between privacy and other interests, and particularly when it comes to striking a balance between privacy and security, the North American privacy community often adopts a “more privacy equals more liberty” standpoint. We know, however, that this equation does not always hold true. Feminist scholars, for instance, have highlighted the ways in which privacy has been used as a shield to cover up the degradation and abuse of women and others in the private sphere. Too much privacy is not only possible, but can lead to deeply harmful outcomes.

The concern at the opposite end of the spectrum, of course, is that a right once ceded is eroded. Privacy infringements may be subject to a classic slippery slope argument – give away a little and you risk losing a lot. Are there bright line differences between gratuitous invasions of privacy and necessary sacrifices made in the name of some “greater good”? In the abstract, it is easy to agree that the concept of privacy is important and should be defended. The ways in which privacy’s theoretical importance translates into diverse real world situations is incredibly varied and at times conflicting. This makes privacy a necessarily qualified concept, and means that it is critical to contextualize its relative value within the larger spectrum of competing and complementary values that exist in a given situation.

The relative nature of privacy includes a number of considerations. Most would agree that while almost all societies appear to value privacy to a certain extent, there is a great deal of disparity in the ways in which privacy is sought and obtained, and in the levels of privacy to which a given culture or society aspires. A related inquiry is whether or not there are any aspects of life that are innately private and not just conventionally so. One of the ongoing difficulties in defining privacy and calculating its weight is that it is strongly relative and inevitably contingent on factors including economics, social norms and the technology available in a given socio-cultural domain.

There is perhaps a third dimension to the relative nature of privacy that depends upon basic human needs. The citizens of Johannesburg have, willingly or otherwise, sacrificed a great deal of their privacy and anonymity to the downtown surveillance system. Without surveillance, however, everyday activities carried an increased risk as a result of the city’s high crime rate. When basic needs, like physical safety, are not being met, as was the pre-surveillance situation in Johannesburg, privacy may be accorded less weight in balancing a society’s needs.

This idea resonates within the framework of Maslow’s Hierarchy of Needs and related schemes designed to explain human needs and desires. Such hierarchies propose that humans strive to meet successively higher psychological needs like esteem, respect and self-actualization only as their basic physiological needs, including physical safety, food and shelter, are satisfied. The basic concept is that the higher needs only come into focus once all the needs lower down in the pyramid are satisfied. Where does privacy fall in the Hierarchy of Needs? It is possible to argue that privacy is or should be located somewhere above basic physiological needs. When the necessaries of life are not fulfilled, privacy takes on a relatively diminished importance.

We spend a great deal of time thinking, talking and writing about how to define and defend this “thing” called privacy. One of the critiques often leveled against privacy is that its definition is subject to a patchwork of meanings, making it difficult to “pin down” and complicated to use and protect. At the end of the day, maybe this is not a critique at all, but recognition of privacy’s relative and multiple character and its different meanings, uses and levels of importance around the world. Johannesburg’s surveillance project reminds us that “less may sometimes mean more,” and that in our own privacy dialogue we must continually recall the context within which we live and work.

[1] CBC/Global News Bit (January 6, 2007).
[2] See Richard Willing, “Detainee DNA may be put in Database” USA Today (January 19, 2007), online: http://www.usatoday.com/news/washington/2007-01-19-detainee-dna_x.htm.
[3] I acknowledge, but do not address here, the critical differences between the nature of the information being collected in Johannesburg and that proposed in the U.S. Capturing a video image via surveillance and collecting a genetic code through mandatory detainee DNA collection represent two distant points on a spectrum of invasive data collection practices, not least because of their differing potentials for misuse.

Who is That Masked Woman? Masking and Unmasking in Public Places
By: Gary Marx

January 16, 2007


In the Netherlands the government has proposed a public ban on covering the face with clothing such as the burqa, the Islamic head-to-toe robe. Similar restrictions have been suggested, and in specific contexts are in place, elsewhere in Europe. For Dutch leaders in a government facing re-election, the issue reflects contemporary religious and political conflicts, however minuscule the number of affected women. But the issue goes beyond current events to broader questions involving expectations about public behavior.

In modern societies the law is relatively clear about the rights others have with respect to the image an individual offers in “public”. Unlike some traditional societies in which the eyes must be averted or where veils are mandatory, in our culture appropriate looking is permitted (and can even be a sign of respect). In Canada and the United States what can be seen in public can also generally be photographically captured.

The presenting individual has rights as well. He or she can appear in ways that others may find offensive or provocative (whether sexually or stylistically). While the fashion police and the reticent may disparage such appearances, the real police have no criminal sanction to enforce. The Enlightenment heritage protects the freedom to present the self as one chooses – I am free to be me and maybe even you. This contrasts markedly with societies where dress and body adornment are rigidly controlled and tied to social position.

In our society individuals are permitted and even encouraged to alter and disguise their “natural” appearance. They can wear baggy or padded clothes or those that accentuate muscles and curves. They can dress in age-inappropriate ways and wear the clothes of the opposite sex. Cosmetic surgery, liposuction, botox, hair implants, elevator shoes, makeup and tinted contact lenses are viewed by many persons as admirable forms of self-expression and self-help.

There are of course limits. The law in principle is clear about what must not be offered in public. The famous “naked man” of the University of California, Berkeley was arrested many times for what he failed to wear. In many jurisdictions women who breast feed in public places (or even in “private” places accessible to and visible to the “public”) may face arrest or exclusion.

The law and our expectations however are less clear and in conflict regarding what must be offered in public. When must the face be revealed?

It is well within the bounds of a pluralistic society to accept covering the face for legitimate purposes in public places, whether for religious reasons, anonymity in political communication, modesty or to hide disfigurement (e.g., the phantom of the opera). The acceptable link between form and function with respect to a mask on the ski slopes, the motorcycle helmet visor, the respirator or a mask for a costume party is clear. Society, or at least literature, might have been worse off if Zorro and the Lone Ranger lost their anonymity.

But what of settings in which a mask is worn for anti-social purposes, has unintended undesirable consequences or its link to religion is disputed? What happens when a valid religious justification conflicts with other important goals?

In the late 19th century a number of U.S. states passed anti-masking laws directed against the Klan. Consider as well prohibitions on wearing hooded sweatshirts in shopping malls or entering a bank while masked. The issue is not just that malls, like banks, are private places and hence freer to set their standards, but that as means of deterrence, accountability and identification there are strong grounds for prohibiting masking. In Denmark a series of bank and post-office robberies was carried out by a woman dubbed the “burka-robber”. In some jurisdictions there are additional penalties for wearing a mask when carrying a concealed weapon or in the commission of a crime.

The modern notion of a public sphere (whether a physical or cyber place) invites all citizens to participate regardless of social attributes. It implies legal rights of access, observation and expression. But it also involves more informal expectations of reciprocity in which individuals encounter each other as equals and are expected to behave within the bounds of civility (whether required legally or simply by manners). One aspect of this is being able to respond to the other by reading facial appearances and expressions.

The masking of the face brings a lack of reciprocity relative to those who present their faces (however adulterated). The masked person can see us; why can’t we see them? One-way mirrors are not much appreciated in open societies. Paradoxically, the covered face calls attention to itself and is in your face far more than the visible one. Beyond inhibiting interaction, the inability to see an individual’s face may engender fear and discomfort, given the symbolism associated with the mask of the hangman and the criminal and the presumption that those who are hiding do indeed have something to hide.

But what is being hidden when a woman covers her face and body? And why?
In Islam and Judaism covering the head is a sign of humility before God. Yet the burqa, in being restricted to women, goes far beyond this to issues of gender equality. Clerical supporters of the burqa suggest that it is a way of calming male passions, as well as an expression of modesty. Whether it has this impact (or the reverse, given our fascination with what is hidden) is a question for empirical research. But even if it is factually correct, why not be consistent and consider female passions that may be aroused by viewing the unmasked male? In a less sexist and sexualized environment perhaps the need to mask the face would not be felt. Until then, gender equity would suggest the need to mask men as well as women. The mandatory masking of women, as was done under the Taliban, excludes them from full interaction in public settings. The dynamism and heterogeneity of the public sphere and the serendipitous encounter favored by urban theorists such as Jane Jacobs are lessened.

A number of European cases involve prohibiting teachers or students from masking their faces. Courts have ruled that the interaction that occurs in the classroom is inhibited when the face cannot be seen. Similarly, the broad vision required in driving a car may be impaired, and a photo-id on a passport or driver’s license becomes moot.

Rather than legal prohibition, there may be indirect pressure against masking because of the secondary consequences it is presumed to have. For example in Amsterdam and Utrecht there are proposals to deny benefits to unemployed women who wear the burqa because it is seen to make them unemployable. An alternative of course would be anti-discrimination legislation in employment.

Opposition to masking based on its functional consequences is distinct from that based on implications for separating church and state or for the maintenance of order. In France, for example, the prohibition on head scarves and skull caps in schools reflects secularism and goals of equity and assimilation. In Germany there have been proposals to do an end run around the issue by requiring all students to wear uniforms. In some United States high schools there are prohibitions on clothes reflecting gang colors or those deemed to be too provocative.

Such cases reflect the inherent value conflicts between the individual and the community (or better, between various communities) which need to be continually debated. Yet these self-presentation cases are not based on a concern with making the individual’s unique identity public. Indeed, with respect to symbols of group affiliation, the situation is reversed – the individual seeks to advertise rather than hide an aspect of identity, while authorities seek to prohibit this.

Given the ubiquity and controversy over public surveillance and the move toward facial recognition technology, masking the face in public might even be seen as heroic resistance to the loss of public anonymity (let alone a way to resist disease). In one sense it is equivalent to using a paper shredder, pseudonyms, encryption and a floppy hat and sun glasses to protect privacy.

The issue may also be temporary as a result of the pace of innovation in the tools of identification. In a few decades it could even be seen as a quaint historical remnant of a backward age when identity was still determined by appearance and cards in the wallet, rather than by involuntary transmissions from implanted chips or distinctive scent.

But until then, it would be as wrong to categorically prohibit masks related to religious beliefs in public as it would be to require them. As with so many of our most contentious social issues the answer to masks should not be “never” or “always” but “it depends”.

What it ought to depend on is the context, motives, consequences and alternatives. In settings where the social costs can be significant or where an important community goal is subverted, masking is undesirable. For benign activities such as walking on the street or visiting the library, tolerance is required, although it is not cost free.

One would also hope that those who support masking are aware of the impact this may have on others and of the legitimate reasons for opposition to masking motivated not by religious intolerance, but by a different weighing of competing values.

Gary T. Marx is Professor Emeritus MIT (garymarx.net).

By: Ian Kerr

January 9, 2007


With the lights out, it's less dangerous
Here we are now
Entertain us
I feel stupid and contagious
Here we are now
Entertain us

Kurt Cobain

all is quiet on new years day. the pizza last night at colonnade was warm and comforting not unlike our nearby fireplace, where i now sit cross-legged with my laptop, well, on my lap. it occurs to me just how lucky i am and how nice it was to spend a low key evening with family. (i have always thought of new years eve as a night that you have to kiss people you normally wouldn’t spit on…happy to have transcended that phase of life.)

rather than waking erin and newton, both of whom have an incredible ability to sleep easily and peacefully, i strangely find myself reflecting on the time that i spent last summer nosing around the caselaw on sniffer dogs.

at the time, i was preparing for an NJI conference, where my idtrail colleagues and i offered 150 judges a full day workshop on the reasonable expectation of privacy. we will be doing a funky-fied version of it again in montreal at CFP 2007, which is being organized this year by privacy guru and director of strategic policy and research at the office of the privacy commissioner of canada (and former idtrail research coordinator) stephanie perrin.

sniffer dogs look a lot like my dog — however, unlike newton, they are specially trained to “sniff-out” contraband such as illegal drugs or explosives.

i am not a criminal law specialist, nor do i claim any particular expertise in the law of search and seizure. but i do care a lot about it. it is the area of law most significant to the development of a crucial legal construct: the reasonable expectation of privacy.

the ‘reasonable expectation of privacy’ standard provides the benchmark for circumstances in which the state (and sometimes the private sector) is constitutionally permitted to interfere with an individual’s privacy interests. my own interest in the subject and part of the contribution that i had hoped to make at the NJI workshop stems from my deep concern that emerging surveillance technologies will be understood by courts to diminish our expectation of privacy; as i discuss in my NJI presentation, i think that this would be a dreadful, terrible mistake. an epic social disaster, really.

the crossroads for an exploration of the most recent sniffer dog cases, in my view, is a case that had nothing to do with sniffer dogs: a 2004 supreme court of canada decision called tessling.

at issue in tessling was the RCMP’s use of FLIR (forward looking infrared), a technology that captures the infrared portion of the electromagnetic spectrum. as the supreme court described it, it is a camera that takes pictures of heat instead of light. in tessling, the RCMP used an airborne FLIR system so that officers could measure from way-up-in-the-sky the heat emanating from a house occupied by a guy called walter tessling.

you see, the RCMP were suspicious that tessling had a grow-op in his house. but their sources were unreliable. the point of using airborne FLIR was to get better evidence — without trespassing on the property — so that an officer could appear before a judge and assert reasonable grounds to believe that tessling was growing pot in his house. (mere suspicion is not enough to get a search warrant.)

tessling’s lawyer argued that, without a warrant, the airborne use of FLIR to measure the heat coming off the walls of his house amounted to an illegal search and that the FLIR-evidence ought therefore be inadmissible in court.

the supreme court disagreed, stating at paragraph 63 of the decision that:

external patterns of heat distribution on the external surfaces of a house is not information in which the respondent had a reasonable expectation of privacy.

i could go on at some length about the decision and the legal framework used to determine that walter tessling did not have a reasonable expectation of privacy in the heat waves emanating from his house. but a discussion of the tessling decision or the legal framework it supports is not my current purpose. (if you are looking for that background, you should *most definitely* check out my friend and colleague jane bailey’s superb NJI presentation.)

since my focus is on sniffer dogs, tessling is relevant only insofar as its outcome seems to invite subsequent courts to consider adopting an extension of its logic — namely, in searches involving sniffer dogs, to ask:

whether external patterns of smell on the external surface of a knapsack is, or is not, information in which a person holds a reasonable expectation of privacy.

with the growing concern about various sorts of contraband, this is a burning question in canadian courts.

the answer to this question, i suggest, needs to be understood within the broader context of a recent and increasing trend in law enforcement — the adoption of ‘jetway’ programs (smell the irony in the ‘jetway’ nomenclature: this program is geared towards the surveillance of people who can ill-afford to travel by plane; it mostly takes place at bus terminals.)

jetway was developed in the US but has been used across canada for about 5 years. according to our courts, this program targets travelers said to look “out of the norm” in terms of their clothing, their behaviour, or their actions. once targeted, the abnormal-looking-individual is approached by a police officer along with a four-legged friend. the officer immediately shows police identification and engages the target in conversation, watching all the while for unusual behavior. the target is either discounted quickly and allowed to walk away, or is further engaged in a conversation that swiftly becomes more “personal and intrusive”. police describe these encounters as “strictly consensual”, claiming that the target is free at any time during the conversation to walk away. however, the police will often demand to see the target’s travel tickets and identification; sometimes this leads to a further ‘request’ that the target ‘consent’ to a baggage search. targeted persons usually capitulate. most don’t realize that they have any choice in the matter.

even if a target refuses to consent to a baggage search, the police dog does with its nose what the officer was not permitted to do with his or her hands and eyes: the pooch determines the contents in the bag and reacts in response to certain forms of contraband. permission or no permission, the sniffer dog sniffs.

funded to the tune of more than $500,000, canada’s federal jetway training course has been responsible for training hundreds of RCMP officers and enforcement officers from other agencies. similar provincial and municipal programs exist. approximately 5% of officers trained are reported to participate in it daily.

Q – do these sniffer dog programs constitute a ‘search’ of the sort that ought to invoke constitutional safeguards?

it turns out that canadian courts are all over the map on this…

(so much so that i am willing to supervise the PhD of anyone who can convince me that they could provide a theoretical account that reconciles the different decisions that the courts have rendered on the issue.)

consider the following sample of judicial pronouncements from across canada:

1. “The use of investigative tools and aides such as police dogs to detect contraband or explosives on public buses is not beyond the realms of reasonable expectations of the traveling public.
The dog sniff does not constitute a “search” within the purview of section 8 of the Charter. As there was no “search”, there could be no breach of [the target’s] right to be secure from unreasonable search or seizure.” [R. v. Gosse at para 28 and 40 (New Brunswick)]
2. “I find that the police conducted searches without the consent of [the targets] prior to their arrest by the use of the police dog. I reject the argument that [the dog] was simply used as an investigatory technique. It is clear from the evidence … that the dog was extremely reliable in detecting the odour of drugs emanating, as previously stated, either from drugs themselves, a recent presence of drugs, or items such as cash that have been in the presence of drugs or handled by persons who have themselves handled drugs. The sole purpose of the dog being at the bus depot that day was to assist the officers in locating drugs.” [R. v. Dinh at para 28 (Alberta)]
3. “In conclusion, I am of the opinion the [target] did not have a subjective expectation of privacy that could reasonably be supported. [The target] chose to travel by public transport which would provide no control or protection from others entering his immediate space. The use of dogs by police was known and he was aware of the effect of passing in close proximity of such a dog. The use of trained police dogs to detect the scent of contraband in public areas such as train, bus and airplane depots is a legitimate police investigatory tool and does not infringe on any legitimate privacy interest protected by section 8 of the Charter.” [R. v. McCarthy at para 36 (Nova Scotia)]
4. “I am not persuaded that the judgment of the Supreme Court of Canada in Tessling is supportive of the … position that a dog sniff is not a search. In Tessling, the house of the accused was specifically targeted as a result of information that the accused was involved in a marijuana grow operation. I see a significant difference between a plane flying over the exterior of a building (on the basis of information received) and the taking of pictures of heat patterns emanating from the building, and a trained police dog sniffing at the personal effects of [the targets] in a random police search.” [R. v. A. M. at para 47 (Ontario)]
5. “Justice Binnie in Tessling notes that FLIR imaging generates information about the home but section 8 protects people, not places. As I noted earlier, he emphasizes the fact that the information generated by FLIR imaging about the respondent does not touch on "a biographical core of personal information", nor does it "tend to reveal intimate details of his lifestyle". Nor does the information that the dog’s actions supply. I conclude that [the target] did not have a reasonable expectation of privacy in the area surrounding his vehicle. The dog sniff did not constitute a search.” [R. v. Davis at para 21-23 (British Columbia)]

if one were to start counting judicial noses in the dozen or so reported canadian decisions, almost half of them have held that the use of sniffer dogs without a warrant constitutes a search that infringes the section 8 Charter guarantee to be secure against unreasonable search and seizure. slightly more than half deny that the use of sniffer dogs constitutes a search — usually on the basis that people do not have a reasonable expectation of privacy in the smells that emanate from their personal effects.

part of the explanation for the apparent schizophrenia in the caselaw is that the reasonable expectation of privacy test requires a decision based on the ‘totality of the circumstances’. to be fair to the courts, the fact patterns for the above decisions range from dogs sniffing knapsacks at bus depots, to dogs sniffing rental cars on open roads, to dogs sniffing kids’ school lockers. it is not surprising that such different facts could lead to different judicial pronouncements regarding the reasonable expectation of privacy in at least some cases.

judicial inconsistencies aside, there are, in my view, several other troubling aspects to the jurisprudence.

for starters, i am uneasy about the pickwickian logic adopted by an overwhelming majority of our courts across canada. it goes roughly like this: if there is no reasonable expectation of privacy, then there was no search. in other words, if i do not have an expectation of privacy in the smells emanating from my backpack, then when three police officers show up at my law school and randomly comb the halls with their trusty german shepherd, sniffing for students with dope, this is not to be considered a police search.

to me, this smacks of humpty dumpty’s scornful response to alice in through the looking glass that, "When I use a word it means just what I choose it to mean – neither more nor less." if police and their dogs are on duty and doing their thing but are not ‘searching’, then what exactly are they doing?!

this approach to defining police searches is further problematized by the recent trend to understand and define privacy in terms of informational privacy. my concern, one shared by many of the participants at our NJI workshop, is that the informational privacy approach is excessively reductionist in nature. once police activities are understood as nothing more than ‘capturing heat emanating from the wall of a building’ or ‘intercepting chemical emissions oozing through a backpack’, it is no longer possible to appreciate the deep social significance of RCMP planes aiming infrared cameras at our homes in the middle of the night or OPP police officers and their guard dogs randomly patrolling our high schools, city streets and bus stations. (thankfully, as indicated in quotation #4 above, this point was not lost on the Ontario Court of Appeal.)

i am also troubled by the fact that, practically speaking, it matters squat what the courts think about privacy or how they define a search. regardless of whether sniffer dogs are said to conduct searches or whether the court finds that a target has a reasonable expectation of privacy in escaping odours, the evidence gathered through jetway programs is, at the end of the day, almost always admitted in law courts. according to most courts, to exclude the evidence would bring the administration of justice into disrepute.

by admitting evidence in spite of the fact that it was obtained through a privacy breach, and/or by failing to provide any alternative remedies in the case of such privacy breaches, the courts are relinquishing the strongest deterrent available to prevent police from orchestrating investigations that are designed to interfere with privacy. without deterrents or remedies, such investigatory techniques are sure to become standard practice. and once they are standard practice – you guessed it – it becomes unreasonable for us to expect the police to act otherwise. quotation #3 above demonstrates this point nicely: since the target must have known that police regularly use dogs to sniff out drugs, he could no longer reasonably expect privacy with regard to smells emanating from his personal effects.

here, the notion of ‘expectation’ is completely stripped of its previous normative commitments.

instead, we are forced, as herbert hart might have put it, to take an external perspective of our expectations of privacy. like holmes’ ‘bad man’, who has not internalized the law as a reason for behaving a certain way but only sees legal rules as mere predictions about what the courts will do in fact, the reasoning adopted in quotation #3 above and by many members of the canadian judiciary tends to reduce our privacy expectations to nothing more than predictions about how the police will in fact behave and what technologies they consider state of the art.

it’s no longer about how they ought to behave.

the discourse is no longer centered on democracy, rights, duties or even interests. it is about the state of the art and the current standards of practice. as such, the ‘reasonable expectations’ test becomes a strange kind of factual inquiry.

the reasoning in quotation #3 above perfectly illustrates the crucial problem with the ‘reasonable expectations’ standard stripped of its normative meaning. once an expectation is understood as nothing more than a prediction, then if you want to change the standard, all you have to do is change the expectations; and if you want to change the expectations, all you have to do is change the standard.

it’s a circle that rolls round upon itself.

and once that ball is rolling, it doesn’t take long to snowball. although i am quite certain that this is not what justice binnie had intended, the post-tessling trend in courts across canada has been to reduce our understanding of police search activities to an impersonal, non-social, merely informational transaction in a way that tends to shrink our reasonable expectations of privacy.

i find this trend particularly disconcerting in light of concurrent surveillance programs in the private sector and in light of rapidly developing surveillance technologies.

in what has to be one of the most unnoticed ‘anti-piracy’ surveillance news stories in 2006, the motion picture association of america very recently sponsored a world tour of two sniffer dogs named ‘lucky’ and ‘flo’. this gorgeous pair of black labs can be seen in this video sniffing-out polycarbonate, the plastic from which CDs & DVDs are made. with demonstrations across north america, central america, europe and asia, the purpose of this tour is to convince customs agents and border authorities, worldwide, to use anti-piracy canine units at airports, seaports, and anywhere else that bootleg CDs & DVDs are being transported. (a special shout-out to JereMe for bringing this story to my attention)

clearly, this is absolutely meshuga (crazy). but, I assure you, it is not fiction. far from it.

and in the same world where dogs are being trained to sniff-out DVDs in gym bags, technologists are perfecting new means of remote sniffing: from simple devices that detect, measure and analyze electricity consumption, to gas chromatography and other advanced forms of machine olfaction that can detect, measure and analyze odours in the air that even dogs cannot.

to take one very primitive technological example, consider digital recording ammeters (DRA). DRA is a technology that is capable of measuring the flow of electricity and producing graphical representations of the cycle of electrical consumption that takes place within a residence. among other things, these graphs can be used to identify grow-ops which, as it turns out, produce a very particular pattern of electrical flow. [grow-ops tend to use 18 hours of light and 6 hours of darkness to grow the plants and then switch to 12 hours of light and 12 hours of darkness in order to simulate autumn, thus producing the buds, which are the saleable product from the marijuana plant.]
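to make the idea concrete, here is a hypothetical sketch (in python) of how an 18/6 versus 12/12 light cycle might be read off a daily load profile. the threshold, wattages and labels are invented for illustration; they are not drawn from any actual DRA tool.

```python
# Hypothetical sketch, not an actual DRA algorithm: inferring a lighting
# schedule from 24 hourly electricity readings (kW). Threshold and stage
# names are invented for illustration.

def hours_of_high_load(hourly_kw, threshold=5.0):
    """Count the hours in a day where consumption exceeds a threshold
    consistent with high-wattage grow lamps."""
    return sum(1 for kw in hourly_kw if kw > threshold)

def classify_cycle(hourly_kw):
    """Guess which light cycle a 24-hour load profile suggests."""
    high = hours_of_high_load(hourly_kw)
    if high >= 17:            # roughly 18 hours on, 6 off: vegetative stage
        return "vegetative"
    if 11 <= high <= 13:      # roughly 12 on, 12 off: simulated autumn
        return "flowering"
    return "no obvious cycle"

# A day of ~6 kW lamp load for 18 hours, then near-baseline for 6 hours,
# versus an even 12/12 split:
veg_day = [6.0] * 18 + [0.5] * 6
flower_day = [6.0] * 12 + [0.5] * 12

print(classify_cycle(veg_day))     # vegetative
print(classify_cycle(flower_day))  # flowering
```

the point of the sketch is only that the pattern described above — a sharp, repeating on/off duty cycle — is trivially machine-readable once the meter data is in hand.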

with DRA, the police no longer need the expensive infrared fly-overs used in tessling. they just need to hook up one of these load profile devices to a nearby public utilities pole; DRA can determine with nearly 100% accuracy whether there is a grow-op (though the DRA cannot, with any degree of certainty, determine what is being grown). of course, DRA could also be used to determine other activities going on inside a home.

canadian courts have considered whether the police’s use of DRA constitutes an unreasonable search. like the sniffer dog cases, these decisions are all over the map. in a slender majority of the cases that i have read (6 to 5), courts have applied a tessling type analysis to DRA. at least four of these decisions held that use of the DRA device to monitor a home was not a search and it therefore did not interfere with the target’s reasonable expectation of privacy — even though the entire reason for using DRA was to surreptitiously determine the nature of a target’s activities inside of a dwelling without transgressing its physical boundaries.

in my view, DRA and other primitive technologies need to be understood in light of one of the most rapidly developing areas in the field of information and communications technology: sensor networks. through the use of wireless technologies consisting of spatially distributed autonomous sensor devices, we are developing an astonishing capability to monitor and meter personal, physical and environmental conditions at greater and greater distances.

and, practically anyone can use these devices for practically innumerable purposes!

it doesn’t require much imagination to think beyond today’s prototypes. consider the ingenious feral robotic dogs. with a few clever hacks, commercially available toy dogs (such as the famous sony aibo) have been turned into robotic sniffer dogs — enabling citizens to ‘sniff-out’ corporate contaminants from remote distances. granted, this project (by the brilliant natalie jeremijenko) is a happy use of sensor networks. a form of counter-corporate sousveillance. but it is not hard to see that there will be other uses, less happy.

the stunning technological developments that are just around the corner should give us some pause when we think about the simplistic and reductionistic way that our courts are becoming more and more inclined to think about the sniffer dog cases in particular and the reasonable expectation of privacy in general.

okay. newton has come downstairs to sniff-out a second breakfast and i hear erin’s quiet footsteps, so i’ll end now with an allegory.

kurt cobain, the troubled soul behind the legendary grunge band, nirvana, set out in the early 1990s to write what he called, “the ultimate pop song”; a song that would bust-down the barricade between alternative and mainstream rock music and perhaps serve as an anthem for generation x; a song that he called smells like teen spirit.

according to pop folklore, the rocker subsequently described as the spokesman for (or, was it against?!) the coming generation of excessive consumption borrowed the song title from a line spray-painted on the wall of his bedroom by his pal kathleen hanna:


as the story goes, since he and kathleen (lead singer of the riot grrrl punk band bikini kill) had recently spent late evenings talking about the politics of anarchy, the future of alternative music and the plagues of humanity, cobain took her graffiti message as a slogan expressing how he had captured the spirit of a generation through his music.

apparently, kurt cobain had no idea what he smelled like!!

comically or tragically (depending on one’s point of view), the paint scrawl had a less inspired meaning. it turns out that hanna’s words were to be taken literally. she simply meant to say that kurt smelled like Teen Spirit™, the deodorant worn by tobi vail (kathleen hanna's bandmate and kurt's then pelvic affiliate). cobain had not heard of this colgate product, nor had he realized that his friends thought of him as branded by his soon-to-be-casual-ex-partner’s scent.

i suspect that none of us really know what we smell of, or who is smellin’ us.



a denial, a denial, a denial, a denial, a denial,
a denial, a denial, a denial, a denial

ian kerr
new year’s day, 2007
Some Thoughts on Camera Phones, Space and Gender
By: Rob Carey

December 12, 2006


For some time, I have been interested in camera phones and their implications for the various concerns this project encompasses. I recently came across the following account by software entrepreneur Philippe Kahn, in which he explains that he invented the device in 1997 to share photographs of his newborn baby:

While Sonia [Kahn’s wife] was doing the real work, I had my digital camera and my cell-phone working together and able to pull email addresses from my laptop. It took a couple of trips to Radio Shack as well as all my sleep for 48 hours. Sophie, our baby, was doing really well and we were able to share picture-messages with friends and family around the world in real time. The eureka moment was when we received messages back from friends and family going: “How did you do this? Where did you get this device?” Within a few days Sonia and I realized that if we could turn a real cool demo into a fully scalable system that could serve millions of picture-mails in real-time we would be building a great business: cool, innovative, exciting and really useful to about everyone.

Kahn situates the camera phone’s myth of origin within the most intimate of social units – the family. In so doing, he establishes a neat congruity between the camera phone and conventional snapshot photography. Bogardus (1981), for example, contrasts the intimate nature of family photographs with the worldly nature of other image-making media: “Instead of being a public form of communication, the snapshot - despite its ubiquity - has always been a private one” (p. 114). Similarly, Metz (1985) argues that photography's chief realm has largely been that of domesticity, viz. the picture that commemorates family observances. He claims that “the kinship between […] photography and privacy, remains alive and strong as a social myth, half true like all myths” (Metz, 1985, p. 82).

As of this writing, however, the camera phone is still sufficiently strange as to be unencumbered by similarly commonplace cultural habits or understandings. Kahn’s account makes clear that networked interactivity is integral to the camera phone’s essence; the camera phone is a protean device capable of a broad range of functions, including text-messaging, e-mail, Web browsing, music and video downloads, games and, of course, image capture. It is therefore difficult to think of it as a home- or family-centered medium in quite the same way that Metz and Bogardus thought of the conventional camera. Indeed, the camera phone is exemplary of the various portable wireless technologies that have altered the microsocial negotiations peculiar to what Goffman called the public order (that is, spaces characterized by face-to-face contact among strangers or the “merely acquainted” (1971, p. xi)).

Interestingly, however, some research suggests that everyday camera phone use corresponds closely to traditional snapshot photography, insofar as it involves sharing information with friends and family (Okabe, 2004; Kindberg, Spasojevic, Fleck & Sellen, 2004; Van House, Davis, Ames, Finn, & Viswanathan, 2005). But this raises an interesting question: if the camera phone is contiguous with conventional home photography, why would anyone actually need such a device when a stand-alone digital camera would suffice? Despite the various uses to which camera phones may be put, the instrument appears to confound even some constituents of the camera phone industry itself. For example, the Consumer Electronics Association (CEA) issued a document in 2004 entitled Click Creatively: Novel Uses for Your Camera Phone, in which eleven of the twelve novel uses could have been performed with a stand-alone camera. Only one – “Let your kids use your camera phone to capture and email a same-day photo to friends during a family vacation” – invoked the camera phone’s interactive capacities. The slightly desperate nature of the CEA’s enterprise is evident in the twelfth suggested use: “Recreate that perfectly presented restaurant meal at home by using your camera phone to take a photo of it next time you dig in!” One could argue that any technology whose prospects depend on a general wish to photograph about-to-be-eaten food is doomed. It would seem, to paraphrase Latour (1997), that the camera phone is a solution to a problem that has yet to be invented. Yet the CEA’s inability to define a distinct role for the camera phone illustrates a critical point: cultural habits surrounding new technologies often arise from concerted efforts to create an ethic of use that defines and directs the user’s engagement (Munir & Phillips, 2005).

Practices surrounding conventional photography were carefully influenced by interests with a commercial stake in the medium. During much of the twentieth century, for example, Kodak promulgated a vision of home and family to which photography was central (the so-called ‘Kodak moment’). Integral to these efforts was the conceptualization of specific subjectivities for whom the taking of family photographs amounted to a kind of moral imperative. Kodak’s advertising often imposed upon mothers, for instance, the obligation to act as camera-wielding archivists of their family’s history (West, 1999). Indeed, Kodak’s strong association of the female subject and ‘home’ articulated a doctrine of separate spatial spheres for men and women so durable as to be subtly – or not so subtly, depending on one’s reading – reproduced in Kahn’s anecdote. Equally durable, of course, are the various other social practices surrounding photography that Kodak and other concerns worked so hard to create. Today, it is absolutely unremarkable to commemorate notable moments in family life – graduations, weddings, holidays – by taking a photograph, even though this notion was once alien to most people (Munir & Phillips, 2005).

It is the struggle to articulate a role for the camera phone in society that interests me. Accordingly, I would like to explore a particular effort to construct the camera phone as a distinctive device, one that is integral to everyday life in a way that conventional cameras (or mobile phones) are not. Specifically, I consider a television commercial depicting camera phone use by a young, white, heterosexual couple. (The commercial can be found here).

Entitled “Duty Calls,” the commercial opens with a shot of two feet clad in women’s dress shoes. Various other shoes are strewn across the floor. Subsequent scenes reveal that the feet belong to an otherwise conventionally dressed man in a shoe store. After several intervening shots, he uses a camera phone to take pictures of the shoes he is wearing. In the next scene, the viewer sees a pregnant woman sitting on a couch with her feet elevated, answering her phone. She holds the phone so that the photograph of the shoes appears where her own feet would be. The ad ends with the superimposed text: “here… phones become dressing rooms”.

In one sense, the commercial conceptualizes space and place in a way that undoes the strangeness of the camera phone. It constructs an ethic of use and a context in which the device makes sense: the woman’s use of the device may be viewed as a liberatory act, insofar as it allows her to experience aspects of the world that exist beyond the boundaries of her home. Yet a deeper reading reveals a curious ambivalence: although the camera phone appears to serve its users by configuring spatio-temporality as a customizable phenomenon, it also delineates a sharp, gendered distinction between domestic space and the wider world. Integral to this interpretation is the woman’s obvious subjectivity as a consumer.

In their historical investigation of gender and urban spaces, Bondi and Domosh (1998) argue that the growth of a middle-class “culture of consumption” (p. 279) played a key role in reconfiguring the contours of contemporary cities. Prior to the nineteenth century, a middle-class woman’s ability to venture into the city was strictly regulated by considerations of propriety. For such women, socially sanctioned activities in the city included “caring and nurturing activities, such as visiting the sick or infirm” as well as excursions to cultural sites and churches (p. 270). Compared to the freedom experienced by a man of comparable position, a woman’s experience of the city’s spaces was relatively constrained. With the rise of a consumer culture, however, a woman’s freedom of movement expanded to include the spheres encompassed by consumer activities. As Bondi and Domosh point out, however:

[I]n terms of space, this development could be potentially disruptive, since it required women, the bearers of “feminine” values, to enter the masculine spaces of the city to act as consumers... [T]his potentially disruptive act was neutralized by the development in the nineteenth century of “femininized” consumer space within the city - if women had to be on the streets of the masculine city, then those streets and stores had to be designed as feminine (p. 280).

Thus, a middle-class woman’s identity as a consumer afforded her limited access to certain public spaces in the city – department stores and arcades, for example, which were shaped to accommodate her status as a consumer. A woman’s ability to experience the city’s public spaces was therefore contiguous with her subjectivity as a consumer.

I do not think it is too much of a stretch to identify at least some elements of the foregoing in "Duty Calls". It is arguable, then, that the commercial not only echoes Kahn's myth of origin, but reproduces a longstanding doctrine of separate spatial spheres for men and women. As Nicholson (1986) argues, such spatial separations are as much figurative as material:

The spatial division separating the inner sphere of the home from the outside world had […] a symbolic significance that did not correspond precisely with the spatial division [...] the separation is more adequately understood as a separation between two worlds governed by different norms and values (Nicholson, 1986, p. 43).

Although the doctrine is long-standing, its various historical iterations have proven supremely adaptable. Leslie (1993), for example, offers a compelling argument that the ‘new traditionalism’ evident in much advertising of the 1990s – in which women were situated in contexts strongly suggestive of traditional family values – represents a nostalgic anodyne against the anxieties arising from a radically new and unstable social landscape:

As a traditional sense of place has been eroded by the instantaneity of electronic culture and the proliferation of homogenized landscapes of consumption, it has been replaced by idealized images of community and place, such as the concept of ‘home’ as it was constructed in the 1950s (Leslie, 1993, p. 691).

Indeed, attempts by advertisers to (re)establish the domestic sphere as a primary locus of women’s identity correspond to various economic and cultural turns, such as post-Fordism, that have altered taken-for-granted social arrangements, both in the home and in the workplace (Leslie, 1993). Concomitant with these shifts has been the exponential growth of information and communication technologies that promise more than ‘instantaneity’ – devices such as camera phones confer on their users the power to reconfigure the contours of their everyday environments so as to modify the experience of conventional spatio-temporal binaries – public/private, work/home, etc. Against this, Motorola seems to offer a deeply ambivalent vision which celebrates the liberatory potential of the new technology, while formulating an ethic of use that etches gendered spatial distinctions into the profound uncertainties of the wireless world.


Bogardus, R.F. (1981). Their "carte de visite to posterity": A family's snapshots as autobiography and art. Journal of American Culture 4: 114-133.

Bondi, L. & Domosh, M. (1998) On the contours of public space: a tale of three women. Antipode 30: 270-289.

Goffman, E. (1971) Relations in Public: Microstudies of the Public Order. New York: Basic Books.

Kindberg, T., Spasojevic, M., Fleck R., & Sellen, A. (2004). I saw this and thought of you: Some social uses of camera phones. CHI 2005, April 2–7, 2005. Retrieved June 6, 2006, from http://portal.acm.org/citation.cfm?id=1056962&dl=GUIDE&coll=GUIDE

Latour, B. (1997) Science In Action: How to Follow Scientists and Engineers through Society. Cambridge: Harvard University Press.

Leslie, D.A. (1993) Femininity, post-Fordism, and the 'new traditionalism.' Environment and Planning D: Society and Space 11: 689-708.

Metz, C. (1985). Photography and fetish. October 34: 81-91

Munir, K.A. & Phillips, N. (2005) The birth of the 'Kodak Moment': Institutional entrepreneurship and the adoption of new technologies. Organization Studies 26: 1665-1687.

Nicholson, L. (1986) Gender and History: The Limits of Social Theory in the Age of the Family. New York: Columbia University Press.

Okabe, D. (2004). Emergent social practices, situations and relations through everyday camera phone use. Retrieved June 9, 2006, from http://www.itofisher.com/mito/archives/okabe_seoul.pdf

Van House, N. A., Davis, M., Ames, M., Finn, M., & Viswanathan, V. (2005). The uses of personal networked digital imaging: An empirical study of cameraphone photos and sharing. Ext. Abstracts CHI 2005, ACM Press; pp. 1853-1856.

West, N. (1999) Kodak and the Lens of Nostalgia. Virginia: University of Virginia Press.
Privacy vs. Equality: Reflections on Re-thinking the Dichotomy
By: Jane Bailey

December 5, 2006


The Supreme Court of Canada has interpreted “expression” very broadly for purposes of defining the extent of Charter protection for free expression. As a result, hate propaganda, obscenity and child pornography have all been found to qualify as Charter protected expression. The state has therefore been required to prove that the restrictions it imposes upon these forms of “expression” are justifiable in a free and democratic society.

Freedom of expression is perhaps most often characterized as an individual liberty – a right to express one’s beliefs free from state intervention. In the context of hate propaganda and obscenity, the overriding justification offered for state intrusion on an individual’s “expressive” freedom has been constitutional obligations relating to the more collective rights of equality and multiculturalism. Legislative restrictions on the individual Charter right to expression free from state intrusion have been found justifiable on the basis that hate propaganda and obscenity undermine the ability, respectively, of members of targeted minority groups and women to function and be respected as social equals. The concern is that the degrading and dehumanizing imagery and text of hate propaganda and obscenity may promote attitudes accepting of discrimination and violence against those groups and their members. Closely tied to this equality analysis is an analysis of the effects of hate propaganda and obscenity on the “dignity” of members of minority groups and women. While the privacy rights of those accused of offending state-imposed restrictions on hate propaganda and obscenity are explicitly considered, the privacy rights of target groups and their members are not. The analysis of the justification for restrictions on child pornography reveals a somewhat different emphasis – focusing more on its effect on the privacy and associated dignity rights of its immediate individual targets – the children abused in its production – rather than on broader social concerns as to the effect of its “message” on attitudes and behaviours toward children that serve to undermine the equality rights of that group and its members.

Why is it that the case law focuses explicitly on the privacy rights of the targets of child pornography, but never explicitly discusses the privacy rights of the targets of hate propaganda and obscenity? Perhaps the most obvious response is that, in fact, the privacy rights of target group members are simply not at play in the contexts of hate propaganda and obscenity. I would suggest that before jumping to that conclusion, we ought to more thoroughly expose and challenge assumptions about the nature of privacy and its relationship with equality underlying both that conclusion itself and much of the analysis in Canadian case law relating to hate propaganda, obscenity and child pornography.

One alternative response might be that recognition of certain privacy-related interests of the individual children victimized in child pornography, and the absence of any similar analysis in the context of hate propaganda and obscenity reflects a particular individualistic, negative liberty approach to privacy that unnecessarily pits privacy-related interests as oppositional to equality rights, in part by failing to give due weight to both the social and collective aspects of identity formation and their relationship with the broader social value of privacy. But is there any value-added in equality-seeking groups investing time and energy in attempts to re-imagine and re-articulate the by now entrenched vision of privacy as a fundamentally individualistic negative liberty?

As thinkers like Nussbaum have suggested, such efforts are not without their dangers, not the least of which is the risk of further inscribing privacy with values of little relevance to all but the most privileged members of equality-seeking groups. While the best legal hope for equality-seeking groups may well continue to be promoting understanding and acceptance of principles of substantive equality, in some instances both the collective interests of those groups as a whole and the related interests of their individual members may also be served by cultivating a more social or collective understanding of privacy and its ends.
“A Man’s Home (Page) is His Castle”
By: Carlisle Adams

November 28, 2006


Many of us have probably heard the saying “a man’s home is his castle”. (By way of parenthetical footnote, let me just mention that I could not find an elegant way to make this old adage gender-neutral: “an [(unspecified gender) entity]’s home is [(unspecified gender) possessive pronoun] castle”. Therefore, for the purposes of this article, I will just state up front that “man” and its variations should be taken to mean “man or woman” and their corresponding variations, and hope that this satisfies any sensitivities in this area.) This saying has been around for quite some time and most people, I think, would probably have some intuitive understanding of its meaning upon hearing or seeing it. However, given some of the new terminology to which we have become accustomed in our Internet age (particularly, “home page”), it may be interesting to explore whether this saying has any relevance in our digital virtual world.

Traditionally, a castle (the residence of a king) has been associated with at least four concepts: protection; identity; privacy; and control. With respect to “protection”, the castle provides a fortress, a stronghold, security from invaders of all kinds. “Identity” suggests ownership, permanence and, at least at some level, a way of authenticating oneself (“I am the king because I have the key to the drawbridge, or because the guard will let me in when I show up”). In terms of “privacy”, the castle is a means of keeping its inhabitants and their discussions away from prying eyes and ears, so that secrets are prevented from flowing out to the general public. The castle also gives a sense of “control” in that the king has ultimate authority over certain aspects of the domain (e.g., the power to decide content and activity within the walls).

When it is said that a man’s home is his castle, the analogy is clear: with his home, the man gets – or at least expects – a measure of protection, a sense of identity, some level of privacy, and a degree of control. So now, out of curiosity if nothing else, one can ask, when we say today that a man has a home page, is the analogy as clear, or is it stretched so thin as to be fragile and useless? Let’s consider the above four associations with the term “castle” to see how well they apply to our digital (rather than physical) homes.

Protection. When Bob sets up a Web server and creates on that server a home page for himself, does he have the security from invaders that a castle might provide? Even a superficial awareness of security issues with websites over the past few years will confirm that the answer is “no”: attackers break into websites every day all over the world to enter, change, or steal data. Well-known buffer overflow or SQL injection attacks are commonly used to break into websites to cause damage. If a website is expecting some user input (such as a username and password), the attacker may send far more data than the buffer allocated to receive this data can hold (a password of 10,000 characters, for example). This input data may “spill over” beyond the buffer into another area of memory. If the overflow area is a place where instructions are executed, and if the overflow characters are carefully constructed to be valid executable instructions, then this attacker will have succeeded in having arbitrary code of his choosing run on Bob’s machine. This is the essence of a buffer overflow attack. With SQL injections, the attack is similar in that data input by the user contains some extra, unexpected characters and these characters are treated as commands to the SQL database that sits behind Bob’s webpage.

If Bob has routines that do proper input data validation (e.g., make sure that the username that has been entered is really a username and is not longer than a specified value), he may be able to avoid some of these attacks, but it is not always easy to distinguish an attack script from valid data. Unfortunately, the conclusion is that unless he puts extensive effort into setting up particular safeguards, Bob’s home page gives him very little protection from the malicious entities that live outside his “virtual walls”.
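Such input validation can be as simple as a whitelist check on length and character set. A sketch, with the field name and length limit chosen purely for illustration:

```python
import re

MAX_USERNAME_LEN = 64  # hypothetical limit for Bob's form field

def is_valid_username(value: str) -> bool:
    """Accept only short alphanumeric usernames; reject everything else."""
    return bool(re.fullmatch(rf"[A-Za-z0-9_]{{1,{MAX_USERNAME_LEN}}}", value))

print(is_valid_username("bob"))                # True
print(is_valid_username("x" * 10_000))         # False: would overflow a fixed buffer
print(is_valid_username("nobody' OR '1'='1"))  # False: SQL metacharacters rejected
```

A whitelist ("accept only what is known to be good") is generally easier to get right than a blacklist of known attack patterns, which is one reason distinguishing attack scripts from valid data is so hard.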

Identity. Is Bob’s home page a valid mechanism for showing ownership and authenticating himself? The answer to this question is also negative. Webpage spoofing attacks are not very difficult to perform and can be fairly successful. Making an identical copy of an existing webpage is almost trivial (it is essentially a copy-and-paste operation to move every image, every piece of text, every logo, etc., from one webpage to another). The next (slightly trickier) step is to get other people to go to the new site while thinking that they’re going to the old one. This can be accomplished using a technique called DNS cache poisoning. A Domain Name System (DNS) server is a machine that performs an important service on the Web: you give it a host name (such as bob.com) and it gives you back the IP address of that host machine (such as 192.0.2.30). This way, your computer can communicate with that machine using the Internet Protocol (IP). The data pair {bob.com, 192.0.2.30} is stored in the DNS cache (a fast portion of the server’s memory). Cache poisoning is an attack in which the attacker changes the data pair so that bob.com is instead associated with a different (i.e., the attacker’s) IP address in the DNS cache. Now everyone who wants to go to Bob’s machine will send bob.com to the DNS server, get back the attacker’s IP address, and then go to the attacker’s website, which has been made to look identical to Bob’s site. Thus, the attacker gets Bob’s customers (their money, or their personal data), and Bob has no way of knowing that this has occurred.
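The cache’s role can be modelled as a simple lookup table. This toy sketch (using addresses from 192.0.2.0/24, a range reserved for documentation) shows why one poisoned entry silently redirects every subsequent visitor:

```python
# Toy model of a resolver's cache: hostname -> IP address.
# Addresses come from 192.0.2.0/24, reserved for documentation examples.
dns_cache = {"bob.com": "192.0.2.30"}   # Bob's real server

def resolve(hostname: str) -> str:
    """Stand-in for a real DNS lookup: just consult the cache."""
    return dns_cache[hostname]

before = resolve("bob.com")             # visitors reach Bob's machine

# Cache poisoning: the attacker overwrites the stored pair, so the same
# name now resolves to the attacker's machine instead.
dns_cache["bob.com"] = "192.0.2.66"     # attacker's server
after = resolve("bob.com")

print(before, "->", after)  # 192.0.2.30 -> 192.0.2.66
```

Nothing on the client side changes: visitors still ask for bob.com and still get a well-formed answer, which is why the redirection is invisible both to them and to Bob.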

If Bob’s machine is a Web server, technologies such as SSL server authentication can help, but they provide no guarantee: users often do not check that the certificate of the site they have reached is the one they’re expecting, and often ignore warnings in pop-up windows even if they do check. In general, website spoofing means that Bob’s home page is not sufficient to prove ownership or to validate or authenticate Bob in any way.

Privacy. Does Bob’s home page give him a private place to store his personal and confidential information? At first glance, this seems like an odd question to even be asking: the Internet is a public space and Internet search engines (such as Google) will find a website once it exists (this is what they were invented to do). The idea of putting something on a website and expecting it to be private is a bit like building a house with glass walls and expecting that others will not see what goes on inside. However, it is possible to create private spaces within a website; typically, these are password-controlled areas (anyone with the password can view the page, and all others are redirected to another page telling them that they are not authorized to see the contents). But the problems with passwords as an authentication mechanism are extremely well-known and well-documented. Compounding this is the fact that the site owner usually wants a number of people to access these areas (friends, family, students in a course, etc.) and so will deliberately choose a password that will be easy for that group of people to remember. It is therefore reasonable to conjecture that the passwords protecting these areas are even weaker than typical passwords, which are already notoriously weak.
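A password-gated area of the kind described above reduces to a single shared-secret check. The sketch below (salt, password, and guess list all invented for illustration) shows both the mechanism and why an easy-to-remember shared password falls to a tiny dictionary attack:

```python
import hashlib
import hmac

# Store only a salted hash of the shared password, never the password itself.
SALT = b"family-photos-2007"                                # illustrative salt
STORED = hashlib.sha256(SALT + b"fluffy").hexdigest()       # weak shared password

def can_view(password: str) -> bool:
    """Gatekeeper for the 'private' area: right password -> page, else redirect."""
    attempt = hashlib.sha256(SALT + password.encode()).hexdigest()
    return hmac.compare_digest(attempt, STORED)

print(can_view("fluffy"))       # True: anyone who knows the shared secret gets in
print(can_view("wrong-guess"))  # False: everyone else is turned away

# The weakness: a short dictionary of common/memorable words often suffices.
guesses = ["123456", "password", "fluffy"]
print(any(can_view(g) for g in guesses))  # True
```

Note that even with proper salting and hashing, the scheme is only as strong as the password the group agreed to share.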

It is unlikely that Bob’s home page will be a safe place for his private data.

Control. Does Bob have authority over his home page? Does he have the power to decide content and activity? Again we see that the answer to this question is negative. Website defacement (in which a hacker changes the content or appearance of a target website, altering or inserting messages, pictures, or other data) shows that owners do not really have complete power over the content on their sites. Furthermore, session hijacking attacks (in which a hacker takes over an active session and begins interacting with the server as if he were the original client, or interacting with the client as if he were the original server), buffer overflow attacks, SQL injection attacks, and so on, demonstrate that the site owner may also have little power over the activities that take place on his site.

Integrity detection / protection mechanisms, intrusion detection / protection mechanisms, and good session management practices can all help but these are hard to do well and, again, unless Bob takes extensive efforts to defend his website, he will not have the control over content and activity that he might wish to have.

Where do we go from here?

OK, so home pages don’t really have any of the properties we might associate with homes. But “home” and “home page” are just names; what difference does any of this make? At this point, it may be tempting to pull out another old saying: What’s in a name? That which we call a rose by any other name would smell as sweet [1]. There may be much truth in this (after all, Shakespeare was right about a great many things!), but we need to exercise a little caution. The danger lies not in using two different names for the same thing, but rather in using the same name (or very similar names) for two different things: “home” and “home page”. All the security (i.e., protection, privacy, identity, and control) we associate with our home does not translate immediately to our home page. Although there are some similarities (locking the front door with a key is something like password-protecting the website), there are some major differences as well. Consider website spoofing: the attacker makes an exact replica of your house and fools your family and friends into going there when they want to visit you. Nothing in the real world (outside the “twilight zone” [2]) corresponds to this. Using essentially the same term (“home” and “home page”) could lead the unsuspecting user to think that similar behaviour – both precautions and activities – is appropriate, when in fact this is not the case.

I am not advocating that we should call home pages something else (it’s far too late for that sort of change, although “start page” might have been a good choice that is free of other associations). But I am suggesting that this could serve as a reminder that we need to be careful when naming things in our created virtual worlds. Choosing names that are familiar (so that people will more readily identify with the technology and feel comfortable using it) can have unintended consequences (including behaviour that is inappropriate because the technology is not as much like its real-world namesake as the appellation might imply). Let this be a lesson to us: choosing names for concepts should not be taken lightly; we need to think through the connotations of the names we pick and consider whether they may lead to security or privacy problems down the road.

So, a man’s home (page) is his castle? Not really; not in the new Wild West of cyberspace…


[1] Romeo and Juliet, Act II, Scene 2.
[2] http://www.scifi.com/twilightzone/
Agency and Anti-Social Networks
By: Ryan Bigge

November 21, 2006


“A man opposed to inevitable change needn't invariably be called a Luddite. Another choice might be simply to describe him as slow in his processes.”
-- Francis Wolcott (Deadwood, Season 2, Episode Four)

Let me start with a strange but charming article in the Sunday New York Times, written by a 24-year-old market researcher named Theodora Stites. In “Someone to Watch Over Me (on a Google Map),” Stites details her multiple memberships in various online communities. She describes the safety and security of friendships made online due to the distancing effects of computer mediation and jokes about being unable to “log out of” awkward social situations in the physical world, thus prompting her to join Second Life.

Reading the article, I found myself taken aback -- not by the extent of her electronic immersion but by the amount of work (labour, as it were) her routine appeared to entail. As Stites writes, “Every morning, before I brush my teeth, I sign in to my Instant Messenger to let everyone know I'm awake. I check for new e-mail, messages or views, bulletins, invitations, friend requests, comments on my blog or mentions of me or my blog on my friends' blogs.” [i]

This sounds like a lot of effort. I would undoubtedly forget to brush my teeth. Clearly, the target demographic of 14-to-24-year-olds who use MySpace has more free time than beleaguered, 30-something grad students. Although I have social networks in the dirt and flesh world, I do not see the utility of an online equivalent.

Of course, it’s hard not to sound like a young fogey when questioning the curious rituals of the younger generation. I’m reminded of novelist Nicholson Baker, who once published a lengthy, impassioned defense of the card catalog in the New Yorker back in 1994. Swimming against the technological tide is often unpopular, but it remains a useful intellectual exercise.

In a recent online interview, danah boyd, a PhD student at the Berkeley School of Information who studies MySpace, and MIT’s Henry Jenkins describe social networking sites as vital resources for students entering primary and secondary schools. According to Jenkins, “The early discussion of the digital divide assumed that the most important concern was insuring access to information as if the web were simply a data bank. Its power comes through participation within its social networks.” [ii]

Jenkins raises important questions relating to the digital divide and making good on access. But when did joining MySpace or Facebook become a necessity, rather than an option? Did we skip a step? At what point does not being a member of a social network site become a liability? At what point does it become impossible to not be a member?

Journalism about social networking sites underscores this aspect of inevitability. In a recent New Yorker article by John Cassidy, Facebook co-founder Chris Hughes explains that, “If you don't have a Facebook profile, you don't have an online identity.” He went on to say that, “It doesn't mean that you are antisocial, or you are a bad person, but where are the traces of your existence in this college community? You don't exist – online, at least. That's why we get so many people to join up. You need to be on it.” [iii]

You need to be on it. Where does choice or agency reside in inevitable change? What if I want to decide for myself? Does that make me “slow in my processes?”

Although I’m aware of the irony inherent in the term (you’re reading this article online, after all), I believe that the neo-Luddite movement offers a useful method of reconsidering the importance of social networking sites. Neo-Luddite philosophy provides a small measure of critical distance from the object of study, along with foregrounding questions of technological determinism. In his recent book Against Technology, Steven E. Jones examines the myth of the Luddites, and how those who smashed looms in 1811 and 1812 continue to inspire and inform debates about technology almost 200 years later.

Incorporating a wide range of writers and thinkers, including William Blake, Mary Shelley, Bill Joy, Edward Tenner and Theodore Kaczynski, Jones investigates how the mythology of the Luddites has persevered and reconfigured itself over time. In its most basic iteration, Jones suggests that, “Many people who identify with the term ‘Luddite’ just want to reduce or control the technology that is all around us and to question its utility – to force us not to take technology for the water in which we swim.” [iv]

The problem for would-be loom-smashers, according to Jones, is that “Modern (and now postmodern) technology is routinely understood as an autonomous, disembodied force operating behind any specific application, the effect of a system that is somehow much less material, more ubiquitous, than any mere ‘machinery.’” [v] My technological skepticism is not sufficient for me to consider acts of rage against the machinery, but I do think it worthwhile to consider the quality of the water we find ourselves swimming in.

Although not a neo-Luddite, Mark Andrejevic, in his examination of webcams, writes of the Digital Enclosure, a concept that is equally relevant when considering social networking sites. According to Andrejevic, “The de-differentiation of spaces of consumption and production achieved by new media serves as a form of spatial enclosure: a technology for enfolding previously unmonitored activities within the monitoring gaze of marketers.” [vi] I like to think of the digital enclosure as a more theoretically robust update of Rockwell’s 1980s hit “Somebody’s Watching Me.”

There is plenty to surveil. According to various studies, young people spend a significant amount of time using Facebook and MySpace. Cassidy points out that “Two-thirds of Facebook members log on at least once every twenty-four hours, and the typical user spends twenty minutes a day on the site.” [vii] Social networking sites might resemble play, but Andrejevic argues that “Consumers generate marketable commodities by submitting to comprehensive monitoring.” [viii] Which makes MySpace and Facebook participation a form of labour, even if it’s invisible to most users.

Andrejevic’s work helps explain why Rupert Murdoch’s News Corporation paid $580 million last year to purchase MySpace. For Andrejevic, the digital enclosure “promises to undo one of the constituent spatial divisions of capitalist modernity: that between sites of labor and leisure.” [ix] Which is to say that 24-year-old Theodora Stites is clearly working two jobs.

Of course, like any theoretical insight, the digital enclosure doesn’t explain everything. I would complement Andrejevic’s work with Angela McRobbie, who has studied how elements of the UK rave scene seeped into the logic of the cultural industries during the 1990s, creating an environment where “the club culture question of ‘are you on the guest list?’ is extended to recruitment and personnel, so that getting an interview for contract creative work depends on informal knowledge and contacts, often friendships.” [x] Without making it explicit, McRobbie is exploring Pierre Bourdieu’s concept of social and cultural capital – that is, the importance of who you know, not what you know. Bourdieu’s concept has been extended by Sarah Thornton (subcultural capital) and Paul Resnick, who created the term sociotechnical capital to describe “productive resources that inhere in patterns of social relations that are maintained with the support of information and communication technologies.” [xi]

Combining agency and sociotechnical capital forces the question: Is there any difference between those excluded from creating a robust social network and those who choose not to participate? How does a neo-Luddite (that is, a conscientious MySpace objector) differ from someone with social network failure? Or, to put it another way, is it possible to communicate intent through a lack of participation?

It appears as though social network sites now offer two polarized options: either the constant, self-generated surveillance of the type described by Stites or the self-negation (“You don’t exist”) that avoidance entails. In a marketplace built on unlimited choice, this lack of options is rather frustrating.

It almost makes you want to smash something …

About the author
Ryan Bigge is completing his Master’s thesis on the transgressive strategies of Vice magazine in the Joint Programme in Communication and Culture at Ryerson University. His review essay, Making the Invisible Visible: The Neo-Conceptual Tentacles of Mark Lombardi, was published in the Fall 2005 issue of Left History. Ryan has a BA in history from Simon Fraser University.

Zach Devereaux, a doctoral candidate in the Communication and Culture program at Ryerson University, provided invaluable assistance and brainstorming for this paper. Thanks also to Dr. Greg Elmer, Dr. Edward Slopek and Dr. Jennifer Burwell.

[i] Stites, T. (Jul 9, 2006). Someone to Watch Over Me (on a Google Map). New York Times, pg. 9.8
[ii] Jenkins, H. and boyd, d. “Discussion: MySpace and Deleting Online Predators Act (DOPA)” at http://www.danah.org/papers/MySpaceDOPA.html accessed 28 August 2006.
[iii] Cassidy, J. (2006). Me media. New Yorker, 82(13), 50-59.
[iv] Jones, S. E. (2006). Against technology: From the Luddites to neo-Luddism. New York: Routledge. p. 231.
[v] Jones, S. E. (2006). Against technology: From the Luddites to neo-Luddism. New York: Routledge. pp. 174-175.
[vi] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, scale, and culture in a media age. New York: Routledge. (Book retrieved electronically)
[vii] Cassidy, J. (2006). Me media. New Yorker, 82(13), 50-59. (Archived version lacks pagination.)
[viii] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, scale, and culture in a media age. New York: Routledge. (Book retrieved electronically)
[ix] Andrejevic, M. (2004). Little Brother is Watching: The Webcam Subculture and the Digital Enclosure. In N. Couldry & A. McCarthy (Eds.), MediaSpace: Place, scale, and culture in a media age. New York: Routledge. (Book retrieved electronically)
[x] McRobbie, A. (2002). Clubs to companies: Notes on the decline of political culture in speeded up creative worlds. Cultural Studies, 16(4), 516-531. [p. 523]
[xi] Resnick, P. (2005). Impersonal Sociotechnical Capital, ICTs, and Collective Action Among Strangers. In W. H. Dutton (Ed.), Transforming enterprise: The economic and social implications of information technology. Cambridge, Mass.: MIT Press. p. 400.

Data Security: Quit collecting it if you cannot protect it!
By: Jennifer Chandler

November 14, 2006


We are busily inventing technologies to gather or create personal information “hand over fist.” Not only are we gathering personal information in more and more ways, but we are creating new personal information types.

In some cases, the new technology itself creates a new type of personal information to be gathered (e.g. the snapshot of our personal interests and curiosity that is contained in search engine query history – see Alex Cameron’s recent post). Other technologies enable the collection of personal information that exists independently of the technology (e.g. the various technologies to track physical location and movement, or to use physical attributes in biometrics – as described recently by Lorraine Kisselburgh and Krista Boa in their posts).

The creation of more and more stores of personal information exposes us to the risk of the misuse of that information in ways that harm our security and dignity. In the context of genetic information, consider the risks of genetic discrimination, or the controversy over “biocriminology,” [1] which has developed the idea of the individual “genetically at risk” of offending against the criminal law. Consider also the many uses to which information about one’s brain that is gathered through improved neuro-imaging techniques might be put. [2]

These new forms of personal data collection may solve some compelling social problems, but they will also expose us to risk. I set aside the full range of risks for the purposes of this blog post in order to focus on one in particular. There is ample evidence that we are better at creating stores of data than at securing them. The compromise of data security exposes the individual to the risk of impersonation as well as to the risk that a third party will use the information to draw conclusions about an individual contrary to that individual’s interests.

The impersonation risk is unfortunately now familiar – everyone knows about ID fraud and insurance companies are busily hawking ID theft insurance to protect us from some of the losses associated with it. Today, ID fraud capitalizes upon the most mundane and widespread of identification and authentication systems, including ID numbers, account numbers and passwords. However, the risk is clearly not restricted to these basic systems. Back in 2002, Tsutomu Matsumoto at the Yokohama National University demonstrated how to create “gummy fingers” using lifted fingerprints. These gummy fingers were alarmingly successful in fooling fingerprint readers. [3] All of this underscores the tremendous importance of protecting the security of stockpiles of personal data that can be used in ways to harm the interests and security of the individuals involved.

Our current legal system is woefully inadequate to deal with this problem. Breaches of data security occur so often [4] that they are becoming a bit of a yawn – a numbing effect that should be deplored. A recent Ponemon Institute survey reports that 81% of companies and governmental entities report having lost or misplaced one or more electronic storage devices such as laptops containing sensitive information within the last year. [5] Another 9% did not know if they had lost any such devices.

Although data custodians often seem to claim that the public relations costs of a major security breach are enough of a threat to encourage efforts to promote data security, the evidence makes me wonder if some additional encouragement would not be helpful. One of the key problems with data security is that a large part of the cost of a data security breach may be borne by persons or entities other than the organization responsible for protecting the data from being compromised. Under these circumstances, one would expect the organizations responsible to be inadequately interested in protecting the data.

One of the functions of tort law is to deter unreasonably risky behaviour. If careless data custodians could be held responsible for the damage to others flowing from breaches in the security of personal information under their control, they would be forced to internalize the very real costs of their carelessness.

There have now been a couple of dozen such lawsuits attempted in the United States and two class actions filed in Canada that raise a claim for damages based on the negligent failure to employ reasonable data security safeguards. The success rate so far is low.

One of the key problems facing plaintiffs in these suits is that a claim in negligence is based on a showing of actual harm. Courts will not treat an increased risk of harm as actual harm. This raises the question of how to characterize the insecurity that a data subject feels when his or her sensitive data has been carelessly exposed. Is the harm an anticipated one, namely eventual misuse by an ID fraudster? Or is the harm better understood as a present harm – the immediate creation of an insecurity that imposes emotional harm as well as financial harm (i.e., the cost of self-protective measures such as credit monitoring services, insurance, closing and re-opening accounts, and changing credit card numbers)? So far, the courts have held that actual harm occurs only once ID fraud happens.

It is clearly in the interests of the defendant data custodians that liability depend upon a showing of ID fraud because, it turns out, it is usually extremely difficult for a plaintiff to tie the eventual ID fraud to the breach of data security caused by the defendant. Because our personal information is so widely used and so poorly safeguarded by many data custodians, it becomes quite difficult to establish the necessary causal link between the ID fraud and the defendant data custodian. The data custodians are thus well-protected – no liability for a careless breach until ID fraud occurs, and no liability (usually) once ID fraud occurs because “who knows where the unknown fraudster got the data he or she used.”

The plaintiffs in these cases have also attempted another interesting argument in order to try to obtain compensation flowing from data security breaches. They point to the so-called “medical monitoring” cases in which some courts have permitted plaintiffs to recover the costs of medical monitoring after exposure to toxic chemicals (e.g. PCBs, asbestos, and drugs found to have harmful but latent side effects). The plaintiffs in the data security breach context argue that their predicament is analogous. They must bear present costs in order to monitor for the eventual crystallization of the risk into a concrete loss.

One might argue that the policy reasons for permitting recovery in the medical monitoring cases are not present in the data security breach cases. Indeed, the defendants in these cases often argue that human health is a more compelling interest than financial health and so relaxed liability rules that are justified in the medical context are not justified in the data security breach context. In my view, this argument is not as self-evidently correct as the defendants claim. The harmful effects of financial insecurity and fraudulent impersonation on human health and psychological well-being are well-known.

Perhaps the insecurity felt by a plaintiff whose sensitive personal data has been compromised ought to be understood as a present compensable harm in its own right in appropriate cases. When we look to the future and see the kinds of personal data that are being collected and/or created using novel technologies, the insecurity and vulnerability of the data subject takes on a new urgency. Given that choices are being made now about the development of these technologies and will be made soon about their deployment, it seems to me that there is no time like the present to ensure that the full costs of carelessness in the use of these technologies are internalized by those who seek to use them.

Until those who want to collect personal data can figure out how to keep it reasonably secure, they have no business collecting it.

[1] Nikolas Rose, “The Biology of Culpability: Pathological Identity and Crime Control in a Biological Culture,” (2000) 4(1) Theoretical Criminology 5-34.
[2] Committee on Science and Law, Association of the Bar of the City of New York, “Are your thoughts your own? “Neuroprivacy” and the legal implications of brain imaging,” (2005) <http://www.abcny.org/pdf/report/Neuroprivacy-revisions.pdf>.
[3] Robert Lemos, “This hacker’s got the gummy touch,” CNET News.com (16 May 2002) <http://news.com.com/2100-1001-915580.html>.
[4] See the list of major reported security breaches which is maintained at <http://www.privacyrights.org/ar/chrondatabreaches.htm>.
[5] Ponemon Institute, “U.S. Survey: Confidential Data at Risk,” (15 August 2006), sponsored by Vontu Inc., <http://www.vontu.com/uploadedFiles/global/Ponemon-Vontu_US_Survey-Data_at-Risk.pdf#search=%22ponemon%20vontu%22>.
Anonymity: a relative and functional concept
By: Giusella Finocchiaro

November 7, 2006


Anonymous data are extremely relevant in Italian and European legislation: in fact, these data are not subject to the laws regarding the processing of personal data. This is stated, for instance, in recital 26 of European Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Moreover, anonymity represents the best way to protect privacy and personal data, as has been affirmed on several occasions by the European Commission and the European Council.

Qualifying anonymous data is not, however, a simple operation.

In common language, “anonymous” evokes an absolute concept: without a name.

This concept of anonymity as namelessness, as the origin of the word reveals, by definition excludes the identity of the subject to which it refers.

That which is anonymous is therefore faceless and without identity. Anonymity is a concept which evokes an absolute lack of connection between a fact or an act and a person.

However, anonymity is often relative to specific facts, specific subjects and specific purposes.

A composition, for instance, may be anonymous for some but not for others, depending on whether or not they know the author.

So the right to be anonymous, when recognized, refers to certain subjects, in predefined circumstances and for specific occasions, which can be specified by the law.

In Italian law, anonymous data are defined as data which in origin, or after being processed, “cannot be associated with an identified or identifiable data subject”. Data can be originally anonymous or can be processed so as to be made anonymous.

The key point of the article is the phrase “cannot be associated”. In which cases can it be deemed that data cannot be associated with a subject? Must this be a physical or a technological impossibility? Whether this has to be absolute or relative has already been clarified by Recommendation No. R (97) 5 of the Council of Europe on medical data protection, which states that information cannot be considered identifiable if identification requires an unreasonable amount of time and manpower. In cases where the individual is not identifiable, the data are referred to as anonymous.

By contrast, the definition of “personal data” in Italian law is “any information relating to natural or legal persons, bodies or associations that are or can be identified, even indirectly, by reference to any other information including a personal identification number”, while the definition given by the European directive is the following: “any information relating to an identified or identifiable natural person ('data subject'); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity”.

In both definitions the criterion is not only actual reference but also the possibility of referring information to a data subject. This referability is measured in relation to the time, cost and technical means necessary to achieve it. The value and sensitivity of the information should also be taken into account; medical data, for example, should require a high level of protection. Relating the information to the subject to which it refers is a technical possibility; however, the legality of doing so depends on legal and contractual boundaries.

Relativity is therefore central to the definition: data can be anonymous for some, but not for others.

Likewise for functionality: data can be anonymous for certain uses but not for others.

In conclusion, as personal data can be legally processed only for specified purposes by authorised persons, data can be anonymous only for certain people under pre-defined conditions. Therefore anonymity in processing of personal data is not an absolute concept: it is, instead, a relative and functional concept.

Giusella Finocchiaro is a Professor of internet law and private law at the University of Bologna, Italy.
Technologies of Identification: Geospatial Systems and Locational Privacy
By: Lorraine Kisselburgh

October 31, 2006


In an increasingly mobile information society, location has become a new commodity giving rise to technologies such as wireless cell phones, global positioning systems (GPS), radio-frequency ID (RFID), and geographic information systems (GIS). Location technologies make visible an individual’s movements and activities, revealing patterns of behavior that are not possible without the use of this technology. In a typical day’s activities – using a debit card, an electronic toll pass, an automobile’s GPS navigation system, and a cell phone – information about one’s location can be tracked and stored in many ways.

The desire to protect this information is called locational privacy, and is based upon Westin’s (1967) notion of privacy as “the claim of individuals … to determine for themselves when, how, and to what extent information about them is communicated to others”, a framework of autonomy or control over information about one’s self. [1] While much literature focuses on informational and relational privacy, locational privacy is less well studied.

Communication tools, transactional cards, personal locator and navigational systems, radio frequency identification devices, and surveillance cameras all have the capability to provide information about one’s location and behavior. In particular, geospatial technologies, such as global positioning systems (GPS) and geographic information systems (GIS), are powerful in their scope and their capacity to converge locational and tracking technologies. Geographic information systems aggregate data from multiple sources, including satellite, aerial, and infrared imagery, geodetic information, and “layered” attribute information (such as property records). Like data mining systems, these aggregates combine bits of information into valuable and powerful profiles of objects.

Boundaries of intrusiveness

A growing number of high-resolution satellites provide imagery for GIS systems. These eyes in the sky raise the question of “how close is too close” – that is, at what level of resolution these images become intrusive to individual privacy. High-resolution commercial satellite systems currently allow general features of facilities to be readily observed: the QuickBird system provides 0.6 m GSD resolution satellite images with 1-14 day sampling. At this resolution, features such as buildings, roads, and large objects are visible (for example, see a 0.6 m GSD [2] image of the Washington D.C. airport). GIS systems also include aerial images that provide details at <0.3 m GSD. Thus, precise geolocation information can be discerned in geospatial systems, especially when information is aggregated with other sources.
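Since GSD is the ground distance represented by a single pixel, a back-of-the-envelope calculation shows why 0.6 m imagery reveals buildings and roads but not individual people. The helper function below is purely illustrative (it does not come from the article or from any GIS library):

```python
def pixels_spanned(object_size_m: float, gsd_m: float) -> float:
    """Rough number of pixels an object of a given ground size occupies
    along one axis in imagery with the given ground sample distance."""
    return object_size_m / gsd_m

# A 15 m building edge in 0.6 m GSD QuickBird imagery spans ~25 pixels,
# so the building is readily observable:
print(pixels_spanned(15.0, 0.6))

# A person (~0.5 m across) spans less than one pixel and is effectively
# invisible at this spatial resolution:
print(pixels_spanned(0.5, 0.6))
```

The same arithmetic explains why <0.3 m aerial imagery roughly doubles the level of discernible detail.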

It is tempting to say that only very high spatial resolution is intrusive. But consider a low spatial resolution object (such as a dot representing an individual) overlaid onto a map and then captured in near-real time, i.e., at high temporal resolution. For example, one can identify a teenager’s location on a map, and then track his movements in near-real time through GPS data. In this scenario, even without high spatial resolution, one’s behaviors and actions are identifiable, allowing a system to track movements and infer one’s actions and behaviors from that information. Thus, the combination of high temporal resolution with either low or high spatial resolution identifies individuals and becomes intrusive in ways that either kind of information alone would not. Both the spatial and temporal contexts must therefore be evaluated when determining intrusiveness.

The Real-time Rome project announced last month by MIT illustrates the applications of GIS systems and visualization tools, using data on cell-phone usage and pedestrian and transportation patterns to map uses of urban space. While the visualization is based upon aggregated information, the underlying data is collected at the individual level.

Rights to locational privacy

What rights do we have to locational privacy? In the United States, common law gives rise to four generally recognized privacy torts: (a) intrusion upon a person's seclusion; (b) public disclosure of private facts; (c) publicity in a false light; and (d) misappropriation of one's likeness. However, the public disclosure tort is limited by the clause “if an event takes place in a public place, the tort is unavailable” (Restatement (Second) of Torts 652D, 1977), and the courts have generally ruled that a person traveling in public places voluntarily conveys location information. But courts have also recognized that “a person does not automatically make public everything he does merely by being in a public place” (Nader v. GMC, 1969, 570-71; see also, Doe v. Mills, 1995).

Constitutional protections for privacy, derived from the Fourth Amendment, restrict government intrusion into our personal life through searches of persons, personal space, and information. In the seminal case Katz v. United States (1967), the United States Supreme Court held that government eavesdropping on a man in a public phone booth violated a reasonable expectation of privacy because the Fourth Amendment protects “people, not places.” The Court held that whatever a person “seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected” (389 U.S. 347, 352, emphasis added). This gave rise to the two-pronged test of constitutional protection: whether an individual has exhibited an actual expectation of privacy, and whether that expectation is one society is prepared to recognize as reasonable.

Case law has interpreted these locational privacy rights more specifically, examining intrusions of technology into the private sphere, technologically enhanced government searches, and the use of mobile devices and telecommunication information to derive locational information. While Fourth Amendment protection doesn’t extend to that which is knowingly disclosed to the public, the courts have ruled that the use of technologies not available to the general public can violate the privacy one reasonably expects (Kyllo v. U.S., 2001). But courts have shown a willingness to allow law enforcement to use technologically enhanced vision for searches, including flying over a fenced backyard (California v. Ciraolo, 1986), a greenhouse (Florida v. Riley, 1989), or an industrial plant (Dow Chemical v. U.S., 1986), suggesting that under the “open fields” doctrine [3] there is no reasonable expectation of privacy in such areas. [4]

This protection does not extend to deriving location information from communication devices. Transaction information such as telephone numbers is not protected (Smith v. Maryland, 1979), but providers are prevented from releasing information that discloses the physical location of an individual (CALEA, 1994/2000; U.S. Telecom v. FCC, 2000). However, using mobile communication devices as tracking devices to derive location information is not constitutionally protected (U.S. v. Meriwether, 1990; U.S. v. Knotts, 1983; U.S. v. Forest, 2004), as courts have ruled that individuals using cell phones, beepers, and pagers do not have a reasonable expectation of privacy when moving from place to place. (This interpretation continues to be challenged.)

Furthermore, while the Electronic Communications Privacy Act (1986) protects against unauthorized interception and disclosure of electronic communications (18 USC §§ 2510-22, 2701-11), it excludes tracking devices (§ 3117). However, the Wireless Communications and Public Safety Act (1999) explicitly protects location information in wireless devices (47 USC § 222(f)), requiring customer approval for disclosure. [5] But the Patriot Act (2001) has nullified some of these protections, granting broad authority for government surveillance, including the ability to use roving wiretaps.

In summary, legal protection for location privacy in the United States is inconsistent and sectoral, providing coverage under certain situations and for specific technologies.


Emerging geospatial technologies, through their power and invisibility, re-architect our public space and change our patterns of disclosure and interaction with others in that space. Architecture regulates the boundaries of accessibility in human interaction. Just as doors and windows increased barriers and expectations of privacy in 17th-century rural villages, modern technologies are decreasing these barriers by providing new capabilities to extend or enhance human senses (our eyes, ears, and memory). This changes the architecture of our public sphere and shifts our constructions of public-private space and boundaries. These shifts are at odds with our expectations and sense of personal space, leading to a sense of intrusion, and in turn changing our awareness of disclosing and interacting with others in this space.

At the same time, the pervasiveness and invisibility of locational technologies mean that control of access to information about oneself is not available. We are unaware of the presence and activity of such technologies, and thus lack autonomy in regulating the boundaries of accessibility. This has implications for understanding our navigation and negotiation of connectivity in the modern world. In addition, the aggregation of information – whether in data mining systems or geographic information systems – creates very powerful identifiers. Whereas a single bit of information may not be threatening, aggregated bits constitute a pattern of behavior or a profile that can reveal much information and threaten one’s privacy and liberty.

Thus, the unique threats of geospatial systems as technologies of identification are based on two primary factors: a) aggregated data creates very powerful identifiers; and b) the invisibility of data collection and use results in a loss of agency in the process by which we are identified. These in turn influence how we interact in our society, and by extension, the construction of our identities.

This raises questions that require further study: What do these technologies of identification mean for our construction of identity in digital realms? That is, when technologies extend human senses, what happens to our construction of personal space and retreat, and our concept of reasonable expectations of privacy? Further, under the current legal framework, how do we address new constructions of space (e.g., reconnaissance of space above private property), new technologies of intrusion (e.g., infrared, RFID, GPS, GIS), and new constructions of scope (e.g., aggregated information)? Additional research is needed to understand how individuals define these ambiguous boundaries, our expectations of private space, and the mechanisms by which we negotiate shifting boundaries in the face of emerging locational technologies.

[1] Westin, A. F. (1967). Privacy and Freedom. New York: Atheneum.
[2] GSD, ground sample distance, refers to the distance on the ground between the centers of two adjacent pixels in digital imagery.
[3] See Hester v. United States, 265 U.S. 57 (1924) and Oliver v. United States, 466 U.S. 170 (1984) for a discussion of the “open fields doctrine” which suggests that constitutional protection is not extended to the open fields.
[4] Curry, M. (1996). In plain and open view: GIS and the problem of privacy. Paper presented at the Conference on Law and Information Policy for Spatial Databases, Santa Barbara, CA.
[5] Edmundson, K. E. (2005). Global positioning system implants: Must consumer privacy be lost in order for people to be found? Indiana Law Review, 38.

Lorraine Kisselburgh is a doctoral student in Media, Technology, and Society (Department of Communication) at Purdue University. Portions of this article were presented at the NYU Symposium on “Identity and Identification in a Networked World” and at the International Communication Association in Dresden, Germany, and have been submitted for publication in the “ICA 2006 Theme Session Proceedings.” The author wishes to acknowledge the support of Eugene Spafford (Department of Computer Science, Purdue University) in the conceptualization of this project.
Why Definitions Matter: an Example Drawn from Davis on Privacy
By: Jason Millar

October 17, 2006


Concepts inform our interpretations of the world. As such, their definitions are important for our common understanding. On a multidisciplinary project like the Identity Trail, confusion over definitions can undermine our ability to discuss issues that rest on complex concepts like privacy. Along these lines I would like to comment on one philosophical project undertaken by Steven Davis during his trip down the Identity Trail, namely his attempt to find a definition of privacy, as outlined in his forthcoming publication (initially entitled) “Privacy, Rights, and Moral Value”. For those who have not (and will not) read the paper, I offer a preamble on the general problem at hand.

The preamble: Haven’t we heard this before!?

Much of my time on the Identity Trail has been spent being exposed to a number of multidisciplinary perspectives on privacy. Some of those perspectives are legal, describing how current laws are challenged by the various privacy-implicating technologies being used and created every day. Others are sociological, describing how technologies are approached and used, with specific focus on the effects or implications of privacy in technologically mediated interactions. Still others are technological, proposing interesting privacy-enhancing technologies, often as (partial) solutions to many of the problems highlighted in the legal and sociological streams of the project. Of course, this description fails to capture the breadth of privacy research being performed on the Identity Trail [1], but it is sufficient to point to a common thread underpinning the work, namely the general concept of privacy.

For anyone interested in understanding privacy, our agreement on the nature of the general term has implications for how we might discuss the theories or issues that rely on it (like those mentioned above), just as we might have to understand what is meant by the word ‘equality’ in order to have a meaningful discussion about laws or public policies that implicate it. Of course, even the importance of understanding the nature of privacy generates much debate in and among the various fields concerned. Exasperated privacy advocates argue that we could better spend our time focusing on new policies to deal with the existing backlog of relatively uncontested privacy concerns, while at the other end of the spectrum academic theorists—philosophers and the like—seem uneasy (as they tend to do) about the grounds upon which the issues are being fought. However, it is clear that arguments centered on privacy, in whatever discipline they reside, rely to some degree on an understanding of the general concept of privacy for their force. Whether the parties are content to implicitly borrow concepts of privacy already established in the literature, or act to modify them (explicitly or implicitly) in response to new research, some particular version of the concept of privacy is nonetheless present in their arguments. Oftentimes discussions and disagreements over the particulars of laws, policies or technologies are largely motivated by disagreements over the particulars of the concepts underscoring them. This should not ring controversial. If we are to agree on the implications of privacy in ethics, law, technology or elsewhere, we can make progress by engaging the concept explicitly, given its omnipresence in the discourse. With that in mind, it is a valuable undertaking to pose the question, “What is the nature of privacy?”, even for those interested in privacy issues but not in philosophy [2].

Davis’ Definition of Privacy and Some Implications

In response to Davis’ definition I will focus on a tension that it draws out between one’s own preferences and others’ preferences. I believe the tension points to interesting consequences in our understanding of how generalized privacy laws operate relative to the operation of our individual notions of privacy.

Davis defines privacy as the following:

In society T, S, where S can be an individual, institution, or group, possesses privacy with respect to some proposition, p, and individual U if and only if

(a) p is personal information about S.
(b) U does not currently know or believe that p.

In society T, p is personal information about S if and only if most people in T would not want it to be known or believed that q, where q is information about them which is similar to p, or S is a very sensitive person who does not want it to be known or believed that p. In both cases, an allowance must be made for information that most people, or S, make available to a limited number of others.
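The two conditions can be restated schematically (the notation below is mine, not Davis’s):

```latex
\mathrm{Priv}_{T}(S, p, U) \iff \mathrm{Pers}_{T}(p, S) \;\wedge\; \neg B_{U}(p)
```

where $B_{U}(p)$ means that $U$ currently knows or believes $p$, and $\mathrm{Pers}_{T}(p, S)$ holds just in case either most members of $T$ would not want similar information $q$ about themselves to be known or believed, or $S$ is very sensitive about $p$ being known or believed. Note that the first disjunct of $\mathrm{Pers}_{T}$ depends on the preferences of others, not of $S$; this is the feature the scenario below exploits.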

Consider the following scenario. On Saturday, Jane is not sensitive about others knowing her sexual orientation. Other people are able to ascertain her sexual orientation though she never offers it up, and other people, in fact, do ascertain her sexual orientation. In addition, most people in Jane’s society are also not sensitive about others knowing their sexual orientation on Saturday. For some reason, on Sunday most people in Jane’s society develop a severe sensitivity to the idea of others coming to know their sexual orientation. Jane does not develop a similar sensitivity on Sunday, and other people continue to ascertain Jane’s sexual orientation through no action on her part.

On Davis’ account Jane suffers a loss of privacy sometime on Sunday. This seems counterintuitive. Jane’s privacy is linked to sensitivities that others develop—the fact that they stop wanting their sexual orientation to be known is presumably due to some sensitivity to the information—without her having to develop the sensitivity on her own. I will call this type of sensitivity a privacy preference, since the definition links preferences about which information is personal, and which is not, directly to the notion of privacy. In this case the privacy preferences of others seem to place some sort of demand on Jane, though it is not clear what the nature of this demand is. Perhaps it suggests that she should consider her sexual orientation to be a sensitive topic. Whatever the case may be, Jane’s continued indifference to the fact that others are able to ascertain her sexual orientation must be squared with the demand resulting from the claim that Jane has suffered a loss of privacy on Sunday due to the privacy preferences of others.

This tension seems even more problematic when we note that one’s own control over personal information features heavily in the definition yet is undermined by it. Not wanting others to know p is at the core both of the sensitive S’s notion of personal information and of the majority’s. The disjunction in the definition of personal information causes problems in that Jane apparently suffers doubly on Sunday: she suffers a loss of privacy due to the shifting privacy preferences of others, while at the same time suffering a loss of control over the very nature of information about her. Though the shifting nature of the information may not strike one as something over which one needs to maintain control, many privacy theorists have placed a premium not just on control of the flow of information, but also on control of its nature, in order to maintain the contextual integrity that is seen as necessary for privacy [3]. I would suggest further that this loss of control over the scope of personal information is what leads to the strange new demand that is apparently placed on Jane.

I think we can understand where the demand plays out by addressing an underlying tension between the law’s need for a normative conception of privacy and individuals’ need to navigate privacy on their own terms. As a legal (largely instrumental) definition of privacy, I think Davis’ account gains considerable traction [4]. If a majority of individuals feel that certain information is personal in that they are sensitive to others coming to know it indiscriminately, and if there is a demonstrable harm associated with others coming to know it, then the law can justify prohibiting people from trying to come to know personal information.

However, Davis’ definition of privacy loses traction on the level of the individual. If Jane does not consider a privacy loss to have occurred, the normative claim placed on her by society (and the law) will not change this. The result is that we must question whether privacy, as defined by Davis, addresses the same kind of transgression (the moral kind) that our concern for personal control over information seeks to protect us against. Privacy laws, in the sense that they can be used in cases where individuals suffer harm, certainly address moral privacy concerns. But a focus on the legal/instrumental conception of privacy and control over personal information ignores the sensitivity that motivates our individual, moral privacy concerns in the first place. If Jane does not feel that her privacy has been violated on Sunday, then the moral notion of privacy may necessarily differ from the legal one, if only so the law may function efficiently.

It has been suggested on the Identity Trail that many people don’t seem to care about their privacy [5]. A great deal of the resulting research has focused on trying to understand why this seems to be the case. Perhaps one factor in the equation is that we mistake the legal notion of the concept for the moral one when evaluating the sensibility of people’s actions in certain contexts. Understood this way the assertion that Jane has suffered a loss of privacy may be isolated to legal concerns. Convincing Jane otherwise may do nothing to secure her privacy.

[1] It undoubtedly also fails in its attempt to describe the nature of the work being done in the various streams by the various researchers. To that end I would invite everyone reading this entry to browse the research that has accumulated on the Identity Trail in order to appreciate the full scope of it.
[2] Several collaborators on the Identity Trail have done this explicitly, including Marsha Hanen, Steven Davis and Dave Matheson, to name a few. Others have offered research into privacy implicating activities or technologies, always (I think) with an implicit view to informing or reaffirming our understanding of the concept.
[3] Nagel, T. (1998). Concealment and exposure. Philosophy and Public Affairs, 27(1), 3-30.; Nissenbaum, H. (1998). Protecting privacy in an information age: The problem of privacy in public. Law and Philosophy: An International Journal for Jurisprudence and Legal Philosophy, 17(5-6), 559-596.; Rachels, J. (1975). Why privacy is important. Philosophy and Public Affairs, 4, 323-333.; Scanlon, T. (1975). Thomson on privacy. Philosophy and Public Affairs, 4, 315-322.
[4] I invite the legal theorists to correct me in my discussion of the nature and function of laws if they feel compelled to do so.
[5] For example, Jaquelyn Burkell in this ID Trail Mix piece.

Bouquets and brickbats: the informational privacy of Canadians
By: Jeffrey Vicq

October 3, 2006


Recently, I spent some time examining the Canadian data brokerage industry.

In the last several years, a number of scandals in the US data brokerage industry made American companies like ChoicePoint and DocuSearch household names, even in many Canadian homes. American journalists prepared several interesting and extensive exposés describing, in rich detail, the sometimes messy results of the marriage of technology and data in the name of convenience, commerce and security.

Yet, the activities of the industry’s players in this country have traditionally been less well understood. Accordingly, working as part of a team under the direction of the talented Pippa Lawson at CIPPIC, a number of us sought to gain a better understanding of the Canadian data brokerage industry—identifying its key players, determining the types of information commonly made available, and tracking personal data as it flowed from consumer to compiler, and from broker to buyer. The final report was quietly released earlier this summer.

In the course of our investigations, I frequently found myself reflecting on two broader questions. First, I wondered how law could best protect the personal information of Canadians—and by extension the privacy of Canadian citizens—in the Canadian marketplace. Examining the data brokerage industry afforded me the opportunity to consider the effectiveness of privacy legislation in the face of an industry whose sole purpose is to assemble and trade personal information about Canadians. Second, I wondered who was most responsible for the slow erosion of personal informational privacy that has occurred in Canada over the last several decades. Considering how data on Canadians is collected, compiled, distributed and used in the data brokerage industry allowed me to weigh culpability from several perspectives.

Given that Parliament has recently reconvened for the fall sitting—and cognizant that PIPEDA, the federal private sector privacy legislation in force in much of the country, is due for review—I thought I might offer up a few thoughts on these points.

With respect to the protection of personal information, it is clear that Canadians enjoy greater informational privacy than our US counterparts—thanks primarily, it would appear, to the impact of private sector privacy legislation. There is seemingly less information available for purchase online about Canadians than about Americans [1], and several companies claim to have curtailed operations or ceased operating altogether in Canada following the introduction of Canada’s private sector privacy legislation. Using provisions contained in the legislation, Canadian consumers can learn what information Canadian companies hold about them and can seek the correction of errors in those records—rights which are unknown to American consumers. In this light, Canada’s data protection laws are arguably the single most valuable instrument available for the protection of Canadian informational privacy.

But these laws are not perfect. This legislation—and most glaringly PIPEDA—is hamstrung by the absence of robust enforcement provisions. During my time in private legal practice, it was an all-too-common occurrence that once a client was apprised both of the extensive obligations imposed by the legislation and of the ramifications of non-compliance, the client would elect to ignore the law. And there is reasonably good evidence to suggest that private sector organizations that have attempted to comply with the legislation have done so poorly: see, for example, CIPPIC’s recently published study examining the compliance (or relative lack thereof) of retailers with Canada’s data protection laws. The legislation’s lack of a robust enforcement mechanism undoubtedly plays a role in the high rates of non-compliance CIPPIC found.

To a lesser extent, Canada’s private sector privacy laws have also been maligned for the way they define “personal information.” These definitions limit the “personal information” to which the laws apply to information about “identifiable individuals,” so information that has been “anonymized” falls outside the scope of the legislation. However, data anonymity specialists (including the terrific Latanya Sweeney) have been demonstrating for some time the relative ease and accuracy with which “anonymized” information can be reconnected to identifiable individuals.

Interestingly, my own research into the data brokerage industry indicated that many of these companies are not particularly concerned with the granularity of the information they attribute to individual citizens. For example, several Canadian data compilers rely on data—like public use microdata files—that Statistics Canada makes available and considers to be “sufficiently anonymized or aggregated to be made publicly available.” Absent the services of someone like Dr. Sweeney, it may indeed be difficult to connect this information to a particular household. However, these data compilers use the aggregated information (like mean household income for dwellings located in a particular postal code set) to attribute characteristics to all households in the set. This information—which on a household-to-household basis may be erroneous—is nonetheless usually of sufficient accuracy for marketing purposes. As such, despite Statistics Canada’s anonymization efforts, this information is still being used by marketers as personal information, in order to build broader and richer—if somewhat fuzzy—profiles of Canadians.

Given this, some in the privacy community have suggested that the definition of “personal information” should be amended to include all information about an individual, whether identifiable or not. I am not confident, however, that this would represent a feasible or practical response to the problems created by the use of anonymized or aggregated information to impute characteristics to Canadian households. That issue might better be addressed by legislation that precludes the use of data for certain purposes, as opposed to the wholesale revision of the definition of “personal information” itself.

These (and admittedly other) shortcomings aside, Canada’s privacy legislation has been a valuable tool for protecting the informational privacy of Canadian citizens. With certain amendments, the legislation could come to represent a truly effective set of tools to be used in the fight to protect the informational privacy still enjoyed by Canadians.

However, these tools will only be effective if the activities of the culprit primarily responsible for the erosion of the informational privacy of Canadians can be stymied. “Who is this culprit?,” you may ask. There is—both unfortunately and perhaps unsurprisingly—an abundance of candidates, given the actors and factors that have significantly affected the informational privacy of Canadians in the last decade: the abundance of cheap and powerful digital database technologies, the growth of the internet, the emergence of the data brokerage industry and the development of a culture of fear in the US, to name but a few.

However, I believe the primary culprits responsible for the erosion of informational privacy are, in fact, Canadians themselves.

In examining the sources of the data commonly exchanged in the data brokerage industry, I was astounded to discover how much sensitive data is provided willingly and openly—for little or no consideration—by Canadians. Admittedly, there are a number of collection vehicles in which the language used to explain the purpose of the collection and the planned use of the data is vague and/or misleading—if any language is used at all. But there were a remarkable number of occasions where the collection vehicles used clear and unequivocal language to explain the reasons for collection and use, and Canadians still appeared to respond in droves. There are numerous examples—Canadians complete surveys and questionnaires on sensitive topics, enter contests or offers that request extensive information about buying habits or preferences, and obtain free product samples in exchange for providing their personal details. The most recent iteration of one survey used extensively in the Canadian market is over 91 pages long, asking an exhaustive list of sensitive and highly personal questions about the respondent. [2] While consumers are often offered coupons or contest entries in exchange for completing the survey, many surveys offer no reward at all.

The aforementioned collection vehicles are examples of circumstances where it should be reasonably clear to the respondent (certainly if the data collector is complying with the requisite legislation) that there is little to be gained by them in disclosing their valuable personal information. Less clear, perhaps, are those circumstances where information is collected from Canadians contemporaneously with the acquisition of goods or services, whether over the internet or via traditional channels. Book, music and movie clubs, along with newspaper and magazine publishers, are fertile sources of information about the hobbies and interests of Canadians. General retailers and service providers are also rich sources.

Drawing on all of this, data brokers have accrued and trade in a broad range of information on many Canadians, including marital status, age, religion, income, property ownership, investments, health information, habits, interests, diet and credit card ownership, amongst others. One Canadian data broker claims to have a file containing the names of 8.7 million Canadians organized by preferred genre of book, 8.1 million organized by hobby, and another 3.1 million organized by the types of financial investments they own and plan to purchase. Another broker offers information on households in which one or more members has experienced any one of a variety of health conditions, including ADHD, arthritis, bedwetting, depression, diabetes, heart or kidney disease, high blood pressure or cholesterol, lactose intolerance, macular degeneration, migraines, neck pain, nut allergies, and urinary tract and yeast infections.

All of this information has been, for the most part, willingly provided by Canadians. And while much has been written about growing public concerns about privacy, the actions of Canadians do not accord with their purported fears. The results of a survey conducted by Forrester Research in 2005 found that “…while 86% of consumers admitted to discomfort with disclosing information to marketers, they participated in online surveys and research for free products or coupons, and entered competitions or sweepstakes at rates nearly equal to consumers who aren’t as concerned. [emphasis added]” [3]

Given this, it is Canadian citizens themselves that I see as posing the single greatest threat to their own informational privacy. The interests of Canadians do not appear to accord with their actions in this respect, which I would assume to be the product of a lack of education about how individuals can themselves be more responsible about protecting their own personal information. There is no question that being privacy savvy takes time and energy. However, the public must be invested with some of the responsibility for safeguarding their own personal information; otherwise, personal data privacy will continue to erode, despite the most finely crafted legislation, the efforts of the Privacy Commissioners and the lobbying of privacy advocates.

In this respect, government does have a role to play in educating the public about why informational privacy is important, and how personal information can be protected. In addition to making the changes to PIPEDA outlined above, government might also work with industry to develop and require the use of short uniform privacy policy templates, which would enable citizens to review and compare organizations’ privacy policies more quickly.

Similarly, those of us who have an appreciation of the importance of data privacy have obligations as well. We must resist the too-often-pursued predilection to “preach to the choir,” and instead make a concerted effort to educate the public about the importance of personal information privacy. An educated and engaged public can be far more effective in protecting their own informational privacy interests than even the most well-funded Privacy Commissioner or privacy advocate.

[1] In this context, I am considering information that is extant and generally available for purchase, as opposed to the use of the internet to contact parties who might—via pretexting or other means—obtain detailed information about an individual.
[2] It should be noted that this information is not typically made available with names and addresses attached; rather, it is released in an aggregated format.
[3] See "Privacy worries don't keep consumers out of online surveys and promotions" (Jan. 30, 2006) Internet Retailer.

Jeffrey Vicq is a lawyer and consultant, and candidate in the Master of Laws (with Concentration in Law and Technology) program at the University of Ottawa.
Network Neutrality and Privacy
By: Greg Hagen

September 19, 2006


Privacy and network neutrality are not usually discussed in the same context, but the two are related. Network neutrality is concerned, at a minimum, with the ability of internet users to communicate amongst themselves without their communication being unjustifiably blocked or degraded. As Tim Berners-Lee has described the concept, “[The internet] must not discriminate against particular hardware, software, underlying network, language, culture, disability, or against particular types of data.” Network neutrality can include other conditions as well.

The debate regarding the extent to which network neutrality is justified usually revolves around economic concerns, such as whether abandoning network neutrality will hamper innovation on the internet platform. The purpose of this note is to emphasize that, independently of economic considerations, the discriminatory behaviour mentioned by Berners-Lee has implications for personal privacy which must also be assessed. The basic point is that in order to block, degrade or otherwise shape certain kinds of traffic, an ISP must identify the nature of that traffic through inspection of packets of private communications. The deeper the inspection, the more privacy-invasive it can be.
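The relationship between inspection depth and privacy intrusion can be made concrete with a toy sketch. This is an illustrative assumption of how traffic classification works in general, not any real ISP's classifier; the port numbers and payload signatures are simplified examples. A "shallow" classifier reads only addressing information in the packet header, while a "deep" classifier must read the payload of the private communication itself:

```python
# Toy illustration of packet-inspection depth (hypothetical, not any
# real ISP's system). Packets are modelled as simple dicts.

def classify_shallow(packet):
    """Guess the traffic type from header fields alone (the port number),
    roughly analogous to reading only the address on an envelope."""
    port_map = {80: "web", 443: "web", 5060: "voip", 6881: "p2p"}
    return port_map.get(packet["dst_port"], "unknown")

def classify_deep(packet):
    """Fall back to inspecting payload bytes when ports are ambiguous.
    This is the step that exposes the content of the communication."""
    guess = classify_shallow(packet)
    if guess != "unknown":
        return guess
    payload = packet["payload"]
    if payload.startswith(b"BitTorrent protocol"):
        return "p2p"
    if payload.startswith(b"INVITE sip:"):
        return "voip"
    return "unknown"

# A P2P transfer on a non-standard port evades the shallow classifier,
# but deep inspection identifies it by reading the payload.
pkt = {"dst_port": 12345, "payload": b"BitTorrent protocol..."}
print(classify_shallow(pkt))  # unknown
print(classify_deep(pkt))     # p2p
```

As the sketch suggests, effective traffic shaping pushes the classifier toward the deep case, and it is precisely that case which requires looking inside the private communication.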

Some examples will help illustrate the kind of activities that are of concern. A well-known Canadian example of blocking occurred when Telus blocked a website operated by members of the Telecommunications Workers Union, which was then on strike.

Another example concerns the potential for an ISP to degrade the service of a third party, say Skype, that uses the ISP’s network when the ISP itself offers competing voice services, or has contracted with, say, Yahoo to deliver priority voice services. The Telus wireless Hotspot webpage notes:

You cannot use a TELUS Mobility Hotspot to send or receive a VoIP call because VoIP calls could disrupt or interfere with the Hotspot service.

The webpage does not elaborate on how VoIP could interfere with the wireless service. Hotspot subscribers are left to wonder whether there is some technical foundation for this claim or whether Telus is simply attempting to prevent the use of competing voice services such as Skype. The author has not encountered any problems in using Skype at a Telus Hotspot.

A trickier example occurs when a cable company provides a higher quality of service for its own VoIP service than for a competing VoIP service that uses that cable company as its internet access provider, degrading the competitor’s services by implication. For example, Shaw uses only its private network for its digital phone on the basis that time-sensitive voice packets need the bandwidth provided by its network and might otherwise be dropped if the bandwidth were shared with the public internet. Daniel J. Weitzner has argued that this may be the kind of case where network neutrality requires Shaw to offer the use of its private network to competing VoIP services. Yet, when Shaw provided a $10 quality of service enhancement for its subscribers using other VoIP services, Vonage Canada complained that the surcharge was a thinly veiled tax and "Shaw's VoIP tax is an unfair attempt to drive up the price of competing VoIP services to protect its own high-priced service…." Evaluating Vonage’s claim depends upon the proper valuation of the quality of service enhancement, a difficult task for consumers.

Vint Cerf, one of the inventors of the internet, offered comments in November 2005 on a proposed draft U.S. Bill to Create a Statutory Framework For Internet Protocol and Broadband Services that nicely explain some motivations behind network neutrality:

The remarkable social impact and economic success of the Internet is in many ways directly attributable to the architectural characteristics that were part of its design. The Internet was designed with no gatekeepers over new content or services. The Internet is based on a layered, end-to-end model that allows people at each level of the network to innovate free of any central control. By placing intelligence at the edges rather than control in the middle of the network, the Internet has created a platform for innovation. This has led to an explosion of offerings – from VOIP to 802.11x wi-fi to blogging – that might never have evolved had central control of the network been required by design.
My fear is that, as written, this bill would do great damage to the Internet as we know it. Enshrining a rule that broadly permits network operators to discriminate in favor of certain kinds of services and to potentially interfere with others would place broadband operators in control of online activity. Allowing broadband providers to segment their IP offerings and reserve huge amounts of bandwidth for their own services will not give consumers the broadband Internet our country and economy need. Many people will have little or no choice among broadband operators for the foreseeable future, implying that such operators will have the power to exercise a great deal of control over any applications placed on the network.

Similarly, Tim Berners-Lee commented on video and on a blog that: “[w]hen I invented the Web, I didn't have to ask anyone's permission. Now, hundreds of millions of people are using it freely. I am worried that that is going to end in the USA.”

It would end if some ISPs had their way. For example, Ed Whitacre, CEO of AT&T in the U.S., declared:

Now what [Google, MSN, Vonage and others] would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. So there's going to have to be some mechanism for these people who use these pipes to pay for the portion they're using. Why should they be allowed to use my pipes? The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!

The U.S. telecommunications company, Verizon, has made much the same complaint.

How has this state of affairs come about? The recent Canadian Telecommunications Review Panel Report rightly pointed out that

…the separation between the applications and content layers of telecommunications services, as well as between these layers and the underlying network layers that provide physical connections and transport services result in a fundamental change in the structure of the telecommunications industry. Content providers do not need to be applications or network providers and applications providers no longer need to be network providers.

ISPs are now attempting to regain control over access to content and applications that they lost as a result of the separation of layers on the internet. “Access providers thus leverage their market power in the Internet access market to try to extract more profit, either directly or in partnership with a preferred third party, in the applications market.”

The CBC recently noted that, although “Internet video provides a natural opportunity for a public broadcaster such as CBC/Radio-Canada to significantly extend the reach of its video services and thereby make high quality Canadian video programming available on a national and global basis,” “[t]he business case analysis for Internet video is complicated by the fact that suppliers of broadband connections may also have incentives to control the bandwidth available for Internet video.” It explains:

Canadian cable companies engage in "bandwidth shaping" which allocates different levels of transmission capacity to different services according to the operational preferences of the cable company. This type of bandwidth shaping can ensure efficient use of transmission capacity. It can also ensure that Internet video by third parties does not become a threat to the business of the cable company, whether it be the delivery of traditional television programming to cable subscribers, VOD or the distribution of cable company-owned Internet video services.

As a result of the kinds of problems described above, the Report of the Telecommunications Review Panel recently recommended amending the Telecommunications Act “to confirm and protect the right of Canadian consumers to access publicly available Internet applications and content of their choice by means of all public telecommunications networks providing access to the Internet.” Of course, this is subject to reasonable exceptions. Given the difficulty of enacting network neutrality legislation in the U.S., the susceptibility of the Canadian government to industry lobbying (as evidenced in the case of copyright reform) and the perceived bias of some Canadian legislators, such an amendment might be difficult to achieve.

A complementary approach to the network neutrality issue is to emphasize the privacy implications of abandoning network neutrality. Unfortunately, on the rare occasion that privacy implications are recognized, they are usually shunted aside. For example, at a discussion of the Internet2 Consortium’s QoS Working Group on implementing traffic shaping at its 208 member Universities, the question “Wouldn’t shaping traffic somehow be an invasion of user privacy?” was raised. The privacy concern was dismissed rather quickly.

First, few schools explicitly guarantee users any formal level of privacy. Second, shaping traffic is a non-intrusive intervention relative to many other options, such as turning copyright infringers over to the authorities. Third, traffic shaping can be done via technical means and on an aggregated/anonymous basis, if privacy is an issue.

Even Universal Music, in a discussion of its automated notice and takedown system, appears to recognize that packet inspection has privacy implications, but prefers to place those issues at the doorstep of the universities that would use its software.

In general, the more specific the blocking rule is, the greater the privacy implications. Each university deploying ATS must decide the appropriate balance between privacy and blocking for their application.

Of course there are legitimate reasons to monitor packets. The post office would not know where to send your mail unless they could read the address on the envelope and the same goes for packets over the internet. Other legitimate non-commercial reasons for monitoring content could concern, for example, national security and lawful investigations of criminal activity where there is a warrant.

Nevertheless, traffic shaping could be privacy-intrusive, especially if it required deep inspection of packets, even if done on an aggregated and anonymous basis. For example, Allot’s deep packet inspection technology allows ISPs to identify and classify data packets to discern usage patterns concerning P2P, VoIP, online games, email, video and so on, potentially violating the privacy of persons. The conception of privacy that is needed to protect us from such intrusions has a closer relationship to the conceptions of privacy protected by criminal law, the tort of breach of privacy, the Charter of Rights and Freedoms and international human rights instruments than to the protection of personal information required by PIPEDA.

While this approach emphasizes the protection of a form of privacy other than the unauthorized use of personal information as regulated under PIPEDA, the CRTC is in a position to regulate such intrusions into private communications. Under Section 7 of the Canadian Telecommunications Act, one of the objectives of telecommunications policy is contributing to the protection of the privacy of persons. The CRTC already has jurisdiction over privacy issues related to the operation of telecommunications networks. (The Panel has further recommended that the CRTC be empowered to directly regulate all telecommunications service providers to the extent necessary to implement the Canadian telecommunications policy objectives.)

In its own view, as stated in Telecom Decision CRTC 2003-33 [Reference: 8665-C12-14/01 and 8665-B20-01/00. Confidentiality provisions of Canadian Carriers], the Commission said at paragraph 23 that “… its jurisdiction in this matter [of privacy] stems not from the PIPED Act, but from the Telecommunications Act, and that in exercising its discretionary powers pursuant to the Telecommunications Act, it may apply different standards than those contemplated by the PIPED Act.” Although the Federal Court of Appeal considered the potential of varied standards startling, it would allow the CRTC to deal with privacy issues not covered by the information privacy approach embodied in PIPEDA but well-recognized in other areas of law.

In short, it would be useful for the CRTC to consider how it might further the objective of the protection of personal privacy in the context of packet inspection and traffic shaping, thereby assisting in the preservation of network neutrality.

Greg Hagen is an Assistant Professor of Law at the University of Calgary.

Publius, the Pseudonym and Poetry
By: Carole Lucock

September 12, 2006


In 1787-1788 a series of articles under the pseudonym ‘Publius’ [i] appeared in a number of State newspapers, primarily those of New York. The articles presented arguments in favour of the ratification of the U.S. constitution and were subsequently published together as the now famous ‘federalist papers.’ The federalist papers were later attributed to Alexander Hamilton, James Madison or John Jay. [ii]

Publius wrote at a time of heated political debate about the content and ratification of the U.S. constitution, a time in which a veritable cast of characters were writing pseudonymously. ‘Brutus’, ‘Cato’, ‘Centinel’, ‘John DeWhitt’ and the ‘Federal Farmer’ – to name a few of those known as the anti-federalists [iii] – wrote against ratification, with Brutus, in particular, engaging in critical debate with Publius.

One could engage many lines of enquiry concerning this rich and important discourse, carried on as it were under the veil of a pseudonym, or perhaps behind its character. I am interested in the entry of Publius into law’s contemporary discourse concerning the pseudonym. In particular, I am interested in the jurisprudential arguments that draw on the pseudonym Publius (and similar pseudonyms) to support the ‘right to’ anonymous speech on the basis that some peril might befall authors if they wrote under their ‘real’ names. I question whether law’s characterization of the political speech pseudonym adequately accounts for the phenomenon or sufficiently justifies a space for pseudonymous speech.

The U.S. Supreme Court case, McIntyre v. Ohio Elections Commission [iv], is often cited as supporting the ‘right to’ anonymous speech, based on the guarantees of the First Amendment. McIntyre uses the example of Publius and other pseudonyms as supporting an ‘honorable’ tradition of pseudonymous speech. The primary rationale given is a prudential one: the anonymity afforded by the use of the pseudonym is necessary to protect the author from untoward consequences. It is this justification that appears to dominate the subsequent jurisprudence and provides the ‘strong’ case for allowing a space for anonymous speech. This view fails to give appropriate recognition and scope to the complexity of pseudonym use and the purposes leading someone to use a pseudonym. While it is almost certainly the case that at given times in history – including revolutionary, colonial America – there have been prudential reasons for adopting a pseudonym in order to conceal one’s ‘real’ identity, this is but one rather narrow justification in support of its use. Thomas, in his concurring decision in McIntyre, provides an illuminating history of pseudonym use before and after the ratification of the constitution. This history reveals pseudonym use at the time as a far more multi-dimensional phenomenon and opens a window to consideration of the dramatic, playful and transformative elements associated with the choice and use of a pseudonym.

At the time that Publius and other pseudonyms wrote, it seems likely that their ‘real’ names were known to a limited extent (based on common speculation and actual knowledge within trusted circles) and certainly in many cases became known while the authors were still alive. Why then was a pseudonym so popular and widely used at such a critical and momentous time in a nation’s history? Certainly there is evidence of the desire for disguise; no doubt prudential reasons played some part. However, there is also evidence that disguise was sought to prevent the arguments from being rejected out of hand because of pre-judgments about the author. Beyond this, however, there is a transformative aspect of the pseudonym that enabled the author to speak in a voice that was not only factually disassociated from the views of an identifiable individual but also that facilitated the entry of different views and perspectives altogether. This hypothesis is supported by Furtwangler, who has carefully analysed the federalist papers and found that Publius has noticeably different points of view than those of ‘his’ purported authors. [v] In other words, rather than the authors merely using the pseudonym to conceal their identities in order to express their own views, in Publius, views and perspectives were expressed that were rhetorically tailored to the occasion. [vi] Furtwangler also points out that Publius and others made skillful use of the periodical press, which was supplanting traditional forms of “national communication and influence – pulpit, parliament and crown.” [vii] This served the vital purpose of reaching and informing a public on whose informed assent the legitimacy of the Constitution is founded.

Ironically, the modern state that was ushered in with the ratification of the American Constitution, and in some measure supported by the persuasive logic of Publius, not only unified as one nation a group of diverse states and points of view, but also, arguably, eroded the conditions that enabled recourse to the views and perspectives expressed in Publius. This modern State not only claims legitimacy ‘in the people’, it also eschews all other claims to legitimacy (monarch, religion, independent factions, brute force) and begins to eclipse the identity that can stand apart from the interiority of the State or be a part of an externality that is extra-State, an identity that can have many names in a variety of contexts, some of which are beyond the purposes of the State. In short order after the ratification of the U.S. Constitution, measures were implemented to require that a name be given and registered according to a prescribed formula, that one be counted and polled in a regular census, and that one be legitimated only in one’s ‘real’ or ‘legal’ name. [viii] This began a process whereby the individual and identity become a singularity that is in some measure State directed and controlled. One could speak here of hegemony; however, I prefer to think about this in terms of the multi-faceted nature of the human and the preservation of a space for extra-State, non-prescribed activity: a space that leaves open the possibility of the kinds of pseudonyms that can extricate one not only from the immediacy of one’s own worldliness but also from the pre-prescribed and legitimate requirements of the State.

It may be no small coincidence that pseudonym use has proliferated in recent years, and one can certainly align this increase with the advent of a new means of communication, the Internet. As we absorb the meaning of this resurgence and as legislators and courts begin to address the phenomenon, we should think carefully about the various reasons for using a pseudonym and avoid too quickly characterizing pseudonym use merely as a means to conceal for prudential or illicit purposes, for history at least reveals that it is a much richer phenomenon than this.

Furtwangler notes:

As a literate, civil, rational spokesman for modernity (though in the guise of an ancient sage), Publius cannot move some loyalties. He cannot counter or satisfy some human longings. It would be idle to wonder how a full-blown, spiritually satisfying constitution might have emerged in the 1780s, harmonized by an American Milton. The nature of the American experience was to begin anew, try an experiment, cast off crown and pulpit by calling upon modern newspaper prose to justify a new departure. The poetry of such a changed world would have to emerge, […] through experience, time, and feeling. But the first large step is bare law; devoid of the grace of imagery, softened only by the long deliberation and free discussion, and opening a dangerous discontinuity between old authorities and new. [ix]

It may be that the poetry that we were to wait for was right before our eyes in Publius and that we continue to provide for its emergence as we remain open to the richness of the pseudonym.

[i] Publius was the first name of a famous Roman consul, Valerius Publicola, who played a prominent role in the establishment of the new Roman state after the expulsion of the King and monarchial rule.
[ii] Attribution is not an uncontroversial issue. There is ongoing debate as to which of the three were responsible for the authorship of some papers; moreover, there are those who suggest that while a specific paper has been attributed to a particular author, in fact the paper was likely a collaborative effort or at least not the work of a single author. See, for example, Albert Furtwangler, The Authority of Publius (Ithaca, New York: Cornell University Press, 1984) 118-129.
[iii] Some of the pseudonyms chosen, as in the case of Publius, draw upon an historical figure who played a significant role in founding a non-monarchial state. See, for example, constitution.org, “Anti-Federalist Papers”, < http://www.constitution.org/afp.htm>, which provides details concerning these pseudonyms.
[iv] McIntyre v. Ohio Elections Commission, 514 US 334 (1995), <http://www.law.cornell.edu/supct/html/93-986.ZO.html>, 115 S.Ct. 1511 [McIntyre].
[v] Supra note 2 at 23-32.
[vi] Ibid. at 61-62. Also of significance is the choice of pseudonym, in many instances clearly referring back to persons who were prominent players in establishing the Roman republic.
[vii] Ibid. at 90-91.
[viii] Carl Watner, “The Compulsory Birth and Death Certificate in the United States” and “A History of the Census” in Carl Watner with Wendy McElroy eds., National Identification Systems (Jefferson, North Carolina: McFarland & Company, Inc., 2004) 70 and 132 respectively.
[ix] Supra note 2 at 111.
The Fate of Friendship in the Networked Society
By: David Matheson

September 5, 2006


According to a recent study published in The American Sociological Review, friendship seems to be taking a hit in contemporary society. “Americans have fewer close friends and confidants than they did 20 years ago,” as Gary Younge from The Guardian (“Nation No-mates,” 23 June 2006) summarizes. “In 1985, the average American had three people in whom to confide matters that were important to them. In 2004, it dropped to two, and one in four had no close confidants at all.”

There is at least the whiff of a paradox here, when one considers that the decline in friendship in the last two decades has coincided with the rise of the networked society. The networked society is presumably about increased connections between its members. Close friendship is an instance of interpersonal connection par excellence. Why then should that connection shrink as others grow steadily? How is it that friendship of the close sort gets crowded out by the many other forms of interpersonal contact that saturate the networked society?

In order for an interpersonal connection to take on the form of close friendship -- to move beyond the realm of mere acquaintance, say -- it must of course be of a certain quality. I wonder whether, when it comes to the dramatic increase of interpersonal connections in the networked society, we might be dealing with a case of quality-undermining quantity.

The general phenomenon of quality-undermining quantity is pretty familiar. It makes its appearance in my corner of academe every year when those first-year essays that try to do too much, to cover too much terrain, come my way. The advice I’m constantly giving out -- aim for narrowness of scope and depth of discussion, rather than breadth of scope and superficiality of discussion -- is really just an expression of a worry about the ease with which quantity can undermine quality.

Or consider the fact that nowadays, many members of more fortunate societies have access to vast quantities of food. You don’t have to be an advocate of the Slow Food Movement, or even a particularly strong opponent of its Fast Food counterpart, to suspect that there’s something about the quantity of food available in these societies that tends to sit ill at ease with its quality. Gina Mallet nicely captures the point as she describes her move from early post-war Britain to the United States:

The moment I arrived in Los Angeles, I forgot all about England and fell in love with supermarkets. When I stepped inside my first supermarket, I thought I’d fallen into Aladdin’s cave. I had never seen so much food in my life – even at Harrods – or such beautifully burnished food: food that glowed like jewels, food temptingly presented, even the packaging itself looked edible, and it was all so cheap. I was taken to the Farmer’s Market where big was extra beautiful, jumbo fruits and vegetables piled high. The grapefruit, I swear, were the size of basketballs, and the oranges as large as melons. They shone with cleanliness. It was hard not to be bowled over. At a coffee shop I ate a mile-high sandwich stuffed with tomato and avocado, a fruit that was still called an alligator pear in England and considered exotic. It didn’t matter that the fruit didn’t taste of much. Coming from England, to me the bounty was all, a horn of plenty. It never occurred to me that within a few decades, the supermarket was going to emerge as the single greatest threat to the taste of food.
At first, supermarkets seemed benign. They were so cheap, and there were enough different chains to provide variety. But then, as the supermarkets began to telescope into fewer and fewer and larger chains, the food buyers started to think globally. They didn’t search out toothsome vegetables to tempt the customer. Instead, they drew up criteria for the fastest-moving food and ordered it grown. Whole varieties went to the wall, and the supermarket began offering only a fraction of the accumulation of fruits and vegetables once grown in even a modest Victorian kitchen garden, with its supersized onions and giant leeks. The supermarket vegetable is above all telegenic and tough – like a Hollywood movie star. It may be that corn only tastes good when rushed from the field straight to the grill or pot; but supermarket corn must be bred to survive for weeks. Iceberg lettuce is the model of industrial lettuce because it stays crunchy indefinitely in the fridge. (Last Chance to Eat: The Fate of Taste in a Fast Food World, Toronto: McClelland & Stewart, 2004, pp. 233-4)

When it’s so easy to come by, when its quantity begins to overwhelm, there’s a tendency for it to remain at, or even degrade to, lower levels on the quality scale. This is perhaps just as true when the “it” is interpersonal connection as when the “it” is the topics addressed in the first-year arts paper, produce, or what have you.

The technologically-driven interpersonal connections in the networked society are certainly easy to come by. Thanks to e-mail and the Internet, for example, I could reach out and connect with dozens and dozens of fellow members of the networked society this very afternoon, should I so desire. But the ease of the connections also tempts strongly in favor of their being both fleeting and insensitive. And it seems to me that close friendship is unlikely to emerge on the back of interpersonal connection when the connection has these features. Close friendship requires a less ephemeral, more sensitive connection.

Because it is so convenient, for example, the elevated temptation with e-mail (at least in my own experience) is regularly to fire off quick, frequently ill-considered, and even more frequently ill-formulated missives that leave the recipient with little reason to believe that they have been crafted with any sort of sensitivity -- care and respect for the recipient, for her convictions and concerns. It’s not hard for the recipient to get the sense that her connection with the sender is taken for granted by the sender. Moreover, the ease with which interpersonal connections are made has had a large part to play in the lack of care with which privacy is treated in the networked society. Send me a sensitive email and I can all too effortlessly forward it along to someone else, against your (perhaps unstated but blatantly obvious nonetheless) wishes and very often without your knowledge. Be unfortunate enough to have certain bits of information about yourself on-line that you wouldn’t have there (were you to know about them or have any real say in their presence), and chances are that I can root them out with a few simple Google maneuvers. Off-line intrusions of that sort would seem pretty egregious. On-line, they’re getting so easy to effect that it’s becoming harder to see them as all that bad. The ease of my connections with you and others in the networked society lends itself naturally to a disregard for your privacy.

So maybe we’ve got the beginnings of a general explanation of the paradox mentioned above. The networked society is indeed about dramatic increases in interpersonal connections between its members. The high convenience of those connections, however, mediated by such technologies as e-mail and the Internet, tends to see to it that they fail to manifest certain qualities required for their transformation into instances of close friendship -- stability, sensitivity, and so on.

If this explanation is on the right track, what measures can be taken to preserve and encourage close friendship in contemporary society -- assuming we agree that it’s a value worth preserving and encouraging? A Neo-Luddite response would counsel us to cease, or at any rate severely limit, our reliance on the technologically mediated, easy means of connection. But that response seems to me to take an overly dim view of the convenience value of the technologically mediated connections. I like my email as much as the next person; I probably like spending time in supermarkets even more; and I don’t think the source of these preferences is wholly disreputable. Perhaps a better response is thus to return with renewed vigor to a consideration of the various ways in which we can help diminish the temptation to move from convenience to such unfriendly conditions as fleetingness and insensitivity, while still allowing ourselves the benefits of convenience. It seems to me that the efforts of members of the netiquette movement and of those involved in the practical evaluation of privacy policies (see here and here for a couple of premier examples) are role models of this alternative response. I, for one, hope that these efforts to humanize the network steadily increase -- for friendship’s sake if for nothing else.
Checking our papers
By: Mark B. Salter

August 29, 2006


At certain moments, we are asked to account for our movements. I recently applied for security clearance in the pursuit of research and filled out a long form – but the same is true of a landed immigration application or a curriculum vitae – all of which I have also filled out in my time. In each of these dossiers, we write a story in which we are the lead, whom the camera never leaves. And we are confessing subjects. In modern society we are conditioned to believe that, in David Lyon’s phrase, “if we have nothing to hide, we have nothing to fear.” We tell the doctor all of our symptoms, the lawyer all the details of our crime, the border agent the purpose of our visit, the professor all the factors that made the essay late. Self-knowledge and the propensity to self-disclosure are interpreted as the hallmark of truth. It is nearly unimaginable to say of one’s life, “I just have no idea what happened that year – I was in love, drunk, traveling, ill. 1995 is lost to me.” Between jobs, on research leave, what-have-you – on filling out the landed immigrant form some years ago I laughed out loud at the idea of a “permanent” address. I am not an international man of mystery, but as part of a peripatetic career there are some gaps in my story – months where I cannot account for my whereabouts. And, when presented to the authorities, stories need to be complete.

Though we are the authors of our own story, we are often not the key audience – the doctor, the lawyer, the border guard will adjudicate whether or not our story “makes sense.” I taught at the American University in Cairo between 2000 and 2003, traveling back to the United States regularly throughout this time. Before the war on terror, the immigration inspectors would ask me my profession, and I would say that I taught in Cairo – and the response was uniformly positive. “We need more people there – good for you.” It was seen as an educational peace corps or an opportunity to make danger pay in the wild wild East. After the war on terror, I would say I taught in Cairo – and the response was “why?” I would say “I needed a job,” but it became a new burden to explain why I had made such a reckless decision. My narrative of the choice had not changed, but the administrative reception of that decision changed radically.

Apart from fighting the stereotypes which accumulate on all of us to assert some kind of individual identity, I think we need to be on guard about to whom we confess what. My barber knows I am a professor and he thinks that I take the summer off, have a job for life, and wear tweed. None of which are true, but it doesn’t really affect my opportunities. With my barber, the discord between the stereotype and the reality makes no odds. But, with the dean or the hiring committee, we sell ourselves as individuals who have been working since early childhood with the sole intention of being hired at Eastern Dropovia University. There is a pressure within the academic community (as well as the government, I would argue) to have a single trajectory – a life which leads to this moment. No wrong tracks, no dead-ends, no mistakes, no blank spaces on the map. I have never filled out a grant application that says “I studied this for a year before finding out someone else had written a book on the subject, so now I would like some money to study something new.” For a profession which prides itself on building knowledge, there is little discussion of failure.

Having experienced a few traumas recently, which interrupted the trajectory of my work, I am ever more aware of the expectations of this smooth, publicly-accessible history. In addition to this general confessionary pressure, I have noticed a particular institutional pressure to “explain” why my productivity dropped off at one point or increased at another. A hiring committee member once accused me of being a “book” type of academic rather than an “article” type, and asked me to explain why. The gap between one article and the next is a blank space which requires a story. My female and male colleagues face similar pressures when expecting or as new parents, for example. The price for the explanation of public behavior is a loss of privacy. As I write the cover letter for a job application, I am aware of the need to justify “why Cairo,” “why this pause,” “why that article.”

Which leads me to consider “Rate my professor.com” and other public venues where my story is written by others. As a professor at a public institution, I am dissatisfied with the way that student evaluations are done – with some substantial research to support my belief that the contemporary way of evaluating teaching rewards certain types of teaching and discourages others. I am excited that students have an independent space to air their views which speaks to other students. Plainly, I am vain or conscientious enough to search for myself on that site. I’m concerned that the folks running the site say that “students are the CUSTOMERS of professors” (caps in original), which is a particularly neoliberal view of education as a kind of trade school – but this is a kind of empowerment. But I have to admit being concerned that “hotness” or beauty is up for adjudication, as described by the linked article the website authors use to justify the inclusion of the category. The New York Times article “The Hunk Differential” argues that more attractive professors get higher ratings (all other things being equal). Rather than being a caveat, the hotness quotient becomes simply another part of the review. Cunningly, the website suggests to professors who are dissatisfied with their reviews that they should publicize “RMP.com,” which “ALWAYS has a huge impact on the number of ratings and makes the site become less entertainment oriented.” For me, the ability of students to speak back to power in the classroom on “RMP.com” is worth the unfair or unflattering reviews – I have a similar anonymous blog on my own University-based course websites. However, to encourage students to evaluate their professors in terms of “hotness” and publicly post those comments crosses the line between professional and personal.

In a recent “Identity Trail” workshop, we discussed the culture of “myspace.com” in which adolescents often post personal information without fully understanding the privacy implications (see the previous blog by Jeremy Hessing-Lewis on this website). With the increasing pressure to make universities more transparent, open, and accessible (which is laudable), as professors we are making more and more of our lives public. I am fairly adamant about not posting photos of myself on the web, especially linked to my professional persona. I am happy to post my curriculum vitae and course plans. I am unwilling to post my measurements or medical history. In an interview with geographers, Foucault once joked that he did not want to be pinned down by a label – “let others check our papers,” he said. But, as I have argued here, especially in the current moment, we have to be cautious as to what our public papers say about our private lives.

Mark B. Salter is an Assistant Professor at the School of Political Studies, University of Ottawa.
Privacy, Power and Vulnerability
By: Marsha Hanen

August 22, 2006


In a recent posting, Val Steeves made the point that women often experience differential power in relation to men when it comes to protection of their privacy, and especially so with respect to choices about the extent to which they are prepared to have parts of their bodies exposed to public view. As Val says, “the fact that there is a relationship between privacy and power is old news”; and certainly the issue of women being treated differently from, and usually less well than, men along a variety of dimensions has received detailed comment in the feminist literature over at least the past thirty-five years. But I think it is still important in the context of discussions of privacy to take note of this different treatment and the power relations it exposes, not only in connection with women but also in relation to a range of other groups and their members.

For one thing, it helps us to see that privacy does not necessarily carry the same significance, function or value for everyone. An obvious point, perhaps, and yet we still run into all sorts of attempts to characterize privacy based on an assumption that it has a single meaning and function. For another, it helps to highlight the issue of whose privacy, whose power and whose vulnerability is at stake in different situations and how these are to be reconciled.

I have been struck, in the past week or two, by public discussion in British Columbia of a class action suit alleging abuse against former residents of the Woodlands School, a care facility (which closed in 1996) for mentally disabled children. The Globe and Mail (August 11, 2006) reported that “…staff molested children left in their care, forced them into cold showers or scalding baths, locked them in extended isolation and beat them, according to a review commissioned by the government in 2000.” And there are allegations, as well, of serious sexual abuse.

A proposed government compensation package using a points system to quantify the severity of sexual, physical, emotional and psychological abuse has given rise to anger on the part of the victims and their families and charges that the compensation scheme is abusive, inhumane and degrading, forcing vulnerable individuals to relive experiences they found horrific. Their preferred alternative is a “common experience payment” model, which would allow for a lump sum payment to victims without the need for painful testimony to support awarding points based on the government’s perception of the degree of severity of the abuse.

Clearly, this example raises a host of ethical questions relating to issues of privacy and its underlying values of freedom, dignity and respect for persons. If reports of the case are accurate, the government claims the points system is a “blueprint” used by Ottawa for such cases and does not agree that there was systemic abuse, as claimed by the “We survived Woodlands” group; but denying reported experiences of widespread abuse may result in placing too high a burden of proof on relatively powerless individuals, and raises serious questions about government accountability. What is more, the issues of protection of privacy and recognition of human dignity may require especially great sensitivity and attention to detail where vulnerable groups or individuals are involved.

Another B.C. class action suit, concluded in 2004, involved residents of a facility for the deaf. The settlement agreement, achieved through a mediation process in that case of sexual abuse of persons who attended Jericho Hill School between 1950 and 1992, provided for compensation at several levels (depending upon documented abuse). It also provided for a five-year Well-Being/Counseling Program, advocacy for further literacy support and education, training in American Sign Language, plaques to commemorate the experiences of the former students, a scholarship program and public acknowledgment of the abuse by government (see www.jhsclassaction.com).

Although the levels of compensation scheme used in this case might be described as analogous to a points system, much depends upon the details of the process and its implementation: how the facts of each case are determined, what care is taken to provide help to victims to understand the process and intended outcomes and to make their claims, whether arbitrary categorizations are avoided, the extent to which victims are provided an opportunity to tell their stories in a safe environment, how and to what extent individual privacy is protected and a host of similar factors.

In addition, on the privacy side there is the fundamental question as to what methods it is ethically legitimate to use to find the victims in the first place, in order to present them with the available choices, and whether the ends of finding them in order to make benefits available would justify whatever means are deemed necessary, even if such means go beyond what we might regard as acceptable in other situations. Increasingly sophisticated technologies increase the likelihood of privacy invasion; and these issues about how class members are located raise, as do most privacy related problems, both epistemological questions (How accurate is our information? What do we do when errors are found?) and ethical ones (Who has access to the information? How carefully is it protected? How sensitive are we to the special needs of persons who have been mistreated?).

Sadly, there are numerous kinds of situations in which the issue of abuse of vulnerable persons – children, disabled persons, people with few material resources or little power in society, individuals with certain medical problems, former residents of residential schools and others – needs to be dealt with. We can hardly argue that these cases are few and far between – not that the existence of even a single case would be justifiable. The sooner we learn to base our response to such situations on ethical principles that give voice to respect for the dignity of all, including protection of privacy, the more likely we are to avoid the kind of reaction that we have been seeing to the case of the Woodlands School.

Biometric Passports: A Response to the Western Hemisphere Travel Initiative?
By: Krista Boa

August 15, 2006


The Western Hemisphere Travel Initiative (WHTI) heralds changes to the identity documentation required for Canadians wishing to enter the United States. While its name might imply that the WHTI is a multi-lateral initiative, it is not. The WHTI was developed by the United States to implement parts of the Intelligence Reform and Terrorism Prevention Act of 2004, which requires all individuals entering the United States (including US citizens) to present a passport or another type of identity and citizenship document approved by the Department of Homeland Security: the passport is the preferred document. For other documents to be acceptable under the WHTI, they “must establish the citizenship and identity of the bearer, and include significant security features. Ultimately, all documents used for travel to the U.S. are expected to include biometrics that can be used to authenticate the document and verify identity” [1, emphasis added]. The deadlines that apply to those entering the United States from Canada are January 8, 2007 for air and sea travel and December 31, 2007 for land crossings.

Both Canada and the US seem to be struggling to meet these new requirements within the required timelines. The impending deadlines have caused a flurry of discussion between Canadian and US officials, as well as among bordering cities and states, as the most significant change will be felt at the land border crossings. Given the time remaining for each country to develop and implement the necessary technologies and systems, it remains uncertain whether either country will be able to meet the deadlines, much less develop and test a robust system. These concerns have been so great that the US Senate voted to postpone the deadline at land crossings from December 31, 2007 to June 1, 2009, but this has not come forward in the House of Representatives yet [2]. Nevertheless, the Senate is doing all it can to delay the implementation of the WHTI because of concerns about the PASS card technology (the US alternative to the passport) and fears that those who need new documents will not be able to acquire them in time [3].

At present, it appears Canada has opted to meet these WHTI requirements using the existing passport, augmented with facial recognition technology, instead of developing an alternative border-crossing document like the US PASS card [4]. However, other types of documents currently in use will not be abandoned. Minister of Public Safety Stockwell Day and US Secretary of Homeland Security Michael Chertoff announced in July that members of the NEXUS and Fast and Secure Trade (FAST) programs, which employ biometrics, will continue to enjoy expedited border crossings [5]. NEXUS and FAST are joint Canada-US programs to prescreen frequent travelers between Canada and the United States who are citizens and permanent residents of both countries. NEXUS covers individual travelers, and includes air, highway, and marine sub-programs, while FAST focuses on trade and applies to importers, carriers, and drivers. Day and Chertoff also indicated that both countries will encourage further enrollment in these programs, which I read to mean that they have been deemed acceptable under the WHTI.

Developing a biometric passport for Canadians is not solely a response to the WHTI. Using biometrics in Canadian passports was first proposed in December 2001 in the Smart Borders Action Plan and then in Canada’s 2004 National Security Policy. Additionally, by using facial recognition technology, the Canadian passport will also comply with the International Civil Aviation Organization resolution of May 2003. However, recent steps to amend the Passport Order, the rules that govern the Canadian passport, might indicate that WHTI is motivating Canada to take action now.

On June 28, 2006, Canada took the important initial step toward biometric passports by announcing changes to the Passport Order in the Canada Gazette. Specifically, subsections 8.1(1) and 8.1(2) were amended to read as follows:

8.1 (1) Passport Canada may convert any information submitted by an applicant into a digital biometric format for the purpose of inserting that information into a passport or for other uses that fall within the mandate of Passport Canada.
(2) Passport Canada may convert an applicant's photograph into a biometric template for the purpose of verifying the applicant's identity, including nationality, and entitlement to obtain or remain in possession of a passport.

It is not clear how Passport Canada plans to proceed from this point. According to one article, a Passport Canada spokesperson states that no timeline has been set for implementing biometric passports [4].

As part of the application process, Passport Canada will also use facial recognition technology to “screen applicant photos against images of suspects on security watch lists” with the aim of preventing “people who are ineligible for a passport, including national security risks and certain criminals, from obtaining one” [6]. It would appear that at least this part of the system will be in place in late 2007 [6]. While there is little information about this aspect of the program available, it raises some serious concerns about quality, accuracy, and sources of the watchlists to be used, the biases they contain, and whether there will be an appeals process for those wrongly denied passports. The widespread errors and inaccuracies of the US no-fly lists must be avoided in granting a document that is crucial to citizenship and essential for Canadians to exercise their mobility rights.

If this new passport, with its additional security features and security checks, is to be the Canadian response to the requirements of the WHTI, a great deal of work remains before such a system is ready to be used. Furthermore, it is not clear how the requirement for documents with biometrics will be handled if they are not included on the Canadian passport in time. Will Canadians be subject to a US-VISIT type system requiring them to register their biometrics at the border?

US Senator Patrick Leahy’s recent words of caution about implementing the PASS card system too quickly seem to capture the essence of the problem the WHTI’s timelines create: not only does he refer to it as a “train wreck on the horizon”, but he also warns that “[i]t will be far easier and less harmful to fix these problems [in the system] before the system goes into effect than to have to mop up the mess afterwards” [4]. This is a caution worth hearing in the Canadian context as we move forward with implementing biometric passports. Failing to take the time to fully trial and test the technologies and systems risks not only security, but also the personal data held in these systems and individuals’ mobility rights. Finally, we need more information about Passport Canada’s plans for implementing biometric passports, and whether non-biometric passports will be acceptable for travel to the United States, in order to evaluate this program and its implications for Canadians.

[1] US Department of State. Frequently asked questions about the new travel document requirements. Online at http://travel.state.gov/travel/cbpmc/cbpmc_2225.html

[2] Border cards have a long way to go, report says. Globe and Mail, June 1, 2006. Online (for a fee) at http://www.theglobeandmail.com/servlet/Page/document/v4/sub/MarketingPage?user_URL=

[3] Hudson, Audrey. Pass card placed on hold in Senate. Washington Times, June 30, 2006. Online at http://washingtontimes.com/national/20060629-111905-1877r.htm

[4] Delacourt, Susan. Ottawa takes ‘big step’ to biometric ID. Toronto Star, June 30, 2006. Online at http://www.thestar.com/NASApp/cs/ContentServer?pagename=thestar/Layout/Article_Type1&c=

[5] Minister Day and Secretary Chertoff discuss progress on security issues. Public Safety and Emergency Preparedness Canada News Release, July 18, 2006. Online at http://www.psepc.gc.ca/media/nr/2006/nr20060718-en.asp

[6] Bronskill, Jim. Passport to use facial imaging. Globe and Mail, June 24, 2006: A6.
Privacy & Private Copying Levies
By: Jeremy deBeer

August 8, 2006


Levies are a way of compensating copyright holders for the fact that people copy music for non-commercial purposes in their homes, a phenomenon known as “private copying.” For reasons I’ll discuss in a minute, licensing or preventing private copying is problematic. So instead of allowing copyright holders to extract licence fees from actual private copiers, some countries authorize a tax on blank media and/or recording devices to generate compensatory revenues. As the internet and p2p are transforming private copying into public sharing, many commentators have advocated a greater role for levies.

The levitation of copyright can be a heavy topic. I want to offer a little grounding on one aspect of the debate—the relationship between privacy and private copying levies. Private copying has never been a purely private activity. Decades ago people were making mix tapes for their friends, families and sweethearts. Still, privacy issues are deeply entwined with levies. Concerns about privacy are among the primary reasons for replacing current copyright laws, digital locks, end user licenses and litigation practices with broader levy schemes. But levies are not a privacy panacea. They could create an equally troubling set of privacy-related problems.

It is often believed that levies were simply a response to the practical difficulty of preventing or licensing private copying. Simple, cheap and popular home audio recording equipment made private copying impossible to prevent. Transaction and enforcement costs made private copying impossible to license. Moreover, laws were often ambiguous about whether private copying should or could be controlled. Policymakers have addressed these problems by expressly legalizing private copying and generating revenues to compensate copyright holders by allowing them to tax blank media and/or recording devices.

In fact, however, policymakers were concerned not just with the interests of copyright holders but also with the privacy rights of private copiers. A copyright does not entitle its holder to control all possible uses of a work. Reading books, watching movies, viewing artwork or listening to music are not things that copyright holders may legally prohibit, especially when these activities take place in private. Reproducing works for private non-commercial purposes is allowable on similar grounds.

These activities are permitted not only because they interfere minimally with copyright holders’ legitimate economic interests, but more importantly because they occur within the individual user’s private sphere. The liberty to experience works in private is important for personal development and effective participation in a democratic society. The inviolability of one’s private sphere requires freedom from copyright holders’ claims of infringement for private copying. Controlling private copying with locks, licenses or litigation threatens to put individuals’ privacy at risk and undermine privacy as a social value. Levies, therefore, are as much about the principles of privacy as they are about the pragmatics of licensing.

However, despite the fact that levies were used historically to balance copyright holders’ economic interests with individuals’ privacy rights, and despite the potential of levies to alleviate worries about the privacy impact of locks, licensing and litigation, levies give rise to a new set of privacy issues. Although it is not necessary to gather or monitor data about consumers’ preferences in order to generate revenues under levy schemes, such information is necessary to distribute those revenues to creators appropriately.

Under most private copying schemes, levy revenues are collected by an organization representing a large number of copyright holders who have designated the organization to act on their behalf. Distributing revenues to the creators and companies entitled to receive remuneration is a long, complex and controversial process. In free market capitalist societies, popularity as measured by consumer demand is generally seen as the fairest way to allocate levy revenues. Techniques for measuring consumer demand are, however, imprecise. Revenues collected under Canada’s existing levy scheme are distributed on the basis of samples of other samples of two supposed proxies for consumers’ private copying of music: radio airplay and retail sales. These are unreliable indicators of consumer preferences, especially in a digital environment driven by p2p file sharing. Current sampling practices can skew distribution patterns, generating windfalls for some copyright owners and nothing for others.

Consequently, most proponents of levies as alternative compensation schemes suggest using tracking mechanisms or other technologies to improve the accuracy of revenue distribution. Information embedded into digital files can be used to facilitate monitoring of consumers’ consumption of music. There are distinctions between the type of metering necessary for pay-per-use copyright licensing as compared to levy revenue distributions. For instance, the latter requires information only about whether digital content was consumed, not about who consumed it (although both kinds of data might be collected from the same person). Also, in the levy scenario, the entity collecting information could perhaps be a public administrative agency, not a private enterprise (although I’m not sure which is worse).

The bottom line is that monitoring consumption to distribute levy revenues has the potential to completely undermine any privacy gains achieved by the introduction of levies in the first place. This problem must be adequately studied and safeguarded against before levies could be a truly viable alternative compensation scheme.

Jeremy deBeer is an Assistant Professor at the Faculty of Law, University of Ottawa.

Getting Naked – Tennis, the Hijab and the Struggle for Equality
By: Valerie Steeves

August 1, 2006


Last week, I spent six hours in a mall. For those of you who don’t know me well, you probably don’t realize how unusual that is. I hate shopping and my first thought as soon as I get into a store is how quickly I can leave. But the object of the trip seemed simple enough. We needed to buy some tennis shorts for my teenaged daughters - loose enough to be comfortable, with big pockets to hold tennis balls. After six hours, we had come up empty. We couldn’t find anything other than the low-cut, spandex, pocketless, extremely short LuLu Lemon knockoffs that masquerade as girls’ sports wear. But what really struck me was how we managed to pick up seven tennis shirts and four pairs of tennis shorts for my son, without even looking.

You may be wondering what this has to do with privacy, but I’ve been thinking a lot about an article I read in the Toronto Star back in June after 17 men were arrested on terrorism charges. The story talked about what their wives experienced when they attended a set-date court appearance. The article started by saying, “They live by a different code. A code of modesty and privacy that was clearly violated at the Brampton courthouse yesterday as they arrived to catch a glimpse of their loved ones.” The media blitzkrieg that greeted these women as they stood in line to enter the court was likened to racial profiling and Tarek Fatah of the Muslim Canadian Congress defended the women by saying, “We know these are extremely private people... The merits of leading a secluded life is a separate debate altogether and is not done with cameras in these women's faces.”

I’m not so sure about that. In fact, this might be an excellent context to examine the relationship between privacy, power and identity. The hijabs and niqabs worn by the accused’s wives are as contested as my daughters’ sportswear. Advocates of the veil argue that it protects women from the male gaze and allows them the freedom to move about in public with anonymity. Its detractors argue that the hijab forces women into a private sphere structured by patriarchal violence and the disempowerment of women. Revealing women’s sportswear, on the other hand, can be said to liberate women’s sexuality from the strict codes of modesty that constrained them in the past, or to objectify their bodies as sexual property in any public context, in effect robbing them of power through public exposure. As far as privacy and publicity are concerned, women’s clothing is a hot-button topic.

But the newspaper coverage of the Brampton court date adds a new thread to the debate - privacy as political identity. The wives’ desire to avoid publicity is something they share in common with almost all family members of persons involved in court proceedings. But the claim that an extremely private life can justify a withdrawal from those most public of elements of the rule of law – a free press and an open trial – is an intrinsic claim to a special and unique identity. In this sense, seclusion of the feminine becomes a form of social power.

The fact that there is a relationship between privacy and power is old news. The wealthy and powerful often use their influence to protect their private lives from public scrutiny. However, I find it interesting that the claim was made with respect to these women’s bodies, their physical appearance at the courthouse, and yet it was not made with respect to the publication of their blog entries in the Globe and Mail. One could argue that the publication of their images – or the small parts of their bodies that were exposed to the public eye that day before the courthouse – is far less invasive than the reprinting of their comments about jihad or their hatred of Canada in a national newspaper. But their bodies are what is contested – not the bodies of their male partners or friends but their bodies, as women. The jarring note that comes out loud and clear in the article is that the exposure of these women’s bodies in public implicates them in some way. As one of them was heard to say at the scene, “Even if they don't see us, they will know we're here.” Ironically, the claim to privacy through hijab makes them visible in a way that Western clothing could never do, but it is a vulnerability their men do not share. They are vulnerable as women.

On the other hand, my daughters’ shopping expedition drives home the ways in which Western women’s clothing is used to structure and discipline girls’ bodies by exposure. I often laugh when I hear people talk about the incredible variety available to teens in the marketplace. One of my girls wore ripped jeans for ten months because she couldn’t find a replacement pair that wasn’t cut so low that she couldn’t sit down without exposing herself. For her, women’s clothing is inherently political – the extent of exposure is tied directly to her sense of identity and her potential for empowerment.

Our shared ability, or lack of ability, as women to determine if and when we reveal our bodies in public underlines how the relationship between privacy and identity is a gendered one. And it has everything to do with power. It’s no surprise to me that my daughters – like Rosalind and Viola before them – solved their tennis dilemma by going to the men’s section and buying boys’ sports shorts.
Municipal WiFi is Coming, and Why Privacy Advocates Should Care
By: Graham Longford

July 25, 2006


At a press conference in March of this year, Toronto Hydro Telecom (THT) announced an ambitious plan to turn Toronto into the largest Wi-Fi (wireless fidelity) internet ‘hot-zone’ in North America. Flanked by Mayor David Miller, THT’s CEO David Dobbin called the availability of Wi-Fi in public spaces, and the ubiquitous, mobile connectivity that it enables, “the new benchmark for urban living.” Miller called the announcement “a historic moment in Toronto’s development as a world-class city.” THT’s announcement vaults Toronto to the forefront of municipal WiFi deployments in North America, alongside “muni WiFi” pioneers like Philadelphia, San Francisco, and Fredericton.

On its face, the case for deploying municipal WiFi systems is a compelling one. Advocates claim that city-wide WiFi schemes promote economic development and tourism, attract and retain skilled workers and investment, increase the efficiency of municipal services, improve emergency response and public safety, and narrow the digital divide. It is for reasons like these that hundreds of municipalities in North America, Europe and Asia are implementing or planning WiFi systems.

Yet, despite its allure, municipal WiFi is controversial, particularly in the U.S. Private sector critics argue that municipalities have no business providing internet service to citizens. Muni WiFi services, they claim, duplicate and unfairly compete against private telecommunications services. Public health advocates, meanwhile, have weighed in with concerns regarding the dangers of electro-magnetic radiation emitted by wireless devices. Largely absent from the debate so far, however, have been arguments about the privacy risks of such systems. As major WiFi deployments like Toronto’s are rolled out across Canada and the rest of North America, surveillance and privacy scholars, activists and policy makers must become engaged in order to ensure that such systems are implemented in a manner that is transparent, accountable and as respectful of user rights, including privacy, as possible.

The following offers an overview of the THT WiFi plan and a preliminary analysis of its privacy implications. As THT’s service has yet to be deployed, some of this is unavoidably speculative. We can extrapolate, however, from the experience of other municipalities, including San Francisco and Fredericton, which will also be discussed. I conclude by reviewing a set of guidelines for enhancing the privacy of muni WiFi systems proposed by privacy advocates such as EPIC and EFF, and call for the development of THT’s WiFi system in conformity with them.

One zone, no strings attached?

THT’s WiFi plan, which it has dubbed “One zone, no strings attached,” envisions a wireless “cloud” covering the entire city (630 square kilometres) with ubiquitous internet connectivity within 3 years. The first phase of the rollout is under way, with THT promising to cover a 6 square kilometre area in the downtown core by the end of 2006. From a technical standpoint, the THT network will use license-exempt wireless spectrum (the same spectrum used for household devices like garage door-openers and baby monitors). Bandwidth will be supplied through THT’s existing 450 km fibre-optic network, which it uses to monitor Toronto’s electricity grid. THT claims that its WiFi internet service will be up to ten times faster than existing broadband services in the city. The THT plan also relies on mounting WiFi equipment onto many of the city’s 18,000 street lights, which are owned by THT’s parent company, Toronto Hydro. Under THT’s plan, every 7th street light in the city will be equipped with a WiFi device, bathing the city in wireless connectivity (Hamilton, 2006; Toronto Hydro Telecom, 2006).
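
Taking the plan’s own figures at face value, a quick back-of-the-envelope calculation conveys the scale involved. The light count, spacing and coverage area below are as reported; the resulting totals are simply my own arithmetic:

```python
# Rough scale of the proposed THT WiFi network, using the figures
# reported in the plan: 18,000 street lights, a WiFi device on every
# 7th light, and 630 square kilometres of coverage.
street_lights = 18_000
light_spacing = 7                 # one access point per 7 lights
coverage_km2 = 630

access_points = street_lights // light_spacing
density = access_points / coverage_km2  # access points per square km

print(f"Access points: ~{access_points}")        # ~2571
print(f"Density: ~{density:.1f} per square km")  # ~4.1
```

In other words, the plan implies on the order of 2,500 publicly mounted WiFi devices, roughly four per square kilometre, each a potential collection point for user traffic.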

While THT is a subsidiary of municipally-owned Toronto Hydro Corporation, its WiFi business model is unambiguously commercial and revenue-oriented. THT will offer its WiFi service free of charge for the first six months of operation, to be followed by the introduction of tiered service plans available on a prepaid or subscription basis at market competitive rates. THT plans to market the service to downtown businesses, workers, restaurant and hotel patrons, and university students (Toronto Hydro Telecom, 2006). Whether or not it will eventually target the broader residential broadband market in the city remains unclear.

Until THT’s WiFi network is deployed and its terms of service made public, it is difficult to comment on its privacy implications in detail. We know enough about its business model already, however, to raise some red flags. First, the THT system will require users to create accounts and authenticate. While this need not entail divulging personally identifying information, it certainly facilitates user data collection and session-to-session tracking, which could eventually be tied to personal information. Since THT also intends to sell the system on a subscription basis, it will most certainly collect and retain users’ banking and/or credit card information, thus enabling user data to be tied to individuals.
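
To make the tracking concern concrete, here is a deliberately simplified sketch; the account names, sites and data layout are all invented for illustration, not drawn from THT’s design. The point is simply that a mandatory, stable account identifier is what turns isolated sessions into a longitudinal behavioural profile:

```python
# Hypothetical illustration (not THT's actual system): when every
# session is stamped with the same account ID, all of a user's
# sessions can be joined into a single behavioural profile.
from collections import defaultdict

session_log = [
    {"account": "user42", "day": "Mon", "sites": ["news.example", "bank.example"]},
    {"account": "user42", "day": "Tue", "sites": ["clinic.example"]},
    {"account": "user99", "day": "Mon", "sites": ["shop.example"]},
]

profiles = defaultdict(list)
for session in session_log:
    # The stable account key is what links otherwise separate sessions.
    profiles[session["account"]].extend(session["sites"])

print(profiles["user42"])  # user42's browsing across both days
```

Without the account key - for instance, if access were anonymous - each session would stand alone; with it, the operator (or anyone it shares data with) holds a cumulative history per subscriber.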

Secondly, THT has made it very clear that the main purpose of the system is to maximize revenue for its parent company, Toronto Hydro Corporation. With this in mind, THT will most certainly examine the revenue potential of the user data that it collects. Major web properties will no doubt line up to gain access to THT’s user data. Furthermore, THT may also be tempted by the prospect of generating additional revenue by selling ad space with its service; indeed location-sensitive advertising is a major component of many muni wifi business models, including San Francisco’s (Chester, 2006). Location-based advertising is dependent upon combining user data with location information in order to customize ads and services to a user’s geographic location. Such a combination can also be used to reveal an individual’s location, as well as patterns of movement through the network coverage area.

Finally, and alarmingly, THT’s Dobbin recently speculated on the feasibility of integrating CCTV surveillance cameras into the system, mounting camera units on city street light poles and transmitting images to police via the THT WiFi network (Granatstein, 2006).

What does THT’s plan mean for the privacy rights of Torontonians and visitors to the city, as thousands (if not more) flock to the service? Fortunately, we do not need to wait for THT to deploy its system fully in order to grasp the potential implications for user privacy. The experience of municipalities that are farther down the road to deployment is instructive.

Google’s San Francisco WiFi deployment

In the spring of 2006, a partnership between Google and Earthlink was awarded a contract to develop a WiFi network for the City of San Francisco, beating out 4 other bids. The Google/Earthlink plan involves providing tiered internet access services, including a free low-speed service provided by Google and a paid high-speed service provided by Earthlink. The provision of each service is to be supported by a different business model. The free, low speed (300 Kbps) service offered by Google will be financially supported by online advertisements streamed to users of the network and tailored to their location, habits and preferences as tracked by Google. Earthlink’s premium, high speed (1 Mbps) service will cost users approximately $20 per month, and be free of advertising.

San Francisco’s proposed WiFi network has been scrutinized by privacy advocates. EPIC, EFF and the ACLU recently prepared a privacy analysis of the 5 competing bids for the contract, looking at the provisions made in each for the collection, use and retention of user data (EPIC, 2006). Four out of five, including Google/Earthlink’s, were found to be privacy-invasive. Only the proposal submitted by SF Metro Connect, a non-profit community network, passed muster. Analysis of the Google/Earthlink bid showed that the collection, commercialization, and sharing of user data would be the default setting for the system. Google’s free service will be accessed via a location-aware captive portal page and user sign-in, thus allowing persistent tracking across sessions. Along with collecting user email addresses and usernames, Google intends to collect, analyze and commercialize user location information in order to customize advertising and other location-based services that users will see and have access to. Google’s concession to privacy concerns is an “opt-out” provision for those who do not wish to access location-specific advertising and services or have their information shared with third parties - which leaves information collection and sharing as the default setting of the system. Additional concerns were raised about how Google will respond to requests for user information by law enforcement officials, including Google’s policy of not informing users when such requests have been made.

All told, the Google/Earthlink proposal was judged by the EPIC/EFF/ACLU study to be one of the most privacy-invasive of the 5 proposals for the San Francisco system. Google’s model for a free, ad-supported WiFi service has been the subject of intense scrutiny by the press and other municipalities, although rarely in relation to its privacy implications. Should it prove to be commercially viable, the Google model may well be replicated in hundreds of municipalities across the U.S., and possibly Canada, a prospect that should concern us.

Setting, applying, and advocating a standard for privacy-protective municipal WiFi systems

Part of the problem with the San Francisco deployment, according to the privacy advocates, is that the City set no minimum standards for privacy protection in its initial Request For Proposals. What might such a set of standards look like? The EPIC/EFF/ACLU privacy analysis document proposes a “Gold Standard” for privacy-protective municipal WiFi systems. The fundamental principle of a privacy-protective system is that “where information needs to be collected, it should only be used for operational purposes and deleted after it is no longer needed” (EPIC, 2006). Practically speaking, a “Gold Standard” muni WiFi system should:

• allow access without "signing in"; sign-in procedures often require personal information that enables tracking;
• offer a level of access that is free, since fee-based systems (e.g. subscription services) enable the identification of users through credit card or bank account information, unless provision for cash payment is made; and,
• forego offering targeted advertising and other customized electronic services based on user identity, location or surfing behaviour; such services may be offered, but on an “opt-in” basis requiring the user’s explicit consent.

For more detailed information on the EPIC/EFF/ACLU “Gold Standard,” including recommendations for data storage and retention practices, see EPIC, 2006.

Applying this Gold Standard to THT’s WiFi model is difficult, of course, given that the service has yet to be rolled out. Based on what we know so far, however, it is highly unlikely that it will meet the standard. THT’s insistence on the use of log-ins and paid subscriber accounts ensures the collection of information beyond what is minimally and technically necessary to operate and permit access to the system, and creates the conditions for the persistent tracking of user behaviour tied to personally identifying information. The latter will also allow THT to construct commercially valuable user data profiles that it will be tempted to exploit by selling them to third parties. Only the adoption of an explicit “opt-in” policy for the collection and sharing of such data would mitigate the privacy risks posed by such a move.

The fact that THT has yet to roll out its system presents an opportunity to intervene, however, just as it is developing its policies and terms of service. The need for intervention is urgent, given that many other municipalities in the country are watching to see if the THT model provides a viable blueprint for other deployments. Any influence that privacy advocates have in shaping the THT model may well have ripple effects across the country. But the points of leverage from which to influence THT – be they city politicians, City hall committees, or Toronto Hydro itself - need to be identified, and pressure brought to bear. The privacy risks of muni WiFi need to be identified and articulated, along with best privacy practices. And as we think about best practices, we would do well to recall and revive interest in Canada’s homegrown model of muni WiFi – the Fredericton e-Zone – which has been eclipsed by the recent hype associated with the deployments in Philadelphia, San Francisco and, now, Toronto. Fred e-Zone has been operating successfully as a free municipal WiFi service in the New Brunswick capital since 2003, and without using authentication procedures, log-ins or collecting personal information. As the muni WiFi wave begins to roll across this country, we would do well to study the “Fred e-Zone” experiment closely, to better understand what has enabled it to succeed despite its admirably minimalist approach to user data collection.


Chester, Jeff (2006) “Google’s Wi-Fi Privacy Ploy,” The Nation, March 24, 2006.

EPIC (2006) A Privacy Analysis of the Six Proposals for San Francisco Municipal Broadband, http://www.epic.org/privacy/internet/sfan4306.html

Granatstein, Rob (2006) “Network could be invitation to big brother,” Torontosun.com, July 15, 2006.

Hamilton, Tyler (2006) “Downtown goes wireless,” Toronto Star, March 8, 2006.

Toronto Hydro Telecom (2006) “One Zone, no strings attached,” (www.thtelecom.ca).


Graham Longford is a Postdoctoral Research Fellow in Community Informatics
Co-Investigator, Community Wireless Infrastructure Research Project (CWIRP)
Faculty of Information Studies
University of Toronto

Little Brother – Electronic surveillance inside private organizations
By: Chris Young

July 18, 2006


Much is made of the potential pitfalls of overly broad government surveillance of civilian activities, and rightly so. Most will agree it is a good thing that our society still has at least some intuitive understanding that such powers in the hands of those who govern us can do more harm than good in the long term. However, there is a parallel realm of human activity where surveillance also occurs and which is discussed much less frequently: the world of private organizations and the (mostly) electronic surveillance they conduct over their own employees. This is, of course, more of an issue with large organizations, which have the resources to meet the technical and human-resource requirements such surveillance entails. Going forward, however, smaller and smaller organizations will be able to put employee-surveillance mechanisms in place, as it is very likely that outsourced services in this area will become available. Further, of the two types of organization I am most familiar with, the university and the for-profit corporation, the latter is much more likely, in my view, to engage in employee surveillance, as universities still maintain a respect for researcher independence (among other factors). Corporations, on the other hand, slightly paranoid about anything that might affect their bottom line, will tend to jump reflexively to employee surveillance as just another good business practice. Before offering some thoughts on whether that assumption should even be accepted as true, I will go over a few things that recently appeared in the news that shed a bit of light on what is actually going on in the corporate world.


CBC recently reported on the release of a Ryerson University report discussing the use of electronic eavesdropping on employees by private corporations in Canada. Apart from showing that the practice was widespread, one of the findings was that employers did not stop to think that this sort of activity might be a problem. Another report, surveying American and British corporations, found that somewhere close to 40% of these routinely eavesdropped on employee communications. To some extent this is warranted, as what employees communicate to the outside world, and how, is clearly company business. However, many employees will use their company email accounts for private communications. Further, other electronic communications media are either coming into regular use or are becoming more networked, making them equally vulnerable to surveillance on the part of the organization. I have in mind here instant messaging, heavily used by those under thirty years of age, and the transition of phone services from analog networks (which in practice make eavesdropping rather difficult) to fully digital and integrated networks, the most obvious example being VoIP (Voice over Internet Protocol) telephony. For example, a private telephone call made over an IP phone on a corporate network to a government agency, during which the employee might communicate information such as their social insurance number, can not only be intercepted and heard by the IT department of that corporation, but can very easily be permanently stored on a corporate network. There are of course many other types of information of a private nature which individuals may prefer to keep to themselves and which are in no way corporate business. The international nature of contemporary businesses and electronic networks also means that such information is as likely to be stored in another country as in the one where one physically works. If information resides on US networks, it may well be available to US government security agencies under local legislation. Someone trained in law might better be able to shed light on this aspect of the issue than I can.

My own response when working at a private company has been to limit my use of company email software and phones to strictly business uses. My mobile phone and encrypted web-based email services I use for personal communications. However, I happen to be both tech-savvy and aware of developments and common practices in electronic surveillance, which is not true of the population at large. Further, I have the financial resources to use a mobile phone during the day, which may not be true of some categories of workers. In passing, the importance of encryption becomes obvious in this context as a way for employees to protect their private information, and speaks to its value as a democratizing force in an electronic world.

One point made in the Ryerson report that might be overlooked by some is that very often the IT departments of large corporations are implementing electronic surveillance practices without the oversight, or even the knowledge, of human resources departments. It seems to me that human resources personnel should be an essential part of the teams creating electronic privacy policies within corporations. For one thing, they are trained in organizational theory and are well placed to judge what the best use of electronic surveillance over employees might be. It is not at all clear to me that pervasive surveillance of employee activity has the best outcome in terms of employee productivity and overall organizational efficiency. Secondly, human resources personnel often have at least some social science or liberal arts background, which (one would hope) gives them more of an insight into the appropriateness of using technology to peer into the private lives of employees.

Apart from the surveillance free-for-all that some IT departments engage in (at a large corporation I have recently worked at, the answer of one IT person to my question as to what they looked at in terms of web communications to the outside world was “everything”, or words to that effect), there is the added factor that the surveillance policies being implemented, whether planned or ad hoc, are rarely communicated to the employees. Many people may not realize that a private phone conversation related to family matters, for example, will possibly reside on company servers for the next several years or more (the conversation would likely be archived as part of the normal data storage activities of the company in question).

The surveillance of private communications in the corporate world, although less politically sensitive than government surveillance of the civilian population, does warrant some attention, as it will affect more and more people’s everyday lives in the coming years.

A Flickr of Web 2.0
By: Jeremy Hessing-Lewis

July 11, 2006


Welcome to Web 2.0, where your life is the content. Thanks to such upcoming venture capital stars as folksonomy, social networking, Wikis, and architectures of participation, this second wave is already upon us. Yet, just behind the jargon and Silicon Valley hype, lies a collection of legal and ethical issues mirroring and amplifying previous iterations of online participation. This ID Trail Mix will briefly survey some of these issues using Flickr photo sharing as a case study of Web 2.0.

Flickr as Web 2.0
The name Web 2.0, although still under debate (and litigation), is an umbrella term used by a series of conferences hosted by O’Reilly Media and MediaLive International. Without referring to any specific technological innovation, the name is used to describe a collection of web tools and standards that fit within broad themes such as usability, participation, standardization, remixability and convergence.

Flickr currently has an estimated 1.5 million users in the increasingly competitive digital photography after-market. Despite the abundance of innovation, the Flickr mission statement remains sufficiently straightforward: 1) “We want to help people make their photos available to people who matter to them” and 2) “We want to enable new ways of organizing photos.” To accomplish these goals, Flickr has deployed an overwhelming number of tools. Pictures can be uploaded via email, cell phone, Flickr software, and of course the old fashioned web browser. They can be annotated, blogged, bookmarked, printed on mugs or t-shirts, and published in coffee table books.

And then there is the public dimension, where “available to people who matter to them” seems to include just about anyone with an Internet connection. While pictures can be set to private, most users post publicly in order to avoid having to assign individual permissions to Uncle Hank and Cousin Sue.

Users are the New Bots
When Yahoo! acquired Flickr in March 2005 for an undisclosed amount, it was not immediately clear why it would invest in a small Vancouver company when it already had a far more popular photo sharing site in Yahoo! Photos. The answer lies in how Web 2.0 tools can be used to sort content. Not only are the photos submitted by users, but they can also be annotated and categorized by members of the community itself.

Photo by Open Door Exit under a Creative Commons Licence

Flickr organizes photos by way of folksonomy. In other words, content is identified in an open-ended system of collaboration. A taxonomy by folks. Meta-tags are added to each photo by the person posting the photo. Depending on the level of permissions, all Flickr users may be able to add additional tags. For example, I might include the tags “Birthday” and “Party” with the above photo. My photos would then be returned by searching for any of these tags. Another user might add “Jeremy Hessing-Lewis” at some later point. Some users even add GPS locations to situate photos in geographic context.

Unlike Google, which uses computer algorithms known as crawlers to locate and identify content, a folksonomic system will give results as interpreted by humans. Still, the ultimate goal remains the same: enable users to find content that they want.
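
The mechanics of such a folksonomic index are simple enough to sketch in a few lines. The following is an illustrative toy model (the function names and photo IDs are invented; Flickr’s actual implementation is certainly more elaborate):

```python
# Toy sketch of a folksonomic index: any user can attach tags to any
# (permitting) photo, and search simply looks photos up by tag.
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of photo IDs

def add_tag(photo_id, tag, user):
    # 'user' is accepted but deliberately ignored here: the index
    # does not care who supplied a tag, which is the point.
    tag_index[tag.lower()].add(photo_id)

def search(tag):
    return sorted(tag_index.get(tag.lower(), set()))

# The photo's owner tags it...
add_tag("img_001", "Birthday", user="owner")
add_tag("img_001", "Party", user="owner")
# ...and later, someone else adds an identifying tag.
add_tag("img_001", "Jeremy Hessing-Lewis", user="someone_else")

print(search("Jeremy Hessing-Lewis"))  # → ['img_001']
```

Note that nothing in the index records or restricts who added a tag: once an identifying tag is attached, the photo surfaces under that name for every searcher.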

It’s no surprise that Yahoo! also acquired the bookmarking site Del.icio.us in 2005 to complement its folksonomic goals. Del.icio.us allows all users to bookmark sites and add tags to the bookmarks in order to produce annotated lists of popular web content. The inevitable convergence allows Flickr users to add Del.icio.us bookmarks to photos, groups, and portfolios. This effect, where users add value to the network, is known as an “architecture of participation.”

Although the public may accurately classify content, folksonomy vastly complicates online privacy by taking control over which identifying details are disclosed away from the content creator. While an infinite number of monkeys on an infinite number of laptops may correctly identify pictures of your birthday, do you really want an infinite number of monkeys browsing your birthday photos?

It’s not hard to imagine how folksonomic sorting could further impair privacy and anonymity. What if I choose to post pictures anonymously only to have others fill in the remaining details (as in the above example)? What about controlling the descriptors accompanying your image? The result is that although I may begin by disclosing my information in a certain way, inevitably “I am what you say I am.”
Photo by Fabz under a Creative Commons Licence

Social Networking
Since the US$580 million purchase of MySpace by the media behemoth NewsCorp, social networking has been the star of Web 2.0. While the idea of online social forums has been around since the earliest days of the web with such legendary haunts as the WELL, the social element is now built into just about every possible site.

Flickr’s social networking features are, not surprisingly, based around photography. Users can create a “group” and, so long as it is listed as public (the default), anybody can contribute photos. People may then discuss the photos and form their own online communities forged around certain themes. The “wedding” tag alone contains 2,544 groups. Each includes a collection of images and vast amounts of personal information. Although I didn’t attend Mark and Ruth’s Wedding, I feel like I was in the wedding party.
Photo by Voteninjaparty under a Creative Commons Licence

By participating in Flickr groups, even voyeurism becomes a social activity. Relationships forged in this environment are vulnerable to exactly the same predatory behaviour and web-stalking as conventional chat rooms. The only difference is that a predator doesn’t have to wait to ask for your picture. Instead, they start with your picture and lure from there.

Photo by Steve Crane under a Creative Commons Licence

The Network is the Platform
The most threatening Web 2.0 feature that I foresee is that the interface will become so usable and efficient that users will no longer recognize that they are passing information over a network. When a computer user’s desktop becomes an extension of a website, users give up both privacy and proprietary control of their information.

For example, Flickr incorporates a fully web-based “Organizr” program. It is a simple browser-based tool for uploading and sorting pictures without having to install additional software. Users need only click and drag thumbnail images into a web-based desktop. This simple procedure poses two important privacy issues.

Firstly, superior ease-of-use will likely increase the number of photos that users share. As most computer users will attest, clicking and dragging is often done without considering the full implications of the action. The Organizr feature allows users to load personal pictures to a public, online repository with almost no consideration of the consequences.

An extension of this concern is that increased ease-of-use lowers barriers to participation by softening the technology. An early example of this behaviour would be the transition from command-line operating systems to the modern Windows or Mac desktop experience. In terms of Flickr, users who may not previously have shared their pictures online may find themselves posting personal images. This will increasingly be the case as ordering prints online becomes common practice. Such users may not fully understand the subtleties of access permissions, copyright law, or one and a half million voyeurs.

Secondly, when the network is the platform, all of the user’s information is permanently housed on the servers of the host company. When Yahoo! acquired Flickr, the company’s servers were moved to the US, where they are now governed by US federal law. As more and more web-based programs are developed (see, e.g., Google Calendar), the impacts on personal privacy will be significant. Having lawful access to telecommunications systems is one thing, but having access to an archive of any user’s content should certainly be enough to make law enforcement salivate.

As Web 2.0 continues to be developed, some of its drawbacks are becoming increasingly clear. Will folksonomy be the final death knell of online anonymity? Will society recognize the threats posed by increased social networking? Will privacy laws be able to protect the tremendous increase in both the amount and variety of personal information being shared online?

While these details may not be resolved any time soon, you can already order prints of Mark and Ruth’s wedding from the nearest Target location…to be picked up within the hour.

The New Paternalism, Technologies of Conformity, and Virtue by Default
By: David Matheson

July 4, 2006


In his classic essay On Liberty, John Stuart Mill famously argued that state restrictions on an individual’s freedom are justifiable only to the extent that they are aimed at preventing harm to others. When it comes to the state’s limitation of a citizen’s liberty, the appeal to what is in her own best interest or for her own good, Mill insisted, “is not a sufficient warrant.”

With various qualifications, something like this principle of liberty is near and dear to the heart of every political liberal, and it is usually thought to stand at odds with state paternalism. Yet, according to a recent special report from The Economist (“The Avuncular State,” 8 April 2006), there is a growing endorsement in academic circles of a kind of paternalism thought to be consistent with the liberal premium on individual liberty. The idea of the new, “soft” paternalism is that citizens’ behavior can be given the right shape by the state – for the sake of their own good – with no significant restrictions on their liberty. A central way of accomplishing this is through a restructuring of the default frameworks for citizens’ behavior.

The Economist asks us to consider, by way of illustration,

[one] example of soft paternalism [that] has already attracted the interest of governments and the backing of this newspaper: employees should be signed up for company pension schemes by default. Such schemes, which typically attract tax breaks from governments and matching contributions from employers, are usually in the best interest of workers. You might say that joining is a ‘no-brainer’, except that what little brainwork and paperwork is required defeats a surprising number of people. A soft paternalist would presume that people want to join, leaving them free to opt out if they choose. In one case study […] changing the default rule in this way raised the enrolment rate from 49% to 86%.

Since the default policies and mechanisms favored by the new paternalism (which range far beyond restructured pension scheme defaults) include opt-out features, so the thought goes, they can’t be charged with forcing or compelling citizens to act in their own best interests. And this in turn means that the policies and mechanisms avoid the sorts of external restrictions that the principle of liberty proscribes.

Interestingly, the Economist article ends on a less than entirely enthusiastic note. In encouraging citizens to act in the right sorts of ways by default, it suggests, the new paternalism may end up discouraging them from developing the sorts of character traits and intellectual skills that we typically deem praiseworthy:

Reasoning, judgment, discrimination and self-control – all of these the soft paternalists see as burdens the state can and should lighten. Mill, by contrast, saw them as opportunities for citizens to exercise their humanity. Soft paternalism may improve people’s choices, rescuing them from their own worst tendencies, but it does nothing to improve those tendencies. The nephews of the avuncular state have no reason to grow up.

We might capture the force of the Economist’s skepticism here by considering a distinction drawn from Aristotle between the mere conformist to right behavior and the virtuous individual. The mere conformist acts in the right sorts of ways, but is not praiseworthy for so doing, because his actions are not properly motivated. The virtuous individual, by contrast, not only acts in the right sorts of ways but is further deserving of praise for her behavior because it is properly motivated. To illustrate with a rather low-key example, consider a new dog-owner who begins feeding his dog a type of food that is in fact optimally conducive to the dog’s health. The dog-owner didn’t choose the food for that reason, however. His motivation was one of convenience: he simply went to the nearest pet store and picked up a bag of whatever food happened to be the most well-stocked. Contrast this first dog-owner with a second, who ends up feeding her new canine companion the very same type of dog food, but does so because, having taken the time to look into the relative merits of different types of dog food, she decided that food was in fact the best for her dog. There’s an intuitively clear sense in which the second dog owner is praiseworthy in her dog-care behavior but the first is not: despite the fact that both dog owners end up doing the right thing vis-à-vis their dog’s nutritional needs, the second has the right motivation for doing it whereas the first does not. The first dog owner is a mere conformist when it comes to his dog’s nutritional care; the second is virtuous.

In effect, then, the point of the Economist’s skepticism is that even if the default policies and mechanisms of the new paternalism end up promoting conformity to right behavior, there is no reason to suppose that they will promote virtue, for those policies and mechanisms are quite consistent with citizens doing the right sorts of things without the right sorts of motivations. Worse, the new paternalism may end up demoting virtue by promoting conformity in the way it does: the more common it is that citizens do the right thing with the wrong motivation (or perhaps, depending on the nature of the default framework, with no particular motivation at all), the less common it will be that they do the right thing with the right motivation. And the less common it is that citizens do the right thing with the right motivation, the less likely it is that they will develop those stable traits of character and intellect that we call virtues (good reason, sound judgment, apt discrimination, and self-control, just to name a few). This is because, as Aristotle emphasized, the development of virtue requires the practice of virtue: in order to acquire the traits we call virtues, we must repeatedly do the right sorts of things with the right sorts of motivations. The danger of the new paternalism is that its efforts to promote conformity to right behavior by default threaten to undermine this practice condition on citizens’ development of virtue. Simply put, there is no such thing as virtue by default. And, arguably, a central mistake of the new paternalism is to assume that there is (or worse, that we really don’t need virtue at all).

My suspicion is that this very same mistake is made by advocates of new technologies of conformity, i.e. new technologies aimed at the automated short-circuiting of problematic user behavior. Consider, for example, the escalating movement toward implanted radio frequency identification microchips. Vendors are no doubt quite right to claim that users’ self-identification activities become more convenient and more reliable with chip in arm. But this merely speaks to conformity to right identity management behavior. Does it speak at all to identity management virtue? Might it not speak against?

Or consider recent battles about the use of digital rights management technologies, prominent participants of which include many of our own project’s gifted members. (See, for example, here and here.) The main objection to the use of these technologies is not of course that they fail effectively to protect against real copyright infringements. The objection is that they overprotect. And perhaps the overprotection, at least in its more ubiquitous forms, is bound to do something to users much more troubling than whatever its absence might do to copyright holders. As means of securing users’ copyright conformity, the DRM technologies may well incapacitate users’ copyright virtue. It seems pretty clear to me, at any rate, that they won’t secure that virtue by default.
AT&T's Privacy Policy
By: Angela Long

June 27, 2006


During my usual pre-work web-surfing (a.k.a. my technique of seemingly interminable procrastination) last week, I came upon a post on boingboing.net with the title AT&T retrofits privacy policy: your data is not yours. The title piqued my curiosity, given its relevance to privacy law and the involvement of one of the world’s largest telecommunications companies. During our contracts course this past year, Ian Kerr and I routinely used Canadian telecommunications contracts and privacy policies to provide ‘real world’ examples of contracts with which the students would have had some personal (and often frustrating) experience. Having read those contracts and policies in great detail (and even fashioned an exam question based on one such contract), I was interested to see what changes AT&T was making.

Apparently AT&T has revamped its privacy policy (a misnomer if ever I’ve heard one – ‘privacy policies’ usually provide protection for almost everything EXCEPT privacy) to provide even less protection for its customers’ confidential information. The boingboing.net post links to an article by David Lazarus of SFGate. Lazarus describes the new policy, which applies only to AT&T Yahoo! internet users, as markedly different from the company’s previous policy in that it specifically ascribes ownership of customer data to AT&T. The new policy states, in a section dealing with AT&T’s legal obligations and fraud:

While your Account Information may be personal to you, these records constitute business records that are owned by AT&T. As such, AT&T may disclose such records to protect its legitimate business interests, safeguard others, or respond to legal process.

In addition, it also requires customer agreement with the policy as a term of the service. It states (in bold print):

Please read this Privacy Policy carefully. Before using your Service(s), you must agree to this Policy.

In other words, if you don’t agree with the policy, which means agreeing to the use of your personal information in the ways set out by AT&T, you can’t use AT&T’s service.

Much of the brouhaha surrounding the latest antics of AT&T in the U.S. has to do with allegations that the company has been giving the National Security Agency warrantless access not only to customer account information, but also to data that customers have transmitted through AT&T’s services, such as e-mails, in the name of national security and the prevention of potential terrorist attacks on the US, an ongoing red-hot issue for privacy advocates. The company’s new policy actually widens the scope of to whom, and in what circumstances, it will be able to provide its customers’ information to government authorities. It states:

We may disclose your information in response to subpoenas, court orders, and other legal process, or to establish or exercise our legal rights or defend against legal claims. We may also use your information in order to investigate, prevent, or take action regarding illegal activities, suspected fraud, situations involving potential threats to the physical safety of any person, violations of the Service Terms or the Acceptable Use Policy, or as otherwise required or permitted by law.

While Lazarus focuses on the ownership-of-information issue (an issue that is no doubt of interest to privacy advocates) in this article and in a follow-up article, I will instead focus on the contractual issue of required agreement to privacy policies, which, as I have discovered, has implications for Canadians dealing with telecommunications companies, as well as all other commercial enterprises. As I stated above, AT&T has made it a term of its internet service agreement that customers agree to the privacy policy. If you don’t agree to the policy, you don’t get the service. In contractual lingo, we can call this a take-it-or-leave-it offer. AT&T, as the offeror, is the master of the offer. The customer, as the offeree, may attempt to negotiate (the chances of this happening in modern commercial relationships are quite slim), but has no real power over the terms upon which the offer rests. The only choice the offeree has is to accept the terms of the offer or take her business elsewhere. I am no expert in US privacy law, but given the lack of emphasis on the take-it-or-leave-it change to AT&T’s policy in the media coverage, I assume that it is legal for AT&T to take such an approach. In addressing this issue, Lazarus states:

Meanwhile, what can AT&T customers do if they choose to distance themselves from AT&T? Dozens of readers have put that question to me since Wednesday’s column ran. Short answer: Not all that much. There are other local and long-distance companies...but they often rely on AT&T’s network to get calls through or have policies similar to AT&T’s.

After reading about the situation with AT&T, I became curious about the state of affairs in Canada with respect to take-it-or-leave-it offers based on acceptance of privacy policies. Having some familiarity with Canadian telecommunications privacy policies, I didn’t recall seeing a similar term that required acceptance of the privacy policy in order to receive the service. But upon further investigation, I realized that Rogers has a similar term in its End User Agreement for Rogers Yahoo! Highspeed Internet. The preamble states:

As a condition of using the Services, you agree to and must comply with the terms and conditions of this Agreement, which will be binding on you.

Clause 8c then incorporates the Rogers Privacy Policy into the End User Agreement. So by agreeing to the End User Agreement, a customer agrees to the Privacy Policy, and in fact must agree to the Privacy Policy, as it is a term of the contract itself. The result is much the same as in the AT&T situation: consumers have to take it or leave it. And the Canadian situation for consumers, especially with respect to telecommunications, is at least as bad as, if not worse than, it is in the United States, with large corporations dominating the market for these services. If consumers don’t agree with the content of these privacy policies, or with the ways their information will be used under them, they will increasingly be out of luck in finding comparable services elsewhere at a reasonable price.

One difference in the Canadian situation, I thought, might be the existence of PIPEDA (the Personal Information Protection and Electronic Documents Act), a federal act designed to help protect personal information within commercial transactions. I thought there might be some sort of recourse for people who wish to acquire a product or service without necessarily consenting to the use of their personal information as outlined in a company’s privacy policy. However, after looking at PIPEDA, I was sorely disappointed (well, actually, I was a bit confused at first, as is often the case when I first read legislation). As it turns out, Rogers can require assent to its privacy policy as a term of its service agreement, meaning that Rogers can decline to enter into contractual relations with people who do not want to consent to the use of their information in the ways that Rogers has outlined in its policy.

Schedule 1 of PIPEDA sets out the principles to which commercial enterprises are to adhere with respect to the collection, retention and dissemination of personal customer information. The cornerstone of PIPEDA is the consent of the individual, meaning that people must consent to all collection, retention and dissemination of information at the time it is collected by a company. This means that companies must tell their customers upfront what information they will collect and how they will use that information. This all seems well and good, until we consider whether there are any limits on the kinds of information that companies are able to collect or on the uses of that information. It appears that there are no such limits: as long as the customer is informed of what the company is doing with the information, there is compliance with the principles of PIPEDA. To me, it all seems largely circular. Companies are essentially allowed to collect, retain and use personal information for any purpose, as long as that purpose is identified by the company, communicated to the customer and consented to by the customer. There are no real limits on the kinds of purposes, since the purposes are defined by the companies themselves.

To illustrate my point, look at Principles 4.2.2 and 4.2.3 of Schedule 1:

4.2 Principle 2 — Identifying Purposes
The purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected.
Identifying the purposes for which personal information is collected at or before the time of collection allows organizations to determine the information they need to collect to fulfil these purposes. The Limiting Collection principle (Clause 4.4) requires an organization to collect only that information necessary for the purposes that have been identified.
The identified purposes should be specified at or before the time of collection to the individual from whom the personal information is collected. Depending upon the way in which the information is collected, this can be done orally or in writing. An application form, for example, may give notice of the purposes.

The purposes are to be identified by the company itself. That may not be problematic in and of itself, but when looked at alongside the other principles contained within PIPEDA, it becomes harder to swallow. Principle 4.3.3 states:

An organization shall not, as a condition of the supply of a product or service, require an individual to consent to the collection, use, or disclosure of information beyond that required to fulfil the explicitly specified, and legitimate purposes.
A company cannot require an individual to consent to a purpose that was not explicitly specified in order to obtain a product or service. The problem is that the corollary of this statement must also be true: a company CAN require the consent of an individual to the collection, use or disclosure of information in order to obtain a product or service, where that collection, use or disclosure has been defined and explicitly specified as a legitimate purpose. And who determines the legitimate purposes? Going back to Principle 4.2, the companies themselves are able to set their own purposes for gathering information. If consumers don’t agree with these purposes and do not wish to consent to them, they are out of luck, as the company will not be required to contract with them.

This state of affairs seems unfair, to say the least. To allow companies to set their own purposes for the collection and use of personal information, some of which may not be seen by consumers as legitimate (e.g. the sharing of information with other companies within the same corporate family, or even worse transgressions), and then to allow them to deny the provision of a product or service on the basis of disagreement with such purposes does not seem to be in line with the general purpose of PIPEDA, which is to protect the information of individuals. This may be acceptable (a big MAY) in some situations, where there is ample choice in the market for consumers. They can choose to go to companies whose information purposes are more in line with their own views. But as Lazarus points out, this kind of consumer choice is waning. First, there is less and less choice about whom to do business with, especially in the telecommunications industry, where virtual corporate monopolies exist. Second, more and more companies are invoking all-encompassing privacy policies that give them wide scope to deal with the personal information of their customers. And as long as they disclose this to customers at the outset, they have complied with PIPEDA. Increasingly, then, consumers must either consent (I would actually question whether this is true consent given the circumstances) to such policies or go without products and services. And as more and more companies adopt broad collection and use purposes, there is less and less privacy. Given this state of affairs, my question is whether there actually exists any substantive protection for personal information collected within the commercial sphere at all.
Emerging Technologies of Ownership
By: William "Spike" Gronim

June 20, 2006


The social concept of ownership as applied to physical objects is broadly accepted and easily enforced. Whoever controls an object and has the right to transfer this control to others owns the object. In principle this definition applies to digital content as well. Difficulties arise, though, when abstract legal principles meet common social practice and technological realities. People typically describe the (legally purchased) music on their computers as "my music." In fact, as the Apple iTunes® terms of service state[1], this music remains the property of the content producers and their affiliates. A group of companies called the Trusted Computing Platform Alliance (TCPA) is working hard to enforce ownership of digital content using new technologies. These new technologies extend the current software-only content control systems with hardware components. What can these technologies do, what are they being used for, and what does all this mean for consumers' autonomy?

The Trusted Platform Module (TPM) [2] is a representative example of a digital ownership enforcement technology. The TPM is a microprocessor attached to the motherboard of a notebook or desktop computer. Its purpose is to provide a "platform root of trust" [3] that allows a computer to prove things about itself to other computers across a network. For example, a TPM allows the computer to prove that it was manufactured by a certain company and has an authentic TPM chip. To accomplish this, the TPM has cryptographic capabilities built in, such as RSA[4] encryption and signatures. Before a computer leaves the factory, the TPM generates an RSA public/private key pair that serves as its "Endorsement Key"[5]. The TPM holds the private key in its own memory. Ideally, nobody (including the manufacturer) knows this private key. The manufacturer then uses its own RSA key pair to sign the TPM's public key, creating a certificate of authenticity endorsing the TPM. All the various manufacturers' RSA key pairs are in turn endorsed with certificates from the TCPA organization. This forms a chain of trust from the TCPA to the TPM, allowing any Internet-connected computer to verify the authenticity of a given TPM.
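
The chain of trust described above can be sketched in a few lines of Python. Everything here is an illustrative invention: the tiny textbook primes, the `make_keypair` helper, and the way certificates are represented. A real TPM uses 2048-bit keys and standardized certificate formats, but the verification logic has the same shape: a verifier who trusts only the TCPA root key checks each endorsement in turn.

```python
from hashlib import sha256

def make_keypair(p, q, e):
    """Toy RSA keypair from two small primes (illustration only)."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)
    return (e, n), (d, n)              # (public key, private key)

def sign(message: bytes, priv):
    """Toy RSA signature over a hash of the message."""
    d, n = priv
    digest = int.from_bytes(sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, sig, pub):
    e, n = pub
    digest = int.from_bytes(sha256(message).digest(), "big") % n
    return pow(sig, e, n) == digest

# Three toy keypairs: the TCPA root, a manufacturer, and a TPM's Endorsement Key.
tcpa_pub, tcpa_priv = make_keypair(61, 53, 17)
mfr_pub, mfr_priv = make_keypair(101, 113, 3)
ek_pub, ek_priv = make_keypair(127, 131, 11)

def key_bytes(pub):
    return repr(pub).encode()

# Each link in the chain is a signature over the next public key down.
mfr_cert = sign(key_bytes(mfr_pub), tcpa_priv)  # TCPA endorses the manufacturer
ek_cert = sign(key_bytes(ek_pub), mfr_priv)     # manufacturer endorses the TPM

def verify_chain():
    """A remote verifier trusts only tcpa_pub and walks the chain downward."""
    return (verify(key_bytes(mfr_pub), mfr_cert, tcpa_pub)
            and verify(key_bytes(ek_pub), ek_cert, mfr_pub))

print(verify_chain())  # True: a valid chain verifies
```

A tampered certificate anywhere in the chain makes verification fail, which is what lets any Internet-connected computer distinguish an authentic TPM from a simulated one.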

Remotely authenticating a TPM might seem like an obtuse and technical procedure, but without hardware support content control will always be imperfect. A non-TPM computer can do anything the user programs it to do. Apple iTunes® uses software tools to prevent unauthorized use of downloaded content [6]. The content is encrypted, and the software will only decrypt the content for use on authorized computers for authorized purposes. Software-only content control is fundamentally weak. The root of this weakness is the user's complete control over the computer. Before music can be played it must be decrypted, if only temporarily, and the decrypted form must appear somewhere in the computer's memory. A skilled user can access any portion of their computer's memory at any time. Accessing the correct parts of memory efficiently is genuinely complex in practice, and that complexity yields practical security benefits for the content producer. Regardless of these practical challenges, it is theoretically possible to circumvent any software-only content control system.

Fully incorporating the TPM into a computer's software "stack" could enable strong content control. The term stack refers to the layered nature of computer software: applications depend on operating systems that in turn depend on hardware. The remote authentication feature of the TPM allows remote computers to establish a secure channel with the TPM. This means that a content producer can send messages to a consumer's computer addressed to the computer's TPM. Using basic cryptographic tools and the Endorsement Key, the TPM and the content producer can each determine whether the consumer programmed their computer to alter the messages in transit. The content producer now knows that the TPM is authentic and therefore not under the consumer's control. The TPM can then examine the operating system and tell the content producer whether or not it is identical to the version released by the vendor. At this point the content producer has authenticated the bottom layers of the stack: the hardware and the operating system. Proceeding similarly, the content producer can move up the stack until they are confident that everything is in order - that is, not under the consumer's control.
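
The layer-by-layer check can be sketched as well. The `extend` operation below mirrors the way real TPMs accumulate boot measurements into Platform Configuration Registers (each step hashes the old register value together with the new layer's hash), though the layer names here are hypothetical:

```python
from hashlib import sha256

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: the register absorbs each layer's hash in order."""
    return sha256(pcr + sha256(measurement).digest()).digest()

# Hypothetical layer images, measured bottom-up as the machine boots.
layers = [b"firmware-v1.0", b"bootloader-v2.3", b"os-kernel-v5.1", b"media-player-v7"]

def measure_boot(layer_images):
    pcr = b"\x00" * 32  # the register starts zeroed at power-on
    for image in layer_images:
        pcr = extend(pcr, image)
    return pcr

# The content producer compares the reported value against the value
# expected for an unmodified stack.
expected = measure_boot(layers)
tampered = measure_boot([b"firmware-v1.0", b"bootloader-v2.3",
                         b"patched-kernel", b"media-player-v7"])
print(expected == tampered)  # False: one modified layer changes the final value
```

Because each measurement folds in everything beneath it, swapping out any lower layer changes the final register value, which is what lets the producer authenticate the whole stack rather than just the application.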

Before going any further we must address some of the hype and fear surrounding TPM. The members of the TCPA hype TPM as a broad solution to many security and privacy problems. Without going into the details, I have reservations about the technical and business arguments for the benefits of TPM. I am not convinced that some of the problems allegedly solved by TPM require it, nor am I convinced that the TPM architecture is sound and secure. As TPM matures, the market and the security community will answer these questions. For now I take vendors' technical statements at face value in order to evaluate how TPM affects ownership of digital content. Some security researchers and open source software advocates fear that TPM will be blatantly abused by its creators: content, including the operating system itself, will be remotely disabled by vendors without cause; Linux won't run on TPM-enabled computers; governments will compel vendors to use TPM to ban certain documents. In reality these fears are overblown. Intel recommends that TPM units be shipped disabled by default and with certain potentially invasive features disabled permanently[7]. There are serious backwards-compatibility issues involved in implementing full-stack TPM that have so far kept proposed uses in niche areas, such as digital music ownership.

The striking thing about TPM is that it takes on the role of the owner. Using the techniques outlined above, control of content is transferred from the producer to the TPM itself. Returning to our initial definition of ownership, we see that the TPM, not the consumer, ends up owning TPM-protected content. The user cannot control the content; the encryption key held only by the TPM prevents this. Nor can the user transfer control without the TPM's consent. In the case of digital music this new technical reality seems to be in line with existing copyright principles. Problems arise when the technical implications of TPM content ownership are worked through. From the content producer's point of view it is meaningless to attempt TPM control of only the application layer, because the application's entire memory is accessible to the operating system. In general, control of any lower layer of the stack implies theoretical control of the higher layers. Thus TPM ownership of music requires TPM ownership of the operating system.

At this point we have deviated from the established principles of ownership. Of course operating system vendors' copyrights imply ownership of their software. But the requirements of TPM-controlled music go beyond the rights granted to copyright holders. Non-TPM operating systems grant their users full autonomy except in narrowly defined situations (such as unauthorized copying of the operating system itself). A full-stack TPM-enabled operating system introduces an extensible system by which consumers' activities can be directly controlled by a network of third parties. Regardless of the legal and ethical validity of such controls' end purposes, TPM implies a transition from general autonomy with specific controls to general controls.
William "Spike" Gronim is a software developer and an alumnus of the Carnegie Mellon University Data Privacy Lab.

[1] http://www.apple.com/support/itunes/legal/terms.html §13 (a)
[2] Bajika, Sundeep. Trusted Platform Module (TPM) based Security on Notebook PCs - White Paper. June 20, 2002. Accessed at http://developer.intel.com/design/mobile/platform/downloads/Trusted_Platform_Module_White_Paper.pdf on June 18, 2006.
[3] Ibid. p. 7.
[4] RSA can be used to encrypt/decrypt and sign messages. Suppose Alice wishes to send Bob a confidential message. Bob generates two long numbers with certain mathematical properties, a public key and a private key. The public key is made available to everyone, while only Bob knows his private key. Alice can use Bob's public key to transform her message such that it is only meaningful to someone in possession of Bob's private key. If Bob wants to prove his identity he can sign his message by creating a short number (the signature) based on his private key and the message. Alice can use the signature, the message, and Bob's public key to determine whether someone with knowledge of Bob's private key created the signature. See http://en.wikipedia.org/wiki/RSA for a more complete description.
[5] Bajika 2002, p. 8.
[6] This software is called FairPlay. See http://en.wikipedia.org/wiki/FairPlay.
[7] Bajika 2002, p. 19.
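
The Alice-and-Bob exchange in footnote [4] can be made concrete with textbook-sized numbers. This is wildly insecure at this key size and is purely an illustration, but the arithmetic is exactly the scheme the footnote describes:

```python
# Toy RSA with textbook-sized numbers (insecure; illustrates footnote [4] only).
p, q = 61, 53                      # Bob's secret primes
n = p * q                          # 3233, part of both keys
e = 17                             # Bob's public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # Bob's private exponent: 2753

# Alice encrypts a message for Bob using only his public key (e, n) ...
m = 65
c = pow(m, e, n)
# ... and only someone holding Bob's private key (d, n) can recover it.
assert pow(c, d, n) == m

# Signing runs the same math in reverse: Bob creates the signature with his
# private key, and anyone can check it with his public key.
sig = pow(m, d, n)
print(pow(sig, e, n) == m)  # True: the signature checks out
```

The "certain mathematical properties" the footnote mentions are what make `d` computable from `e` only by someone who knows the primes `p` and `q`; factoring `n` at real key sizes is what an attacker cannot do.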

One Moment Please*
By: Carole Lucock

June 13, 2006


“Agent, agent, agent!” A contorted face yells into a cell phone. A frantic pumping of the keypad follows this. Momentarily, the tightened muscles on the face relax and pull toward a smile; the pumping of the keypad stops. A reassuring voice has appeared on the other end of the line: “How may I help you?” Apparently, it is a real voice. The voice of someone real. The voice of a someone.

There follows a rather ordinary voice exchange in which the caller is successfully instructed on how to hook up a telecommunications technology:

“Okay, so the red plug goes to the converter? That’s what I was doing wrong.”
“Probably. Call back if you’re still having trouble. Now, is there anything else I can help you with today?”
“I think I’m fine now. Thank you.”

The ‘machine voice’ and the ‘human voice’. At first, the machine voice, or the machine simulating a human voice, beginning a series of prompts. And responding to this series, obediently on cue, a series of human responses. “If you want English, say ‘English’”, the machine voice instructs. “English”, comes the reply of the cooperative supplicant. “Is that a business or a residence?” “Residence.” After a few rounds of this, the human voice, impatient and frustrated, tries frantically to override and free itself from the incessant questioning of the machine voice. “Agent, Agent, Agent!” the voice yells, as the human hands pound on the keypad. And finally, a human voice appears on the other end.

This mini-drama is increasingly common and ordinary today. It begins in a machine-to-human exchange. Sometimes that is enough to satisfy the human. Other times, as above, the human becomes impatient and frustrated, or the issue cannot be addressed by the machine voice. The drama ends in a human-to-human exchange.

Reflecting on this drama, I think about my encounters with these voice machines. I register a certain uneasiness. What is the source of my unease? What are its grounds?

To begin, there is my impatience and even irritation at having to negotiate a maze of questions. My time and energy are increasingly used up in useless, unwanted options, as I seek to get ‘customer service’. But that cannot be the root source of my unease with the machine voice. After all, in many cases where a real voice or human being is on the other end of the line, I have to endure a similar sort of inane questioning as this human voice – live and in real time, but machine like all the same – reads me through a script of questions it is programmed to ask. That too can make me impatient and irritated. But the machine like human voice is different from the human like machine voice.

To be sure, I have a certain unease about the machine like human voice. And this unease too is not reducible to impatience or irritation. And the precise grounds of that unease would be worth chasing down. All the same, I prefer the human voice, even when it is machine like, to the machine voice that is or tries to be like a human voice. And so, as in the drama above, I try to get to a human voice as quickly as possible: “Agent, Agent, Agent!” I endure and respond to an inane series upon series of options in order to make my way as quickly as permitted to a place – a voice – where my questions or problems can be addressed. I even search the internet to see if creative others have found ways of overriding the system.

In some few of these automated options mazes, if one makes the right move there is a possibility of skipping the options entirely (at any stage) and speaking to another human being directly. But in this or that case there may not be such a right move, or one may not know what it is. I dread, and therefore assiduously seek to avoid, making a wrong move or mistake that would loop me back to the beginning of the options series. And I am relieved when (if) the human voice appears: “How may I help you today?”

So my unease is not just impatience or irritation. And neither is it the passive obedience that one is steered into as a supplicant, on the way to a desired endpoint: problem solved, or a real person. To be sure, I do not like this passivity. “English please,” I answer dutifully and politely to one of its prompts, adding the ‘please’ half-forgetful that it is after all a machine I am responding to. But even when I am dealing with other human beings (machine like or not), in situations where I am a supplicant, I am conditioned or programmed to a certain passivity in order to make my way toward my desired end point. And politeness sometimes helps.

So it is not the fact of my conditioned or even programmed responses that is the source of my unease about the human like machine voice. To be sure, this gives me a certain unease, and that unease would also be worth chasing down, but it is an unease that is not unique to the machine voice.

And that unease that I feel grows as we move from automated keypad choice to voice-tone choice (if choice is the right word at all here), and becomes greater as these voices take on a calm and friendly persona and try to feign real-person dialogue. The machine-voice tries to ‘understand’ and ‘interpret’ the variety of responses provided in answer to specific questions, as though it were interacting with the caller. Currently, the technology is not particularly good and it’s obvious that one is dealing with a machine-voice, a machine that stands between you and a real person (even if one occasionally forgets oneself and slips into the polite form of response one might give in answer to a person: “English, please”). However, the technology will get better. And one can imagine a Turing moment when it will not be so obvious that the voice on the other end of the phone is a machine, or that one’s responses are pre-programmed towards its programmed ends.

As we move in this direction, I wonder if our current, passive submission to answer the machine-voice command (speak out loud to the voice-machine) is preparing or conditioning us to embrace a possible tomorrow in which the real person at the other end of the phone has been eclipsed. A person who, to be sure, may be machine like and programmed. But a person all the same. A being to whom it is not absurd to say ‘please’ or ‘thank you’. A being to whom, after a series of inane machine like questions and answers, one might meaningfully (even if vainly) terminate the exchange with a profanity and a charge, accusation or put down that the person is nothing but a machine or a robot.

*The author of the present piece warrants that, notwithstanding the ideological programming she has received, it has not been (entirely) machine written.
The Nexus of Intellectual Privacy and Copyright
By: Alex Cameron

June 6, 2006


For nearly three centuries since the enactment of the world’s first copyright statute, individuals have been free to travel the kingdom of copyright as countrymen, enjoying the delightful objects to be found there, in private and without any notice taken. Historically, neither copyright law nor copyright holders have interfered with individuals’ freedom to enjoy copyright works in private. This centuries-old relationship between copyright and privacy has changed dramatically in the recent past.

Copyright and privacy have increasingly come into conflict over the course of the past decade. This conflict has led to a diminishment of individuals’ privacy and autonomy in connection with their enjoyment of copyright works. Digital rights management (DRM) technologies that use surveillance and restrict individuals’ activities are a prime example of this conflict.

Failure to gain a richer understanding of the conflict and relationship between copyright and privacy may leave us with little or no room to travel our vibrant copyright kingdoms in private. Permitting privacy to be diminished in the name of copyright may also lead to the impoverishment of the very copyright kingdoms that we purport to be enriching in so doing.

This short ID Trail Mix briefly discusses why, quite apart from its intrinsic worth, authors’ intellectual privacy is and has historically been instrumental in furthering the goals of copyright. This ID Trail Mix raises the question of whether the rationale behind authorial privacy’s historical utility in promoting the goals of copyright can provide arguments in support of protecting individuals’ intellectual privacy in connection with their enjoyment of copyright works. The ultimate question posed here is what role individuals’ intellectual privacy could or should play in the copyright balance.

Copyright and authors’ intellectual privacy

Copyright and privacy share a fascinating and complex historical relationship. At first blush, one might have thought that copyright and privacy have come to implicate one another only over the course of the past decade since the advent of digital networked technology. The kind of conflicts that have emerged in the recent past – the ones that involve conflict between copyright and individuals’ private enjoyment of copyright works – do appear to be a uniquely contemporary phenomenon. However, copyright and privacy also share a much older and more foundational relationship, a complementary relationship. For example, Sunny Handa has characterized privacy as one of the “theoretical pillars” of copyright. i

In “The Right to Privacy”, Warren and Brandeis sketched a picture of where copyright and privacy might lie in respect of one another as of 1890. In examining the nature and basis of the right to control the act of publication of a copyright work, Warren and Brandeis described how the right does not depend on whether the subject matter has any economic value or would otherwise be protected as intellectual property. In other words, the common law right to control the act of publication is not merely the right of control that copyright provides, nor is it motivated by precisely the same interests as copyright. Distinguishing the right from principles of private property, Warren and Brandeis identified the right as one instance of the more general right of privacy, the right “to be let alone”.

The aspect of the relationship between copyright and privacy identified by Warren and Brandeis is based principally on the distinction between published and unpublished works. Though not offering complete privacy protection (because facts could be disclosed without infringing copyright), the right of first publication of a copyright work is a privacy-like right protected at common law. Once published, the rights in the work become primarily rooted in copyright law.

In addition, there are a number of cases where copyright has been invoked to protect confidential information and in some cases what one might consider to be privacy interests. In these cases, which continue to arise, copyright has played an instrumental role in protecting privacy interests, typically in situations involving the attempted publication of personal materials such as letters. In a similar way, copyright has effectively protected privacy-related interests in the area of commissioned photographs and portraits.

Privacy can thus be viewed as playing at least two key roles in terms of furthering the objectives of copyright. First, privacy protects the act of first publication. This protection helps to encourage the development and expression of new ideas. Sunny Handa discusses this concept in the negative, noting the risk inherent in having less than absolute privacy protection in this area:

Making the right of privacy [protecting first publication] less than absolute, creates a chilling effect whereby confidential works will not be committed to paper for fear of their being divulged. This is similar to the approach of the courts to the U.S. first amendment law. Thus, privacy protections [protecting first publication] should be paramount. It is both an important right – considered a fundamental freedom by some – and a fragile one. Once it is lost, privacy cannot be regained. It should be removed from the reach of copyright exceptions [such as fair dealing/use or public interest exceptions]. ii

By avoiding the potential chilling effect described in this passage, an absolute privacy right can be seen as encouraging the development and expression of new ideas, which is part of the purpose of copyright. It creates a refuge for building ideas, an intellectual ‘breathing space’, a veil behind which authors can explore ideas and develop new expressions. This privacy right ultimately protects authors’ right to determine whether and when they will publish their expressions.

A second way that privacy contributes to the objectives of copyright law lies in the protection of moral rights. Although rights in a work are primarily rooted in copyright upon publication, this is not to say that privacy is no longer relevant. In jurisdictions with moral rights regimes, like Canada for example, privacy plays a role in copyright in so far as authors have moral rights to remain anonymous or to use a pseudonym. These are forms of a right of privacy. Moral rights contribute to the development and dissemination of new expression, at least to the extent that such rights encourage authors to create and disseminate works that they would not otherwise create or disseminate. Moral rights, and hence a form of privacy, can therefore be viewed as an important part of the incentive package that copyright offers to creators.

These are a few examples of how copyright and privacy share a complementary relationship that furthers the goals of copyright policy. However, the privacy rights discussed thus far have been the privacy rights of authors. But what of the privacy rights of individuals who wish to access and use copyright works? Can their privacy rights possibly further the goals of copyright policy when in recent years they seem to have so often come into conflict with copyright holders? Can authorial privacy’s utility in promoting the goals of copyright be extended to arguments in support of protecting individuals’ privacy in relation to their enjoyment of copyright works?

Copyright and individuals’ intellectual privacy

Prior to the conflicts of the recent past, individuals were free to roam the kingdom of copyright in private, without any notice taken. Copyright has traditionally not interfered with individuals’ freedom to access and enjoy copyright works in private. Rather than focusing on the private activities of individuals, copyright has heretofore been principally concerned with protecting publishers against copying by competing publishers. As Daniel Gervais explains, copyright law never used to concern itself with the private activities of individuals who access and use copyright works:

The fact that copyright was not meant to be routinely used in the private sphere is further evidenced by the fact that exceptions and limitations to copyright were also written in the days of the professional intermediary as the user. This explains why in several national laws, the main exceptions can be grouped into two categories: private use, which governments previously regarded as “unregulatable” (i.e., where copyright law abdicated its authority by nature)… Still today, there are several very broad exceptions for “private use” (e.g., Italy, Japan) that were adopted in the days when the end-user was just that, the end of the distribution chain. End-users have always enjoyed both “room to move” because of exceptions such as fair use and rights stemming from their ownership of a physical copy. There was thus an intrinsic balance that recognized that end-users who did not significantly affect the commercial exploitation of works by their individual use should not be on the copyright radar.
…[copyright’s recent] invasion of the private sphere is at odds with the history of copyright, where it never forayed except, as just mentioned, in the case of levies. There was an implicit recognition that copyright did not apply to end uses, even though formally users were making copies and, in rarer cases, performing or communicating works. iii

Of course, as Gervais alludes to when he mentions ownership of a physical copy, it is worth emphasizing two additional reasons why copyright has not conflicted with privacy in the past. First, individuals typically did not have the means to infringe copyright works, let alone on a scale that would have an impact on the exploitation of the work – e.g. individuals could not very easily copy, distribute and sell thousands of books. Second, copyright holders have traditionally not had an efficient or effective means to invade the private sphere; there were no ways that they could track individuals’ access and use of physical copies of copyright works in order to prevent or detect illegal or unauthorized activities. The context in which copyright and individual privacy now interact is dramatically different.

Through exceptions like fair dealing, copyright law continues to attempt to carve out private space for individuals to access and enjoy copyright works. However, copyright holders increasingly have the legal and technological means by which to foreclose those spaces and to track individuals’ private activities. This applies not only in the case of online digital content delivery services, but now also in the case of physical copies of works like CDs, as demonstrated by the infamous Sony BMG rootkit controversy iv. The scope of private, anonymous and/or autonomous use previously afforded by the ownership of tangible goods is eroding.

On the other hand of course, many copyright holders argue that individuals increasingly have the means by which to infringe copyright on a scale that impacts the commercial exploitation of works. For example, in a matter of seconds, a single individual can perfectly copy and make a copyright work available to millions of people for downloading on a p2p network. For these reasons, some copyright holders claim that privacy-invasive measures aimed at responding to infringement are justified.

The modern copyright context thus requires us to consider the nature and scope of individuals’ ability and potential legal right to enjoy copyright works in private, anonymously and autonomously. Authorial privacy suggests that privacy can play a role in furthering the goals of copyright. However, copyright policy has heretofore not adequately considered the potential importance of individuals’ intellectual privacy – individuals’ ability and/or legal right to enjoy copyright works in private, anonymously and/or autonomously – in furthering the goals of copyright. If, as the Supreme Court of Canada has recognized, the purpose of copyright is utilitarian, aimed at balancing the economic rights of creators against promoting the public interest in the encouragement and dissemination of creative works, then we must ask what role individuals’ intellectual privacy could play in that balance.

i Sunny Handa, “Understanding the Modern Law of Copyright in Canada”, (1997) McGill University (Thesis), at 160.
ii Ibid.
iii Daniel Gervais, “Use of Copyright Content on the Internet: Considerations for Excludability and Collective Licensing”, in Michael Geist, ed., In the Public Interest: The Future of Canadian Copyright Law (Toronto: Irwin Law, 2005) at 531, 548.
iv For a discussion of the Sony rootkit controversy, see Jeremy deBeer, “How Restrictive Terms and Technologies Backfired on Sony BMG” (2006) Internet & E-Commerce Law in Canada, Vol. 6, No. 12.
Clearing Away the Debris?: Webcamming in the Context of Feminist Tensions over Pornography, Privacy and Identity
By: Jane Bailey

May 30, 2006


Pornography, privacy and identity are three of the many unresolved tensions within feminist communities. Digital technologies, such as webcamming, offer us the opportunity to think not only about the impact of technical change on the meaning of these concepts, but also on the rightness of prior positions taken in relation to them.

In the 1980s and 1990s, feminists like Catharine MacKinnon and Andrea Dworkin argued that pornography undermined women’s ability to be recognized as the social equals of men by objectifying women as commodities for male consumption. The law worked to reinforce these and many other aspects of patriarchy by, among other things, constituting pornography as the constitutionally protected free expression of the members of the male-dominated industry and by strenuously protecting men’s right to consume pornography in the privacy of their own homes. In these and other ways, the law colluded with market forces to enable the stereotyping of women as submissive sexual objects for the use and abuse of men. MacKinnon, Dworkin and others called for an end to that collusion through creation of a civil ordinance that would allow women to call pornographers to account for their sexually discriminatory conduct. Law could then be used as a tool to redress, rather than reinforce, the harmful stereotype of “woman” socially constructed through pornography.

Much water has flowed under the bridge since the failed attempt to bring the civil ordinance to fruition. Critical race scholars and postmodern thinkers such as bell hooks, Audre Lorde and Judith Butler have launched provocative criticisms of the situatedness and limited emancipatory potential of a movement premised exclusively, or at least predominantly, on a category such as “sex” or “gender”. Postmodern thinking pushes toward destabilizing categories, like “sex” and “gender”, that have acted as historical bases for discrimination. From this perspective, individual political action is critical, as it is through consciously “performing” gender that we might work to upset the stereotypical definitions that have historically confined us. Postmodern performativity theory promotes a sense of individual power to make change. Championing the possibility of collective change through individual action sits well with many in the current generation who, Baumgardner and Richards have explained, understand anti-pornography feminism to be laced with dictatorial anti-sex and anti-pleasure sentiment.

Enter the Internet and webcams …

Ongoing or regular streaming of one’s perspective, existence, latest break-up (or break-out) seems to have become de rigueur online. Webcamming and vlogging seem to fit quite well in two arenas. First, the two mesh with what seems to be a current voyeuristic cultural fascination with the notoriety of the mundane. The analysis, however, does not stop there. Webcamming has been argued to be a potential source of empowerment for women in at least two senses.

First, Kimberlianne Podlas has argued that webcamming may empower women to take directorial control over pornography by reducing the financial resources necessary to produce and broadly distribute content. Moreover, as suggested in the film Webcam Girls, webcam technology provides a much safer, more controlled space for women engaged in the sex trade.

Second, Terri Senft has argued that autobiographical webcamming by women may actually serve to destabilize both the public/private divide and stereotypical constructions of femininity and domesticity that have historically confined women. One of the examples Senft discusses is the Jennicam, through which aspects of the day-to-day life of Jennifer Ringley were, for some seven years, filmed and distributed on the web. Jenni characterized her efforts as a “social experiment” in which she sought “to show people that what we see on TV – people with perfect hair, perfect friends and perfect lives – is not reality.” In postmodern language, one might say that Jenni sought, through online exposure of everything from the most mundane to the most intimate aspects of her life, to explode the myth of the “perfect”, stereotypically feminine woman. In so doing, she might be said to have been negotiating new boundaries between the public and the private by making exceedingly “public” some of the most “private” details of her life in that historic bastion of privacy – her own home.

Without wishing to wholly dismiss, with a single sweep of my dictatorial broom, the subversive “potential” of new technologies in giving women greater control over representations of sex, gender and sexuality, I must confess tremendous skepticism about webcamming “experiments” such as the Jennicam as a means for doing so.

What is it exactly that is being subverted here in terms of the public/private divide? While historically discrimination has isolated women in the private sphere by foreclosing their participation in the public sphere, there is also a long and complicated history that involves treating the bodies and images of women as public – from the thousands of scantily clad women monotonously presented in mainstream media to the strange idea that anyone can touch the abdomen of a pregnant woman to legal restrictions on women’s rights to obtain medical services such as abortions. The Jennicam and other webcams set up to monitor women in their homes may push the boundaries of the geographic locations that we consider to be public and those we consider to be private. Cams of this nature would also work toward suggesting that it is women who should decide what aspects of their lives are public and private. However, the focal point of the webcam gaze remains the very familiar public domain of the woman’s body.

Further, even if one believed that one could reclaim control over the projection of the image of woman by taking control of the means of projecting it, how much control do Jenni and other “webcam girls” actually have? Jenni carried on her daily life as the object of the gaze of a fixed camera that she set up – and she from time to time posed for it in pin-up girl fashion. Once disseminated, she lost control over the compilation and use of her image. As a result, collectors (some with permission and others seemingly without it) clipped and pasted together versions of her life for their own private consumption, as well as making them available to others. In some instances, Jenni has been pornified through her audience’s collection of nude and partially-clad clips that otherwise constituted only fleeting moments in seven years worth of video of her life.

Regardless of whether one believes that women’s emancipation is best served by some or all of legal regulation of certain kinds of pornographic content; encouraging alternative performances of gender and sex; or empowering women’s control over these representations, one observation seems unavoidable. Patriarchal constructions of “woman” continue to clutter the spaces in which women seek to build their own identities. On its face, webcamming seems to offer the possibility for women to take a degree of control over those spaces. Even for someone prepared to accept that increased individual control is enough, webcamming’s control-limiting features suggest it is unlikely to play a significant role in clearing away the debris.

“Where No Court has Gone Before…” Issues of Identity and Equality in Nixon v Vancouver Rape Relief
By: Jena McGill

May 23, 2006


In December 2005, the British Columbia Court of Appeal released its long-awaited decision in Vancouver Rape Relief v Nixon [Nixon]1. This is the highest level court in Canada to ever rule on a case of alleged discrimination against a transsexual person; in fact, Ms. Nixon’s is the first trans-based human rights case in Canada to move past the level of a Human Rights Tribunal. As the case has proceeded through the B.C. Human Rights Tribunal, the B.C. Supreme Court and most recently the province’s Court of Appeal, it has generated an ongoing dialogue in legal and feminist communities around the country that focuses on issues of identity, exclusion and the human rights of gender variant people. Following the release of the Court of Appeal’s decision, Ms. Nixon announced that she plans to seek leave to appeal to the Supreme Court of Canada.2

This may seem an unremarkable choice; however, if Ms. Nixon does indeed go ahead with her plans to appeal, the Supreme Court will face its first opportunity to consider the human rights of gender variant people, specifically transsexual women, and the particular nature of discrimination experienced by individuals with gender identities that do not fit neatly into the traditional male-masculine/female-feminine sex/gender binary that so many people take for granted. The location of gender variant identities, and in particular transsexuals, in today’s legal and social climate may be likened to the position of gay, lesbian and bisexual persons 30 years ago – legally invisible, unprotected and subject to serious and injurious forms of discrimination. Ms. Nixon’s case therefore represents a significant opportunity for the Court to go “where no court has gone before” in addressing the human rights of gender variant individuals in society and under the law. If Nixon indeed seeks leave to appeal, will the Court take on the challenge presented by her case? Quite a challenge it is, as Nixon presents a number of important, but not easily resolvable questions about identity, equality and exclusion – questions the Supreme Court might not be ready to answer.

Kimberly Nixon was turned away from a volunteer training session at the Vancouver Rape Relief Society in 1995. Rape Relief held that Ms. Nixon’s male-to-female transsexual status meant that she did not have the life experience of growing up as a girl and living all of her adult life as a woman, experience that Rape Relief considers critical in allowing a woman to act as a peer mentor for other women using the rape crisis centre and the shelter. Following her expulsion from the training session, Kimberly Nixon filed a human rights complaint with the now defunct B.C. Human Rights Commission against Rape Relief, accusing the organization of breaching section 8 of the B.C. Human Rights Code, which proscribes denying "to a person or class of persons any accommodation, service or facility customarily available to the public … because of … sex;" and section 13, which states "[a] person must not … refuse to employ or refuse to continue to employ a person … because of … sex."3 Rape Relief denied discriminating against Nixon, invoking the Code’s section 41 “group rights exemption,” which specifies: "If a[n] … organization or corporation that is not operated for profit has as a primary purpose the promotion of the interests and welfare of an identifiable group or class of persons characterized by a physical or mental disability or by a common race, religion, age, sex, marital status, political belief, colour, ancestry or place of origin, that organization or corporation must not be considered to be contravening this Code because it is granting a preference to members of the identifiable group or class of persons."4 The case was referred to the British Columbia Human Rights Tribunal.

The Tribunal released its decision in 2002, finding in favour of Ms. Nixon, and holding that Rape Relief had failed to demonstrate any connection between being treated as a woman for one’s entire life and one’s capacity to be an effective volunteer at Rape Relief. 5 The Tribunal judged that Rape Relief’s primary purpose is to serve women, and with no dispute over the fact that Nixon is a woman (as reflected by her amended birth certificate) Rape Relief had discriminated by drawing a distinction between her and other women. Rape Relief sought judicial review of the Tribunal’s decision, and at the B.C. Supreme Court, the decision was overruled. The Court applied the discrimination analysis in Law v Canada6, and held that Rape Relief’s exclusion of Ms. Nixon was not discriminatory because she had failed to prove an injury to her dignity. The Court further found that her exclusion from Rape Relief did not prevent Ms. Nixon from participating in the cultural life of society because it was not an exclusion from the mainstream economic, social and cultural life of the province.7 Ms. Nixon appealed this decision at the B.C. Court of Appeal, which upheld the result below in favour of Rape Relief, although it rejected the incorporation of the Law test in the human rights context. The Court found that although Rape Relief’s policy of excluding transsexual women constituted discrimination under the B.C. Human Rights Code, the section 41 exemption permits a women’s service organization to discriminate against a sub-group of women, namely transsexual women, based on its own subjective wishes, because it had acted in good faith and established a connection between its exclusion of transsexual volunteers and its work in counseling female rape victims.8

Whatever side of the debate you may find more compelling, or even if you find yourself stuck on the fence, it is undeniable that this case is rife with questions about identity, exclusion and equality that, if the Supreme Court chooses to grapple with them, could change the legal landscape not just for gender variant and transsexual persons, but for anyone who suffers discrimination and files a human rights complaint in this country. My goal here is not to assert a preference for one side of the case or the other, but rather to highlight some of the issues raised by Nixon, particularly as they relate to identity and equality. That said, I must admit that if the respondent in this case were McDonald’s or Wal-Mart, instead of a women’s service organization, I, like many others, would likely have little trouble expressing my support for Ms. Nixon’s case. On the facts as they exist, however, a number of seemingly irreconcilable questions arise.

First and foremost, the Nixon case has sparked what is perhaps an unprecedented debate about the precise combination of social, psychological, and biological factors that constitute the category of “woman.” Is it primarily a matter of anatomy, in which case Kimberly Nixon’s post-surgical body and amended birth certificate qualify her as a woman, though she lived until her 30s as a man, or is it based on important lived experience, as Rape Relief contends? Rape Relief’s argument focuses on the fact that because Ms. Nixon lived and was socialized for a significant part of her life as a man, she lacks the relevant insights that are necessary to be an effective peer counselor to women who have been victimized by men. Ms. Nixon is not, according to Rape Relief, a peer to its clientele.

Rape Relief maintains a hard-won women-only space because it rightly believes that its clients, many of whom have been victimized by men, are more comfortable in this environment. An important part of preserving a women-only space is that volunteers and staff members be, and appear to be, women. Here is where things start to get sticky. How much of this case rests on Kimberly Nixon’s physical identity and appearance? Who is to say what a “real” woman – a woman who has been socialized as a girl and a woman her whole life – does or does not look like? Are those who qualify as “looking like women” simply subscribing to sexist constructs of how a woman should dress, wear her hair, walk and talk? However you might choose to answer these questions, the importance of maintaining Rape Relief’s women-only space is undeniable in ensuring the safety and wellbeing of its clientele, and maybe the security of those women trumps the fashion choices of others. As Rape Relief neither screens for “masculine-looking” women nor allows the participation of “feminine-looking” transsexuals, Ms. Nixon argues that its blanket transsexual exclusion policy is both under- and over-inclusive. All of this said, it is noteworthy that Ms. Nixon does, for all practical purposes, look like a woman… I think.

On another level, this is a case of dueling rights. Should individual rights triumph over group rights? Is Rape Relief’s clientele more worthy of protection than Kimberly Nixon? Although it is Ms. Nixon’s individual rights that are immediately at stake in the confines of this case, she has come to represent an entire community of gender variant people who suffer discrimination and harassment every day. Through the validation of Ms. Nixon’s individual rights, the door could be opened for the recognition of the human rights of all gender variant people, making Kimberly Nixon a veritable poster-child for Canadian trans-equality. One can only imagine the stress that more than 10 years of trials and appeals puts on one’s personal and professional life, and Nixon herself has stated that the drawn out trial has been difficult. “Every time there is another hearing,” said Nixon, “I lose another job because of the publicity.”9 Similarly, Rape Relief, a non-profit collective offering a variety of services including a crisis phone line, an emergency residential facility and ongoing support groups and peer counseling to women who have survived violence, has also been tied up in legal wrangling for the past decade. There is little doubt that the case has affected Rape Relief’s reputation, financial security and ability to offer critical services to women in its community. If the Supreme Court does decide to take on the Nixon case, I do not envy the judges who will be obliged to decide whose rights will triumph.

Finally, at the root of the legal conflict there lies a clash between formal and substantive understandings of equality. Substantive equality stands in contrast to formal equality in that it recognizes that differential treatment can at times promote equality because the accommodation of differences – the “essence of true equality”10 – frequently requires that distinctions be made. Ms. Nixon is arguing for the recognition of her sameness with non-transsexual women, advancing a formal equality approach where each individual – in this case every woman – is treated exactly the same despite real differences in their experiences of disadvantage. Rape Relief is requesting a substantive-equality handling of the case, taking into account the patterns of disadvantage and oppression in society and the particular context within which the unique facts of this case occur.

Allow me here to reiterate how much simpler this case might be had Kimberly Nixon been refused a job at McDonald’s instead of the opportunity to be a peer counselor at a women-only rape crisis centre. In the former scenario, as in many employment contexts where gender does not contribute to an individual’s ability to be an effective employee, McDonald’s has no right to discriminate against Ms. Nixon on the basis of her transsexual status. In a women-only space like Rape Relief - a space specifically created for the purpose of organizing against a gendered form of violence and oppression - gender does matter.11 Rape Relief argues that gendered life experience is relevant to the objectives and membership of the organization, and so its differential treatment of Ms. Nixon does not amount to discrimination. From a substantive equality perspective, would Nixon come down to a comparison of the disadvantage and oppression suffered by transsexual persons versus that suffered by women who have experienced rape and sexual assault? Such a “race to the bottom” among equality-seeking claimants is dignity-harming in and of itself, and presents an almost impossible balancing act without a clear winner no matter what the outcome.

All of the questions raised by Nixon v Vancouver Rape Relief offer no easy answers – and perhaps no definitive answers at all – making the possibility of the Supreme Court judges turning their minds to the issues both exciting and somewhat terrifying. What are the chances that the Supreme Court will hear Ms. Nixon’s case should she file for leave to appeal? Is the Court ready to wrestle with the legally problematized transsexual identity? My hunch says that if given the chance, the Court will not go there. The transsexual identity remains problematic for the mainstream, the body too complicated, the very possibility of recognizing and acknowledging the ultimate “other” too remote, particularly on the facts of Nixon. Ultimately, this case attests to the madness of our cultural rigidity. If children and adults were not jammed into pink or blue categories, with prescribed sets of feelings, behaviours and appearances, maybe gender variant individuals would not feel the need to alter their physical bodies to accord with the “norm” and society could acknowledge and respect a spectrum of identities and individuals.

1 [2005] BCJ No. 2647.
2 Ms. Nixon’s desire to apply for leave to appeal to the Supreme Court was announced in a number of forums, including: Lancaster House: Labour, Employment and Human Rights Law, online http://www.lancasterhouse.com/about/headlines_1.asp.
3 British Columbia Human Rights Code [RSBC 1996] Chapter 210, online http://www.qp.gov.bc.ca/statreg/stat/H/96210_01.htm.
4 British Columbia Human Rights Code [RSBC 1996] Chapter 210, online http://www.qp.gov.bc.ca/statreg/stat/H/96210_01.htm.
5 Nixon v Vancouver Rape Relief Society 2002 BCHRT 1.
6 [1999] 1 SCR 497.
7 Vancouver Rape Relief Society v Nixon [2003] BCSC 2899 at 154.
8 Vancouver Rape Relief Society v Nixon [2005] BCJ No.2647.
9 DisAbled Women’s Network Ontario (DAWN) http://dawn.thot.net/nixon_v_vrr.html
10 Andrews v Law Society of British Columbia, [1989] 1 SCR 143 at 169.
11 Christine Boyle, “The Anti-Discrimination Norm in Human Rights and Charter Law: Nixon v Vancouver Rape Relief” (2004) U.B.C. L. Rev 31 at 56.

HollaBack NYC: Sites of Resistance, Sousveillance, and Street Harassment
By: Jennifer Barrigar

May 16, 2006


On the afternoon of 19 August 2005, Thao Nguyen was taking the New York City subway back to her office when a man sat down across from her, began fondling himself, extracted his penis from his pants and began to masturbate. Nervous and wanting to feel safer, she took out her camera-enabled cellphone and eventually took a picture of the masturbator. He left the train at the next stop. Ms. Nguyen immediately reported the incident to a police officer.

She went further than that, however – she also posted the photo and an account of the incident on Flickr and Craigslist. On 26 August 2005, the New York Daily News carried an article on their front page which reproduced a (cropped) version of the photo and asked readers to call the NY Daily News if they recognized the man. Three days later, the News reported that over two dozen people had identified the man in the photograph as Dan Hoyt, a local raw foods restaurateur. The next day it added that six more reports of being flashed by the man in the photo had been received. It was also noted that Mr. Hoyt had been arrested in 1994 for unzipping and flashing at a New York subway station, had pled guilty and had been sentenced to two days of community service. Mr. Hoyt was eventually arrested and charged with public lewdness. He pled guilty and was sentenced to two years’ probation, with mandatory counselling.

Meanwhile, the blogging (and commuting) community had been energized by the effectiveness of Thao Nguyen’s actions. The HollaBack NYC site was created by the Artistic Evolucion collective "to expose and combat street harassment as well as provide an empowering forum in this struggle.”1 The FAQ was posted on 2 October 2005. The inaugural post to the site was on 3 October 2005 and read:

Here's the skinny--next time you're out and about and some cocky ass on a power trip whistles, hoots, or hollas--Just Holla back! Whip out your digicam, cameraphone, 35mm, (or sketchpad), and email us the photo. We'll post their ugly face for the whole world to see.
If you can't pull out a camera, or you don't have one on you, just send us a story and we'll post that too.2

HollaBack’s response to street harassment has received quick and global uptake. A European HollaBack site is expected to be up and running soon, and in the meantime the HollaBack NYC site reports an average of 1,000 hits daily, with visitors including womyn from Spain, Italy and India.3

Since I became aware of this, I’ve been intrigued by it. I think it provides a fascinating lens through which to view some of the issues we deal with on this project. I’m particularly interested in where we situate our analysis of a site like HollaBack NYC or actions like Nguyen’s. Sadly, there’s nowhere near the time and space to address them all here, but I hope to sketch out at least some of my response and hope to generate deeper discussion(s).

In an article in New York Magazine, Hoyt critiques both Nguyen and the dissemination of the photo on the Internet:

In his account, the perpetrator is Nguyen, who misread his intentions (he claims he was already mid-masturbation when she stepped onto the train) and then humiliated him by posting his picture on the Web. He says he didn’t even realize he’d been photographed. “Even so, I wouldn’t imagine somebody throwing it up on the Internet for millions of people and destroying your life like that,” he says. “It’s one thing to take it to the police. But on the Internet, I read a lot of people saying, ‘That was not too cool of her. That was really screwed up.’”
Hoyt believes that if he and Nguyen had only met under different circumstances, she might really like him. “You know, she’d go, ‘That guy’s pretty cool. He’s got this restaurant, and he’s fun,’” Hoyt says. “She’d probably want to go out with me.”

Hoyt seems almost to be suggesting that his behaviour should have been of no consequence to Nguyen – that his masturbation was a private affair which did not concern her. Looked at that way, her response was a “misreading” of his masturbation as somehow linked to her. The implication seems to be that the only right of response is by s/he who is personally attacked. By construing things this way, he sets himself up as the victim, unfairly exposed and exploited not just by womyn who “misunderstand” him but by the power of an increasingly technologized society.

James B. Rule has defined surveillance as “any systematic attention to a person’s life aimed at exerting influence over it.” Steve Mann speaks of it as meaning “to watch from above”.

The HollaBack NYC FAQ defines street harassment as:

…a form of sexual harassment that takes place in public spaces. At its core is a power dynamic that constantly reminds historically subordinated groups (women and LGBTQ folks, for example) of their vulnerability to assault in public spaces. Further, it reinforces the ubiquitous sexual objectification of these groups in everyday life.

It seems to me, putting these comments together, that street harassment may be read as a form of (or consistent with) surveillance – by being applied against the Other, it is hierarchical and aimed at those under; it exerts influence over womyn’s lives by reinforcing vulnerability and Otherness.

I’d like to suggest, then, that the HollaBack NYC project be understood as resistance to surveillance – a form of sousveillance if you will. That rather than being the victim of surveillance, Dan Hoyt was himself engaged in surveillance, part of a systemic surveillance directed at womyn and those perceived to be womyn.

As a tool of resistance, I think HollaBack NYC is exciting. I find myself returning to a quote from the August 29 New York Daily News article where another womyn who’d been flashed by Mr. Hoyt says “I just wanted to forget about the whole thing. I am glad someone had the wherewithal to do something about this.” I am pleased that HollaBack NYC is providing the wherewithal for womyn to name street harassment and begin to address it.

Steve Mann and Ian Kerr discussed equiveillance, the notion that the intersection of surveillance and sousveillance might create “some kind of equilibrium.” I question whether HollaBackNYC will (or can) create such a state. At the very least, I think there are some issues which must first be considered.

Steven Davis has expressed concern about the impact of sousveillance on third parties, as have others. It seems to me that the HollaBack NYC project mediates that concern as much as possible. Rather than a stream of ongoing surveillance, pictures (or stories, where pictures cannot safely be acquired) are of the harasser. Further, there has been a conscious decision by the HollaBack NYC moderators to retain the space as one of empowerment from, not power to. For example, the site has developed a strong anti-racism policy which mandates that the race of harassers or other racialized commentary not be part of the posted narrative. Where race is mentioned, it is expected that the necessity of doing so will be “clearly and constructively” explained.

While Cynthia Grant Bowman points out the universality of street harassment for womyn, she also recognizes that “women of different backgrounds may experience street harassment through the lens of different historical and personal experiences.”4 It strikes me that this is especially noteworthy on a site like HollaBack NYC because the womyn themselves are invisible – they are not in the photographs, they are taking the pictures, recounting the narrative rather than defined by it, turning the gaze back on men. This has strong potential for empowerment, allowing womyn the power to define their experience of harassment for themselves. At the same time, I am concerned that the anti-racism policy may have the effect of silencing these womyn and creating a homogenized “victim”, without any recognition of the particularities of race, class, sexual orientation, etc. which shape womyn’s experiences of street harassment.

I am also concerned about the safety of womyn who choose to “HollaBack” at their harassers. I was chilled by the entry for Friday 12 May 2006, where the man in the photo “ran after me and took a picture of the back of my head with his camera phone, wailing ‘now you can’t do anything!’”

I wonder about the economic and class implications of this strategy. HollaBack NYC is a response predicated on access to technology – access to cellphones to take pictures, access to computers to upload them, access by others to computers in order to see the pictures etc. Ultimately, this increases the digital divide, simultaneously creating a site/voice of resistance and then denying it to some members of the marginalized population(s).

I worry about the unregulated nature of such sites. The HollaBack FAQ insists that “what specifically counts as street harassment is determined by those who experience it.” While my feminist self initially rejoices in this definition and in the act of resistance that is HollaBack itself, I begin to wonder what measures there might be to stop the posting of pictures of or allegations about individuals for other reasons. The anti-racism policy notwithstanding, what about issues of systemic racism that may fuel one’s perception of something as a “threat”? What about other power disparities which shape the interaction between photographer and subject? Can exposure on HollaBack NYC create stigma such that mistaken posting is an issue? Will there be way(s) for individuals who are mistakenly identified to have their photos removed? How can an individual without internet access be aware of the mistaken posting and/or negotiate its removal? Is the value of the HollaBack resistance lessened by the risk of malicious or mistaken posting(s)?

Finally, I find myself uncomfortable with what this response implies about the ubiquity of technology and surveillance, that rather than seek to dismantle the existing surveillance, we respond with the imposition of another layer of veillance.

As Linda Richman says: talk amongst yourselves….

1 http://www.hollabacknyc.blogspot.com/2005/10/hollafaq.html.
2 http://hollabacknyc.blogspot.com/2005_10_01_hollabacknyc_archive.html.
3 http://www.womensenews.org/article.cfm?=aid=2734.
4 Cynthia Grant Bowman, “Street Harassment and the Informal Ghettoization of Women” (1993) 106 Harv. L. Rev. 517 at 534.

Jennifer Barrigar is an LL.M. candidate at the University of Ottawa and Legal Counsel, Office of the Privacy Commissioner of Canada. The opinions expressed in this article are personal and do not represent those of the Office of the Privacy Commissioner nor bind that Office in any way.
EULAs and the Geniuses of Uninformative Dissemination
By: Jeremy Clark

May 9, 2006


“[C]ontrol of the Western species of the human race seems to turn upon language. Anyone who has worked with language, from the devil on, has been in the business of spreading knowledge. They are not knowledge itself. Novelists, playwrights, philosophers, professors, teachers, journalists have no proprietary right over knowledge. They do not own it. They may have some training or some talent or both. They may have a great deal of both. They will still be no more than the geniuses of dissemination. That knowledge — once passed on as the mirror of creativity or as an intellectual argument or as the mechanisms of a skill or as just plain information — may lead to increased understanding. Or it may not. So be it.” – John Ralston Saul, The Unconscious Civilization

In one unintentional way, Sony’s decision to secure a series of audio CDs with a very nasty piece of digital rights management (DRM) last fall was a partial victory for anti-DRM activists. This misstep on Sony’s part effectively catapulted a niche topic of concern to international attention and into the collective consciousness of the informed public, where it lingered for a week or two and then slipped into the chambers of recent history. The mainstream coverage largely focused, and rightly so, on how Sony’s DRM compromised the security, anonymity, and control of those who unwittingly inserted one of these audio CDs into their Windows machine. The DRM installed itself as a rootkit — a technique that allows software to run invisibly on a system. Worse still, the DRM did not merely install itself as a rootkit; it created an open mechanism that allowed itself, and by extension any other properly constructed piece of software, to run invisibly. In other words, it left an open security hole through which malware could slip and become invisible to the majority of anti-virus and anti-spyware utilities protecting our systems. Once installed, the DRM would phone home each time the CD was inserted, and no method for uninstalling it was originally offered.

I will not detail each twist and turn of the subsequent events that eventually provoked a recall on the CDs, and a series of lawsuits. I refer those interested to the blog of Mark Russinovich who originally discovered the rootkit and to the Wikipedia article. I have denoted this specific case as a partial victory for those who oppose DRM because while it temporarily caught mainstream attention and hopefully left an impression, it did not do much to impede the relentless movement of content creators towards DRM — it only caused them to adopt subtler albeit equally restrictive technologies.

However, there is another side to the Sony debacle that I want to focus on: user consent. Like most pieces of software, Sony’s DRM included an end-user licence agreement (EULA; see http://www.eff.org/wp/eula.php): that daunting piece of legalese that ends with an “I Agree” button. In this case, not only did Sony’s EULA fail to disclose the rootkit, the phoning home, or the absence of an uninstall option, but the DRM installed itself before even displaying the EULA. These issues were the subject of an Electronic Frontier Foundation lawsuit, which was eventually settled out of court. While I fully applaud the efforts of the EFF, I also have an uneasy confession to make. Even if companies like Sony did fully disclose and detail all the undesirable behaviours of their software in a proper EULA, I would never know, because I never read them. And I know I am not alone.

As these events transpired, I recalled an opinion piece I read a few years ago in Wired by Mark Rasch. Rasch begins with an anecdote: “I have a recurring nightmare. Microsoft CEO Steve Ballmer shows up on my doorstep demanding my left kidney, claiming that I agreed to this in some "clickwrap" contract” [link mine]. While an attorney and security guru himself, Rasch flatly admits to never reading online privacy policies despite writing them for clients. This confession appears to be part of a widely held consensus. According to internet legend, the software vendor PC Pitstop once buried a potential monetary reward in one of its EULAs for any claimant who responded through a given email address. 4 months and 3000 downloads later, the first person finally wrote in and their diligence was rewarded with a $1000 cheque.

Companies can be surprisingly candid in their EULAs, shamelessly detailing in plain language their intention of installing bundled tracking software, displaying all manner of pop-up ads, or phoning home with user information that can be sold to third parties. Other companies, however, purposely obfuscate the pertinent information with impervious legalese, and many EULAs run to multiple pages inside a tiny window that cannot be resized or copied to the clipboard. As long as we consumers remain complicit in this system, and continue to unintentionally consent to terms of service we make no effort to understand, we elevate software vendors to the status of geniuses of uninformative dissemination. The information communicated through EULAs falls squarely into the latter half of John Ralston Saul’s distinction — knowledge that does not increase understanding.

In his op-ed, Mark Rasch turns to technology to aid consumer understanding. Specifically, he calls for a law robot that can be programmed with user preferences and process a licence or policy on a user’s behalf. Now suppress, for a moment, any visions of an artificially intelligent bot capable of comprehending a legal document, because that technology is still far in our future. Other options exist. One is to pressure vendors into offering a machine-readable summary of their contracts and policies. And ground has already been broken on this front by the Platform for Privacy Preferences (P3P).

P3P was initiated in 1997 by the World Wide Web Consortium (W3C) with the objective of developing a standardized syntax for encoding machine-readable privacy policies for web services. P3P uses a versatile mark-up language called XML; anyone reading this blog entry through an RSS feed is already making use of XML. A P3P policy has a set of predefined disclosures, and a company must make all that are applicable to its policy. The absence of any disclosure presumes the action is never taken. This transforms the nature of the policy from a one-way broadcast into a response to predetermined questions. It disempowers the vendors from being geniuses of dissemination who push their carefully constructed terms of service onto consumers, and empowers the user to pull understandable information from the vendor.
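
To make that pull model concrete, here is a minimal sketch of what such a machine-readable policy can look like and how software can read it. The element names are loosely modeled on P3P 1.0's vocabulary but heavily simplified (the real schema is namespaced and far richer), and the Python below is purely illustrative, not any official P3P toolkit:

```python
import xml.etree.ElementTree as ET

# A toy policy loosely modeled on P3P 1.0's vocabulary. Element names
# are simplified for illustration; the real P3P schema is richer.
POLICY = """
<POLICY name="example">
  <ENTITY>Example Corp</ENTITY>
  <STATEMENT>
    <PURPOSE><current/><admin/></PURPOSE>
    <RECIPIENT><ours/></RECIPIENT>
    <RETENTION><stated-purpose/></RETENTION>
    <DATA-GROUP>
      <DATA ref="#user.email"/>
      <DATA ref="#dynamic.clickstream"/>
    </DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

def disclosed_data(policy_xml: str) -> set:
    """Return the data categories a policy explicitly discloses.
    Under P3P's closed-world reading, any category absent from this
    set is presumed never to be collected."""
    root = ET.fromstring(policy_xml)
    return {d.get("ref") for d in root.iter("DATA")}

print(sorted(disclosed_data(POLICY)))
# ['#dynamic.clickstream', '#user.email']
```

Because the disclosures are drawn from a predefined list, a program (rather than a weary human) can answer the question "does this site collect my e-mail address?" mechanically.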

From a technological perspective, a P3P policy is very elegant. It uses a hierarchical tree of assertions that require the web service to disclose its identity, the methods that are open for resolving disputes concerning the policy, and what gathered information can be later accessed by the user. It then requires the web service to explicitly detail every type of information that is retained (from a comprehensive and predefined list), what purpose the information will be used for, whom the information can be disclosed to, and how long it will be retained. A user may then specify her preferences to a mediating agent such as Privacy Bird or use a P3P-enabled search engine which will analyze the privacy policies of each website she visits before she actually connects to the service itself, and report any discrepancies between the site and her preferences (or if the site does not have a P3P policy at all).
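
In caricature, a mediating agent of the Privacy Bird sort reduces to a set comparison: warn when the policy discloses something the user refuses to share, or when no machine-readable policy exists at all. This is a toy sketch of that idea under my own simplified data categories, not Privacy Bird's actual logic:

```python
from typing import List, Optional, Set

def check_policy(disclosed: Optional[Set[str]], refused: Set[str]) -> List[str]:
    """Compare a site's disclosed data categories against the user's
    refused categories, returning human-readable warnings."""
    if disclosed is None:
        # The site publishes no machine-readable policy at all.
        return ["site publishes no machine-readable policy"]
    # Any overlap between what the site collects and what the user
    # refuses becomes a warning, reported in a stable order.
    return [f"policy collects {ref}" for ref in sorted(disclosed & refused)]

warnings = check_policy({"#user.email", "#dynamic.clickstream"},
                        refused={"#dynamic.clickstream"})
print(warnings)  # ['policy collects #dynamic.clickstream']
```

The point of the sketch is that once policies are pre-structured answers to predetermined questions, the comparison against a user's preferences is trivial to automate.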

The syntax of P3P could easily be modified to handle EULAs. As a rough sketch, consider anchoring the assertions in two categories: monitor and install. Because spyware monitors user traffic, the monitor category would essentially inherit all the P3P assertions specifying the information retained. It could also specify, with an action assertion, how the data is obtained (keystrokes, data scraping, packet sniffing, data interception, et cetera) and how often it is obtained (only when the program runs, as long as the operating system is running, one time only, et cetera). The install category would disclose any third-party software that is bundled with the principal software and reference that software’s EULA. It would also include assertions concerning the actions taken by the software (rootkit, displays pop-ups, URL redirects, et cetera) and an assertion of whether, and how, it can be uninstalled.
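
As a purely hypothetical illustration of that rough sketch, a EULA policy under the monitor/install split might be encoded like this. None of these element names exist in P3P or any other standard, and the product and URL are invented for the example:

```python
import xml.etree.ElementTree as ET

# An entirely hypothetical EULA extension of P3P-style syntax,
# following the monitor/install split sketched above. All element
# names, the software title, and the URL are invented for this example.
EULA_POLICY = """
<EULA-POLICY software="ExamplePlayer 1.0">
  <MONITOR>
    <ACTION method="packet-sniffing" frequency="while-running"/>
    <DATA ref="#dynamic.clickstream"/>
  </MONITOR>
  <INSTALL>
    <BUNDLED name="AdServer Lite" eula="http://example.com/adserver-eula"/>
    <ACTION type="displays-pop-ups"/>
    <UNINSTALL method="add-remove-programs"/>
  </INSTALL>
</EULA-POLICY>
"""

root = ET.fromstring(EULA_POLICY)

# A user agent could mechanically extract, say, every piece of
# bundled third-party software and how the package is uninstalled.
bundled = [b.get("name") for b in root.iter("BUNDLED")]
uninstall = root.find("INSTALL/UNINSTALL").get("method")
print(bundled)    # ['AdServer Lite']
print(uninstall)  # add-remove-programs
```

A missing UNINSTALL element would then, on P3P's closed-world reading, amount to an admission that the software cannot be removed — exactly the disclosure Sony's EULA avoided making.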

Logistically, porting P3P to handle EULAs is not as simple. The legal status of EULAs is ambiguous, and the enforceability of a machine-readable version is something I am not qualified to speculate on. There was also a need to enforce the accuracy of P3P and natural-language privacy policies, resulting in a group of non-profit seal programs that audit and certify web services’ privacy practices. Expanding the progress made with privacy policies to EULAs would require similar programs, and the process demands a massive collaboration between computer scientists, lawyers and other disciplines. ID Trail represents a rare occasion when all the right people are sitting at the same table, and as a result I look forward to feedback concerning this problem from all angles: implementation ideas, critiques of its viability, opinions on its legality, and speculation on vendors’ incentives to comply. Would such an undertaking be in the public interest? Is it needed? Could it be effective?

Jeremy Clark is an MASc student at the University of Ottawa.
Privacy is Changing Outsourcing in Canada
By: Terry McQuay

April 25, 2006


Outsourcing in Canada is changing because of privacy laws, changes in government outsourcing policies and business concerns resulting from the USA PATRIOT Act. Increasingly, Canadian service providers are finding themselves with a competitive advantage simply because they keep their customers’ data in Canada. Conversely, US-based service providers are finding themselves at a disadvantage, often scrambling to move their data processing to Canada.


Privacy laws in Canada provide consumers with the ability to file complaints against organizations located in Canada with provincial and/or federal privacy commissioners’ offices. Complaints typically result from the real or perceived mishandling of a consumer’s personal information by the organization, but consumers can file complaints even if they are not directly affected by the privacy issue or breach.

Privacy laws also provide the privacy commissioners’ offices with the power to investigate consumer complaints and an obligation to identify, expose and where possible influence privacy issues that impact Canadians. Over the last year, privacy commissioners in Canada have increased their focus on cross-border transfers of personal information. This privacy issue results from personal information being sent to locations that don’t have the same level of legislated privacy protections as Canada does.

Although offshore transfers to countries like India (which has no privacy laws) might seem like the logical target for this increased focus on cross-border transfers of information, they’re not. Organizations that outsource to India typically have contractual and other means to secure personal information, thus providing more than adequate privacy protections. The focus is on the USA. The USA PATRIOT Act is considered by some to be anti-privacy because it provides US federal authorities seemingly unfettered access to any personal information held by US firms, whether it concerns US citizens, Canadians, or anyone else.

Cross-Border Privacy Concerns

Privacy laws provide consumers the ability to complain, and provide privacy commissioners the powers to investigate these complaints. But do consumers really care if their personal information is transferred to the USA? As a Canadian, ask yourself these questions:

“Would I like my personal information reviewed by a US authority, like the FBI?”
“Would I like my purchasing habits, my medical information and my resume accumulated and accessed by US government agencies?”

If you answered ‘no’ to these questions, you are not alone. According to a survey published in June 2005, and conducted by EKOS Research Associates on behalf of the Privacy Commissioner of Canada, 64% of Canadians have serious concerns about companies transferring their personal information to the US.

Privacy Commissioners Influence Corporate Outsourcing Policies

Cross-border transfers of personal information are a major concern of privacy commissioners across Canada, and they have taken many steps to build the awareness of this issue. The Office of the Privacy Commissioner of Canada has stated on several occasions:

“At the very least, a company in Canada that outsources information processing in this way should notify its customers that the information may be available to the US government or its agencies under a lawful order made in that country.”

In a recent precedent-setting finding from the federal commissioner’s office concerning a complaint about an organization’s transfer of personal information outside of Canada, the Commissioner stated that the organization must comply with the Personal Information Protection and Electronic Documents Act (PIPEDA), the federal law that governs how corporations in Canada handle customer personal information, including information transferred to the US.

Principle 4.1.3 of Schedule 1 states:

“An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.”

Principle 4.8 states:

“An organization shall make readily available to individuals specific information about its policies and practices relating to the management of personal information.”

To comply with PIPEDA, the Commissioner’s finding states:

“What the Act does demand is that organizations be transparent about their personal information handling practices and protect customer personal information in the hands of foreign-based third-party service providers to the extent possible by contractual means.”

Transparency requires providing notice to consumers that their information will be located outside of Canada. Thus, organizations have only two viable options:

1. Provide notice to consumers that their personal information is being transferred to the US and is subject to US laws; or
2. Keep the data in Canada.

Outsourcing Rules are Changing

Organizations are avoiding this issue completely by keeping personal data in Canada. The location of the data is now one of the decision factors when selecting a new service provider for an outsourcing contract. Many, if not most, government organizations are demanding that personal information remain in Canada. Banks, insurance companies and healthcare providers are pressuring their current suppliers to keep personal information in Canada, and are selecting new suppliers that keep their data in Canada. Privacy has changed outsourcing in Canada.

Competitive Advantage for Canadian Service Providers

Canadian companies are finding they have a competitive advantage simply because the data remains in Canada. One such company is ThinData, a Canadian e-marketing solutions provider. Wayne Carrigan, VP of Client Services at ThinData, explains:

“We are a Canadian company and we have always processed our customers’ data in Canada. We never expected privacy laws and concerns about the USA PATRIOT Act would provide us a competitive advantage, but it has.”

As for customer demand, Wayne states:

“We are increasingly responding to proposal requests that specifically ask if we keep clients’ data in Canada. Our customers have stated that one of the reasons they have chosen ThinData is they want their data to remain in Canada”.

Similarly, Gabe Mazzarolo, Chief Privacy Officer of Workopolis, Canada’s biggest job site, states:

“Almost every piece of information contained in an individual’s resume is personal information. Both our corporate clients and jobseekers feel more secure knowing their information remains in Canada.”

Nymity, a leading privacy research firm, has seen substantial growth in both its training and its subscription services as both US and Canadian organizations look for pragmatic solutions to mitigate the impact of privacy on outsourcing, or for a means to capitalize on this privacy issue. Jin Shin, Nymity’s General Counsel, explains:

“Outsourcing personal information to the US can be done in compliance with PIPEDA, but doing so doesn’t mitigate all privacy risks, and in some cases it introduces new privacy risks. For example, although providing Notice is required, it can have unanticipated results. A few of Nymity’s customers have provided Notice that resulted in complaints to the Federal Privacy Commissioner’s office.”

Linda Drysdale, a privacy expert at PricewaterhouseCoopers, states:

“We foresee huge growth in service providers conducting audits against the new Generally Accepted Privacy Principles (GAPP) from the AICPA/CICA, partially due to their customers’ concerns related to transfers of personal information outside of Canada.”


Privacy is changing outsourcing in Canada. Government policies virtually mandate that personal data remain in Canada, and corporate Canada is finding it best simply to avoid the issue altogether by keeping its customers’ data in Canada.

The bottom line for service providers is this: Canadian service providers have a competitive advantage, while US service providers have a business risk.

Terry McQuay is President of Nymity Inc., a privacy research firm that provides privacy training, risk mitigation subscription solutions and research services for corporations and not-for-profit organizations.
Anonymity As a Way of Managing Stigma: The Case of Narcotics Anonymous
By: Catarina Frois

April 19, 2006


I would like to take this opportunity to talk a little about the use of anonymity as a way of managing stigma, specifically in the case of the association known as Narcotics Anonymous. The saying “once a junkie, always a junkie”, used by NA members, is closely related to the three ideas I address here: stigma, anonymity and addiction. Narcotics Anonymous is a non-professional self-help association conceived for individuals with drug-related problems. It follows a model known as the 12-Step program, consisting of a series of stages or principles which individuals must follow if they are to engage successfully in a process of abstinence from drugs, instilling in members a “life philosophy” that will be useful to them in all areas of life.

The oldest record I found of Narcotics Anonymous in Portugal dates back to 1983: the first group was started in Lisbon, and today, according to data made available on the association’s Portuguese website, there are 164 groups distributed throughout the country. The research included here relates specifically to a nine-month period of participant observation in two groups in the Lisbon area. Each of these groups had an average of 20 members, with ages ranging from 25 to 45, and a ratio of 60% men to 40% women.

Members of this association describe themselves as “addicts”, that is, people suffering from an illness called addiction, which is not merely a dependency on toxic substances and alcohol but a disease with underlying behavioral problems, of which obsessive-compulsive and self-destructive behavior are symptoms. The 1st Step indicates that they believe abstinence is only possible when someone is ready, on the one hand, to acknowledge that they are powerless over their drug use and, on the other hand, to recognize themselves as an addict.

Therapy is based mainly on the exchange of common experiences among participants during gatherings arranged for this purpose – the meetings. These events last approximately 90 minutes, during which those gathered speak of their drug-related problems, past and present. As an association made up exclusively of people afflicted by the same problem, not of professionals, they act on the conviction that “the therapeutic value offered by one addict to another is irreplaceable”, and thus members experience what they call “identification”, free of judgment and prejudice. Everyone present admits having lost control of their lives due to drugs and their need to seek a solution to this problem through the sharing of their experience.

If, initially, people seeking help think of themselves as failures, as “bad” people with no principles, as soon as they become acquainted with NA philosophy and with other people sharing the same problem, they realize that they were not responsible for their behavior under the influence of drugs. They are no longer junkies; they are people with a disease. At this point there is a whole transformation in the way members define themselves and their relationship with others. This starts from the first moment a person introduces him/herself at a meeting, stating his/her first name and acknowledging the situation: “Hello, my name is Pedro and I am an addict”.

The idea of illness is, to some extent, a way of denying responsibility for past actions and releasing the burden of shame and guilt that everyone points to as the feelings prevalent when they first joined the association. According to NA philosophy, drug abuse and addiction are in fact two different concepts. For NA members, drug abuse refers to a person who is still actively using toxic substances and who may or may not be an addict, since addiction implies an illness that is more than just a question of drug use. An addict’s obsessive-compulsive behavior reveals itself in different areas of a person’s life, such as work, relationships, etc.

A drug abuser is a junkie, someone whom society rejects and condemns; the term has an immediate negative connotation. An addict, on the other hand, is a sick person who has no responsibility for his conduct “under the influence” but who has a responsibility to keep clear of that influence. How does this distinction relate to stigma and anonymity? Erving Goffman (1963) speaks of stigma as a condition of difference and distinguishes two types of stigmatized persons: the “discredited” and the “discreditable”. The discredited is someone bearing a visible stigma, evident at first glance, which, according to this author, has an immediate influence on the way interaction occurs.

This is the case, for example, of someone with a visible physical deformity, or of the junkie we see begging on the sidewalk. The second type, the discreditable, is someone whose stigma is not immediately visible to others, and who only becomes “discredited” from the moment he reveals his condition. This is the case of an “addict” attending NA.

For NA members, a recovering addict will reveal his/her stigma without restraint only within a meeting; outside the group he/she will omit the problem, including his/her membership. This is where anonymity, the last idea mentioned in the opening paragraph, plays its role.

Anonymity is one of the rules of this association, and it is observed both within meetings and outside of them, as a way of protecting the legal identity of individuals. As such, within a meeting members identify themselves merely as addicts, concealing all other identifying elements – family name, address, profession, etc. – and outside the meetings members keep their own membership, as well as others’, anonymous. Revealing their membership to non-members is tantamount to revealing their stigma.

The decision to do this is referred to as “breaking anonymity”; in other words, revealing their identity as someone who has had a drug-related problem makes their stigma visible to others and exposes them to judgments made on the basis of this information. This brings us back to the difference between drug abuse and addiction. NA members share the idea that other people view drug users as “criminals”, as untrustworthy people who are capable of acting in bad faith and incapable of change: “Once a junkie, always a junkie”. Because of the weight this stigma bears on the image of drug users, as soon as someone breaks their anonymity and reveals themselves as somebody with a drug problem, they will immediately be identified by others as a “junkie”.

Anonymity is therefore a choice, a useful instrument for managing stigma. In such a context, a person is free to choose what is revealed, and who it is revealed to. Thus, anonymity is a kind of empowerment for those who use it.

Catarina Frois is a PhD student in Anthropology at the Institute of Social Sciences, Lisbon University, Portugal.
Myspace: a network without borders
By: Melissa Cheater

April 11, 2006


MySpace is the current hot little number in the world of online social networking sites, boasting 66 million members and growing. It is ranked 8th on alexa.com’s global top five hundred websites, and 5th on the English-language top five hundred. What started with sixdegrees.com (no longer online) led to Friendster, and then to the current groundbreakers, MySpace and Facebook. There is no need to get nitty-gritty about all the little distinctions between the various OSN (online social network) services that have come and gone over the years. The important facts to remember are that anyone with an email account can register on MySpace, and that Facebook (ranked 53rd in the global five hundred) is only open to individuals with email accounts on accepted university mail servers. Friendster is considered a past trend in North America, having faded after administration/user conflicts and a period of technological trouble, but it still claims 27 million accounts. Facebook rests at 7 million participants. At more than twice the population of Canada, MySpace is by far in the lead and has a significance all its own.

Social networking sites are characterized by a “self-descriptive profile” featuring photos, personal information and a public display of “personal connections” (Donath & boyd). Though OSN websites have risen and fallen over the years, the popularity of this type of service has only increased. Offline, a study by Wellman observed that “a typical personal network included 3-6 close and intimate ties, 5-15 less close but still significant and active ties, and about 1000 more distant acquaintances” (Wellman in Donath & boyd 80). Networking sites are very efficient at allowing users to maintain an increased number of weak ties and an overall larger network of connections (Gross & Acquisti 73; Donath & boyd 80). Granovetter’s “Strength of Weak Ties” describes how a weak tie should not be undervalued as a “trivial acquaintance tie but rather a crucial bridge between the two densely knit clumps of close friends,” in a context where otherwise these “clumps” would have no connection whatsoever and would be isolated from each other (Granovetter 202). By connecting different groups, weak ties give access to the resources and opportunities available in each. In terms of privacy, a social network structure supporting an inflated number of weak ties (users boast anywhere from one to thousands of MySpace “friends”) is an environment where a huge amount of information moves very freely – and in a network of 66 million individuals, this can be quite significant. (On Monday, April 10, Tom had 69,998,034 friends connected to his profile – and while every new member is given Tom as a friend, not all of them choose to keep him on their friend lists. This would put MySpace membership somewhere above 69,998,034.) If gossip and rumour are considered social concerns in an offline network of 1000 connections (Wellman), imagine the consequences in a network of 70 million paired with an increased number of weak, “bridging” ties.

danah boyd’s concept of the “super public” is also very relevant to this discussion. It is recognized that in our daily lives we actively manage our identity, performing different faces in different situations (Goffman). We work to maintain our various faces in separate publics, and to avoid overlapping these performances. boyd proposes that as myspace.com shifts from a niche service for musicians to a mainstream community, a super public is emerging. Where else can we find a context in which we would present the same face so openly to such a large body of individuals? Previous network sites have included features that allow members to adjust how visible their profile is to different degrees of connection. For example, Donath and boyd discuss a situation where a teacher with a Friendster account was confronted with students from her classes adding her as a “friendster”, and had to decide whether she was comfortable with students being able to view the profile she had created with friends in mind. Friendster allowed her to set who was able to view her profile, but this option is not offered by MySpace. MySpace, in fact, has no privacy options available for adult users.

Acquisti & Gross note that while offline ties or connections can be “loosely categorized as weak or strong,” they are actually “extremely diverse in terms of how close and intimate a subject perceives a relation to be. Online social network, on the other side, often reduce these nuanced connections to simplistic binary relations: ‘friend or not’” (73). Nowhere is this truer than on MySpace. In the absence of privacy settings, the only way to deny a member complete access to your MySpace profile is to deny their friendship – and even then, they can still view all your content (except for blogs posted as private or “friends only”).

Those of us who were present at the SSHRC site visit in February might remember Joel Reidenberg’s question about MySpace: how could he witness his son’s (or any other member’s) behaviour within the network without explicit permission? All you need to start surfing MySpace is a membership; you don’t need any friends. This is one of the primary differences between MySpace and Facebook (Facebook was the topic of a talk given by Alessandro Acquisti). While MySpace allows anyone with an email address to start an account, only emails from approved university domains can start accounts on Facebook – and you can only freely “lurk” people who attend your specific school. Facebook also has a variety of privacy settings, which Acquisti finds are rarely used. Anyone, even without a membership, can click through the MySpace network viewing almost everything. Membership gives you access to individuals’ photo galleries and blogs. Being someone’s “friend” gives you permission to leave a public comment on their profile page, and will also cause all of their “bulletin” broadcast messages to be listed on your MySpace console page.

Users are given the option of making posted photos entirely private or entirely public (no middle ground). A setting is available that allows members to screen public comments before they are posted on their profile for everyone else to see. Individuals under 16 are able to create “private profiles” so that their content is only available to “friends”; however, the individual’s display photo, name, age and location information are still publicly displayed.

Beyond the clashing of “publics” into a super public, and the inability to control how visible your profile is to the other 66 million members of the site, there are further privacy concerns given how much information users tend to disclose on their personal profiles. This is a phenomenon seen on most online social network sites, but swelling the potential network to ten times the average size of other similar services makes the situation a little more significant in the case of MySpace.

As I browse through the MySpace directory (publicly available without an account), I notice that almost every member has opted to upload a display photo. The vast majority of these photos appear to include the individual him/herself and clearly show their face. Most members seem to prefer presenting themselves with real, or realistic, first names. Clicking through the network of profiles reveals each page filled (to the limits, in some cases) with endless lists of favourite movies, books and music, age, sexual orientation, hometown, current town, motivation for joining MySpace, who they’d like to meet, and open-ended fields such as “about me” where users type out mini (and sometimes lengthy) diatribes about what makes them “them” and express whatever parts of their identity aren’t covered by the previous categories. In light of the discussion put forth by Jackie Strandberg, “Giving it up for free: Teens, Blogs, and Marketers’ Lucky Break,” MySpace seems not only to contain a similar wealth of information just asking to be exploited, but also presents it in a standardized series of tables and headings that can only facilitate the process. “dbickett” posts on the Kuro5hin website about the many technological flaws of MySpace that leave users open to serious privacy and security breaches caused by loopholes in the site’s coding, leaving the submitted information further open to violation.

Data mining, however, is not the concern that the media are warning us about. A Google News search on MySpace returns almost 5500 results, most of which are on the topic of youth safety and the dangers of strangers online. Catherine Saillant of the LA Times starts her article with the following:

I've covered murders, grisly accidents, airplanes falling out of the sky and, occasionally, dirty politics.
But in nearly two decades of journalism, nothing has made my insides churn like seeing what my 13-year-old daughter and her friends are up to on MySpace.com.

And just what was her daughter up to that led to the loss of her MySpace privileges? “Giving a one-fingered salute.” This comparison might seem extreme, but in fact this is the tune of most mainstream media coverage of the MySpace phenomenon. In March, the media were flooded with accusations that using MySpace had led to the abduction of two teenage girls. Interestingly enough, danah boyd’s interview with Bill O’Reilly – one of television’s most conservative journalists – presented a less loaded portrayal of the website. But maybe this could be connected to FOX News’ parent organization, News Corp., having purchased myspace.com.

So, is MySpace significant to those of us interested in privacy – socially, technologically or legally? I know my opinion, but I might be biased as a self-proclaimed MySpace addict. Whether or not MySpace lasts, it is certainly here for the moment. It might just be a fun way to keep in touch and up to date on your friends, but it’s not just you, me and Joe who are watching. MySpace isn’t just self-expression among friends; it has recently become a form of legal surveillance.

A year of thank you’s to Dr. Jacquelyn Burkell who has given me advice, experience, and encouragement (through the Anonequity project, on this ID Trail Mix, and in my own studies as my undergrad comes to a close). And to everyone that has listened to me prattle about myspace over the past few months, it’s almost over!
Using the right lenses for developments in identity management
By: Dr. Miriam Lips

April 4, 2006


Many of you may have noticed that an important Bill for the future of the UK central government’s identity management policy has recently passed an important hurdle on the way to implementation. Having received Royal Assent after being bounced between the House of Commons and the House of Lords several times, the UK Identity Cards Bill will now become law. The UK central government aims to introduce a national ID card containing three biometric identifiers, together with a National Identity Register acting as a central database in which a range of details about individuals will be stored. After a political tussle in which the House of Commons voted for the ID cards to be compulsory whilst the House of Lords continually voted for the cards to be kept voluntary, the Lords offered a compromise: anyone renewing their passport will have details put onto the National Identity Register but will not be forced to have an ID card until 2010. One reason for the compromise is that 2010 will be after the next general election in the UK: the Conservatives claim that if they gain power at the next vote they will look to abandon the ID card scheme.

As things stand, every UK citizen over the age of 16 who applies for a new passport from 2008 will have details added to the National Identity Register, including biometric information. The first ID cards will be issued to passport applicants in 2009. The intention is that ID cards may be used as travel documents within the EU, meaning that passports might not be needed. Those who never apply for a passport will not need to have an ID card, but will be able to apply for a ‘stand-alone’ ID card if desired. Foreign nationals legally residing in the UK will also have details entered onto the Register, and will be issued a card that acts as a residence permit. Research findings show that UK citizens are generally supportive of a national ID card (Dutton et al, 2005, p.114; Home Office, 2003; Detica, 2004), or even consider its introduction inevitable (Cragg Ross Dawson, 2004, p.6).

The UK government has defended its proposals on a variety of grounds, including prevention of benefit fraud, prevention of terrorism, prevention of identity theft and authentication in e-government services. Besides a whole range of e-government applications, it believes the cards will be used by a number of different organisations, such as banks, Royal Mail, libraries, video/DVD rental companies, mobile and fixed-line communications service providers, travel agencies, airlines, higher education institutions, retailers, property rental companies and vehicle rental companies. To further facilitate this development the government will provide identity verification services for accredited organisations to check an individual’s identity, for instance when opening a bank account or registering with a GP.

Critical voices in the UK point to seemingly unrealistic technical expectations of this ID card scheme. Neither the major contractors nor the government have shown themselves capable of organising and implementing an outsourced IT scheme on this scale: no country has attempted to use biometric technologies to register a population the size of the UK’s (The LSE, 2005). The proposed requirement for 100 per cent accuracy seems unrealistic: has there ever been an identification system which is 100 per cent accurate? (Neville-Jones, 2005). Trials of the card scheme have demonstrated that a substantial number of specific groups of the UK general population (e.g. disabled people) may not be able to enrol on biometrics-based verification schemes (UK Passport Service Biometrics Enrolment Trial Report, 2005). A critical voice from industry holds that ‘a national ID card for the UK is overly ambitious, extremely expensive and will not be a panacea against terrorism or fraud, although it will make a company like mine very happy’ (Tavano, 2005; a biometrics specialist for Unisys, one of the companies considering bidding for contracts, quoted in The Guardian, 21 October 2005). And a collective group of LSE academics argues that the government proposals for a secure national identity system are too complex, technically unsafe, overly prescriptive, massively more costly than the government is itself estimating, and lack a foundation of public trust and confidence (The LSE, 2005, p.3).

Looking at the UK national ID card debate from the academic “ivory tower”, this debate seems illustrative of the way in which identity management (IDM) issues have been tackled by governments so far. Optimal security, technical reliability, ID “theft” (ID theft as a concept has only emerged recently; the theft or fraudulent use of ID documents, however, has existed for a long time), privacy, public safety, and accuracy have repeatedly been important topics in public decision making about personal identification and authentication systems on many occasions in the past. This debate therefore is not a new one emerging in the current era, but one that can be observed regularly in many national public decision-making arenas since the implementation of the paper-based passport system several centuries ago. Interestingly, through time, there have not been notable changes in the use of the passport as an authentication system in various service-related procedures between government and citizens.

This similarity in restricted, mainly technically focused IDM topics may also explain the current ease with which governments are trying to copy ID card systems or authentication systems from ‘best practices’ available in other countries, with the Belgian eID card as a clear favourite at present. From a technical perspective new forms of personal identification, authentication and IDM seem to be acknowledged as enhanced technical ‘solutions’ to be used in similar identification and authentication practices compared to the past.

However, in the UK context some critics have pointed to the overemphasis in the public debate on the visible, technical means of identification proposed by the UK government, the ID card itself, and, with that, the lack of public attention to the more invisible aspect of how citizens’ data will be handled by the UK government (e.g. Davies, 2005, p.38; the UK House of Lords Constitution Select Committee). It is this particular insight that seems to trigger some important questions. What empirical understanding do we actually have of the implementation and use of new forms of personal identification, authentication and IDM in citizen – government relationships? Has the UK been engaged in the right public debate so far to be able to effectively address the more fundamental question of potential change in citizen – government relationships due to new IDM means and forms, namely potential change in important institutions in the public domain, such as citizenship?

The history of the use of the passport for instance shows us that personal identification procedures especially changed during moments of societal ‘crisis’, such as the French Revolution, the First World War and the Second World War (Torpey, 2000; Agar, 2003). Although the authentication system itself, the paper-based passport, more or less stayed the same through time, the frequency and intensity of its use as well as the officials executing the authentication process usually changed during these periods of crisis. A similar effect can be observed in more recent times after the events of 9/11 and the London bombings.

By using a historical perspective it is very interesting to see the changing meanings, uses, and values attached through time to the same technical means and process of personal identification: the passport. For instance, the first passports, and passport controls for that matter, were not so much used to regulate citizens’ access to spaces beyond their home country, as we are used to today, but to prevent people from leaving their home territory. Consequently, citizens leaving their Kingdom (i.e. under the old regime in France) were required to be in possession of a passport authorising them to do so. The main purpose of these documentary requirements was to forestall any undesired migration to the cities, especially Paris (Torpey, 2000, p.21).

Somewhat later, in early 19th-century Prussia, the practice could be found whereby incoming travellers were provided with a passport by the receiving state rather than by the state of the traveller’s origin. These passports were no longer issued by local authorities but by higher-level officials. The foreigners and unknown persons circulating in the country were to be subjected to heightened scrutiny by the Prussian security forces, with the assistance of specific, legally defined (under the 1813 passport law in Prussia) intermediaries like landowners, innkeepers and cart-drivers (Torpey, 2000, p.60).

Generally in the 19th and 20th century we may observe a development towards two models for citizenship attribution and the related issuing of passports to citizens, namely on the basis of ius soli (“law of the soil”) and ius sanguinis (“law of the blood”) (see for instance Brubaker, 1992). The latter model had to do with the development of enhanced mobility of citizens beyond the state’s territorial boundaries, especially for economic reasons, and the possibility for nation states therefore to continuously keep a relationship with citizens living abroad.

What this alternative, empirical perspective reveals to us is the profound influence these new forms of personal identification and authentication may have on the governance of citizen – government relationships. Institutional innovation, the renewal of traditional citizen – government relationships as a result of the creation and development of new information practices, appears to be happening due to the introduction of IDM in various electronic citizen – government relationships. A new ‘law of informational identity’ may soon replace the existing models of citizenship attribution in the analogue world, ius soli and ius sanguinis.

Similarly to the analysis of the passport’s history, we may observe that the borders between customers and non-customers of government organisations, identified and non-identified subjects of the state, and authenticated and non-authenticated citizens are being reset as a result of these newly available forms of authentication and identity management in e-government relationships. Not only does the same authentication system allow government to provide people with access to its virtual territories; it also allows governments to keep people out of them. Analogously to the Prussian era, where intermediaries like landowners, innkeepers and cart-drivers supported the government in checking and validating a person’s identity, new trusted third parties are emerging, such as banks, telecommunications providers, and credit reference agencies, to help government check people’s trustworthiness.

The history of the use of passports and their changing meaning in society shows how important it is to look beyond their technical characteristics and to make use of alternative perspectives in empirically exploring the introduction and functioning of new identification ‘technologies’. It also makes us aware of the importance of perceiving the use of IDM systems in an evolutionary way, for instance by looking for punctuated equilibria (Baumgartner & Jones, 2002) in the historical evolution of ICTs, e.g. the periods of crisis in the history of the passport, as important moments when changes in the use of these technologies often happen.

What will happen in eras of crisis with the application of this newly developing model of citizenship attribution, the ‘law of informational identity’, remains to be seen. Whilst the chief concern is with enhancing e-government service provision to entitled, trusted citizens, there is nonetheless recognition that the security agenda of modern government is adding to a climate in which the identification of the citizen is seen as of paramount importance. If services to the citizen are to be provided effectively, then identity issues come to the fore. If enhanced personal and State security is paramount then, once more, the means of identifying individual citizens becomes of crucial importance.

Dr Miriam Lips is a Research Fellow at the Oxford Internet Institute, University of Oxford.
Together with Professor John Taylor and Joe Organ she is working on an empirical research project, ‘Personal Identification and Identity Management in New Modes of E-Government’, sponsored by the UK Economic and Social Research Council’s e-Society Programme.


Agar, J. (2003), The Government Machine: A Revolutionary History of the Computer, The MIT Press, Cambridge, MA.
Baumgartner, F. & B. Jones (eds) (2002), Policy Dynamics, University of Chicago Press, Chicago.
Brubaker, R. (1992), Citizenship and Nationhood in France and Germany, Harvard University Press, Cambridge, MA.
Cragg Ross Dawson (2004), Public Perceptions of ID Cards: Qualitative Research Report, COI Ref: 262 151.
Davies, W. (2005), Modernising with Purpose: A Manifesto for a Digital Britain, Institute for Public Policy Research, London.
Detica (2004), National Identity Cards: The View of the British Public, April 2004.
Dutton, W.H., C. di Gennaro & A. Millwood Hargrave (2005), The Internet in Britain: The Oxford Internet Survey (OxIS), May 2005, Oxford Internet Institute, University of Oxford.
Home Office (2003), Identity Cards: A Summary of Findings from the Consultation Exercise on Entitlement Cards and Identity Fraud, Cm 6019.
Neville-Jones, Dame P. (former chair of QinetiQ), quoted in W. Sturgeon, ‘Lack of “balls” in Whitehall will hinder ID cards’, silicon.com, 18 October 2005, http://www.silicon.com/publicsector/0,3800010403,39153447,00.htm
Subjectright (S), a reciprocal to Copyright (C)
By: James Fung, Steve Mann and Kyle Amon

March 28, 2006


This article presents the argument that any debate about copyright is inherently unbalanced, because it preferentially considers the right of a source entity, without equal regard to the right of a destination entity. Accordingly, we propose the concept of Subjectright, i.e. recipient rights, as a reciprocal to copyright.

In contrast to the analogous mechanisms of Intellectual Property (Copyright, Trademark, Patent, etc.) that protect that which is offered through predominant volition of a “transmitient”, Subjectright also covers that which we give off without conscious thought or effort, as well as that which we are exposed to simply through our existence.

Subjectright includes our physical facsimile, as might be protected by the Humanistic Property License Agreement (HPLA), http://wearcam.org/clerks.htm, http://wearcam.org/hpla.htm, http://wearcam.org/hp_manifesto.htm as well as our spoken word, molted detritus and mental engrams.

In this paper, we expand upon the principle of Subjectright to include that which we receive through eminent volition, and, in particular, that which we receive as subject, thus have been SUBJECTed to, often without our consent and sometimes even against our will.

In order for information to propagate, five functions must exist: there must be a creator, a transmitter, a conduit, a receiver and a processor of information. All five may reside within the same entity or be distributed, singly or multiply, among various entities. If any one of these five functions is lacking, information propagation cannot occur.

Current Intellectual Property law and practice affords privileges only to the “transmitient” (the creator, transmitter and conduit functions of information propagation). While Copyright (c), for example, provides extensive powers to the creator, transmitter, and/or conduit of information (e.g. an author, publisher, broadcaster), Subjectright, recognizing that individuals are receivers (e.g. consumers) and processors (e.g. users) as well as creators (e.g. producers), transmitters and conduits of information, extends commensurate powers to them as such.

Since we hold it to be self-evident that all entities come into existence free, subject to none but their own mortality, and have an inalienable right to maintain this freedom, we propose a reciprocal set of privileges to those afforded by current Intellectual Property law. The privileges that law grants to creators, transmitters and conduits of information, as instigators, should be extended under Subjectright(s) to conduits, receivers and processors of information, as subjects. Furthermore, information instigators should be morally and legally bound by Subjectright(s), requiring them to respect the inherent, independent volition of all entities as free beings, and their right to maintain this freedom, so as to provide a means of redress when information instigators contaminate subjects with unwanted information.

While Copyright is intended to protect the deliberate creation and transmission of information, Subjectright is intended to protect the primarily involuntary disclosure of information (e.g. physical facsimile, spoken word, molted detritus, etc.), as well as the often involuntary receipt of information (e.g. marketing and advertising, music, video, etc.) as mental engrams.

Note that in this sense of reciprocality Copyleft (i.e. the GNU General Public License, GPL) is not really a reciprocal of Copyright, since both Copyright and Copyleft attempt to protect a transmitient, although in quite different ways. In particular, to the extent that fame and fortune are fungible, Copyright and Copyleft are two sides of the same coin, whether that coin be one of commerce or one of recognition and social status.

In view of the often involuntary nature of this exchange with regard to the recipient (e.g. the subject), it has been argued that Subjectright deserves stronger protection than Copyright. See, for example, First Monday, volume 5, number 7 (July 2000): http://firstmonday.org/issues/issue5_7/mann/index.html

A scholar’s right to cite sites

Legal development is sometimes said to be significantly more dilatory than technological development (notwithstanding our temptation to claim that “the trouble with law is that so many new laws are created so quickly that technology is having a hard time catching up”). As society evolves, the original intent of old laws is often lost and they begin to be misapplied as a result. In some cases, after enough subtle social evolution, the results can be egregious. It is therefore not very surprising that many Intellectual Property laws are now in conflict with the reasonable freedoms of scientific, scholarly, or academic pursuit.

Consider, for example, the Felten case, Felten v. RIAA: http://eff.org/sc/felten/

"Freedom of Speech should not be sacrificed in the recording industry's war to restrict the public from making copies of digital music.
When a team led by Princeton Professor Edward Felten accepted a public challenge by the Secure Digital Music Initiative (SDMI) to break new security systems, they did not give up their First Amendment right to teach others what they learned. Yet they have been threatened by SDMI and the Recording Industry Association of America (RIAA) to keep silent or face litigation under the Digital Millennium Copyright Act (DMCA). Professor Felten has a career teaching people about security, yet the recording industry has censored him for finding weaknesses in their security. USENIX regularly publishes scientific papers that describe the weaknesses of technologies, but they are chilled by RIAA litigation threats.
EFF is asking the court to affirm the right of these scientists to publicly present what they have learned and the right of USENIX to publish the scientists' paper in their conference proceedings. EFF has also asked the court to overturn the anti-distribution provisions of the DMCA as unconstitutional restraints on the freedom of expression.
"When scientists are intimidated from publishing their work, there is a clear First Amendment problem," said EFF's Legal Director Cindy Cohn. "We have long argued that unless properly limited, the anti-distribution provisions of the DMCA would interfere with science. Now they plainly have."
"Mathematics and code are not circumvention devices," explained Jim Tyre, an attorney on the legal team, "so why is the recording industry trying to prevent these researchers from publishing?"
USENIX Executive Director Ellie Young commented, "We cannot stand idly by as USENIX members are prevented from discussing and publishing the results of legitimate research.""

Another important case concerning the infringement of current Intellectual Property laws on the First Amendment is the 2600 case (http://www.2600.com/), an appeal of a loss to a Motion Picture Association of America (MPAA) suit in August 2000: http://www.2600.com/news/display.shtml?id=211. The 2600 website says of this appeal,

"The case arises from 2600 Magazine's publication of and linking to a computer program called DeCSS in November, 1999 as part of its news coverage about DVD decryption software. DeCSS decrypts movies on DVDs that have been encrypted by a computer program called CSS. Decryption of DVD movies is necessary in order to make fair use of the movies as well as to play DVD movies on computers running the Linux operating system, among other uses. The Studios object to the publication of DeCSS because they claim that it can be used as part of a process to infringe copyrights on DVD movies.
Universal Studios, along with other members of the Motion Picture Association of America, filed suit against the magazine in January 2000 seeking an order that the magazine no longer publish the program. In the case, formally titled Universal v. Reimerdes, et al., the District Court granted a preliminary injunction against publication of DeCSS on January 20, 2000. By August 2000, after an abbreviated trial, the Court prohibited 2600 Magazine from even linking to DeCSS."
Scholarly discourse and academic research seek to spread new ideas, new discoveries, and in general new thoughts. The medium of thought conveyance is language, without which there can be no transmission of thought, and thoughts must remain privy to their creators alone. Language is thus the transmitter of thought, and its medium is the articulate symbol, manifested in speech or inscription and conveyed by an ever increasing number of media.

The articulate symbols of language were initially transmitted, and thought thus propagated, exclusively synchronously by phonetic utterance through the medium of air (i.e. speech). Asynchronous transmission, and thus mass propagation, of thought became possible with the advent of inscription, since the media of inscription were less mutable than the medium of air. It was then discovered that even speech could be inscribed on certain media and electrically reproduced, engendering an asynchronous manifestation of a type of thought transmission that was previously possible only synchronously. Ultimately, electromagnetic media were found to be extremely versatile, facilitating both synchronous and asynchronous transmission of all antecedent media necessary for the transmission and propagation of thought and, with the advent of the internet, with a fine degree of control.

The extreme versatility of electromagnetic media fostered their rapid proliferation as a multiply manifested thought transmission medium, second in prominence only to the medium of air in conveying the spoken word and pictorial symbol.

This prominence has resulted in a devolution toward the more mutable paradigm of television and away from the less mutable literary tradition of the book. This transformation, in concert with the expansion, misuse and abuse of intellectual property laws, threatens not only the right and ability of scholars to make enquiry and publish results, but also their ability to make the scholarly citations upon which the tradition of science and scholarly thought is built.

For example, many web sites use CGI scripts that cause a single URL to reference multiple documents, making it impossible for scholars, critics, and scientists to cite and properly credit sources of reference and specific quotation. When uncitable material is left out, the work suffers and its aggregate social benefit is reduced; when uncitable material is included for the benefit of the work, and consequently of society, the author is exposed to intellectual property infringement liability. Moreover, complete web sites often vanish suddenly. For example, a scientific article referencing a January 22, 2001 article on Mediated Reality and EyeTap technology, published on the about.com wearables site, http://wearables.about.com/library/weekly/aa012201a.htm, will no longer be found by scientists wishing to extend work based upon that article, since it is no longer maintained on the about.com site, presumably because it is no longer considered profitable (e.g. it does not generate enough advertising revenue).

One possible solution is to back up or mirror sites when they are cited. For example, an article published on the eyetap.org site, making a scholarly reference to this article, could cite a mirror site: http://about.eyetap.org/library/weekly/aa012201a.shtml. Each article being written would then contain all of its references to at least one level of recursion. With increases in mass storage capability, it might even be reasonable to bundle articles to two levels, but certainly one level would be reasonable.
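The one-level mirroring scheme just described can be sketched in a few lines. The helper below is a hypothetical illustration, not an existing tool: the function names, the hash-based file naming, and the `cited/` directory are all our own assumptions.

```python
import hashlib
import urllib.request
from pathlib import Path

def archive_name(url: str) -> str:
    # Derive a deterministic local filename from the URL, so mirroring
    # the same citation twice reuses the same copy.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[:16] + ".html"

def mirror_citation(url: str, archive_dir: str = "cited") -> Path:
    """Fetch a cited page and keep a local copy, so the article carries
    its references to one level of recursion."""
    Path(archive_dir).mkdir(exist_ok=True)
    dest = Path(archive_dir) / archive_name(url)
    if not dest.exists():  # idempotent: skip pages already mirrored
        with urllib.request.urlopen(url) as resp:
            dest.write_bytes(resp.read())
    return dest
```

An author would call `mirror_citation` once per cited URL at writing time and publish the `cited/` directory alongside the article, so each reference survives even if the original site disappears.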

While the creation of backup and mirror sites of scholarly citations helps ensure, in a technical sense, access to these works, current intellectual property law may criminalize the very scholars who seek to preserve the works they reference. For instance, consider an academic journal that charges fees for access to its published articles. Such a journal is not responsible for ensuring long-term access to a published article; yet were a scholar to mirror the article to help ensure its availability, existing intellectual property law might expose the scholar to legal action for circumventing the journal’s access fees. Furthermore, recent laws favor commerce by revoking the legal concepts of fair use and scholarly backup.

Consider, for example, Bill C-32 - As passed by the House of Commons http://www.pch.gc.ca/wn-qdn/c32/c-32toce.html

With the advent of wearable computing (http://wearcam.org/ieeecomputer/r2025.htm, Computer, Vol. 30, No. 2, February 1997), it is now possible for a person to remember everything they take in. Thus we are at (or will soon witness) a pivotal era in which an individual can remember what they have been taught, and can also teach others. When abilities we currently attribute to ‘digital’ media move into the realm of second-nature ‘personal abilities’ through such inventions, restrictions upon a person’s use of what they take in become akin to the notion of ‘thought police’.

In order to protect against such “Thought Police”, what is needed is a new kind of agreement that is binding on the Transmitter (not just upon the Receiver) of information.

It is suggested, therefore, that Subjects would apply this Subjectright philosophy to information received, and that persons not wishing to release information under Subjectright should refrain from exposing Subjects to that information.

This “right to teach” therefore becomes recursive under Subjectright. A person bound to Subjectright simply declares: “You have no right to teach me unless you grant rights for me to teach others”, or more formally: “By teaching me any new knowledge, you agree to be bound by the following Terms and Conditions: …”, one of which must permit re-teaching of what is taught.

Teaching is a form of brain damage, in the sense that once taught, we can never really forget. This brain damage is relatively permanent: the synaptic weights of the brain are permanently altered by advertising, by loud (sometimes unwanted) music that is inflicted upon us, and by a good joke one can never forget. Consider the song about trying to forget a song, which goes something like: “there’s a song going around and around, there’s a song going around in my head and I don’t want to hear it no more, no more…”

(This example underscores the difficulty of eradicating knowledge; when that knowledge is unwanted, it causes a sort of pollution of one’s memory space.)

Thus there is a need for a concept such as Subjectright that deals not only with the right to be free of unwanted violations of both privacy and solitude (such as freedom from unwanted brain damage and unwanted insertion of material), but also with the freedom to provide scholarly discourse on what is learned.

Subjectright and Copyright

Even though under existing copyright laws works may be reproduced for scholarly dissemination or criticism, such protections are not afforded in many of the day-to-day situations people encounter, whether or not conscious efforts are made to obtain or disseminate media. For instance, company logos used in advertising conveniently deliver the “stamp of the transmitter”, providing the subjected and afflicted with a clear target against which to exercise their Subjectrights.

It has been suggested that a fee could be charged by an unwilling Subject (see http://firstmonday.org/issues/issue5_7/mann/index.html, which develops the cracker/hacker analogy of the brain as a computer deliberately compromised by malicious spammers, with real-world advertising as spam). The fee would be charged to the perpetrator of this pollution, or to those who benefit from the pollution (or both).

It would not be unreasonable to charge a fee for the reception of the unwanted information pollution, as well as for its storage and for any damage the pollution caused to the storage medium.

As if trying to add insult to brain injury, those who bombard us with unsolicited sounds, sights, and other forms of radiation pollution have the nerve to then try to charge us for remembering what we did not really want to learn. Such is the nature of Copyright that one can be unwittingly or unwillingly SUBJECTed to input, and then be prevented from legally reproducing this same detritus. Stallman’s article “Reevaluating Copyright: The Public Must Prevail” examines the origins of copyright, pointing out that at the onset of the printing press, copyright was instituted as a method of encouraging publishers to produce works by restricting the freedom of people to copy or redistribute those works. Such a system allowed the publisher to charge for access to its works. The article points out that at the time, since individuals could not distribute works without a printing press, which few could afford, the arrangement favoured the public, who gave up little, while allowing publishers to profit from their work.

Since that time, however, technology has made it possible for individuals to distribute and reproduce material. Furthermore, while in the days of the printing press, reproduction of works had some physical cost associated with it in the form of the cost of paper and ink and transportation, modern distribution techniques have no such costs associated with them other than the rather small cost of electricity and bandwidth.

The situation has thus placed many works under copyright into a freely reproducible and publicly sharable medium where many people can benefit from the works without loss of quality in reproduction of the original.

Attempting to license or charge individuals for access to publicly accessible or mass-marketed works with which said individuals are bombarded in an otherwise freely reproducible medium is THEFT from Subjects. Attempting to block the proliferation of reproducible, mass-marketed teachings to Subjects is THEFT against those Subjects.

Perpetrators of this THEFT are asked either to cease and desist from bombarding Subjects with such material, or at the very least to allow Subjects to reproduce that which they are bombarded with.

If such works require an individual to pay a licensing fee or to agree to unethical or unreasonable conditions (see for example, http://wearcam.org/seatsale/poster/poster_agree_terms.htm) this is THEFT in the sense that it violates the Terms and Conditions of the Subjectright Transmitient License Agreement.

In such cases the Subject (Recipient) is thus required (by Subjectrights) to charge the content provider a de-licensing fee, or “disservice fee”.

Teaching as brain damage

Teaching involves stimulating the brain in order to impart knowledge, teach a skill or condition a frame of mind. The brain, and consequently the individual, if affected by these stimuli, changes as a result. Teaching, a crucial component of human interaction and development, allows the exchange of ideas to take place. When neurological modification is undesired and nonconsensual, the individual’s state of mental development does not progress, grow or improve, but instead regresses. The degree of regress is proportionate to the amount of mental clutter absorbed, owing to the processing and filtering operations that must be performed in an attempt to reverse the undesired teaching effects (in returning to the state of mind before the change). The persistence of memory, and the absorption into the subconscious mind of information and feelings (the nonconsensual nature of the teaching creates tension and conflict in the mind, bringing about negative emotions), ensure that neurological modification can never be entirely reversed. Is teaching brain damage? Perhaps we have a right to answer yes, if the teaching was unsolicited, and to argue that this residual mental detritus constitutes brain damage proportionate to the quantity and intensity of the nonconsensual teaching. Although the act of teaching is the same with or without consent, the consequences and resulting state of mind of the subject can differ substantially. Perhaps a good analogy is sexual contact: there is a big difference between consensual and nonconsensual sexual contact. The physical activity is the same in both cases, but the result (a happy marriage versus a criminal act) can be quite different.

Crime scene documentation

If the Subject witnesses or documents evidence of attempts to stop the proliferation of Subjectright media, the Subject is compelled to take legal action against such criminal activity (e.g. the activity of causing brain damage with, or dependency upon, material that is not freely re-teachable).

Pirates are NOT Thieves (By whose law?)

When it is said that an act is legal or illegal, we must ask: by whose law? Canadian law? American law? EXISTech Corporation’s law? Internic’s law?

Piracy did not, originally, pertain to software; rather, it described captains of vessels who were given permission by an issuing government to raid and plunder, on the open seas, the ships of another government. The issuing government, in return, guaranteed safe haven at its ports and allowed these captains to profit from their plunder (through what was known as a letter of marque). The accumulation of private wealth by this method was called “privateering” [Petrie, Donald A., The Prize Game: Lawful Looting on the High Seas in the Days of Fighting Sail, Naval Institute Press, Annapolis, Maryland, 1999], and was not regarded as theft, since the privateers were acting legally within the domain of their own government.

Governments at the time made piracy and privateering not only legal but also profitable. Thus pirates were the ones who were in fact government sponsored and supported. Privateering made trading and travel upon the otherwise open medium of the seas a dangerous proposition.

Today, “piracy” is commonly applied to the copying of software or music. However, considering the origins of piracy and privateering, we can re-examine the current trade on the otherwise open seas of moving digital bits around, and determine who best fits the definition of a “pirate”.

There is also the notion of “fair use”. A well-established “fair use doctrine” exists in the scholarly and scientific community and must be continued, lest we enter a “new dark age”. As is well known, the internet has its roots in the development of a method for sharing work between scholars. The development of copyable floppy disks, writeable CDs and widespread internet access allowed for ease of ‘trade’ upon the high seas.

However, many service providers and copyright holders are trying to prevent such “fair use”. Attempts to conceal, obfuscate, and prevent proper copying, backup and the spread of Subjectright works could thus be labelled “privateering” (piracy). Many efforts to create pay systems and encryption to prevent copying within these media on behalf of the publishing companies would then be considered acts of piracy. Certainly attempts to block or intercept the exchange of, or extract payment for, works exchanged between individuals (i.e. on the open seas) are also acts of “piracy”, supported through letters of marque issued by Copyright-holding publishers.

Putting works that we are Subjected to into a freely accessible, reproducible medium (to escape the plundering pirates), may then be regarded by some as a noble and publicly beneficial activity.

Some might even argue that one should extend this basic concept to include “ripping” CDs, scanning and copying books, de-encrypting DVDs, opening source-code, reverse-engineering software, and regarding these practices as a noble and publicly beneficial activity, to counteract the piracy caused by otherwise inflicting such material on Subjects, often without the consent of the Subjects.

Such piracy is often committed by those who seek to enforce copyright. For instance, in 1996 the American Society of Composers, Authors and Publishers (ASCAP) received much media attention when it applied a licensing fee to the American Camping Association (ACA) for the use of campfire songs. ASCAP was, and remains, in a position under existing copyright laws to levy fines and require licensing for summer camps holding campfire sing-a-longs that include songs such as “Puff the Magic Dragon” and “Happy Birthday”. Unfortunately, many people have been unwillingly exposed to such music in unlicensed situations and have developed, in a sense, a cultural addiction to these songs. A birthday would not be complete without a “Happy Birthday” song, and much would be lost at a silent campfire, or one where the singers sing in fear of litigation. Furthermore, no notice is given to listeners that these songs are subject to copyright and licensing, and thus listeners have no choice but to learn the music.

Such a situation could only have evolved under copyright laws where private performances are allowed and encouraged, thus teaching the dependency and placing it into the freely accessible and sharable medium of oral tradition, while public performances must be licensed, so that profit may be extracted from a taught dependency. Within the Subjectright framework, however, by exposing individuals to songs, ASCAP must then allow Subjects to share freely and reproduce that which they have been involuntarily exposed to. ASCAP is still allowed to own copyrights to songs, but must find a more responsible way to market them, to ensure they are heard only by those who are truly willing to pay its fees. (See “When in Doubt, Do Without: Licensing Public Performances by Nonprofit Camping or Volunteer Service Organizations under Federal Copyright Law”, Washington University Law Quarterly, Volume 75, Number 3, Fall 1997, http://ls.wustl.edu/WULQ/75-3/753-5.html; cite as 75 Wash. U. L.Q. 1277.)

“Privateering” might better describe acts committed by large corporations and their paid lawyers.


Subjectright attempts to provide a sense of balance to an otherwise one-sided (e.g. Transmitter-only) point of view. Subjectright looks at both the Transmitter and Receiver of information.

As we enter the cybernetic era (from software to softwear, to implantables), we will see a blurring of the distinction between thinking and computing.

SoftWARE embodies the idea of WARE:

Dictionary definition of “ware”

Main Entry: ware. Function: noun. Etymology: Middle English, from Old English waru; akin to Middle High German ware, and probably to Sanskrit vasna (price); more at VENAL. Date: before 12th century. 1 a: manufactured articles, products of art or craft, or farm produce: GOODS (often used in combination). b: an article of merchandise. 2: articles (as pottery or dishes) of fired clay. 3: an intangible item (as a service or ability) that is a marketable commodity.

Main Entry: ve·nal. Function: adjective. Etymology: Latin venalis, from venum (accusative) sale; akin to Greek oneisthai (to buy), Sanskrit vasna (price). Date: 1652. 1: capable of being bought or obtained for money or other valuable consideration: PURCHASABLE; especially: open to corrupt influence and especially bribery: MERCENARY (a venal legislator). 2: originating in, characterized by, or associated with corrupt bribery (a venal arrangement with the police). Derived forms: venality (noun), venally (adverb). © 1997 by Merriam-Webster, Incorporated.

Now, having taught those two new words, WARE and VENAL, we hopefully all have a right to use the English language without paying a word-usage fee.

We were required to attend public school, where we were exposed to these words against our will. We were forced to eat these words; now, at the very least, we should be free to use them.

Likewise, the teaching of software skills (e.g. teaching someone how to use a program) must carry with it the free use of that program, in order to avoid brain damage arising from learning something (very hard to unlearn) that the person will not have free access to. Accordingly, it is our duty as teachers to teach people only how to use programs that are freely available to them at a later point in time.

Teaching a dependency (e.g. getting persons addicted to a certain product that they must then buy) is theft.


Copyright, left, and center alike tend to focus on protecting the interests of creators, producers, and distributors of information. We have presented a reciprocal concept, Subjectright, that considers the rights of those who are exposed to informatic content, whether by choice, by accident, or against their will.

We believe that, especially when people are subjected to informatic content against their will, they have every right to “rip, mix, burn” or do what they like with it. Moreover, we also believe that any discussion of copyright is inherently unbalanced if it does not also consider Subjectright.

Surveillance in Spheres of Mobility: Privacy, Technical Design and the Flow of Personal Information on the Transportation and Information Superhighways
By: Michael Zimmer

March 21, 2006


A recent Nassau County Supreme Court ruling held that data retrieved from a vehicle’s black box - a computer module that records a vehicle’s speed and telemetry data in the last five seconds before airbags deploy in a collision - could be admitted as evidence even though law enforcement officials did not have a search warrant. The court ruled that by driving the vehicle on a public highway, “the defendant knowingly exposed to the public the manner in which he operated his vehicle on public highways. ...What a person knowingly exposes to the public is not subject to Fourth Amendment protection.” A federal judge in upstate New York made a similar ruling, stating that police officers did not need a warrant to secretly attach a Global Positioning System device to a suspect’s vehicle. The judge said that a suspect traveling on a highway has no reasonable expectation of privacy.

In January 2006, the web search engine Google resisted requests from the U.S. Department of Justice to turn over a large amount of data, including records of all Google searches from any one-week period, partially on the grounds that it would violate their users’ privacy. This event generated widespread concern over the privacy of web search histories, and prompted many users to question the extent to which this component of their online intellectual activities might be shared with law enforcement agencies. (Indeed, it was later revealed that three other search engine providers – America Online, Yahoo and Microsoft – had previously complied with government subpoenas in the case, without public notice.) Similar concerns have arisen over commercial access to search engine histories as the vast databases of search histories held by these providers are increasingly matched up with individual searchers and demographic information from other search-related services in order to provide individually targeted search results and advertising.

The two technological systems described above - networked vehicle information systems and web search engines - represent important tools for the successful navigation of two vital spheres of mobility: physical space and cyberspace. However, they also share a reliance on the capturing and processing of personal information flows, and provide the platforms for surveillance of the person on the move. Networked vehicle information systems, which include GPS-based navigational tools, automated toll collection systems, automobile black boxes, and vehicle safety communication systems, rely on the transmission, collection and aggregation of a person’s location and vehicle telemetry data as she travels along the public highways. Similarly, web search engines, striving to provide personalized results and deliver contextually relevant advertising, depend on the monitoring and aggregation of a user’s online activities as she surfs the World Wide Web. Taken together, these two technical systems are compelling examples of the increased “everyday surveillance” (Staples, 2000) of individuals within their various spheres of mobility: networked vehicle systems constitute large-scale infrastructures enabling the widespread surveillance of drivers traveling on the public highways, while web search engines are part of a larger online information infrastructure which facilitates the monitoring and aggregation of one’s intellectual activities on the information superhighway.

The political and value implications of these infrastructures on individuals as they navigate through these spaces cannot be overstated, yet they generally remain unexplored. These implications include shifts in the contextual integrity of the norms of personal information flows, challenges to the expectation of privacy in public spaces, concerns over whether one’s online intellectual activities are shared with third parties, and the potential for the “panoptic sorting” (Gandy, 1993) of citizens into disciplinary categories. Taken together, these infrastructures of everyday surveillance increasingly threaten the privacy of one’s personal information, and contribute to a rapidly emerging “soft cage” (Parenti, 2003) of everyday surveillance, a growing environment of discipline and social control.

In his book Technopoly, Neil Postman warned that we tend to be “surrounded by the wondrous effects of machines and are encouraged to ignore the ideas embedded in them. Which means we become blind to the ideological meaning of our technologies” (1992, p. 94). As the ubiquity of networked vehicle systems and web search engines intensifies, it becomes increasingly difficult for users to recognize or question their political and value implications, and more tempting to simply take the design of such tools “at interface value” (Turkle, 1995, p. 103). It becomes vital, then, to heed Postman’s warning, remove the blinders, prevent the political and value implications of networked vehicle systems and web search engines from disappearing from public awareness, and to critically engage with the design communities to mitigate these unintended consequences.

To accomplish this, three things must happen:

1. Broaden conceptual understanding of privacy: Efforts must be made to broaden the conceptual understanding of privacy to fully appreciate how the introduction of these new technologies disrupts the norms of personal information flows in the contexts of their particular use. A starting point is embracing more contextually-based theories of privacy, such as Helen Nissenbaum’s formulation of privacy as “contextual integrity.” Contextual integrity is a benchmark theory of privacy where the privacy of one’s personal information is only maintained if certain norms of information flow remain undisturbed. Rather than aspiring to universal prescriptions for privacy, contextual integrity works from within the normative bounds of a particular context. If the introduction of a new technology into a particular context violates either the norms of information appropriateness or information distribution, the contextual integrity of the flow of one’s personal information has been violated.

The theory of privacy as contextual integrity is particularly well suited, then, to consider how the introduction of networked vehicle information systems and web search information infrastructures might impact the governing norms of the flow of personal information in the contexts of highway travel and online intellectual activities. (For a starting point in such an analysis, see my paper presented at the “Contours of Privacy” conference.)

2. Engage in value-sensitive design: The notion that the design and use of technical systems have certain political and value consequences suggests the possibility of achieving alternative technical designs that might help to resist or otherwise mitigate such threats prior to their final design and deployment. It becomes vital, then, to engage directly with these technical design communities to raise awareness of the political and value implications of their design decisions and to make the value of privacy a constitutive part of the technological design process.

The multi-disciplinary perspective known as value-sensitive design is well suited to guide this endeavor. Value-sensitive design has emerged to identify, understand, anticipate and address the ethical and value-laden concerns that arise from the rapid design and deployment of media and information technologies. Recognizing how technologies contain ethical and value biases, the primary goal of value-sensitive design is to have the design of technology take account of human values during the conception and design process, not merely retrofit them after completion.

3. Foster critical technical practices: Recognizing that the choices designers make in shaping these systems are guided by their conceptual understandings of the values at play, work must be done to ensure technical designers possess the necessary conceptual tools to foster critical reflection on the hidden assumptions, ideologies and values underlying their design decisions. This is best accomplished by fostering “critical technical practices” within the design community. Formulated by Phil Agre, critical technical practice works to increase critical awareness and spark critical reflection among technical designers and engineers of the hidden assumptions, ideologies and values underlying their design processes and decisions. An example of critical technical practice in action is the Culturally Embedded Computing Group at Cornell University, which seeks to elucidate the ways in which technologies reflect and perpetuate cultural assumptions, as well as design new computing devices that reflect alternative possibilities. Their work provides a model for integrating critical technical practices into the technical design communities of networked vehicle information systems and web search information infrastructures.

At a moment when concern over government surveillance of its citizens is high, the prospect of the creation of a nationwide networked vehicle system infrastructure capable of monitoring vehicle location and activity gives pause. Similarly, general concerns over the privacy of web search histories are further aggravated by the possibility of the information being shared with government authorities. Broadening the conceptualizations of privacy to include approaches such as contextual integrity can help raise awareness of the political and value implications of these emerging information technologies. Further, embracing the pragmatic tools of “value-sensitive design” and “critical technical practice” will ensure attention to political and ethical values becomes integral to the conception, design, and development of technologies, not merely considered after completion and deployment.

These prescriptions mark the first steps towards avoiding the ideological blindness Postman feared, engendering critical exploration of both the privacy threats of these emerging technologies, as well as their potential to trigger widespread surveillance and social control within two vital spheres of mobility.

Michael Zimmer is a PhD student in the Department of Culture and Communication at New York University, and maintains a blog at www.michaelzimmer.org.

Escaping your history
By: James Muir

March 14, 2006


Imagine that every search phrase you have ever typed into Google from your home computer was recorded and stored in a user-profile on one of Google's servers. What would this profile say about you? No doubt you would consider some of this information private. It might alarm you when you realize that this information is now out of your control. Perhaps you trust Google not to divulge it, but there may be legal circumstances which would force them to do so.

You don't have to imagine this scenario -- Google does in fact keep a record of your search history and they are currently under legal pressure to release a subset of this data to the U.S. government. Some surprising facts about Google's user-profiling are discussed in a recent CNET article (D. McCullagh, 3 Feb 2006). One of the questions that Google's data collection practices raises is the following: Is it possible for a user to use a search engine anonymously from their home computer? For instance, is it possible to do a Google search for "picking magic mushrooms" without having this tied to your identity and possibly used against you at a later date? There is a very brief discussion of this question in the CNET article. Two specific recommendations made are to 1) regularly delete any Cookies your browser collects, and to 2) proxy your web browsing through an anonymizing service like Tor. In this note, we explain just what these two instructions mean and argue that they alone may not suffice to anonymize your Google searches.

We begin by recalling some basic facts about the Internet. Every computer connected to the Internet is identified by a unique number called its IP address. An IP (version 4) address is a sequence of four numbers in the range 0...255 separated by dots (e.g., 192.0.2.1). Your home computer's IP address is obtained from your ISP, and they keep track of which IP addresses are assigned to which customers. If your ISP is subpoenaed, then they can be forced to match a customer's identity to a given IP address. When you surf the web normally, your IP address is submitted to the web sites you visit so that their content can be routed back to your computer and displayed in your browser. You can check what IP address you are advertising by visiting a site that echoes it back to you.
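The dotted-quad format described above can be checked mechanically. Here is a minimal sketch; the function name and the example addresses (192.0.2.1 is a reserved documentation address, not one from the article) are illustrative:

```python
def is_valid_ipv4(address):
    """Return True if `address` is four dot-separated numbers in 0..255."""
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        # Each field must be purely numeric and fit in one byte.
        if not part.isdigit() or not 0 <= int(part) <= 255:
            return False
    return True

print(is_valid_ipv4("192.0.2.1"))   # True
print(is_valid_ipv4("256.1.1.1"))   # False: 256 is out of range
```

This is only a syntax check; it says nothing about whether the address is actually reachable or assigned.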

Each time a user carries out a Google search, Google can record their IP address and their search phrase (as well as the current date and time). Thus, they can form a history of the search phrases which originate from a particular IP address. However, these IP address search histories are not necessarily the same as user search histories. There are two main reasons for this: 1) ISPs sometimes change the IP addresses of their customers; 2) the customers of some ISPs, like AOL, access the web through caching HTTP proxies which effectively results in many users advertising the same IP address to a web site. These issues can be overcome by using Cookies. A Cookie is a small data-file that a web site generates and stores in your browser. When you first visit Google, they set a Cookie in your browser which serves as a unique user-id. This Cookie can be subsequently read by Google each time you do a search through their web site and so it can be used to track your behaviour, even if your ISP happens to change your IP address.
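The cookie-as-user-id mechanism just described can be sketched as a toy server-side handler. Everything here is invented for illustration (the cookie name `uid`, the function `handle_search`, the `session_log` store); it is a model of the idea, not Google's actual implementation:

```python
import uuid

# Toy model of cookie-based search tracking: the server mints a random
# id for each new browser and files every search phrase under that id.
session_log = {}  # user-id -> list of search phrases

def handle_search(cookies, phrase):
    """Record `phrase` under the browser's id cookie, minting one if absent."""
    user_id = cookies.get("uid")
    if user_id is None:
        # First visit: set a unique-id Cookie in the browser.
        user_id = uuid.uuid4().hex
        cookies["uid"] = user_id
    session_log.setdefault(user_id, []).append(phrase)

# Two searches from the same browser share one cookie jar, so they land
# in the same profile even if the ISP changes the browser's IP address.
jar = {}
handle_search(jar, "picking magic mushrooms")
handle_search(jar, "tor download")
```

Deleting the cookie jar between searches is exactly what breaks this linkage, which is why the first recommendation works.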

Deleting Cookies regularly removes data that Google uses to track you and your web browser. Note that the Firefox browser can be set to delete its Cookies each time you close it. This explains the first recommendation. You may be wondering if there is a way to carry out a Google search while keeping your IP address hidden. This is where Tor fits in.

Tor is a network of 250+ Internet computers in various countries which run freely available software designed to facilitate low-latency anonymous communication. Tor has several interesting features but what is most relevant to our discussion is that it can allow anyone to surf the web without revealing their IP address. To start using Tor, you simply download a client program and then configure your browser to send its traffic to the client. Once the client is activated, it negotiates an encrypted pathway through the Tor network which will carry your browser's traffic. The pathway consists of three Tor servers and these are changed every minute or so. When your web traffic travels through the Tor network en route to Google, it appears to Google as though it was originated by the last server in the pathway. In particular, the IP address recorded by Google will be the IP address of the last server in the pathway. So, if you use Tor, your search phrases will likely be bound to an IP address other than your own.

However, the story doesn't end there. Even if you disable Cookies and surf through Tor, it may still be possible to maintain a profile of your web searches. Browser-testing sites demonstrate several examples of information that can be extracted about your browser and computer even when you have followed the two recommendations. For example, it is possible to learn what browser you are using, its version, what operating system you run, your preferred language, what timezone you are in, what plugins you have installed, and what the current settings of your display are. Google could compute a digest of this information and record it along with any search phrase you have submitted. It's not clear if this information would suffice to uniquely identify a user, but users of less common browsers and operating systems are more at risk of this.
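The digest idea mentioned above can be sketched as follows. The attribute names and values are invented for illustration, and SHA-256 is just one reasonable choice of hash:

```python
import hashlib

def browser_fingerprint(attrs):
    """Hash a browser's visible attributes into a single fingerprint string."""
    # Sort the keys so the same attributes always hash the same way.
    canonical = "|".join("%s=%s" % (k, attrs[k]) for k in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

profile = {
    "browser": "Firefox 1.5",
    "os": "Windows XP",
    "language": "en-CA",
    "timezone": "UTC-5",
    "screen": "1280x1024",
}
fp = browser_fingerprint(profile)

# Changing any single attribute yields a completely different digest,
# which is why an uncommon configuration makes a user easier to single out.
profile["browser"] = "Konqueror 3.5"
assert browser_fingerprint(profile) != fp
```

Note that the digest links searches together without ever needing a cookie or a stable IP address, which is the point of the paragraph above.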

Much of this additional information about your browser and computer is accessible only through JavaScript and Java. If you do not want this information to be collected, then you can disable these components in your browser. Unfortunately, many web sites will fail to work with JavaScript disabled, but, if you want strong anonymity, then this might be a reasonable trade-off.

Privacy Issues and Canada’s Faith Communities
By: Travis Dumsday

March 7, 2006


Broadly speaking, public policy issues have an unfortunate tendency to become ghettoised, with particular problems being championed by certain segments of society while being mostly ignored by other interest groups and society at large. Thus certain segments become associated in both public and official consciousness, rightly or wrongly, with certain issues. The aboriginal community for instance tends to be associated mostly with issues directly relevant to that community, such as the economic development of reservations, preservation of native languages, etc. I call this ghettoisation unfortunate partly because it can lead to an accompanying tendency on the part of government and media to ignore the community’s involvement and stake in other issues. In the aboriginal example this might include the environmental advocacy undertaken by some native groups. Worse, it can lead to insular thinking in the group itself; when government and media link a community with a particular, narrow set of interests and issues, a subtle yet compelling psychological pull can be created in which the community unconsciously conforms itself to that image and ignores problems which may be of vital interest to it.

With that in mind, if someone asked you to write down a list of the issues of interest to Canadian religious communities, what would be the first item to pop into your mind? I realize that ‘Canadian religious communities’ is an exceedingly broad designating phrase, but humour me for a moment. What comes up first? Gay marriage? Abortion? Government funding of religious schools? I suspect that one of these three will be uppermost in the minds of many readers. Poverty relief and advocacy, peace initiatives, interfaith dialogue, these will tend to take a mental backseat, despite the tremendous time and resources which Canadian religious communities devote to these issues. How about privacy? Would that enter anywhere on the radar screen? I suspect not. I further suspect that this would be the case for most of those who would consider themselves members of these communities. Privacy is not seen as a ‘religious’ issue. But faith groups in this country are going to have to address some difficult questions relating to privacy in the near future, if they are not embroiled in them already.

In this context I think especially of the position of Canada’s Islamic community. If CSIS were to send undercover agents to attend services at mosques and monitor sermons given by Canadian Imams, in the hopes of spotting nascent terrorist sympathies or recruiting tactics, would this be a privacy violation? Leave aside for a moment the question of whether, if a violation, it would be justified. Is this even a privacy issue? It may be. In the philosophical literature on privacy and privacy rights the question has been raised as to whether groups, and not merely individuals, can possess a right to privacy. I think it has been convincingly argued that they can. For example, if a member of the Freemasons or some other secret society reveals to a reporter the group’s inner workings and rituals, it is plausible to think that the privacy of the group has been violated. Or consider some sensitive corporate meetings, or for that matter the meetings of the Canadian cabinet, whose minutes are kept sealed for decades. For a member of these groups to reveal what went on in such meetings is to violate the group’s privacy. And this is not merely a question of a group member violating the group’s trust. If an intrepid reporter were to plant a bug in the cabinet meeting room, he would be violating its privacy.

But can government surveillance of religious gatherings be considered in this light?