Data Security: Quit collecting it if you cannot protect it!
By: Jennifer Chandler


November 14, 2006


We are busily inventing technologies that gather or create personal information “hand over fist.” Not only are we gathering personal information in more and more ways, but we are also creating entirely new types of personal information.

In some cases, the new technology itself creates a new type of personal information to be gathered (e.g. the snapshot of our personal interests and curiosity that is contained in search engine query history – see Alex Cameron’s recent post). Other technologies enable the collection of personal information that exists independently of the technology (e.g. the various technologies to track physical location and movement, or to use physical attributes in biometrics – as described recently by Lorraine Kisselburgh and Krista Boa in their posts).

The creation of more and more stores of personal information exposes us to the risk of the misuse of that information in ways that harm our security and dignity. In the context of genetic information, consider the risks of genetic discrimination, or the controversy over “biocriminology,” [1] which has developed the idea of the individual “genetically at risk” of offending against the criminal law. Consider also the many uses to which information about one’s brain that is gathered through improved neuro-imaging techniques might be put. [2]

These new forms of personal data collection may solve some compelling social problems, but they will also expose us to risk. I set aside the full range of risks for the purposes of this blog post in order to focus on one in particular. There is ample evidence that we are better at creating stores of data than at securing them. The compromise of data security exposes the individual to the risk of impersonation as well as to the risk that a third party will use the information to draw conclusions about an individual contrary to that individual’s interests.

The impersonation risk is unfortunately now familiar – everyone knows about ID fraud, and insurance companies are busily hawking ID theft insurance to protect us from some of the losses associated with it. Today, ID fraud capitalizes upon the most mundane and widespread of identification and authentication systems, including ID numbers, account numbers and passwords. However, the risk is clearly not restricted to these basic systems. Back in 2002, Tsutomu Matsumoto at Yokohama National University demonstrated how to create “gummy fingers” using lifted fingerprints. These gummy fingers were alarmingly successful in fooling fingerprint readers. [3] All of this underscores the tremendous importance of protecting the security of stockpiles of personal data that can be used in ways that harm the interests and security of the individuals involved.

Our current legal system is woefully inadequate to deal with this problem. Breaches of data security occur so often [4] that they are becoming a bit of a yawn – a numbing effect that should be deplored. According to a recent Ponemon Institute survey, 81% of companies and governmental entities lost or misplaced one or more electronic storage devices containing sensitive information, such as laptops, within the last year. [5] Another 9% did not know whether they had lost any such devices.

Although data custodians often seem to claim that the public relations costs of a major security breach are enough of a threat to encourage efforts to promote data security, the evidence makes me wonder if some additional encouragement would not be helpful. One of the key problems with data security is that a large part of the cost of a data security breach may be borne by persons or entities other than the organization responsible for protecting the data from being compromised. Under these circumstances, one would expect the organizations responsible to be inadequately interested in protecting the data.

One of the functions of tort law is to deter unreasonably risky behaviour. If careless data custodians could be held responsible for the damage to others flowing from breaches in the security of personal information under their control, they would be forced to internalize the very real costs of their carelessness.

A couple of dozen such lawsuits have now been attempted in the United States, and two class actions have been filed in Canada, raising claims for damages based on the negligent failure to employ reasonable data security safeguards. The success rate so far is low.

One of the key problems facing plaintiffs in these suits is that a claim in negligence requires a showing of actual harm, and courts will not treat an increased risk of harm as actual harm. This raises the question of how to characterize the insecurity that a data subject feels when his or her sensitive data has been carelessly exposed. Is the harm an anticipated one, namely eventual misuse by an ID fraudster? Or is the harm better understood as a present one – the immediate creation of an insecurity that imposes emotional harm as well as financial harm (i.e., the cost of self-protective measures such as credit monitoring services, insurance, closing and re-opening accounts, and changing credit card numbers)? So far, the courts have held that actual harm occurs only once ID fraud happens.

It is clearly in the interests of the defendant data custodians that liability depend upon a showing of ID fraud because, it turns out, it is usually extremely difficult for a plaintiff to tie the eventual ID fraud to the breach of data security caused by the defendant. Because our personal information is so widely used and so poorly safeguarded by many data custodians, it becomes quite difficult to establish the necessary causal link between the ID fraud and the defendant data custodian. The data custodians are thus well-protected – no liability for a careless breach until ID fraud occurs, and no liability (usually) once ID fraud occurs because “who knows where the unknown fraudster got the data he or she used.”

The plaintiffs in these cases have also attempted another interesting argument in order to try to obtain compensation flowing from data security breaches. They point to the so-called “medical monitoring” cases in which some courts have permitted plaintiffs to recover the costs of medical monitoring after exposure to toxic chemicals (e.g. PCBs, asbestos, and drugs found to have harmful but latent side effects). The plaintiffs in the data security breach context argue that their predicament is analogous. They must bear present costs in order to monitor for the eventual crystallization of the risk into a concrete loss.

One might argue that the policy reasons for permitting recovery in the medical monitoring cases are not present in the data security breach cases. Indeed, the defendants in these cases often argue that human health is a more compelling interest than financial health and so relaxed liability rules that are justified in the medical context are not justified in the data security breach context. In my view, this argument is not as self-evidently correct as the defendants claim. The harmful effects of financial insecurity and fraudulent impersonation on human health and psychological well-being are well-known.

Perhaps the insecurity felt by a plaintiff whose sensitive personal data has been compromised ought to be understood as a present compensable harm in its own right in appropriate cases. When we look to the future and see the kinds of personal data that are being collected and/or created using novel technologies, the insecurity and vulnerability of the data subject take on a new urgency. Given that choices are being made now about the development of these technologies, and will be made soon about their deployment, it seems to me that there is no time like the present to ensure that the full costs of carelessness in the use of these technologies are internalized by those who seek to use them.

Until those who want to collect personal data can figure out how to keep it reasonably secure, they have no business collecting it.


[1] Nikolas Rose, “The Biology of Culpability: Pathological Identity and Crime Control in a Biological Culture” (2000) 4(1) Theoretical Criminology 5-34.
[2] Committee on Science and Law, Association of the Bar of the City of New York, “Are Your Thoughts Your Own? ‘Neuroprivacy’ and the Legal Implications of Brain Imaging” (2005) <http://www.abcny.org/pdf/report/Neuroprivacy-revisions.pdf>.
[3] Robert Lemos, “This hacker’s got the gummy touch,” CNET News.com (16 May 2002) <http://news.com.com/2100-1001-915580.html>.
[4] See the chronology of major reported data security breaches maintained at <http://www.privacyrights.org/ar/chrondatabreaches.htm>.
[5] Ponemon Institute, “U.S. Survey: Confidential Data at Risk” (15 August 2006), sponsored by Vontu Inc., <http://www.vontu.com/uploadedFiles/global/Ponemon-Vontu_US_Survey-Data_at-Risk.pdf>.
 