Journal Articles


“The Death of the AI Author” (2019). Co-authored with Carys Craig.

For years, I have been hoping to collaborate with my dear friend Carys Craig, a copyright expert and rockstar professor at Osgoode Hall whose work I have long admired. In this early draft, we confront the issue of AI authorship. In a world where robots are writing movie scripts and composing music, much of the second-generation literature on AI and authorship asks whether the increasing sophistication and independence of generative code should cause us to rethink embedded assumptions about the meaning of authorship, arguing that recognizing the authored nature of AI-generated works may require a less profound doctrinal leap than has historically been suggested.

In this essay, we argue that the threshold for authorship does not depend on the evolution or state of the art in AI or robotics. Instead, we contend that the very notion of AI-authorship rests on a category mistake: it is not an error about the current or potential capacities, capabilities, intelligence or sophistication of machines; rather it is an error about the ontology of authorship.

Building on the established critique of the romantic author figure, we argue that the death of the romantic author also and equally entails the death of the AI author. We provide a theoretical account of authorship that demonstrates why claims of AI authorship do not make sense in terms of 'the realities of the world in which the problem exists.' (Samuelson, 1985) Those realities, we argue, must push us past bare doctrinal or utilitarian considerations of originality, assessed in terms of what an author must do. Instead, what they demand is an ontological consideration of what an author must be. The ontological question, we suggest, requires an account of authorship that is relational; it necessitates a vision of authorship as a dialogic and communicative act that is inherently social, with the cultivation of selfhood and social relations as the entire point of the practice. Of course, this ontological inquiry into the plausibility of AI-authorship transcends copyright law and its particular doctrinal conundrums, going to the normative core of how law should — and should not — think about robots and AI, and their role in human relations.

Download the full article.


“Schrödinger's Robot: Privacy in Uncertain States” (April 12, 2018). 20 Theoretical Inquiries L. (Forthcoming 2019); Ottawa Faculty of Law Working Paper No. 2018-14.

As robots and AIs become ever-present in public and private life, this piece addresses an increasingly relevant question: can robots or AIs operating independently of human intervention or oversight diminish our privacy? I consider two equal and opposite schools of thought on this issue.

On the side of the robots, we see that machines are starting to outperform human experts in an increasing array of narrow tasks, including driving, surgery, and medical diagnostics. This performance track record is fueling a growing optimism that robots and AIs will one day exceed humans more generally and spectacularly, perhaps to the point where we will have to consider their moral and legal status.

On the side of privacy, I consider the exact opposite: that robots and AIs are, in a legal sense, nothing. The prevailing view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law.  

In this paper, I argue that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Using the framework of epistemic privacy, we can begin to understand the kind of cognizance that gives rise to diminished privacy. Because machines can act on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of awareness that definitively implicates privacy. I come to the conclusion that legal theory and doctrine will have to expand their understanding of privacy relationships to be inclusive of robots and AIs that meet these epistemic conditions. Today, an increasing number of machines possess the epistemic qualities that force us to rethink our understanding of privacy relationships between humans and robots and AIs.

Download the full article


"When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning" (February 20, 2019). 61 Ariz. L. Rev. 33 (2019); University of Miami Legal Studies Research Paper No. 18-3.

I wrote this piece in collaboration with my long-time pal and We Robot co-founder Michael Froomkin and our genius colleague in machine learning, Joëlle Pineau. In it, we observe that someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. In that context, we ask what the dominance of ML diagnostics will mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the long run—for the quality of medical diagnostics itself.

In our view, once ML diagnosticians are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve.

Although at first doctor + machine may be more effective than either alone—because humans and ML systems might make very different kinds of mistakes—in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are no longer easily audited or even understood by human doctors.

Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice as in preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in the overall quality of care. My co-authors and I describe salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvass various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We conclude that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.

Download the full article


"The Devil Is in the Defaults" (2017) 4:1 Critical Analysis of Law 91

This review essay explores the concept of ‘shifting defaults’ as discussed by my dear friend Mireille Hildebrandt in her truly brilliant and absolutely indispensable book, Smart Technologies and the End(s) of Law. Although even attentive readers might mistake defaults for a minor topic within her book, I argue that they are of paramount importance to Hildebrandt’s central thesis: namely, that the law’s present mode of existence is imperilled by smart technologies.

I begin by offering a taxonomy for Hildebrandt’s ‘shifting defaults’, carving them into four categories: (i) natural, (ii) technological, (iii) legal, and (iv) normative. Natural defaults, like human memory, can be shifted by a technological innovation like the written word, which augments our natural memory, reconfiguring our brains, culture and politics in the process. Technological defaults, by contrast, can be changed only with permission. I argue that their demonstrated power to influence choice, particularly when opaque to the average user, poses a significant threat to privacy, identity, autonomy, and, ultimately, many of our other normative and legal cornerstones. Legal defaults have been developed to clarify the law in the absence of a competing intention; they exist to accommodate unforeseen situations. I argue that legal defaults are regulated by courts and legislators with the aim of promoting clarity, predictability and the public good. Finally, normative defaults point to the difficulty of influencing a ‘default of usage’ once it has been established. I liken this to the philosophical notion of a ‘hardening of the categories’, whereby an established norm can be difficult to violate without breaching social standards. A comparison of legal and technological defaults reveals the latter to be especially problematic, as the authority to shift them lies entirely in the hands of private actors.

Ultimately, I argue that technological defaults should be set to maximize user privacy. A legislative mandate, 'privacy by default', could protect against technology’s proven power to shift both natural and normative defaults to influence choice and undermine autonomy. I conclude by reframing Hildebrandt’s central thesis, questioning whether the Rule of Law itself could ever be legitimately displaced by smart technologies. Careful readers who noticed my use of the word ‘legitimately’ in the preceding sentence can probably guess what my answer is.

Download the full article.


“Chief Justice John Roberts Is a Robot” (2014) University of Ottawa Working Paper. [co-authored with Carissima Mathen].

In the piece, Carissima Mathen and I ask our readers to engage with a counterfactual designed to provoke questions about the nature of judicial purpose and legal reasoning. What if, sitting on the Court at the apex of the western legal tradition, there was a machine instead of a person? What if the Chief Justice of the United States Supreme Court was a robot?

In our thought experiment we imagined the following: after a brazen terrorist attack in which Chief Justice John Roberts is seriously injured, he is delivered to the hospital for life-saving treatment and cut open on the operating table, only for it to be revealed that he is actually a highly sophisticated robot. Unbeknownst to the other justices, the entire legal profession, and the wider world, John Roberts, Robot (JR-R) was the product of an audacious experiment to see whether a social robot could successfully integrate into society without human help or oversight. In order for the exercise to play out as planned, information about JR-R’s mechanical nature was kept highly secret, and even the robot itself was programmed to believe in its humanity.

Through this extravagant hypothetical, we invite you to evaluate the legitimacy of JR-R’s tenure on the bench. Does it matter that several landmark legal decisions of the twenty-first century were written by an artificial intelligence? And what are the implications for a future that is certain to include at least some measure of mechanical jurisprudence? Today, legal scholars are grappling with the reality that integrating technology with the law will impact the way we administer justice. Embracing technology may lead to efficiencies, and perhaps even positive outcomes for access to justice and evidence-based decision making, but it may also fundamentally change our concept of what it means to follow a rule or be judged by our peers, and pose a number of other challenges to the administration of law as a human endeavor.

Drawing chiefly on theories from HLA Hart, Ludwig Wittgenstein, and Cass Sunstein, we explore three areas in which legal realities confront the legitimacy of JR-R: constitutional barriers, judicial fitness, and the law as integrity.

Download the full article.


“Evitable Conflicts, Inevitable Technologies? The Science and Fiction of Robotic Warfare and International Humanitarian Law” Law, Culture and the Humanities (2014) [co-authored with Katie Szilagyi].

Download the full article.

“Prediction, Preemption, Presumption: How Big Data Threatens Big Picture Privacy” 66 Stanford Law Review Online 65 (2013).

Published by a top-three American law journal as part of a special online symposium in which 15 pre-eminent privacy law scholars elucidate Big Data and Privacy, this article examines the social implications of predictive artificial intelligence. Contrary to the received view, I suggest that the central concern about “big data” is not about the data—it is about big data’s power to enable a dangerous new philosophy of preemption. Critically investigating the use of predictive decision-making – wherein big data is used to determine a person’s likely preferences or future actions and, accordingly, determine and limit that person’s life chances and opportunities – the article concludes that there is wisdom in limiting the kinds of assumptions that can and cannot be made about people.

Download the full article.

“Reduction to Absurdity: Reasonable Expectations of Privacy and the Need for Digital Enlightenment” in Digital Enlightenment Yearbook (IOS, 2012) eds. Jacques Bus, Malcolm Crompton, Mireille Hildebrandt and George Metakides [co-authored in equal proportion with Jena McGill].

Download this article

“Tessling on My Brain: The Future of Lie Detection and Brain Privacy in the Criminal Justice System” Canadian Journal of Criminology and Criminal Justice (2008) 50:8.

This article, written with two of my fave researchers, Cynthia Aoki and Max Binnie, investigates the future of what we call “brain privacy.”

As we all know, the criminal justice system requires a reliable means of detecting truth and lies. A battery of emerging neuroimaging technologies makes it possible to gauge and monitor brain activity without the need to penetrate the cranium. Because these technologies bypass the external physiological indicators of dishonesty relied upon by previous lie detection techniques, some neuroimaging experts believe in the possibility of reliable brain scan lie detection systems in the criminal justice system. Likewise, courts have contemplated the possibility that neuroscience might provide a means of reducing the search for truth to the existence or non-existence of certain brain states. In this article, we assert that Canadian courts’ current approach to protecting privacy cannot easily accommodate the challenges posed by these emerging technologies, and we address the resulting threat to privacy.

We begin our piece with an examination of the ‘reasonable expectation of privacy’ standard adopted by the Supreme Court of Canada, arguing that various courts across Canada have misunderstood and misapplied the Tessling decision by way of an inappropriate analogy. After a description of brain scan lie detection systems, we then examine the courts’ use of the Tessling analogy in the context of brain privacy. In addition to demonstrating the danger in a generalized judicial proposition that there is no reasonable expectation of privacy in information emanating from a private place into a public space, we conclude that a more robust account of brain privacy is required and speculate about possible sources of law from which this might derive.

Download a preprint of this article

“A Tsunami Wave of Science: How the Technologies of Transhumanist Medicine are Shifting Canada’s Health Research Agenda” (2008) Special Ed. Health LJ 13.

This article, written in collaboration with James Wishart, begins with an examination of a growing movement known as transhumanism. With thousands of members from various backgrounds and academic disciplines assembled at prestigious institutions around the world, this group is morally committed to the idea that technology ought to be used to radically alter the human condition. While the transhumanist stance may appear to be radical, in this article we argue that the project of transhumanist medicine is to be taken seriously because its underlying philosophies are already embedded in the mainstream North American health research agenda, resulting in a recent shift towards “enhancement” medicine.

In Part I, James and I briefly outline the core principles and practices of transhumanism. In Part II of the article, we examine nanotechnology as transhumanism’s technology of choice, illustrating the transhumanist vision of medical science as a self-enabled, interventionist, enhancement-focused enterprise. In Part III, we examine a shift in the agenda of Canadian federal research and development towards an enhancement-focused medical science. Finally, in Part IV, we suggest two possible implications that will result from this shift towards a transhumanist medicine.

While emerging and future human enhancement technologies may well have much to offer, Canada’s health research agenda is shifting towards a self-enabled, interventionist, enhancement-focused enterprise without pausing to consider or address its underlying philosophies or implications. In conclusion, in this brief article, we suggest that there are significant ramifications in doing so, both in terms of our core conceptions of what health is and in our sense of entitlement to it. Although we offer no concrete answers to these issues, this work is intended as the preface to an enduring discourse that is long overdue in Canadian bioethics, health law and policy.

Download a preprint of this article

“Emanations, Snoop Dogs and Reasonable Expectation of Privacy” (2007) 52:3 Criminal Law Quarterly 392-432.

In this article, co-authored by one of my fave researchers, Jena McGill, we examine the social implications of information emanations that contain valuable personal data, which radiate from our electronics, our personal effects, our homes and even our bodies. Contemplating new and emerging technologies designed to track these emissions, we consider the approach adopted by the Supreme Court of Canada with respect to existing technologies such as “forward looking infrared” and “sniffer dogs.”

We try to illuminate five main points. First, we contend that the majority of snoop dog decisions in Canadian courts have been wrongly decided, relying on an inappropriate use of judicial analogy that stems from a misreading of Tessling. Second, we warn against an excessively reductionist approach to informational privacy adopted in many recent reasonable expectation of privacy cases. Third, we warn against a non-normative approach to ‘reasonable expectations’ that is also gaining currency in several provincial courts across Canada. Fourth, we propose a different reading of Tessling, one that is better suited to the snoop dog cases and, perhaps more importantly, to subsequent application in cases concerning emerging high tech surveillance. Finally, we point to the future, suggesting that the A.M. and Kang Brown decisions are not just about snoop dogs; these two cases foreshadow the future of emanation information in a networked society.

We conclude by suggesting that courts must confront the social implications of informational privacy much more deeply than they have, interrogating its meaning, not one technology at a time, but within a larger empirical universe of information emanation. We warn that a failure to clarify the Tessling decision in the snoop dog cases and in the broader context of ubiquitous information emanation, especially alongside the maintenance of reductionist, non-normative approaches to informational privacy across Canadian courts, could seriously diminish the privacy rights of Canadians in a manner that the Supreme Court of Canada has until now been very careful to guard against.

Download a preprint of this article

“Seizing Control?: The Experience Capture Experiments of Ringley & Mann” (2007) 9:2 Ethics and Information Technology 129-139.

Will the proliferation of devices that provide the continuous archival and retrieval of personal experiences (CARPE) improve control over, access to and the record of collective knowledge as Vannevar Bush once predicted with his futuristic memex? Or is it possible that their increasing ubiquity might pose fundamental risks to humanity, as Donald Norman contemplated in his investigation of an imaginary CARPE device he called the ‘‘Teddy’’?

Through an examination of the webcam experiment of Jenni Ringley and the EyeTap experiments of Steve Mann, this article, co-authored with my friend and colleague, Jane Bailey, explores some of the social implications of CARPE. Our central claim is that focusing on notions of individual consent and control in assessing the privacy implications of CARPE, while reflective of the individualistic conception of privacy that predominates western thinking, is nevertheless inadequate in terms of recognizing the effect of individual uptake of these kinds of technologies on the level of privacy we are all collectively entitled to expect. Jane and I urge that future analysis ought to take a broader approach that considers contextual factors affecting user groups and the possible limitations on our collective ability to control the social meanings associated with the subsequent distribution and use of personal images and experiences after they are captured and archived. We ultimately recommend an approach that takes into account the collective impact that CARPE technologies will have on privacy and identity formation and highlight aspects of that approach.

Download a preprint of this article

“The Medium and the Message: Personal Privacy and the Forced Marriage of Police and Telecommunications Providers” (2006) 51:4 Criminal Law Quarterly 469-507. Also available in Internet Service Providers: Law and Regulation, ed. L. Padmavathi (India: Amicus Books, 2009) 60-100 [co-authored in equal proportion with Jena McGill and Daphne Gilbert].

Businesses and law enforcement agencies in Canada are increasingly interested in learning who is doing what online. Persistent client-state HTTP cookies, keystroke monitoring and a number of other surveillance technologies have been developed to gather data and otherwise track the movement of potential online customers. Many countries have enacted legislation requiring telecommunications service providers (TSPs) to build a communications infrastructure that allows law enforcement agencies to gain access to the entirety of every telecommunication transmitted over their facilities. Canada is considering doing the same.

This article, co-authored with my longtime bud and colleague, Daphne Gilbert, and one of our fave researchers, Jena McGill, investigates the changing role of TSPs from gatekeepers of privacy to active partners in the fight against cybercrime. We argue that the legislative approach provoked by the Council of Europe’s Convention on Cybercrime, and soon to be adopted in Canada, will lower the threshold of privacy protection and significantly alter the relationship between TSPs and individuals.

We begin our article with a brief investigation of the role of TSPs as information intermediaries. After that, we examine R. v. Weir, a Canadian search and seizure case involving a TSP that acted as an ‘agent of the state’ by sending police copies of a customer’s personal emails without a warrant and without notice to the customer. Next, we examine the Council of Europe’s Convention on Cybercrime, an instrument that calls for state signatories to implement provisions that will mandate an expedited interaction between TSPs and the police. Focusing on its potential implementation in Canada, we argue that the proposed Bill would lead to a lower threshold of privacy protection, requiring recourse to the Canadian Charter of Rights and Freedoms. Finally, we conclude by considering the privacy implications of the evolving roles of TSPs and their shifting technological architectures. We predict that privacy invasive practices that used to happen infrequently and with judicial oversight will soon become part of TSPs’ business routine. Our claim is that the evolving roles of TSPs and the shifting architecture of our communications infrastructure must be built with various safeguards that will not only further the goals of national security and law enforcement but will also preserve and promote personal privacy.

Download a preprint of this article

“Let’s Not Get Psyched Out of Privacy: Reflections on Withdrawing Consent to the Collection, Use and Disclosure of Personal Information” (2006) 44 Canadian Business Law Journal 54.

In this article, co-authored with Jennifer Barrigar and Jacquelyn Burkell, we investigate the conception of consent in PIPEDA, Canada’s private sector privacy law, with special emphasis on the right of individuals to withdraw consent. Instead of viewing consent in isolation, we read PIPEDA as providing a framework which aims to build a culture that better understands the importance of privacy protection. PIPEDA and similar data protection laws around the globe require consent prior to the collection, use, or disclosure of most personal information; moreover, we suggest that PIPEDA sets a higher threshold for obtaining consent than would be afforded by way of private ordering. Unlike the law of contracts – where consent is seen as a single transactional moment – PIPEDA generally allows the information subject to withdraw consent at any time. On this basis, we argue that PIPEDA’s consent model is best understood as providing an ongoing act of agency to the information subject that does not treat consent as an isolated moment of contractual agreement during an information exchange.

We try to demonstrate why the transactional approach to consent is wrongheaded through an examination of the psychological barriers to withdrawing consent. In our view, this inter-disciplinary approach informs a more robust approach to privacy protection in general and to the notion of consent as an act of ongoing agency in particular.

Download a preprint of this article

“BuddyBots: How Turing’s Fast-Friends are Under-Mining Consumer Privacy” (2005) 14 Presence: Teleoperators and Virtual Environments 647-655.

This article, co-authored by Marcus Bornfreud, examines how intelligent agent technologies are currently being deployed in virtual environments by online businesses. In furtherance of various corporate strategies involving marketing, sales and customer service, BuddyBots are capable of altering consumers’ legal rights and obligations. We focus on a rapidly evolving field known as “affective computing,” wherein the creators of some automation technologies utilize various principles of cognitive science and artificial intelligence to generate avatars capable of garnering consumer trust. We demonstrate how such trust can be exploited to engage in extensive, clandestine consumer profiling under the guise of friendly conversation and show how BuddyBots and other such applications have been used by businesses to collect valuable personal information and private communications without lawful consent. As an antidote, we offer some basic consumer protection principles, with the aim of generating a more socially-responsible vision of the application of artificial intelligence in automated electronic commerce.

Download a preprint of this article

“Virtual Playgrounds and BuddyBots: A Data-Minefield for Tinys and Tweenies” (2005) 4 Canadian Journal of Law & Technology 91-105 [co-authored in equal proportion with Val Steeves].

This article, co-authored by my good friend and colleague Valerie Steeves, a professor of criminology at the University of Ottawa, examines the online world of tweens (kids, not quite teens) looking at some of the places they play, chat and hang out online. We argue that these spaces are surreptitiously defined by commercial imperatives that seek to embed surveillance deeper and deeper into children’s playgrounds and social interactions. We show how online marketers do more than implant branded products into a child’s play; they collect minute and often intimate details of a child’s life. Part of our aim is to demonstrate that they do so by building relationships of trust between the child and the corporate brand. Although marketing to children in this way is not new, a networked environment magnifies the effect on a child’s identity because it opens up a child’s private online spaces to the eye of the marketer in unprecedented ways. Online marketers can invade the child’s privacy in a profound sense, by artificially manipulating the child’s social environment and communications in order to facilitate a business agenda. We offer several striking examples of this, including interactions with “buddybots”, automated software programs that can engage in realtime chat and create the illusion of friendship. This article was originally prepared for a workshop organized by On the Identity Trail for the 2005 Computers, Freedom and Privacy Conference in Seattle, called Keeping an Eye on the Panopticon: A Workshop on Vanishing Anonymity and subsequently published in the Canadian Journal of Law & Technology.

Download a preprint of this article

“Two Years On the Identity Trail”, Ian Kerr and Hilary Young (2005) Canadian Privacy Law Review.

This brief article describes the activities of On the Identity Trail, the SSHRC INE funded privacy project for which I am principal investigator. Co-written with Dr. Hilary Young, currently a second year law student at the University of Ottawa, the article reports on the many milestones that our project accomplished during its first two years.

Download a preprint of this article

“Hacking@Privacy: Anti-Circumvention Laws, DRM and the Piracy of Personal Information” (2005) Canadian Privacy Law Review.

This article is a shorter adaptation of “If Left to Their Own Devices: How DRM and Anti-Circumvention Laws Can Be Used to Hack Privacy” in M. Geist, In The Public Interest: Canadian Copyright in a Digital Age (Toronto: Irwin Law, 2005).

In it, I examine Canada’s recently proposed anti-circumvention laws set out in the former Bill C-60. The proposed laws would have protected the copyright industries against individuals using devices to circumvent technological protection measures (TPMs) and digital rights management systems (DRM). I argue that the proposed anti-circumvention laws fail to address any aspect of the privacy implications of DRM, despite the obvious privacy threats that automation, cryptographic techniques, and other DRM technologies pose. I provide three public policy considerations in determining an “appropriate balance” for DRM and privacy: (i) the anonymity principle; (ii) individual access; and (iii) DRM licenses. I conclude by giving three recommendations that would provide the counter-measures necessary to offset the new powers and protections afforded to TPM and DRM if Canada’s anti-circumvention laws are implemented as policy: (i) an express provision prohibiting the circumvention of privacy by TPM/DRM; (ii) an express provision stipulating that a DRM license is voidable when it violates privacy law; and (iii) an express provision permitting the circumvention of TPM/DRM for personal information protection purposes.

Download a preprint of this article

“The Implications of Digital Rights Management for Privacy and Freedom of Expression” 2:1 (2004) Information, Communication and Ethics in Society 87-94 [co-authored in equal proportion with Jane Bailey].

This article was co-authored by my good friend and colleague Jane Bailey, a law professor at the University of Ottawa. We examine some of the broader social consequences of enabling digital rights management (DRM), focusing particularly on two central features of DRM: (i) its surveillance function and (ii) its ability to unbundle copyrights into discrete and custom-made products. We conclude that the promulgation of current uses of digital rights management has the potential to transfer control over intellectual creations from various public powers to the invisible hands of private actors. We also try to show that the current DRM strategy has the potential to seriously undermine our fundamental public commitments to personal privacy and freedom of expression. This article stems from a presentation that we made at ETHICOMP 2004 in Syros, Greece. Among other things, the food there was truly incredible!

Download a preprint of this article

“Bots, Babes and the Californication of Commerce” (2004) 1 Ottawa Law and Technology Journal 285-325.

This article was my first scholarly contribution to On the Identity Trail. The study traces the architectures of human-computer interaction back to their conceptual origins in the field of artificial intelligence as the context for studying some of the lesser known consequences of today’s automation tools and their potentially harmful effect on everyday consumers. It illustrates how artificial intelligence can be used to simulate familiarity and create the illusion of friendship, sometimes with the aim of misdirecting consumers. It also exposes various forms of surreptitious surveillance that take place in the course of automated ecommerce and demonstrates how certain human-machine interactions can be used to diminish consumers’ ability to make informed choices, thereby undermining the consent principle required by data protection and privacy law. I think that this work constitutes one of the few published attempts to link existing work on privacy and data protection with future research on the human-machine merger.

Download a preprint of this article

“Building a Broader Nano-Network” (2004) 12 Alberta Health Law Review 57-63. [co-authored with Goldie Bassi].

This article, co-authored by Goldie Bassi, is a shorter adaptation of “Not That Much Room? Nanotechnology, Networks and the Politics of Dancing,” which was published in the Health Law Journal.

Download the article here

“Not That Much Room? Nanotechnology, Networks and the Politics of Dancing” (2004) 12 Health Law Journal 103-123.

Although there is a broadening social interest in the development of powerful and general nanotechnology, the public discourse to date has largely avoided a comprehensive examination of its social dimensions. Instead, the scientific debate has focused mostly on what is and is not scientifically possible. In this regard, much attention has been paid to the feasibility of Richard Feynman’s famous 1959 vision, i.e., whether it is possible to manufacture complex molecules atom-by-atom. Feynman’s vision has been fiercely debated in the scientific literature and the popular press. This article, co-authored by Goldie Bassi, examines the most famous version of this debate: a set of exchanges between Richard Smalley and Eric Drexler. We argue that, while this politically-charged debate has been extremely influential within scientific circles, the traditional point/counter-point approach to scientific dialogue does not provide an adequate basis for building normative or regulatory structures for nanotechnology. Although the development of sound social policy about a given technology must certainly commence with considerations about what is presently foreseeable, we suggest that it is also important to contemplate possibilities that are not necessarily congruent with today’s forecasts. We further propose that scientific forecasting is itself an insufficient social safeguard against a technology said to have the potential to revolutionize our ability to control and manipulate matter. In examining this claim, we demonstrate the power of scientific networks to shape policy agendas, control the development and implementation of new technologies, and influence the manner in which they are ultimately regulated. In response, we recommend that policy makers embrace a foresight model that aims to develop a broader network of social participants in their deliberations about the future regulation of nanotechnology.

Download a preprint of this article

“Mesures de protection techniques: Partie I – Tendances en matière de mesures de protection technique et de technologies de contournement” (2003) 15:2 Les Cahiers de Propriété Intellectuelle 575-617 [co-authored in equal proportion with Alana Maurushat and Chris Tacit].

“Mesures de protection techniques: Partie II – La protection juridique des MPT” (2003) 15:3 Les Cahiers de Propriété Intellectuelle 805-863 [co-authored in equal proportion with Alana Maurushat and Chris Tacit].

“Technological Protection Measures: Tilting at the Copyright Windmill” (2003) 34 Ottawa L. Rev. 9-82 [co-authored in equal proportion with Alana Maurushat and Chris Tacit].

This article, co-authored by Alana Maurushat and Chris Tacit, is an adaptation of a longer, two-part study commissioned by the Department of Canadian Heritage. It examines the policy questions associated with Canada’s decision whether to ratify the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty, focusing on the extent to which Canadian law ought to protect the technologies that protect works subject to copyright in a digital environment. We commence with a detailed description of the current state of the art in technological protection measures (TPMs). We conclude that, until the market for digital content and the norms surrounding the use and circumvention of TPMs become better known, it is premature to ascertain the appropriate legal response. Consequently, we made what has turned out to be a relatively controversial suggestion: that Canada should not implement any new legal measures to protect TPMs at this time. Recognizing the possibility that such measures might need to be adopted for political reasons, we then recommended that the legislative creation of an access-control right must be counter-balanced by a newly introduced access-to-a-work right. Finally, we pointed out that before asking whether and under what circumstances copyright legislation ought to protect TPMs, perhaps it is necessary to first ask whether and under what circumstances TPMs should be permitted to flourish.

Download a preprint of this article

“Ensuring the Success of Contract Formation in Agent-Mediated Electronic Commerce” (2001) 1 Electronic Commerce Research Journal 183-202.

In this article, I examine a number of contractual issues generated by the advent of intelligent agent applications. The aim of this study is to provide legal guidelines for the developers of intelligent agent software by addressing the contractual difficulties associated with automated electronic transactions. I investigate whether the requirements for a legally enforceable contract are satisfied by agent applications that operate independently of human supervision, and provide an analysis of whether proposed and enacted electronic commerce legislation in various jurisdictions is sufficient to cure the inherent deficiencies of traditional contract doctrine. Given the trend towards automated electronic commerce, I conclude by highlighting the legal requirements that must be met in order to ensure the success of agent technology in the formation of online contracts.

Download a preprint of this article

“The Legal Relationship Between Online Service Providers and Users” (2001) 35 Canadian Business Law Journal 1-40.

This article is an adaptation for the business sector of an important multi-disciplinary research initiative that was commissioned by the Law Commission of Canada, the Canadian Law & Society Association, the Canada Council of Law Deans and the Canadian Association of Law Teachers. The broader project, which resulted in a book, explored relationships of dependence and interdependence. In this article, I examine the relationship between online service providers and internet users as a relationship of dependence in order to investigate whether ISPs might ever owe fiduciary obligations that would preclude them from disclosing a user’s personal information or private communications.

Download a preprint of this article

“Pregnant Women and the Born Alive Rule in Canada” (2000) 8 Tort Law Review 713-19.

In this article, I examine the theory of liability for pre-natal injuries adopted by Canadian courts. In 1933, the Supreme Court of Canada became the first common law appellate court to allow a child born alive to succeed in negligence against a third party for pre-natal injuries. While the “born alive” rule may appear unproblematic vis-a-vis third party negligence, it becomes theoretically unruly in cases where a child sues his or her own mother for pre-natal injuries. The Supreme Court faced this issue in Dobson v. Dobson and for policy reasons found that pregnant women are immune from maternal tort liability in negligence. I argue that the decision to adopt public policy considerations to the exclusion of a principled approach ultimately sidesteps the issue of when the relationship between a pregnant woman and her foetus gives rise to a legal duty of care.

Download a preprint of this article

“Contract Formation in The Age of Automation: A Study of the Attribution Rules in Electronic Commerce Legislation” (2000) 61 Revista del Colegio de Abogados de Puerto Rico 208-245.

Download this article

“Electronic Miscommunication and The Defamatory Sense” (2000) 15 Canadian Journal of Law & Society 81-110 [co-authored in equal proportion with Jacquelyn Burkell].

This article was co-authored by my good friend and colleague, Jacquelyn Burkell, who is a professor in the Faculty of Information and Media Studies at the University of Western Ontario. In it, we examine the effect that cultural and technological changes have had on interpersonal communication. Our aim is to provide an interdisciplinary explanation for the proliferation of defamation in electronic media. We argue that the absence of certain extra-linguistic cues and established cultural conventions in the electronic environment often results in miscommunication which – if not itself defamatory – gives rise to emotional exchanges between interlocutors in a manner that provokes defamation. We conclude by rejecting the naive point of view that a libel published through the Internet ought to be dealt with in exactly the same way as a libel published in a newspaper. In the end, we suggest that further empirical research about the content that is produced as a consequence of contextual challenges in electronic communication is necessary.

Download a preprint of this article

“Spirits in the Material World: Intelligent Agents as Intermediaries in Electronic Commerce” (1999) 22 Dalhousie Law Journal 189-249.

This article is a scholarly extension of some earlier work of mine commissioned by the Uniform Law Conference of Canada. In it, I provide a critical evaluation of the various solutions that might be adopted by a legislature seeking to cure formal defects in agreements that are negotiated and entered into by software programs, independent of human review. I begin by examining the current and future state of intelligent agent technology. After that, I outline the barriers to automated electronic commerce inherent in traditional contract doctrine. Then I argue against the proposal to cure doctrinal difficulties by deeming electronic devices to be legal persons. I also investigate the merit of the legislative approaches adopted by UNCITRAL, the National Conference of Commissioners of Uniform State Laws (U.S.), and the Uniform Law Conference of Canada. In the end, I offer up an alternative approach, based on the law of agency.

Download a preprint of this article

“Indemnity for the Cost of Rearing Wanted Children From Unwanted Pregnancies” (1998) 6 Tort Law Review 120-124.

In this brief article, I examine how the use of the terms “wrongful pregnancy” and “wrongful birth” in unwanted pregnancy litigation obfuscates, rather than clarifies, the true issues raised in such proceedings. Using the English Court of Appeal case, R. v. Croydon Health Authority, I highlight that it is incumbent upon legal scholars and the judiciary to employ a consistent terminology that is straightforward and useful.

Download a preprint of this article

“Pre-Natal Fictions and Post-Partum Actions” (1997) 20 Dalhousie Law Journal 237-274.

This article was my first ever publication, stemming from my doctoral dissertation in philosophy. The article provides a theoretical investigation of the problems for personhood theory raised by the facts in Dobson v. Dobson, one of the most well known and controversial tort law cases considered by the Supreme Court of Canada. This case involved a determination of whether an injury to an unborn child (caused by his mother’s negligent driving while pregnant) was recoverable once the child was born alive. In this article I provide a critique of the traditional theory of legal personhood, offering an alternative approach. Portions of this article were quoted with approval by the majority of the Supreme Court of Canada, who then adopted my suggested approach to personhood in resolving the dispute, consequently discarding the received view of the past several decades. This work directly affected the law in Canada and helped to clarify the Supreme Court of Canada’s theory of legal personhood.

Download a preprint of this article