Book Chapters

"Robots and Artificial Intelligence in Health Care" in Joanna Erdman, Vanessa Gruben, Erin Nelson, eds, Canadian Health Law and Policy, 5th ed (Toronto: LexisNexis Canada, 2017)

This chapter, written with Jason Millar and Noel Corriveau, examines the increased use and evolution of robotics and AI in the health sector as well as emerging challenges associated with this increase. Our goal is to provoke, challenge and inspire readers to think critically about imminent legal and regulatory issues in the healthcare sector.

We begin in part one by offering an overview of the current use of robots and AI in healthcare: from surgical robots, exoskeletons and prosthetics to artificial organs, pharmacy and hospital automation robots, and finally the role of social robots and Big Data analytics. While great technological strides are being made, regulatory mechanisms must keep pace, ensuring that the relationship between human and machine evolves in step in order to achieve the best outcomes for patients. In an era of information overload, security standards and data protection and privacy laws will need to adapt to ensure patient safety and advocacy.

In part two, we discuss the main sociotechnical considerations. First, we examine how these considerations are changing overall medical practice as understanding and decision-making processes evolve to include AI and robots. Next, we address how social valence considerations and the evidence-based paradox both raise important policy questions about the appropriateness of delegating human tasks to machines and about changes in the assessment of liability and negligence.

In part three, we address the legal considerations of liability for robots and AI, for physicians and for institutions using robots and AI, as well as AI and robots as medical devices. Key considerations regarding negligence in medical malpractice come into play as the duty and standard of care will need to evolve amidst emerging technology. Legal liability will also need to evolve as we inquire into whether a physician who chooses to rely on their own skills, knowledge and judgement over an AI or robot recommendation should be held liable and negligent. Finally, we address the legal, labour and economic implications that come into play when assessing whether robots can be considered employees of a health care institution, potentially opening the door to vicarious liability. We remind our readers that institutions will also have to consider their direct duties to patients, including their duty to instruct and supervise in the case of robots.

Our overall aim is to leverage technology to provide accessible and efficient health care services, without over- or under-regulation. We believe this can be achieved, while mitigating risks, through the development of new social, legal and policy frameworks.

Download this chapter

"Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots" in Ryan M. Calo, Michael Froomkin, Ian Kerr, eds, Robot Law (Cheltenham: Edward Elgar Pub. Ltd., 2016).

Rapid technological development in robotics and Artificial Intelligence has given rise to a dilemma that is becoming harder to ignore. Should we continue to entrust all our decision-making to humans, fallible though they may be, or should we instead delegate decision making to robots, relinquishing control to the machines for the greater good?

This chapter, written in collaboration with my good friend and colleague Jason Millar, engages this dilemma, exploring the notion of robots as ‘experts’ rather than tools. When robots are considered mere tools, their true capabilities may be disguised. Applying the normative pull of evidence, we argue that decision-making authority should, in some circumstances, be delegated to expert robots where they can consistently perform better than their human counterparts. This shift in decision-making responsibility is especially important in time-sensitive situations where humans lack the capacity to process vast amounts of information, an advantage held by fast-computing expert robots like IBM’s Watson.

Here, we explore four hypothetical co-robotic cases in which we argue that expert robots ought to be granted decision-making authority even in cases of disagreement. We also address the responsibilities of robots placed in decision-making roles, and the challenges we are likely to face as a result. For example, unpredictable expert robots, acting under time pressure and without the ability to explain their reasoning, pose challenges for assessing liability. Overall, this chapter aims to offer a narrative of what delegating and relinquishing control to expert robots could look like, but does not assess the maintenance of human control or the trust and reliability factors required to make the decision to delegate.

Download this chapter

"Asleep at the switch? How killer robots become a force multiplier of military necessity" in Ryan M. Calo, Michael Froomkin, Ian Kerr, eds, Robot Law (Cheltenham: Edward Elgar Pub. Ltd., 2016)

This chapter was written in collaboration with one of my favourite all-time coauthors and friends, Katie Szilagyi.

Lethal autonomous weapons—machines that might one day target and kill people without human intervention or oversight—are gaining attention on the world stage. While their development, deployment and perceived superiority over human soldiers are presumed to be inevitable, in this chapter we challenge the prevalent view, arguing that the adoption of these technologies is not a fait accompli.

We begin by canvassing the state of the art in robotic warfare and the military advantages that autonomous weapons offer, aiming to scratch below the surface-level success of robotic warfare and consider the drastic effects its implementation can have on international humanitarian law, adherence to humanitarian principles, and notions of technological neutrality. International humanitarian law governs the use of particular weapons and advancing technologies in order to ensure that the imperative of humanity modulates how war is waged. In the interest of protecting civilians, military actions are therefore restricted through compliance with humanitarian principles, including proportionality between collateral injuries and military advantages, discrimination between combatants and non-combatants, and military necessity for reaching concrete objectives.

This chapter suggests that serious and catastrophic consequences become foreseeable when robots are given full autonomy to pull the trigger in complicated and context-dependent situations, and that technological neutrality is not a safe presumption. We also argue that when a disruptive technology changes the nature of what is possible, there is a corresponding expansion in what can be perceived as “necessary,” allowing lethal autonomous robots to become a force multiplier of military necessity. Ultimately, we ask our readers to consider the consequences of a future with lethal autonomous robots, when the power to implement them lies in the hands of those who have not fully come to terms with their implications.

Download this chapter

"Prediction, Preemption, Presumption: The Path of Law After the Computational Turn" in Privacy and Due Process After the Computational Turn, eds. Mireille Hildebrandt, Solon Barocas and Katja de Vries (London: Routledge, 2013).

This chapter examines the path of law after the computational turn. In framing my argument, I use Oliver Wendell Holmes Jr.’s famous “bad man” theory as a heuristic device for evaluating predictive technologies currently embraced by public and private sector entities worldwide. Perhaps America’s most famous jurist, Holmes was so fascinated by the power of predictions and the predictive stance that he made prediction the centerpiece of his own prophecies regarding the future of legal education. Holmes believed that predictions should be understood with reference to the standpoint of everyday people, made from their point of view and operationalized with their sense of purpose in mind.

In this chapter, I argue that Holmes’ vision is rapidly giving way to a very different model: machines making predictions about individuals for the benefit of institutions. This trend in today’s predictive technologies, I suggest, threatens due process by enabling a dangerous new philosophy of pre-emption. My primary concern is that the perception of increased efficiency and reliability in the use of predictive technologies might be seen as justification for a fundamental jurisprudential shift from our current ex post facto systems of penalties and punishments to ex ante preventative measures. Such a shift, I argue, would fundamentally alter the path of law by undermining the core presumptions and procedures built into the fabric of today’s retributive model of social justice, many of which would be pre-empted by tomorrow’s “actuarial justice”. Given the foundational role that due process values play in our legal system, I raise the question of whether law ought to set reasonable limits on the types of presumptions and predictions that institutions are permitted to make about people without their involvement or participation. While reliability, efficiency, and the bottom line will continue to be important social goals, I am concerned that to limit the discussion to issues of system design is to ignore the insight underlying the presumption of innocence and associated due process values—namely, that there is wisdom in setting boundaries around the kinds of assumptions that can and cannot be made about people.

This chapter does not offer concrete solutions; rather, it is written in the hope of inspiring further research on important threshold issues concerning the broader permissibility of prediction, pre-emption and presumption in the face of the computational turn.

Download this article

"Privacy, Identity and Anonymity" in International Handbook of Surveillance Studies, eds. Kirstie Ball, Kevin Haggerty and David Lyon (London: Routledge, 2012) [co-authored in equal proportion with Jennifer Barrigar].

This chapter was written in collaboration with one of my favourite readers and writers, Jennifer Barrigar. Together, we consider the complex interrelationship between privacy, identity and anonymity in an increasingly networked society through an exploration of the evolution of network technologies and the consequent shifts in social and technological architectures. The rise of ubiquitous computing, from CCTV cameras and handheld devices to digital rights management (DRM) systems and radio frequency identification (RFID) tags, has precipitated a shift in network architecture from one in which anonymity was the default to one in which nearly every online transaction is subject to monitoring and the possibility of identity authentication. We argue that this invariably affects the relationship between privacy, identity and anonymity.

Going forward, we suggest that individual experience will become increasingly characterized and shaped by ubiquitous computing, social networks, information intermediaries, actuarial justice and social sorting. By briefly examining privacy, identity and anonymity in three distinct parts, as well as offering a case study on anonymity in a networked society, we try to demonstrate that the creation of appropriate regulatory protections will depend on the preservation of commitments to fundamental underlying rights such as freedom of speech, autonomy, equality, and security of the person. We also briefly examine the extent to which an individual’s ability to manage one’s privacy, including the power to identify oneself or to speak anonymously, is inherently linked to the concept of surveillance. We conclude that, just as our desire for privacy may in some cases necessitate surveillance, so too does the ever-expanding database of personal information require that some of our performances can be separated from the person of record.

Download a preprint of this article

Emerging Health Technologies” in Canadian Health Law and Policy, 3rd ed. eds. Jocelyn Downie, Timothy Caulfield, Colleen Flood (Toronto: Butterworths, 2012) [co-authored in equal proportion with Timothy Caulfield and Jennifer Chandler].

This chapter, written with my colleague and good buddy Tim Caulfield, and with Jennifer Chandler, briefly surveys four emerging technologies that are likely to have a significant impact on Canadian health law and policy in the coming years. We start the chapter with a consideration of the Human Genome Project and how social policy might contend with the possibility of genetic discrimination. Then, we examine Radio Frequency Identification (RFID) technology as a means of linking an unconscious or disoriented patient to an electronic health record and the potential privacy implications of doing so. Next, we look at stem cell research and the questions it raises about the challenges associated with making policy in a morally contested area. Finally, we contemplate issues not yet articulated in a field not yet defined: nanotechnology and how to regulate against potentially catastrophic harms that are not yet understood.

Our aim in this chapter is not so much to prioritize or predict as it is to offer a new lens through which to consider various fundamental legal and ethical principles and their application to health law and policy in novel situations. Rather than providing comprehensive coverage of all known technologies or every issue that might possibly arise, we have chosen to sample a particular array of current and future technologies, presenting each alongside a core health law precept or principle.

After surveying these four emerging technologies and the issues they raise, the chapter ends with a brief consideration of how science and technology are transferred from the laboratory to the community, that is, how scientific research is transformed into technological applications through the process of commercialization. When the governance of science and the proper place of technology in our health care system are considered, it is important to recognize that the technologies that science enables are not neutral and that it is therefore not always appropriate to leave science to its own devices.

Download a preprint of this article

"Digital Locks and the Automation of Virtue" in From Radical Extremism to Balanced Copyright: Canadian Copyright and the Digital Agenda, ed. Michael Geist (Toronto: Irwin Law, 2010) 247-303.

This chapter examines the social and moral cost of digital locks. I trace the concept and construct of a lock all the way back to the mythical Gordian knot, revealing two essential features of locks. First, I argue that locks are important not only for what they restrict, but for what they permit. I develop this idea in the context of digital locks using the concept of automated permissions. Second, I argue that the restrictions imposed by locks come with a social and moral cost; namely, that the adoption of a universal digital lock strategy could undermine the cultivation of moral virtue.

I begin with an examination of a series of historical and cultural vignettes investigating the nature, purpose and symbolic significance of locks. I then examine digital locks and the power afforded to keyholders to control others through the automation of permissions, in effect enabling or disabling the world we live in by setting terms and conditions for its use. After discussing the control locks give to keyholders, I illustrate the potential progression of a widespread digital lock strategy and what this might mean. I then go on to ask how this might affect us as moral actors who desire to do good things. In answering this question, I try to demonstrate that a state-sanctioned, unimpeded and widespread digital lock strategy would impair our moral development by impeding our ability and desire to cultivate the practical wisdom necessary for the acquisition of morally virtuous dispositions. Finally, I briefly investigate Bill C-32, Canada’s (former) proposal for sanctioning the use of digital locks and prohibiting their circumvention. Arguing that the flaws in Bill C-32 are symptomatic of the larger digital lock strategy, I conclude that the proposed legislative solution is inelegant – a brute-force formula that fails to achieve a balanced copyright framework.

Given that the Government of Canada has recently enacted anti-circumvention provisions in its Copyright Modernization Act, I hope that you will give this chapter careful consideration.

Download this article

"The Strange Return of Gyges Ring" in Lessons from The Identity Trail: Anonymity, Privacy and Identity in a Networked Society (New York: Oxford University Press, 2009).

Book II of Plato’s Republic tells the story of a Lydian shepherd who stumbles upon the ancient Ring of Gyges that has the power to make him invisible. In the story, the shepherd uses the ring to gain secret access to the castle where he kills the king and overthrows the kingdom. Plato uses this story to pose the classic philosophical question: why be moral if one can act with impunity? 

In a network society, where social structures and activities are organized around electronically processed information networks, this classic philosophical question ceases to be the luxury of an ancient philosopher’s thought experiments.

This article, written as an introduction to the anthology Lessons from the Identity Trail, begins by discussing “the network society” and re-articulates the lesson from the tale of the Ring of Gyges in the context of anonymous online activity. The article goes on to describe the three themes discussed in the anthology: privacy, identity, and anonymity. 

Download a preprint of this article

"The Internet of People?: Reflections on the Future Regulation of Human Implantable Radio Frequency Identification" in Lessons From The Identity Trail: Anonymity, Privacy and Identity in a Networked Society, eds. Ian Kerr, Valerie Steeves and Carole Lucock (New York: Oxford University Press, 2009).

In 2004, twenty-five global law students and I listened to the proprietor of the Baja Beach Club in Barcelona pitch the idea of getting implanted with an RFID tag to allow easy access to the VIP lounge of the club and to act as an easy payment system for booze at the bar. Would my students seriously consider getting chipped?

The technological possibility of an RFID-enabled internet of things looms on the horizon. Companies like Applied Digital Solutions Inc., makers of the VeriChip, have been working hard to ensure this. In this chapter I argue that our privacy laws are not equipped to protect us in this fast-approaching new infrastructure.

Part I offers a brief account of RFID technologies. I define and explain the purpose and use of RFID tags. After describing various RFID applications, I suggest that if RFID becomes a mainstream technology, it could be truly transformative, enabling “the internet of things.” I then offer a brief overview of RFIDs in the realm of health care. This overview provides an example of the issues that can arise regarding the regulation of the many functions of human-implantable RFIDs.

In Part II, I provide a brief explication of the existing regulatory environment for RFID. I review existing laws applicable to RFID, including regulations governing (a) communications, (b) electronic waste, (c) health and safety, and (d) privacy. The purpose of this section is to set the stage for Part III, where I set out my belief that current approaches are too narrow and will fall short in protecting our privacy and autonomy interests if implantable RFID becomes part of the infrastructure of the so-called internet of things.

In order to grasp the potential shortcomings of our current regulatory environment, in Part IV I aim to show that human-implantable RFIDs are just one of the many implantable devices being developed as part of a growing trend to merge human bodies with machine parts.  In Part V, I conclude the chapter by suggesting that, rather than giving up core principles and values just because they are in tension with RFID and other emerging technologies, we must (i) rethink the appropriate application of these principles, and (ii) determine whether there is sufficient justification for moving forward with human-implantable RFID, ubiquitous computing, and the internet of things.

Download this article

"Soft Surveillance, Hard Consent" in Lessons From The Identity Trail: Anonymity, Privacy and Identity in a Networked Society, eds. Ian Kerr, Valerie Steeves and Carole Lucock (New York: Oxford University Press, 2009) [co-authored in equal proportion with Jennifer Barrigar, Jacquelyn Burkell and Katie Black]. Also available in Consent and Law: Problems and Perspectives, ed. N. Sudarshan (India: Amicus Books, 2009) 23-45.

In this article, my co-authors and I explore how, like newer approaches to state paternalism, both public and private sector surveillance increasingly rely on what Gary Marx once referred to as “soft” measures. Taking their cue from the behavioral sciences, governments and businesses have come to realize that kinder, gentler approaches to personal information collection work just as well as coercion or deceit, and that engineering consent is the key to their success.

In our examination of this fascinating topic, we contemplate various aspects of the role of consent in the collection, use and disclosure of personal information. After demonstrating how consent-gathering processes are often designed to quietly skew individual decision-making while preserving the illusion of free choice, we point out the dangers of these subtle schemes as well as the inadequacies of current privacy laws in dealing with them. In examining some potential remedies, we investigate the practical implications of data protection provisions that allow individuals to “withdraw consent.” Canvassing recent interdisciplinary work in psychology and decision theory, we try to explain why such “withdrawal of consent” provisions will not generally provide effective relief and argue that there is a need for a higher threshold of initial consent in privacy law than in private law.

Download a preprint of this article

"Scoping Anonymity in Cases of Compelled Disclosure of Identity: Lessons from BMG v. Doe" in Contours of Privacy, ed. David Matheson (in press 2009) [co-authored with Alex Cameron].

This chapter, co-authored with Alex Cameron, explores why a Canadian Federal Court and the Federal Court of Appeal both refused to compel five Internet service providers to disclose the identities of twenty-nine ISP subscribers alleged to have been engaged in P2P file-sharing. We argue that there are important lessons to be learned from the decision, particularly in the area of online privacy. Although this case reinforces the right to online privacy, we suggest that the Court’s decision could have the ironic effect of encouraging more powerful private-sector surveillance of our online activities, and that this might result in a technological backlash by some in order to ensure that Internet users have even more impenetrable anonymous places to roam. Consequently, we encourage the Court to further develop its analysis of how, when, and why the compelled disclosure of identity by third party intermediaries should be ordered by including a broader-based public interest in privacy as an element in the analysis.

Download a preprint of this article

Deputizing the Private Sector? ISPs as Agents of the State” in Desafíos del derecho a la intimidad y a la protección de datos personales en los albores del siglo XXI. Perspectivas del derecho latinoamericano, europeo y norteamericano (2009) [co-authored with Daphne Gilbert]. Also available in Challenges of Privacy and Data Protection Law. Perspectives on European and North American Law, ed. Maria Veronica Perez Asinari and Pablo Palazzi (Bruxelles: Bruylant, 2009).

This chapter was written in collaboration with my longtime colleague and co-author Daphne Gilbert. Together, we describe the changing role of telecommunications service providers (TSPs) from trusted stewards of clients’ personal information to “agents of the state”, from gatekeepers of privacy to active partners in the fight against cybercrime. We argue that the legislative approach that has been or will soon be adopted in various jurisdictions around the world, including Canada, will lower the threshold of privacy protection and significantly alter the relationship between TSPs and the individuals who have come to depend on them to manage their personal information and private communications.

The chapter begins with an investigation of the role of TSPs as information intermediaries, and then moves to examine a Canadian online search and seizure case, where a TSP acted as an “agent of the state” by sending to the police copies of a client’s personal emails without his knowledge or consent. The Council of Europe’s Convention on Cybercrime is considered next, focusing on the privacy implications of its potential implementation in Canada and the possibility of a challenge to the constitutionality of new cybercrime laws based on the Canadian Charter.

Download a draft of this article

"Nymity, P2P & ISPs: The Implications of BMG (Canada) v Doe" in Privacy and Technologies of Identity: A Cross-Disciplinary Conversation, eds. K.J. Strandburg and D.S. Raicu (New York: Springer, 2005).

This chapter, co-authored with Alex Cameron, explores why a Canadian Federal Court and the Federal Court of Appeal both refused to compel five Internet service providers to disclose the identities of twenty-nine ISP subscribers alleged to have been engaged in P2P file-sharing. We argue that there are important lessons to be learned from the decision, particularly in the area of online privacy. Although this case reinforces the right to online privacy, we suggest that the Court’s decision could have the ironic effect of encouraging more powerful private-sector surveillance of our online activities, and that this might result in a technological backlash by some in order to ensure that Internet users have even more impenetrable anonymous places to roam. Consequently, we encourage the Court to further develop its analysis of how, when, and why the compelled disclosure of identity by third party intermediaries should be ordered by including a broader-based public interest in privacy as an element in the analysis.

Download a preprint of this article

"To Observe and Protect? How Digital Rights Management Systems Threaten Privacy and What Policy Makers Should Do About It" in Intellectual Property and Information Wealth: Copyright and Related Rights (vol. 1), ed. Peter Yu (Praeger Publishers, 2007).

I begin the chapter by distinguishing between technological protection measures (TPMs) and digital rights management (DRM) systems, examining how such technologies are used to enforce corporate copyright policies and express copyright permissions imposed by a DRM through a registration process that requires purchasers to hand over personal information. Given DRM’s extraordinary surveillance capabilities, I argue that anti-circumvention laws must contain express provisions and penalties to protect citizens from organizations using TPMs and DRM to pirate personal information, engage in excessive monitoring, and preclude people from exercising their right to access and control personal information. In determining an appropriate balance, I introduce three public policy considerations: (i) the anonymity principle; (ii) individual access; and (iii) freedom from contract. I conclude that these three recommendations would provide the sort of counter-measures necessary to offset the new powers and protections afforded to TPMs and DRM.

Download a preprint of this article

"If Left to Their Own Devices…How DRM and Anti-Circumvention Laws Can Be Used to Hack Privacy" in Michael Geist, ed, In the Public Interest: The Future of Canadian Copyright Law (Toronto: Irwin Law, 2005).

This chapter examines the anti-circumvention laws set out in Bill C-60 (Canada’s first legislative attempt in response to the 1996 WIPO treaties), provisions that aim to protect the copyright industries from individuals using devices to circumvent technological protection measures (TPMs) and digital rights management systems (DRM). I argue that the proposed anti-circumvention laws fail to address any aspects of the privacy implications of DRM, despite the obvious privacy threats that automation, cryptographic techniques, and other DRM technologies impose. I start by distinguishing between TPMs and DRMs. Then I examine how these technologies are used to enforce corporate copyright policies and express copyright permissions imposed by a DRM through a registration process that requires purchasers to hand over personal information. After illustrating DRM’s extraordinary surveillance capabilities, I suggest that such privacy considerations are especially important in light of legislative reforms that use the law to further enable DRM and facilitate its implementation as a primary means of enforcing digital copyright. I investigate three public policy considerations in determining an “appropriate balance” for DRM and privacy: (i) the anonymity principle; (ii) individual access; and (iii) DRM licenses. These lead me to offer three recommendations that would provide counter-measures necessary to offset the new powers and protections afforded to TPM and DRM if anti-circumvention laws are implemented.

Download a preprint of this article

"Should Law Protect the Technologies that Protect Copyright?" in Information Ethics in an Electronic Age: Current Issues in Africa and the World, eds. Thomas Mendina and Johannes Britz (Jefferson, North Carolina: McFarland Press, 2004).

This chapter provides a critical analysis of anti-circumvention legislation with a special focus on the extent to which the legal protection of copyright-protecting technologies might be said to undermine traditional copyright policy. While Technological Protection Measures (TPMs) hold the promise of ensuring legitimate access to digital work, when coupled with the ability to set licensing terms, TPMs provide copyright owners a much greater degree of control over work than copyright law historically allowed. After establishing the philosophical context of the protection of TPMs, I survey the four classes of technological protection: (i) general access control measures, (ii) limited access control measures, (iii) use control measures, and (iv) anti-device control measures. Then I look at three available means copyright owners have of ensuring authorized access to their works: TPMs, existing copyright law, and the law of contract. I then review some of the US case law to illustrate the social consequences of adding a fourth layer of legal protection (the Digital Millennium Copyright Act). I conclude the chapter by briefly examining how the American style of legal protection for TPMs could upset copyright law’s delicate balance between the private rights of creators and copyright owners and the public’s interest in using works subject to copyright, and suggest that there are potentially serious implications for public access to information, consumer privacy and freedom of expression.

"The Role of ISPs in the Investigation of Cybercrime" in Information Ethics in an Electronic Age: Current Issues in Africa and the World, eds. Thomas Mendina and Johannes Britz (Jefferson, North Carolina: McFarland Press, 2004).

In this chapter, co-authored by my good friend and colleague Daphne Gilbert, we describe the changing role of internet service providers (ISPs) from trusted stewards of clients’ personal information to “agents of the state”, from gatekeepers of privacy to active partners in the fight against cybercrime. We begin with an investigation of the role of ISPs as information intermediaries and consider how information intermediaries can be used by law enforcement agencies. This is done through an examination of a Canadian search and seizure case, where an ISP was said to act as an “agent of the state” after sending copies of a client’s personal emails to the police without his knowledge or consent. We argue that the “agent of the state” analysis is especially important in light of the Council of Europe’s Convention on Cybercrime. Finally, we conclude by considering the privacy implications of the evolving roles of ISPs and their shifting technological architectures, arguing that the changing face of our communications infrastructure must be built with safeguards that will not only further the goals of national security and law enforcement but will also preserve and promote personal privacy.

Download a preprint of this article

“Online Service Providers, Fidelity and the Duty of Loyalty” in Ethics and Electronic Information, ed. Thomas Mendina and Barbara Rockenbach (Jefferson, North Carolina: McFarland Press, 2002).

In this chapter, I explore the possible privacy ramifications of our increasing reliance on Online Service Providers (OSPs), not only to provide quality informational services, but also to store and otherwise manage our private information online. Acknowledging that the current architecture of the networked world is moving towards a centralized (rather than end-to-end) computing model, this chapter investigates the degree to which OSPs are in a position of control and the extent to which they are duty-bound to safeguard our personal information. In particular, I question whether the moral institution of fidelity and the law of contract will adequately govern the relationships between OSPs and their users. Given that many OSPs break their promises with impunity and others make no such promises to begin with, I suggest that an alternative set of duties might be derived from the very nature of the relationship between some OSPs and their users: the fiduciary relationship. I conclude the chapter by proposing that, when the criteria of a fiduciary relationship are met, it is possible to impose a duty of loyalty on some OSPs, requiring them to remain loyal to users whether or not it is in their best interest to do so.

Download a preprint of this article

“Personal Relationships in the Year 2000: Me and My ISP” in No Person Is an Island: Personal Relationships of Dependence and Independence (Vancouver: University of British Columbia Press, 2002).

This chapter explores the nature of the legal relationship between the Internet user and service provider by examining that relationship as a special instance of a relationship of dependence. After illustrating the incredible power that Internet Service Providers (ISPs) hold over their users’ informational privacy online, I look at the contractual underpinnings of ISP-user relations. Part of my aim is to survey the broad range of ISP-user relationships and the varying degrees of confidentiality promised by ISPs. I also explore how legislative safe harbours that require ISPs to comply with law enforcement limit online confidentiality and run the risk of chilling free expression. Next, I examine dependence and interdependence in ISP-user relationships through an application of social exchange theory and law’s concept of the “fiduciary relationship.” By casting its focus on the informational imbalance between the parties rather than the more familiar types of power imbalances (e.g., inequalities based on economics, social status, physical strength, and expertise), the chapter seeks to provide a more robust understanding of what it is that makes a relationship one of dependence, in order to assist law reformers in determining whether the relationship between Internet user and service provider is, or ought to be, governed by anything other than the contractual arrangements between the parties or the minimal requirements of enacted privacy legislation.

Download a preprint of this article

“When Computer Programs Contract Behind our Backs” in Transnational Cyberspace Law (Oxford: Hart Publishing, 2002).

“The Legal Implications of Software Agents in Electronic Commerce” in Introduction to Transnational Cyberspace Law, ed. Makoto Ibusuki (Tokyo: Nihon Hyoron Sha, 2001) [in Japanese].

“Legal Fictions” in The Philosophy of Law: An Encyclopedia, Volume I, ed. Christopher Gray (Garland Publishing, 2000) 300-04.

This chapter was written while I was a law student. It provides a brief survey of the historical views and functions of the legal fiction, a judicial device used in civil and common law reasoning. A legal fiction is a false assumption of fact made by a court in order to reconcile a specific legal result with an established rule of law. I begin by looking at the nasciturus fiction originating in Roman law, which treats the unborn child as though born for the purposes of inheritance in order to circumvent the civil code’s prescription that legal personhood begins at birth. I then look at the historical debate between Blackstone and Bentham, followed by Maine’s middle-ground view of fictions as an important tool in the development of full-blown legal systems. Turning to present-day concerns, I explore Lon Fuller’s view of fictions, extending his analysis of the risk of the fiction through my own analysis of the nasciturus fiction in contemporary private law within the context of maternal liability.

“Mind Your Metaphors: An Examination of the Inefficacy Argument as a Reason against Regulating On-line Conduct” in Ethics and Electronic Information in the 21st Century, ed. Lester Pourciau (Purdue University Press, 1999) 231-51.

This chapter provides a critique of what I call the “inefficacy argument”, which continues to be offered as one of the most common rationales against internet regulation. This line of reasoning is premised on the claim that the internet, as a decentralized communications technology, has the built-in ability to circumvent and thus render irrelevant many of our fundamental normative commitments. I argue that the regulation question is not simply a question of efficacy but is, instead, a moral question. I deconstruct the typical metaphors used in arguments against regulating online conduct and caution against allowing ourselves to believe that the realm of regulation is beyond our control. I argue that the destruction of normative commitments is not the result of new communications technologies, but rather, is the product of human interference. I conclude by proposing that normative concerns are not subordinate to practical ones and claim that to suggest otherwise is to speak in the language of metaphor and excuse, a language that will ultimately suppress the importance of individual responsibility and moral accountability.

Download a preprint of this article