- Providing for Autonomous Electronic Devices in the Electronic Commerce Act 1999
- I. The Technological Promise of Autonomous Electronic Devices
- II. Doctrinal Difficulties Associated with Automated Electronic Commerce
- III. Curing Doctrinal Difficulties by Treating Electronic Devices as Independent Legal Persons
- IV. Curing Doctrinal Difficulties by Treating Electronic Devices as Extensions of Human or Corporate Interaction
- V. Curing Doctrinal Difficulties by Treating Electronic Devices As Agents
- VI. Summary of Recommendations
I. The Technological Promise of Autonomous Electronic Devices
What Is An Intelligent Software Agent?
To begin simply, “an agent is a software thing that knows how to do things that you could probably do yourself if you had time.”15 Besides carrying out tasks on behalf of some information user, what distinguishes software agents from other computer programs is that an agent is said to perform such tasks autonomously, i.e., without oversight or intervention. Besides autonomy, other properties that are characteristic of software agents include:16
- social ability (the capacity to interact with other software agents or with human beings through a shared language)
- mobility (the ability to move around an electronic environment)
- temporal continuity (the ability to run a process continuously in an active or passive mode rather than merely performing a once-only computation)
- reactivity (the ability to perceive an environment and respond to changes that occur within it)
- proactivity (the ability to initiate goal-directed behaviour)
- goal orientedness (the ability to handle complex, high level tasks by performing operations that break down tasks into smaller sub-tasks and then prioritize the order in which these tasks will be accomplished)
- adaptivity (the ability to adjust to the habits, working methods and preferences of a user)
In the current literature, “agenthood” is often measured along two axes: agency and intelligence.17 In this context, the concept of “agency” refers to the degree of authority and autonomy given to an electronic device as it interacts with its user and with other electronic devices in an environment.18 The concept of “intelligence” in this context refers to
the degree of reasoning and learned behaviour: the agent’s ability to accept the user’s statement of goals and carry out the task delegated to it. At a minimum, there can be some statement of preferences, perhaps in the form of rules, with an inference engine or some other reasoning mechanism to act on these preferences. Higher levels of intelligence include a user model or some other form of understanding and reasoning about what a user wants done, and planning the means to achieve this goal. Further out on the intelligence scale are systems that learn and adapt to their environment, both in terms of the user’s objectives, and in terms of the resources available to the agent. Such a system might, like a human assistant, discover new relationships, connections, or concepts independently from the human user, and exploit these in anticipating and satisfying user needs.19

One of the early prototypes out of the MIT Media Lab that exemplified a number of the properties characteristic of intelligent agents was a software program called Maxim.20 Described as a “personal digital assistant”, this software exploits agent technology to manage and filter email. The program can “learn to prioritize, delete, forward, sort, and archive mail messages on behalf of a user” by “looking over the shoulder”21 of a user as he or she works with email and by making internal predictions about what the user will do with it. Once Maxim achieves a particular level of accuracy in its predictions, it begins offering suggestions to the user about how best to handle the email.
Around the same time that Maxim was being developed, Maes et al. also designed an Internet news filtering program known as Newt. After a human user provides Newt with a series of examples of news articles that would and would not be of interest, Newt uses this information-specific feedback to develop an internal model of the user’s preferences, which it then employs to filter the news and select items of interest, without any need for the human user to browse the items. Newt is also capable of retrieving articles on the basis of explicit rules provided by the user.22
Recent Applications of Intelligent Software Agents in Electronic Commerce23

More recent developments at the MIT Media Lab and elsewhere have shifted away from automating pure information management systems in favour of agent technology aimed specifically at furthering electronic commerce. PersonaLogic, for example, is a tool that helps consumers determine what to buy (product brokering) by guiding them through a large product feature space.24 This is accomplished by allowing consumers to specify constraints on a product’s features. A constraint satisfaction search engine then returns an ordered list of only those products that satisfy all the consumer’s chosen preferences. A similar product, known as Firefly, helps consumers find products.25 But instead of filtering on the basis of product features, Firefly recommends products via a “word of mouth” recommendation mechanism called automated collaborative filtering (ACF).26 “Essentially, Firefly uses the opinions of like-minded people to offer consumer recommendations. The system is currently being used to recommend commodity products such as music and books.”27
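The “word of mouth” mechanism can be illustrated with a minimal user-based collaborative filtering sketch. The similarity measure, the user names, and the ratings data below are invented for illustration; Firefly’s actual ACF algorithm is not reproduced here.

```python
def cosine_sim(a, b):
    """Similarity of two users' ratings over the items both have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (sum(a[i] ** 2 for i in common) ** 0.5) * \
          (sum(b[i] ** 2 for i in common) ** 0.5)
    return num / den if den else 0.0

def recommend(user, all_ratings, top_n=3):
    """Score each item the user has not yet rated by the similarity-weighted
    average of like-minded users' ratings for it, and return the best few."""
    mine = all_ratings[user]
    totals, weights = {}, {}
    for other, theirs in all_ratings.items():
        if other == user:
            continue
        sim = cosine_sim(mine, theirs)
        if sim <= 0:
            continue
        for item, rating in theirs.items():
            if item in mine:
                continue
            totals[item] = totals.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(totals, key=lambda i: totals[i] / weights[i], reverse=True)
    return ranked[:top_n]
```

Given a small dictionary of `{user: {item: rating}}` data, `recommend("alice", ratings)` would surface the items that users with similar tastes rated highly, which is the essence of the “like-minded people” approach described above.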
Other shopping agents have been developed that make comparisons not on the basis of products but by comparing merchant alternatives (merchant brokering). The first agent of this kind, developed by Andersen Consulting, is known as BargainFinder.28 When a user provides the name of a particular product, e.g., the CD titled Dave Matthews Band - Live at Red Rocks, BargainFinder is able to search a number of merchant Web sites and determine and compare various price differentials. More recent agents, such as Jango,29 have been developed in order to correct certain limitations found in the earlier versions of merchant brokering agents.30 Other agents exploit different mechanisms for merchant brokering. Instead of surfing the Web for the best advertised prices, the University of Michigan’s AuctionBot allows buyers and sellers to congregate in the same virtual space and participate in personalized auctions that are created by sellers who are allowed to specify parameters such as clearing times, methods for resolving bidding ties, etc.31 One of the features said to distinguish AuctionBot from a number of other auction sites is that it provides an “application programmable interface” that enables users to create their own software agents to autonomously compete in the AuctionBot marketplace.32 By virtue of this feature, human users need not invest time in the actual bidding process, which often lasts for several hours or, in some cases, several days.
One of the more promising recent developments in agent technology related to merchant brokering is the MIT Media Lab’s Kasbah.33 This system is described as an “online, multi-agent classified ad system”:

A user wanting to buy or sell goods creates an agent, gives it some strategic direction, and sends it off into a centralized agent marketplace. Kasbah agents proactively seek out potential buyers or sellers and negotiate with them on behalf of their owners. Each agent’s goal is to complete an acceptable deal, subject to a set of user-specified constraints such as a desired price, a highest (or lowest) acceptable price, and the date by which to complete the transaction. The latest version of Kasbah incorporates a distributed trust and reputation mechanism called the Better Business Bureau. Upon the completion of a transaction, both parties may rate how well the other party managed their half of the deal (e.g., accuracy of product condition, completion of transaction, etc.). Agents can then use these ratings to determine if they should negotiate with agents whose owners fall below a user-specified threshold.
Negotiation in Kasbah is straightforward. After buying agents and selling agents are matched, the only valid action in the negotiation protocol is for buying agents to offer a bid to selling agents with no restrictions on time or price. Selling agents respond with either a binding “yes” or “no”.
Given this protocol, Kasbah provides buyers with one of three negotiation “strategies”: anxious, cool-headed, and frugal - corresponding to a linear, quadratic, or exponential function respectively for increasing its bid for a product over time. The simplicity of these negotiation heuristics makes it intuitive for users to understand what their agents are doing in the marketplace.34
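The three strategies can be pictured as time-dependent bid functions. Below is a minimal sketch; the exact functional forms and all parameters are invented for illustration and are not Kasbah’s actual code, but they capture the described contrast between a linear, a quadratic, and an exponential rise toward a maximum acceptable price.

```python
import math

def bid_at(strategy, t, start_price, max_price, deadline):
    """Buying agent's bid at time t, rising from start_price toward max_price.

    'anxious'     -> linear rise
    'cool-headed' -> quadratic rise (slow early, faster late)
    'frugal'      -> exponential rise (holds back longest, then jumps)
    """
    frac = min(max(t / deadline, 0.0), 1.0)  # progress toward the deadline
    if strategy == "anxious":
        growth = frac
    elif strategy == "cool-headed":
        growth = frac ** 2
    elif strategy == "frugal":
        # normalized so growth runs from 0 at t=0 to 1 at the deadline
        growth = (math.exp(4 * frac) - 1) / (math.exp(4) - 1)
    else:
        raise ValueError("unknown strategy: " + strategy)
    return start_price + (max_price - start_price) * growth
```

At the midpoint of a negotiation an “anxious” agent has already conceded half the gap between its starting and maximum price, a “cool-headed” agent a quarter, and a “frugal” agent considerably less; all three reach the maximum acceptable price only at the deadline.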
As indicated in the above passage, Kasbah assists human users not only in the merchant brokering phase of electronic commerce but in the negotiation process as well. “Agent communication is based on a request-response protocol and is strictly agent-to-agent. There is no broadcast of messages and a third party agent cannot eavesdrop on a transaction taking place between two other agents.”35 When an agent (buying or selling) completes a transaction, a notification is sent to the user who created the agent. In a recent real-life experiment held at the MIT Media Lab, the notification messages were delivered to human users by pagers. Of course, there are other possibilities. Once the agent completes the deal, it ceases to negotiate with other agents and automatically asks the marketplace (a closed system) to remove it from the list of “active” agents. Among other things, this ensures that other agents will not be able to send it messages. According to the rules of engagement built into the design of the closed system, it is then up to the human users to “physically consummate” the transaction.36

One last example of the recent innovation in agent technology relevant to electronic commerce is Tete-a-Tete (T@T).37 The feature that distinguishes this agent technology from its predecessors is that T@T negotiates in a cooperative rather than a competitive style.38 T@T can also negotiate across multiple terms of a transaction including “warranties, delivery times, service contracts, return policies, loan options, gift services, and other merchant value-added services.”39
Future Applications of Intelligent Software Agents in Electronic Commerce
[I]t is often impossible to identify the effects of a technology. Consider the now ubiquitous computer. In the mid-1940s, when the digital computers were first built, leading pioneers presumed that the entire country might need only a dozen or so. In the mid-1970s, few expected that within a decade the PC would become the most essential occupational tool in the world. Even fewer people realized that the PC was not a stand-alone technology, but the hub of a complex technological system that contained elements as diverse as on-line publishing, e-mail, computer games and electronic gambling.40

It is unclear whether agent technology will appear in electronic commerce as part of an evolutionary or revolutionary process.41 As Hermans and others have pointed out, much will depend on the future infrastructure and architecture of the Internet, including: the chosen agent standards42; whether a homogeneous43 or heterogeneous44 architecture is adopted; whether interoperability standards will be required;45 etc. The extent to which agent technology will require an interoperability standard exemplifies but one of the many difficult choices faced by the developers of agent technology. Currently, there is much debate over the appropriate agent paradigm in electronic commerce: should its negotiation protocol be competitive or cooperative in nature?46 Guttman et al. have recently criticized the use of competitive protocols in retail markets from economic, game theoretic, and business perspectives.47 Because merchants tend to strive for highly cooperative, long-term relationships with their customers in order to maximize loyalty, customer satisfaction and reputation, Guttman et al. recommend more cooperative multi-agent decision analysis tools instead of competitive negotiation protocols such as online auctions. If this approach becomes the norm – which presently appears to be the case – an interoperability standard will indeed be necessary.
If it turns out that open standards are further developed and adopted, one might expect that electronic commerce will shift away from its current mode of interaction – a mode which is in many ways constrained by the fact that transactions take place within a closed system (e.g., MIT’s Kasbah).48 In the future, there will likely be a move towards more open, “public” systems. This will require much greater agent mobility.49 In the open marketplaces of the future, the specific negotiation protocols will likely not be predetermined. These negotiation protocols would be left to the predilections of those who design, create and employ the intelligent agents involved in particular transactions.

The future shift towards more open systems will have a significant impact on the legal treatment of automated electronic commerce. The current closed systems have the commercial advantage of clarifying all of the legal rules in advance. Recall, for example, that the gateway to Kasbah’s marketplace requires human users to adopt certain predetermined rules of engagement, many of which were built directly into the system.50 In the open systems of the future – where intelligent agents will be free to roam the Net in search of transaction partners without any preexisting commitment to the same rules of engagement as those preferred by agents encountered along the way – the threat of commercial uncertainty looms large. Unlike the original Kasbah marketplace, where the agents were purposely constrained to extremely simplistic negotiations in order to foster trust and confidence in the human users, consider the kind of legal clarification that might be required in the following future world:
Mary relies on a mobile agent to orchestrate her Friday evenings. Born months ago, the agent waits in a quiet corner of the electronic marketplace for most of the week; each Friday at noon it takes the following steps.
1. Mary's agent keeps a record of the films it selected on past occasions to prevent selecting one of those films again.
2. The agent travels from its place of repose to one of the many video places in the electronic marketplace. It uses the agent programming language's go instruction and a ticket that designates the video place by its authority and class.
3. The agent meets with the video agent that resides in and provides the service of the video place. It uses the meet instruction and a petition that designates the video agent by its authority and class.
4. The agent asks the video agent for the catalog listing for each romantic comedy in its inventory. The agent selects a film at random from among the recent comedies, avoiding the films it has selected before, whose catalog numbers it carries with it. The agent orders the selected film from the video agent, charges it to Mary's Visa card, and instructs the video agent to transmit the film to her home at 7 p.m. The video agent compares the authority of Mary's agent to the name on the Visa card.
5. The agent goes next to the Domino's pizza place. It uses the go instruction and a ticket that designates the pizza place by its authority and class.
6. The agent meets with the pizza agent that resides in and provides the service of the pizza place. It uses the meet instruction and a petition that designates the pizza agent by its authority and class.
7. The agent orders one medium‑size pepperoni pizza for home delivery at 6:45 p.m. The agent charges the pizza, as it did the video, to Mary's Visa card. The pizza agent, like the video agent before it, compares the authority of Mary's agent to the name on the agent's Visa card.
8. Mary's agent returns to its designated resting place in the electronic marketplace. It uses the go instruction and a ticket that designates that place by its place name and network address, which it noted previously.
All that remains is for the agent to notify Mary and Paul of their evening appointment. This is accomplished in the following additional steps.
9. The agent creates two new agents of Mary's authority and gives each the catalog listing of the selected film and Mary's and Paul's names. Its work complete, the original agent awaits another Friday.
10. One of the two new agents goes to Mary's mailbox place and the other goes to Paul's. To do this they use the go instruction and tickets that designate the mailbox places by their class and authorities.
11. The agents meet with the mailbox agents that reside in and provide the services of the mailbox places. They use the meet instruction and petitions designating the mailbox agents by their class and authorities.
12. The agents deliver to the mailbox agents electronic messages that include the film's catalog listing and that remind Mary and Paul of their date. The two agents terminate and the mailbox agents convey the reminders to Mary and Paul.51

It does not require much imagination to conceive of adaptations of this technology that would generate transactions much more sophisticated than the straightforward consumer purchases envisioned above. Imagine, for example, a similar agent technology applied by an industrial manufacturer that, instead of ordering pizza and a video, supports a team of software agents, each of which is dispatched to perform a particular task in conjunction with the tasks performed by other agents on the team. For example, after an agent designed to monitor the manufacturer’s supply of certain sub-components discovers that the supply is running low, it launches into action several merchant brokering agents, which are dispatched to search the Internet for the lowest prices for the various sub-components needed to manufacture the ultimate product. Once the appropriate merchant sites have been discovered and evaluated, other agents would step in to negotiate the terms and conditions upon which those separate sub-components might be purchased (including product warranties, freight rates, delivery dates, exemption clauses, etc.). Other agents would assist with the information and communications pertaining to placing the orders and arranging for the shipping and receiving of the sub-components, while a different agent would initiate electronic payment schemes. Still other agents would deal with the marketing and sales of the ultimate product, once manufactured.
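The monitoring-and-brokering stage of such a manufacturing scenario can be sketched in heavily simplified form. The threshold, merchant names, and prices below are invented stand-ins; a real system would query live merchant sites rather than a fixed table.

```python
# Invented reorder threshold and merchant quotes for one sub-component.
REORDER_THRESHOLD = 100
MERCHANT_QUOTES = {"AcmeParts": 4.20, "WidgetCo": 3.95, "PartsRUs": 4.50}

def monitor_agent(stock_level):
    """Supply-monitoring agent: trigger brokering only when stock runs low."""
    return stock_level < REORDER_THRESHOLD

def brokering_agent(quotes):
    """Merchant brokering agent: compare alternatives and pick the cheapest."""
    merchant = min(quotes, key=quotes.get)
    return merchant, quotes[merchant]

if monitor_agent(stock_level=42):
    merchant, price = brokering_agent(MERCHANT_QUOTES)
    # In the scenario above, negotiation agents would now take over
    # (warranties, freight rates, delivery dates, exemption clauses, etc.).
```

The point of the sketch is the hand-off: no human reviews the intermediate steps, which is precisely what gives rise to the legal questions examined below.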
Notice that the advent of electronic cash mechanisms52 – especially in cases where the goods bought and sold are information products not requiring a physical medium in order to execute the transaction – means that human users are no longer required to ratify or “physically consummate” agent-made agreements (as was necessarily the case in the original Kasbah experiment). Thus one ends up in a future world in which agreements are negotiated and entered into without any need for human traders to review or even be aware of particular transactions.
There is no doubt that a world such as this might create various advantages for human entrepreneurs. Such a world would spare human users from having to find, negotiate, and deal with buyers and sellers. A truly intelligent technology applied in this manner would depersonalize the process of negotiation, avoid misunderstandings resulting from language barriers and perhaps even free people to perform other important tasks or pursue more meaningful relationships.53 These systems would also allow more accurate business records to be kept, since software agents could build databases that, among other things, keep track of all interactions (whether or not a particular negotiation resulted in the formation of a contract). Some authors believe that the proper integration of the information in such databases would not only reduce transaction costs but would lead to pricing that is closer to optimal.54

Of course, such a world would also create various disadvantages.55 As programmers of intelligent agent technology become more adept, it will become possible for them to design deceitful and perhaps even malicious agent protocols. Some authors have suggested that there might be technological solutions to these technological problems: “We might have regulator agents roaming the marketplace to ensure that no illegal activity occurs.”56 It is difficult at present to know or even imagine whether agent technology could ever rise to the occasion. Even if such technology became possible, it is not clear that regulator agents could effectively operate in the open systems of the future, where there would exist an indeterminate number of potential marketplaces. Nor is it clear that we would want them to.
Deceit aside, it is also quite possible for agent technology to malfunction or in some other way carry out decision processes that do not comport with the intentions or purposes of the human user who employed the particular agent or, for that matter, the human designer of the software agent. First, as Karnow points out, software is by nature unreliable.
The failure of a complex program is not always due to human negligence in the creation or operation of the program, although examples of such negligence are legion. But, in addition, there are problems with software reliability. While it is at least theoretically possible to check to see if a program output is correct in a given instance, it has not been proven that programs can be verified as a general matter; that is, that they are correct over an arbitrary set of inputs. In fact, it appears highly unlikely that even programs which successfully process selected inputs can be shown to be correct generally.
Software reliability generally cannot be conclusively established because digital systems in general implement discontinuous input-to-input mappings that are intractable by simple mathematical modeling. This is particularly important: continuity assumptions can’t be used in validating software, and failures are caused by the occurrence of specific, nonobvious combinations of events, rather than from excessive levels of some identifiable stress factor.

The long-term operation of complex systems entails a fundamental uncertainty, especially in the context of complex environments, including new or unpredictable environments. That, of course, is precisely the situation in which intelligent agents are forecast to operate.57
Beyond the difficulties inherent in testing and verifying the response of software before it is put onto the market, it is well understood by programmers and computer scientists that producing the perfect, error-free program is a statistically impossible exercise. Software instructions are propagated through a computer system by means of a series of ones and zeros or “ons” and “offs,” with each instruction creating a discrete state within the computer. Each new instruction interacts with the instructions given before, producing new discrete states. With literally millions of lines of code and the resulting combinations of instructions, any computer processing a piece of software can exist in billions or even trillions of completely unique states. It is thus impossible to predict the computer's behavior in all situations. In many cases, even if an error is found, a programmer will decide that its correction could lead to so many new complications that leaving the error in place, knowing of its existence, is better than attempting to correct the problem.58

In addition to unreliability on the part of a software agent, the intentions of a human user are not always carried out even when the agent technology is performing reliably. For example, recent software technology developed and described by Hofstadter and Mitchell is designed specifically to handle radical shifts in context and to produce “unpredictable but pertinent results.”59 Because mobile agent technology aims to allow agents to be cross-software compatible, human users often will not know when or even where their agents are executing. When one software agent operates in conjunction with others in a cooperative agent system across platforms and operating systems, as described above, it will become next to impossible to distinguish between them and determine which agent did not properly perform its task. As two authors recently described it,
We envision a world filled with millions of knowledge agents, advisers, and assistants. The integration between human knowledge agents and machine agents will be [seamless], often making it difficult to know which is which.60
Other authors have made this point somewhat more starkly: “The biggest danger of any network-wide system that allows intelligent agents is that some of the agents will deliberately or accidentally run amuck. Agents have much in common with viruses: Both are little programs that get to seize control of a foreign machine.”61 Another commonality between some software agents and viruses is that they will sometimes mutate in order to perform their tasks. As a result, both are subject to polymorphism, a phenomenon which makes it difficult to isolate a particular program since its identity is not always persistent over time.62 Thus, if a particular intelligent agent carries out its function through a series of continuous mutations of specific bits of its program (“codelets”, as Hofstadter calls them), it is not long before that agent will become unrecognizable to the human user who created and employed it.

In addition to the phenomenon of polymorphism, a relatively new form of programming threatens to obfuscate matters further. “Neural networking” is an approach to software design that models itself after one conception of the human mind. Rather than tackling a problem through examination by brute computational force, the computer is instructed to find relationships between certain data and certain conclusions. The more often such a relationship is found to hold, the greater the weight that relationship is given. When faced with similar data later, the program uses its associations to leapfrog to the correct solution. Though this approach vastly increases the speed and sophistication of a computer’s response, the software’s ability to learn rapidly alters the software beyond its original parameters. Described as a “lack of transparency”, this phenomenon makes it quite difficult to understand the software’s decision-making process in retrospect.
Such a program might eventually develop a better ability to make predictions about the behavior of other intelligent agents than it would about its own.63
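The weighted-relationship learning just described can be illustrated with a deliberately tiny perceptron-style sketch. It is an invented stand-in, far simpler than the neural networks discussed above, but it shows the core mechanism: weights between inputs and a conclusion are strengthened or weakened each time the program's prediction is wrong.

```python
def predict(w, b, x):
    """Conclude 1 when the weighted sum of inputs crosses the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Adjust each weight whenever a prediction is wrong, so relationships
    that hold more often end up carrying greater weight."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, b, x)
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# The program "finds a relationship" between data and a conclusion: here,
# that the conclusion is 1 only when both inputs are 1 (logical AND).
AND_SAMPLES = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(AND_SAMPLES)
```

Even in this toy case, the learned behaviour lives in numeric weights rather than in explicit rules, which hints at why the decision-making process of a trained network is difficult to reconstruct after the fact.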
The future is full of question marks. Although it is by no means clear precisely what software agents will look like or how they will operate in the years to come, it is virtually certain that software agents will play a major role in the next wave of electronic commerce. Agents will no doubt be employed to assist human interaction through the various stages of a transaction from product and merchant brokering through to negotiation, sale, distribution and payment. It is not unreasonable to predict that, in time, agent technology will become sufficiently sophisticated to perform many if not all of these sorts of tasks without human oversight or intervention. Such possibilities would perhaps require programmers to develop polymorphic systems that are capable of generating creative intelligence. Some of the decisions entailed by these systems would by nature be pathological, i.e., at least some of the outcomes generated by future agents would be unintended. Still, gazing through the window to the future, the technological and commercial promise of autonomous electronic devices is immediate and apparent.
Viewing the matter through the legal lens of the here and now, it is equally obvious that agent-driven commerce will run into a wall of doctrinal difficulties viz. the formation of contracts. How the law responds to this technology is very likely to have an important effect on the future development and growth of electronic commerce. In order to fully enjoy the benefits of automation, the Uniform Electronic Commerce Act must include a mechanism that will adequately cure contractual defects so as to ensure that the transactions generated by and through computers are legally enforceable. To do so, it is necessary to examine the doctrinal difficulties associated with automated transactions in greater detail.