|Donald S. Clark
Federal Trade Commission
600 Pennsylvania Ave. N.W.
Washington, D.C. 20580
April 30, 1999
I have been an active participant in the drafting process for the proposed Uniform Computer Information Transactions Act (formerly labelled proposed Article 2B of the Uniform Commercial Code) and the Uniform Electronic Transactions Act, and an occasional participant in the United States Department of State's Advisory Committee on Private International Law: Study Group on Electronic Commerce. Several legal rules have been proposed for electronic commerce that would severely disadvantage small and large customers.
Here are my recommendations. In many cases, they are self-evident or they have been so widely discussed elsewhere that additional verbiage from me would provide no additional public benefit. I have expanded on a few of these points below.
COMMENT ON RECOMMENDATION 1 -- DEFINITION OF A CONSUMER
The profile of "work" and "commerce" has changed dramatically since the development of the personal computer. The computer gives individuals a printing press, a storefront, access to suppliers, all of the things we normally associate with a business, but when you look at the actual parties (customers or sellers), you discover an unemployed person who is trying out network marketing rather than filing a claim for welfare. Or children. (The front page of USA Today features a business run by children.) Or other people who we would all think of as "consumers." A customer who is buying goods or services for such a business should be thought of as a "consumer" rather than as a sophisticated business.
Additionally, the technology used to conduct electronic commerce (computers, internet connections, etc.) is no less mysterious to proprietors of small traditional businesses (dentists, doctors, convenience store owners) than it is to individuals who are buying merchandise for personal, family or household use. The contracting style that is fashionable for electronic commerce involves presentation of highly detailed, non-negotiable forms (often, presentation only after the customer has paid for the merchandise or service or license). A small business owner has no more negotiating power in this transaction than an individual homemaker, and it is entirely unrealistic to expect either of them to retain a lawyer to interpret the terms.
Narrow definitions of "consumer" that exclude small businesses will put small businesses into an unreasonable and unfair situation. The "consumer" will have rights that the business does not have, even though the consumer will often be more technologically sophisticated. And the small business will not have the bargaining power or the legal budget that we normally associate with business-to-business transactions. A contract with terms that would be unenforceable against a consumer, and would be negotiated out by a larger business, would be enforceable against the small business.
For example, the proposed Uniform Computer Information Transactions Act (UCITA -- formerly labelled as proposed Article 2B of the Uniform Commercial Code) defines "consumer" narrowly and defines "mass-market" more narrowly than the definition proposed here. Many small business transactions, such as those involving any kind of vertical market software, will not be "mass market."
UCITA eliminates the perfect tender rule for non-mass-market customers.
In a society that values entrepreneurship and independent businesses, we should be careful to adopt electronic contracting rules that do not disadvantage small businesses.
I recommend a $25,000 cap on the transaction size because at some point, the rationale for treating a business like a consumer runs out. The question underlying my thinking is this: "How large would a transaction have to be before it would be reasonable to pay a lawyer to interpret the terms and to assist in negotiations?" My answer, $25,000, is merely a guess. Some other number might be more appropriate.
Alternatively, one could imagine extending consumer protection to all nonnegotiable transactions involving off-the-shelf products or services that are sold, leased or licensed by merchants, but with a liability cap. For example, suppose that Customer sues Vendor for breach of contract and relies on a consumer protection rule. Customer can rely on the rule, even though Customer is a large business, but only if the amount in controversy for each transaction is less than $X (such as, $10,000).
COMMENT ON RECOMMENDATION 4 -- ELECTRONIC RECEIPT
I submitted the following comments to the chair of the Uniform Electronic Transactions Act drafting committee. Please note that the proposed Article 2B referred to in this memo is no longer a proposed amendment to the Uniform Commercial Code. It is now renamed Uniform Computer Information Transactions Act.
The 2B and UETA rules are that electronic mail has been "received" at the time the mail reaches the consumer's ISP (internet service provider). Legal notices may be sent by e-mail so long as the consumer "agrees" to this in a contract.
The determination that e-mail has been received is not a matter of rebuttable (or bursting bubble) presumption but is instead a fact established as a matter of law even when the notice was not in fact received/read by the consumer (intended recipient).
MY RECOMMENDATION IS THAT ELECTRONIC MAIL SHOULD NOT BE CONSIDERED AS HAVING BEEN RECEIVED UNTIL THE ADDRESSEE HAS ACTUALLY OPENED THE MESSAGE.
Businesses will adopt electronic delivery of legal notices when it is convenient for them. Consumers will often "agree" to electronic notice by buying products that have an electronic delivery clause in the fine print or by receiving a notice of change of contract terms from a bank.
Electronic mail is still a new technology for our culture. (Proof? Half of the U.S. doesn't yet use it.) We don't yet have a tradition of regularly checking e-mail, the way we check postal mail. Many people check their accounts irregularly, and open (or have opened for them for free) email accounts that they never or rarely use. ISP's have no tradition of reliable delivery and no liability if they fail to deliver e-mail. A rule that states that e-mail is received when it hits the ISP subjects the recipient to risk because
(a) many people will not retrieve and read that e-mail in a timely manner. We are using a commercial law to force a cultural change.
(b) there is no assurance whatever that the message, once received by the ISP, will actually reach the person, even if the person regularly attempts to retrieve mail.
(c) it is easy to abuse the system under the 2B/UETA definitions, to create dummy e-mail accounts that the consumer will be required to check but that the consumer will probably not check.
Most businesses will use e-mail in good faith. However, to the extent that a less honest business can benefit from holding consumers accountable for having received messages / notices that they have never actually received, 2B and UETA provide the opportunity to take advantage of this disparity.
Article 2B adopts the following definition for the receipt of a notice:
Similarly, the UETA says:
SECTION 115. TIME AND PLACE OF SENDING AND RECEIPT.
(b) Unless otherwise agreed between the sender and the recipient, an electronic record is received when the electronic record enters an information processing system that the recipient has designated or uses for the purpose of receiving electronic records or information of the type sent in a form capable of being processed by that system and from which the recipient is able to retrieve the electronic record.
HYPOTHETICALS THAT ILLUSTRATE THE PROBLEMS
1. Vendor creates an address for the consumer

Consumer buys consumer goods. As part of the transaction, which might be electronic or might be an in-store or over-the-phone transaction, vendor announces to consumer that it is granting consumer a free account on the internet. Perhaps the consumer will have to pay dial-up access charges (or will have to access the account from some other ISP). Perhaps the free account will merely be a free "permanent" e-mail address. (Hotmail is an example of a free account. You can obtain, for free, an account that lets you receive and send mail as firstname.lastname@example.org. To access hotmail, you need an internet access account with another ISP.)
The contract that comes with the goods or that is presented to the consumer at time of sale or that is signed by the consumer or that is delivered later to the consumer says that customer agrees to designate that this e-mail account is the place for the receipt of all notices from vendor.
Consumer already has an e-mail account, and he never uses the one provided by the vendor. A year later, vendor sends the consumer a notice (notice of a recall, notice of settlement of a class action suit, notice of something that the vendor must send to the consumer but that vendor doesn't necessarily want the consumer to read). Vendor sends the notice to the account that it provided.
Under 2B and UETA, this message has been "received" by the consumer.
Suppose that Consumer has no computer, no modem and therefore no access to this internet account. Therefore, messages to Consumer at this e-mail address will NEVER be picked up by this consumer.
Under Article 2B, the messages are still received because they have come into existence in a computer (the vendor's) that is capable of processing or displaying such messages and that the recipient has "designated" in the contract.
This variation comes from the Motion Picture Association in one of its briefs critiquing Article 2B.
A business customer establishes an access account with an information provider such as Lexis or Westlaw. (Call our hypothetical vendor Lexlaw.)
The Lexlaw contract recites that customer designates that the place for receipt of notices shall be the Lexlaw web site and that the terms of the contract may be changed at any time so long as the customer has 10 days within which to object between the time of the "receipt" of the notice (publication at the website) and the time that the change comes into effect. Additionally, all other legal notices to customer are "received" by being posted at the web site, which customer agrees to regularly check.
Under 2B, these messages will be received (even if customer never looks at them and even if the customer has no internet account and no browser) when they have been posted.
Under UETA, these messages are received when posted, but only if the customer has a computer, an internet account, and a browser (or access to same), because the customer must be "able to retrieve the electronic record."
Under 2B and under UETA, customer now has a duty to check this website every few days, to see if new messages for Customer have been posted there.
If this doesn't seem like a problem, set the clock forward five years and imagine that 423 vendors have included this clause in their contract with Customer. How much time will Customer have to waste every day by checking all of these addresses?
2. Vendor designates customer's regular email address as address for notices / service but customer doesn't check

Here are examples drawn from my family. From discussions with friends and colleagues, I have concluded that my family is not atypical.
My father has an account on a well-known ISP. He checks his mail irregularly. His first experience with the internet was with a different vendor, which he didn't like. Mail sent there stayed unread for months. He perceives no duty to check his e-mail and would be shocked if time-critical legal notices were sent to him exclusively by email.
My mother recently opened an account on a well-known ISP. She checks her mail irregularly because, I think, the system still confuses her. She's a semi-retired English professor, plenty intelligent, but I doubt that she would understand a fine print clause stating that notices can be sent to her by e-mail. If she learned that notices associated with her mortgage (including foreclosure notices) or her insurance could be sent to her exclusively by e-mail, I suspect that she would immediately shut down her internet access account. For newcomers to the Net, the mailbox rule is very, very dangerous.
Last year, my daughter used her internet access account to surf the web, but she hated receiving and answering e-mail. So she didn't read her e-mail. Ever. This year, she reads her e-mail occasionally.
Why should we create a regulation that says to people like these that they must regularly check their e-mail in order to pick up legal notices? Since when does the law merchant create and impose new duties on an unsuspecting class of people, rather than codifying existing commercial practice?
3. Lost mail

Joe has become a sophisticated user. His email address these days is email@example.com. www.joe.com resides on the servers of isp-host.com, which is an internet service provider. isp-host is a reliable host for a website, and it's cheap, but it would take Joe a long distance call to reach isp-host.com. Instead, he has another inexpensive account at isp-dial.com, which has local access phone numbers around the country. (isp-host and isp-dial are fictitious names.)
An e-mail message to firstname.lastname@example.org arrives at isp-host.com, which forwards the message to isp-dial.com. Joe retrieves his mail by phoning isp-dial.com and downloading his messages. (This is not uncommon. For example, www.kaner.com is hosted on best.com and it forwards messages to my account at earthlink.net, where I retrieve them.)
Here's another variation on the same theme. Joe joins the Association for Computing Machinery. A member benefit is a mail forwarding service. He can use a permanent e-mail address, email@example.com. He cannot retrieve mail directly from acm.org. This is a forwarding service only. The ACM server will forward Joe's mail to his current ISP. This is very convenient -- it lets Joe switch ISP's without changing the e-mail address that he presents to the public.
In this case, the message to firstname.lastname@example.org arrives at the ACM computer, which forwards it to email@example.com, which Joe calls and downloads his mail from.
Under Article 2B, a message is received by Joe as soon as it reaches isp-host.com (or acm.org). The address firstname.lastname@example.org (or email@example.com) is the address that he holds out to the public for receipt of mail.
Under UETA, the message might be received by Joe when it arrives at isp-host.com, because he is (theoretically) able to retrieve the record from there. The message has not been received by Joe when it arrives at acm.org because he cannot retrieve mail directly from that account.
Suppose that for two hours on one day, mail from isp-host.com (or acm.org) gets lost in transit to isp-dial.com. (Yes, this happens. I switched my account from netcom.com after a repeated problem of lost mail between Netcom and AOL.) Thus, even though Joe diligently checks his e-mail every day, he never receives some of this day's messages, including some legal notices. Under 2B, Joe has received them anyway.
Next, suppose that the messages actually reach isp-dial.com and could have been retrieved from there, but isp-dial has a hardware problem just before Joe dials in. It loses Joe's (and many other) messages. Under 2B AND UETA, Joe has received these messages.
Next, suppose that the messages reach isp-dial.com and there is a problem while Joe downloads the message. The problem is with Joe's machine. He receives but cannot read the messages even though they were readable if he had directly logged onto isp-dial.com and read them there (which he could have done). Under 2B and UETA, Joe has received the messages.
Next, suppose that the messages reach isp-dial.com and they reach Joe's machine, but Joe has installed a filtering program to block the flood of MAKE-MONEY-FAST and SEX-TONIGHT messages that come to his e-mail account.
Unfortunately, the filter misclassifies a notice from a legitimate vendor.
Joe never even realizes the message arrived but it did arrive and was immediately erased from Joe's machine before he saw it. Under UETA and 2B, Joe has received the message.
Next, suppose that the messages reach isp-host.com (or acm.org) but they are rejected (without a rejection notice) by isp-dial.com because the postmaster of isp-dial.com has decided that isp-host.com is too spammer-friendly. isp-dial.com filters out all mail from isp-host.com. Joe is unaware of this. Under 2B, Joe has received the message because it hit isp-host.com, even though he never actually received it or notice of it.
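The lost-mail variations above can be sketched as a chain of hops, any one of which may silently drop a message with no error returned to anyone. This is an illustrative sketch, not a statement of any statutory rule; the host names are the hypothetical ones used above:

```python
def deliver(message, hops):
    """Pass a message along a chain of mail hosts.

    Each hop either forwards the message or silently loses it; in neither
    case does the sender or the recipient get a bounce or error notice.
    """
    for host, forwards_ok in hops:
        if not forwards_ok:
            return None  # lost at this hop, with no trace
    return message

notice = "Notice of recall"

# Normal day: isp-host.com forwards to isp-dial.com and Joe retrieves the mail.
assert deliver(notice, [("isp-host.com", True), ("isp-dial.com", True)]) == notice

# Bad day: isp-dial.com loses the message in a hardware failure. Under 2B
# (and, in this case, UETA), Joe has nonetheless "received" the notice.
assert deliver(notice, [("isp-host.com", True), ("isp-dial.com", False)]) is None
```

The point of the sketch is that nothing in the chain notifies either party of a loss, yet the receipt rules treat the first hop as conclusive.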
4. Former ISP dumps mail

Since 1982, I have had accounts with several online service providers, including the Source, the WELL, Compuserve, AOL, Netcom, Earthlink, Slip.net, Best, Prodigy, Planetall, and probably a few others. These days, mail comes to me at a permanent address, firstname.lastname@example.org (which is forwarded to whoever my ISP is), but most people still give an ISP's name (netcom, AOL, etc.) for their e-mail address.
Suppose that Joe has an account, email@example.com. He "agrees" to several adhesion contracts that specify that notices are sufficiently sent if they are sent by email to firstname.lastname@example.org.
Joe decides to switch to isp2.net. Joe sends change of address notices to the vendors that he is aware of. (It's easy to lose track of who put what in their fine print.) The other vendors continue to send notices to email@example.com, an account that Joe has cancelled. The messages are not forwarded to Joe and error messages are not sent back to the sender. (This no-forwarding-and-no-error-message has happened to me, both as sender and as intended recipient.)
Under 2B, Joe has received these messages because firstname.lastname@example.org was designated in several contracts as the place to send Joe messages.
Note that the Post Office will forward mail to you for free or for a small fee when you change addresses. Some ISP's will do this, but some won't.
They have no legal duty to do so. Article 2B will impose responsibility on the consumer, but none on the ISP.
All of these hypos are plausible. Article 2B does nothing to protect the consumer in any of these cases. The UETA tries to be more fair but still assumes too much reliability in the e-mail delivery system and too much sophistication on the part of current users.
Rather than implementing a law that creates constructive receipt of a message today, why not wait a few more years until the technology is less new, the patterns of use are clearer, and a tradition of service develops (or doesn't develop) among ISPs?
COMMENT ON RECOMMENDATION 5 -- ELECTRONIC RECEIPT
The UETA and UCITA are remarkable in their handling of their version of the "mailbox rule." Rather than creating a presumption that a message was received (as in the traditional mailbox rule), they DEFINE receipt as having occurred as soon as the message has reached the Internet Service Provider.
The point that I am making here is that the receipt rule should involve a presumption, not a hard and fast definition. Additionally, the intended recipient should be able to rebut the presumption of receipt, by giving evidence of probable non-receipt or by showing that the message would probably escape notice or would be discarded as spam.
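The difference between the two approaches can be expressed as a pair of decision rules. This is a simplified sketch of my recommendation, not of any statutory text:

```python
def received_definitional(reached_isp: bool) -> bool:
    """2B/UETA approach: receipt is DEFINED to occur when the message
    reaches the ISP, whatever happens to it afterwards."""
    return reached_isp

def received_presumptive(reached_isp: bool, rebutting_evidence: bool) -> bool:
    """Recommended approach: arrival at the ISP creates only a presumption
    of receipt, which the addressee can rebut (e.g., with evidence that the
    message was lost in transit or discarded as apparent spam)."""
    return reached_isp and not rebutting_evidence

# A notice that reached the ISP but was then silently lost or filtered out:
assert received_definitional(True) is True          # deemed received anyway
assert received_presumptive(True, True) is False    # presumption rebutted
```

Under the presumptive rule, a sender who actually delivered the message still wins if the addressee offers no evidence; the change matters only in the failure cases described above.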
The issue of mail filtering and of ignoring mail must be taken seriously.
Internet mail is so much cheaper than postal mail that many of us receive a lot of it. It is common for me to receive 300 messages in a single day. I don't read them all. I don't open them all. I use a filter program to discard messages that are obviously unsolicited commercial mail or that come from places that normally send such mail. Sometimes, my filters discard legitimate mail.
If a mortgage foreclosure notice comes to you with a subject header of $$$ MAKE MONEY FAST $$$, should we assume that you have "received it" and hold you responsible for reading it, even if it reaches your home computer and you see the header on your screen? Should we say as a matter of law that every consumer has to check every piece of junk mail that arrives at her computer just in case someone sent an electronic notice that looks like junk mail? (If "MAKE MONEY FAST" seems like an unfair example, what about "IMPORTANT FINANCIAL NOTICE" or "PERSONAL AND CONFIDENTIAL: ABOUT YOUR FINANCES"?) If the law says that people have to read messages with headers like these, we'll all receive a lot of them, and most of them will be junk mail.
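A minimal sketch of the kind of subject-line filter described here (the marker strings are hypothetical) shows how a legitimate notice gets discarded along with the junk:

```python
SPAM_MARKERS = ("MAKE MONEY FAST", "$$$", "SEX-TONIGHT")

def discarded_as_spam(subject: str) -> bool:
    """Crude keyword filter of the sort many users run on incoming mail."""
    upper = subject.upper()
    return any(marker in upper for marker in SPAM_MARKERS)

# Obvious junk is discarded, as intended...
assert discarded_as_spam("$$$ MAKE MONEY FAST $$$")

# ...but so is a foreclosure notice whose sender chose a junk-like header,
# and the addressee never learns that either message existed.
assert discarded_as_spam("$$$ IMPORTANT NOTICE ABOUT YOUR MORTGAGE $$$")
```

The filter cannot tell a legal notice from junk mail; only the sender's choice of subject line decides which pile the message lands in.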
COMMENT ON RECOMMENDATION 10 -- DIGITAL SIGNATURES
At the last drafting committee meeting for UCITA / Article 2B (February, 1999), the draft again adopted a strong presumption that a message bearing a digital signature had been signed by the person who appears to have exclusive access to the encryption key used to sign the transaction. To escape the presumption, you have to prove that some other person used the key and that they had no legitimate access to it. This is probably as difficult as proving that there are no white ravens, especially if the police have not caught the criminal who committed the fraud. This presumption provides an incentive for electronic fraud.
Appended below are comments of mine that were submitted to the Article 2B drafting committee and published in the UCC Bulletin in 1997. The same issues are still present. I will be glad to e-mail a later paper ("SPLAT! Requirements Bugs on the Information Superhighway") to anyone who requests it and who is willing to receive it in Microsoft Word format. Readers can contact me at email@example.com.
The Insecurity of the Digital Signature
September 26, 1997
A sender of a message purports to be a person named S.
Sender sends a digitally signed message to R, who checks with CA (a certification authority) whether the signature was made by S's key. It was. S has not yet repudiated or suspended the key. In reliance on this, R ships merchandise to the address specified in the message and bills S.
S claims that S never sent this message. R's merchandise is not in S's possession and there is nothing in S's records that indicates that S received the merchandise. Sender is a crook, who impersonated S.
Who should lose the money?
Article 2B recommends that S lose the money. The underlying assumption is that a fraudulent sender would have gained access to S's key through S's negligence.
Therefore, the burden of proof should be on S to prove non-negligence, which S can probably not do, even if S was non-negligent. The draft Uniform Electronic Transactions Act follows the same risk allocation.
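The evidentiary bind can be illustrated with a toy signing sketch. It uses Python's standard HMAC as a stand-in for the private-key operation, and the key material is hypothetical, but the point carries over to real public-key signatures: once the crook has a copy of the key, a forged signature is bit-for-bit identical to a genuine one, so neither R nor a court has anything to distinguish them by.

```python
import hashlib
import hmac

# Hypothetical key material; in a real system this would be S's private key.
S_KEY = b"S's secret signing key"

def sign(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign(message, key), signature)

order = b"Ship merchandise to the address below; bill S."

sig_by_s = sign(order, S_KEY)      # S signs the order
sig_by_crook = sign(order, S_KEY)  # the crook signs with a stolen copy of the key

# Both verify, and the two signatures are indistinguishable.
assert verify(order, sig_by_s, S_KEY)
assert verify(order, sig_by_crook, S_KEY)
assert sig_by_s == sig_by_crook
```

Because the forged and genuine signatures are identical, the only disputable question is how the key was copied, which is exactly the fact S is least able to prove.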
I have a PGP key pair, which I use to communicate electronically with my clients. I have not registered the key pair with a CA because I think that only an insane person, an ignorant person, or a fool would choose to accept such a risk allocation under modern technology. I advise my clients not to register their key pair with CA's either.
CRIMINALS WILL READ AND COPY KEYS OF NON-NEGLIGENT PEOPLE WHO WILL BE UNABLE TO PROVE THEIR NON-NEGLIGENCE
My problem is that reasonable, prudent people may have their key read and copied by a third party under circumstances that look like "normal course of business" situations, without any fault on the part of the key-holder.
Example 1: Electronic Registration
How often have you bought software and, while installing it, been encouraged to register the software electronically? In this case, you fill in a form, and the registration program will then dial the software publisher and upload the registration information to the publisher.
A couple of years ago, a company that makes a widely used electronic registration tool received an award at a software operations conference. The rationale for the award was that the tool facilitated software technical support, because it transmitted information about the customer's computer configuration as well as the information filled out by the customer. This additional information will help a support person troubleshoot your system, if you call for help.
Understand this transaction. You fill out a form that appears harmless. You allow the publisher to send this information to itself. Unknown to you, the tool lets the publisher send additional information, perhaps including a copy of your directory structure, your registry of software and hardware, your configuration files, and other stuff. This is happening today, and has been happening for several years.
I am not aware of any electronic registration program that was designed with a criminal purpose, but if programs can read your private directory structure, registry files, etc., and transmit THAT information, they can just as well also transfer all of your PGP-related information.
If your digital signature could be used like cash to order merchandise, someone will use an electronic registration technique to get this information. It's just a matter of time.
Very few customers realize that, when they register software electronically (which is the normal and requested mode by many software publishers), they might also be transmitting plenty of other private information about themselves. A reasonable, prudent non-security-expert would probably not recognize electronic registration of retail software as a security risk. But it is.
If a third party gains access to the customer's key in this way, how will the customer prove non-negligence? How will the customer ever come to realize that this was the means of access?
Example 2: Electronic Bug Reporting
There are several emerging standards for customers to report bugs (defects) electronically to software publishers. I am most familiar with E-Support, which is a reporting system developed by the Software Support Professionals Association (SSPA) and Touchstone Software. My firm is a member company of SSPA. I support its work and personally trust its executives. This use of E-Support as an example is in no way a criticism of SSPA or Touchstone.
Here you are, using your favorite word processor (I'll call it BugWare 97) from your favorite vendor (Let's use a hypothetical vendor name, ShipIt Software).
The program fails. Under a system like E-Support, you can now bring up an electronic bug report form and write your complaint/query/plea for help. (The software running on your computer system is an E-Support "client".) You have probably not been trained in software quality control and therefore your bug report will probably miss or obscure some important information. E-Support copes with this by taking a snapshot of parts of your system. It looks at your memory, system files of various kinds, etc. You are made aware of this by the E-Support folks--there is no element here of unfair surprise. You can configure E-Support so that it only transmits certain classes of information, and does not transmit other classes of information.
When the E-Support client takes a snapshot of your system, it encrypts the snapshot. You never get to see what E-Support actually sends in its bug report.
The snapshot, along with a plaintext copy of the bug report that you typed, goes back to the E-support server (probably via your e-mail system). The E-support server passes the message to ShipIt Software. It might also forward the message to your printer manufacturer, or to some third party whose product is on your system and might interact with BugWare in a way that makes a problem with one of those products appear to be a BugWare bug. If the receiver of this message is an E-support licensee, then it has the means to decrypt the E-support message and see your configuration. If it is not an E-support licensee, then it can read the plaintext complaint that you wrote, which it receives at no charge, but it cannot decrypt the information about your system.
I believe that the E-Support people are honest and have designed this system in good faith.
But what about a hypothetical product, C-Support, an E-Support look-alike manufactured by your favorite cluster of organized criminals? There is no C-support today, but if you create a financial incentive for stealing encryption keys, they can use a C-support client to do it.
Would a reasonable, prudent person recognize this as a security risk? Maybe you lawyers would say "of course." It sure looks like an obvious risk to me. But when I raised it at an SSPA forum, some attendees (executives, with years of computer support, diagnostics, or service management experience) expressed surprise and dismay that this could be a security risk. In my experience discussing this with customers and technical support specialists, unless I flag the issue to them (directly or indirectly), the security concern is rarely spontaneously raised as a potential problem with the system.
Therefore, I conclude that reasonable, prudent customers might reasonably believe that it is reasonable practice to file electronic bug reports.
So, if C-Support (the hypothetical criminal variation of E-support) took your encryption key from your system when you filed an electronic bug report, how would you know? How would you prove your non-negligence at trial?
Example 3 -- Repairs
If you have a technician service your computer, guess what: the technician has access to your hard disk. If you have an encryption key on the disk, the technician could steal it.
Example 4 -- Remote Control
It is common to allow a remote technician to use a program called "remote control" in order to diagnose problems with your computer or program. This is strongly encouraged by several software companies. Some offer discounts to customers who use remote control. Remote control allows a technician who has called in over a telephone line to control the computer as if they were right there at your keyboard.
A diagnostic session can take quite a while, and a reasonable person might walk away from this unintelligible series of commands being issued by the support technician, get a cup of coffee, and come back when the problem is closer to resolution.
The technician can download documents from your computer, probably in ways that would not be obvious (as to the content being taken) to a normal observer.
Example 5 -- Browser Security, Java Security, Etc.
We constantly hear that Browser X, or integrated office product Y, has some security flaw that allows a web site owner to put up a program that scans your hard disk when you visit their web site. Then we hear that this bug is fixed, just download version 3.04.02.21a and all will be well (until we find a new bug, which will be fixed in 3.04.02.21b).
Anyone who logs onto the internet might hit the web site of an unknown criminal who exploits an unpublicized new security flaw and gains access to the user's files. How will a reasonable, prudent person prove that they were non-negligent if this is how their key was discovered (and they don't know this)?
Example 6 -- Good Old Fashioned Hacking
Buy a fax modem. Connect it to the phone jack. Set the computer up to answer the phone when you're away, either to receive voice calls or faxes (let's not even think about modem calls). Someone calls. They thereby connect to a peripheral device on your computer and now have the opportunity to hack your machine. They copy your key and you never realize that your machine was hacked.
How do you prove your non-negligence?
Should we say that it is negligent to set your computer fax to auto-answer?
Maybe I'd personally agree (I don't do this), but this is common practice among computer owners. How can we call the ordinary behavior of reasonable people "negligent"?
Example 7 -- Computer Literate Housekeepers
It is common practice to let your housekeeper clean your house while you are not there. What stops the housekeeper from turning on your computer when you're out and copying the contents of your hard disk to her portable hard drive?
Nothing. And there'll be no trace of this on the typical home computer.
It would be unreasonable to declare a societally normal practice "negligent." But if your housekeeper steals your key, how do you prove non-negligence (unless you learn that your housekeeper is the thief)?
Conclusion: Your Key Is Not Sufficiently Safe For Strong Presumptions
There are more examples, but this is enough to make the point. Normal, prudent people who behave in ways that I would call not-unreasonable can still be in a position in which their encryption key is discovered.
If your key is compromised, without your knowledge, how much are you at risk?
You stand to lose everything. The house, the dog, all of your money, your credit rating, unlimited liability. A thief holding your key can crank out thousands of relatively small orders for merchandise in a relatively short period of time. You don't learn about them until the bills start arriving.
WE SHOULD MANAGE THE RISK RATHER THAN ALLOCATING IT
So, let's come back to the problem:
Who should pay for the stolen merchandise?
There is no fair allocation of risk here. S, R, and CA are all potential victims of the crook. There is no argument in principle that makes S or R or CA the fairer target to hit.
Rather than arguing over who to stick with the risk of potentially huge liabilities, I think that we should provide incentives in the law -- to the greatest degree that is reasonably practicable -- to reduce the potential liability.
Encryption Is Just One Security Mechanism. We Can Give Customers Control Over Additional Security Capabilities And Then More Fairly Allocate Remaining Risks To Them
My concern with digital signature technology is that it relies primarily on one security-protection mechanism, encryption. If the user's key is compromised, she is at risk of unlimited liability.
Contrast this with a credit card number, such as MasterCard. The number is transmitted in plaintext. Copies of valid numbers are available in garbage cans, on the street, in every cash register, etc.
There is nothing like encryption in this system, but there is a great deal of risk-of-loss limitation in the MasterCard system. Shortly, I'll list some techniques that member banks use to limit their losses from fraud. Each of these techniques could (in theory) be used with a digital signature, and I'll note that application below.
I don't recommend that all of these techniques be applied to every digital signature. What I do recommend is that a person who creates her own key pair (or who lawfully gets a pair from a third party) should be able to specify whether or not these techniques will be used with her key.
This gives a key owner the opportunity to manage her own level of security and to limit her losses to an amount that she can tolerate. Given this opportunity to manage risk, especially if there is no money cost for adding security, a reasonable customer is more likely to feel fairly treated if she loses money from fraud, because she was able to control the amount of money that she was putting at risk. This provides a much stronger argument for the fairness of allocating risk onto the customer (and thereby reducing risk to the seller and the CA).
Technique 1 -- Delivery Location
Try using your credit card to buy an airplane ticket by phone. The airline will not send the ticket to any address other than your credit card billing address.
DIG-SIG: A great deal of fraud could be eliminated if the sender could have merchandise delivered only to the sender's billing address.
Technique 2 -- Credit Limit
The member bank refuses to authorize transactions that take you over your credit limit. If you have a $5000 credit limit on your card, then the bank is not at risk of being defrauded of more than $5000.
DIG-SIG: The CA tracks the amount of money signed for under a given digital signature. If the amount signed for within the last 30 days exceeds the limit, the CA suspends the key for 10 days.
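To show that the rolling-limit rule above is easy to state precisely, here is a minimal sketch (my own illustration, not any real CA's interface; the class and parameter names are hypothetical) of how a CA might track the amount signed for within a sliding window and suspend the key when the limit would be exceeded:

```python
from collections import deque
import time


class RollingSpendLimit:
    """Suspend a key when the total value signed within the window
    would exceed the subscriber's chosen limit (sketch only)."""

    def __init__(self, limit, window_seconds, suspend_seconds):
        self.limit = limit
        self.window = window_seconds
        self.suspend_for = suspend_seconds
        self.purchases = deque()       # (timestamp, amount) pairs
        self.suspended_until = 0.0

    def authorize(self, amount, now=None):
        now = time.time() if now is None else now
        if now < self.suspended_until:
            return False               # key is currently suspended
        # Drop purchases that have aged out of the window.
        while self.purchases and self.purchases[0][0] <= now - self.window:
            self.purchases.popleft()
        total = sum(a for _, a in self.purchases)
        if total + amount > self.limit:
            self.suspended_until = now + self.suspend_for
            return False               # over the limit -> suspend
        self.purchases.append((now, amount))
        return True
```

With a $5000 limit over 30 days, a purchase that pushes the 30-day total past $5000 is refused and the key is suspended for the chosen period.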
Technique 3 -- Transaction Limit
I tried to buy a computer with a credit card. The purchase was well within my credit limit. This was the largest purchase I'd ever made with that card, by more than an order of magnitude. The bank rejected the transaction as one that was too large under the circumstances.
DIG-SIG: This signature is not valid for purchases over $150.
Technique 4 -- Floor Limit For Authorization
For purchases over $50 ($100, whatever a given store's floor limit is), the retailer must call MasterCard for authorization before completing the transaction. (In many areas now, every purchase is authorized by modem, but the principle is the same--we just have a really low floor limit.)
DIG-SIG: This signature is not valid for purchases of over $100 unless you call me (subscriber) at the following phone number for authorization.
Technique 5 -- Requirement For Additional Identification
I bought a computer for my daughter, using a credit card. The bank required the retailer to check my photo ID before authorizing the purchase. Some merchants require photo ID as a matter of course for credit card transactions, probably as part of an agreement with their bank that reduces what they pay the bank for the credit card transactions.
DIG-SIG: This signature is not valid for purchases of over $100 unless the relying party obtains a second, pre-arranged form of identification from me (subscriber).
Technique 6 -- Pattern Analysis For Location
If you use a credit card in an odd geographical pattern (Florida, San Francisco, Mexico, and Toronto, in that order, in a two-day period), the credit card issuer might suspend the card until the issuer confirms with the cardholder that he has been travelling through those locations and is the person who used the card.
DIG-SIG -- I don't know of a good analog for this. The assumption behind an encryption key is that it will be used with web sites anywhere in the world.
Technique 7 -- Pattern Analysis For Frequency Of Use
If there is a huge burst of small purchases, the card issuer might suspend the card until checking with the cardholder.
DIG-SIG -- CA should suspend the key if there are X purchases within Y time units. This instruction to the CA is kept reasonably private between the CA and the subscriber.
Technique 8 -- Pattern Analysis For Size Of Purchases
I used to manage clothing stores. Our bank gave managers periodic security training, which we were to pass on to our staff. This particular bank was very likely to suspend a card if a customer was carrying several packages and was making two or more purchases in our store in a way that kept each purchase under our floor limit.
DIG-SIG: In an electronic situation, I'd look for multiple small transactions, small enough that none of them should trigger any alarms, especially if there were multiple separate purchases made from a single seller. There might be good algorithms for this; I don't know enough to know how to specify this choice to a subscriber. Again, the choice made by the subscriber should be kept private between the subscriber and the CA.
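One possible algorithm for the pattern just described -- this is my own illustration, and the thresholds and function names are hypothetical -- would flag a burst of sub-floor-limit purchases from a single seller within a time window:

```python
from collections import defaultdict, deque


def structuring_monitor(floor_limit, max_small, window_seconds):
    """Return a recorder that flags a key when more than `max_small`
    purchases, each below the floor limit, come from one seller within
    the window. A sketch; a real CA would tune this per subscriber."""
    history = defaultdict(deque)   # seller -> timestamps of small purchases

    def record(seller, amount, now):
        if amount >= floor_limit:
            return False           # large purchases trigger other checks
        q = history[seller]
        q.append(now)
        while q and q[0] <= now - window_seconds:
            q.popleft()
        return len(q) > max_small  # True => suspicious pattern

    return record
```

The fourth small purchase from the same seller within the window trips the flag, while purchases from other sellers are counted separately.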
Technique 9 -- Notification Of Rate Of Purchases
No credit card issuer has done this for me, but some telephone card issuers have -- after I used the card much more frequently than normal, the phone company called me to check if these calls were mine. Until the company reached me, it suspended my card.
DIG-SIG: e-mail notification or (for a fee) telephonic notification if there have been more than N purchases in M minutes. The rule (the fact that it's turned on for this customer, and the values of N and M) should be private, between the CA and the subscriber.
Technique 10 -- Limited Scope
Some cards (such as a gasoline company credit card) can only be used for some types of purchases.
DIG-SIG: This key can only be used to sign court documents. This other key can only be used for retail merchandise purchases (as opposed to hotels, plane tickets, etc.). This limitation might be kept private between subscriber and CA.
Customers Should Be Free To Use Or Not Use These Methods
Each of these techniques is imperfect. Each of them has proved to be a pain in the neck sometimes. I would not want to IMPOSE these techniques on anyone using a digital ID, but I would want to allow a digital ID subscriber to choose any combination of them (and there are probably several others).
How should we associate a choice of restriction with a key? It seems natural to embed the restriction into the certificate itself, or to store the restriction with the CA, and have the CA enforce some restrictions and notify potentially relying parties of others.
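As an illustration of what embedding a restriction might look like, here is a sketch (field and function names are hypothetical; no real certificate format is implied) of subscriber-chosen restrictions and a CA-side check against them:

```python
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class KeyRestrictions:
    """Subscriber-chosen limits; could be embedded in the certificate
    or held privately by the CA. All field names are illustrative."""
    per_transaction_limit: Optional[float] = None            # Technique 3
    delivery_to_billing_only: bool = False                   # Technique 1
    allowed_purposes: Set[str] = field(default_factory=set)  # Technique 10


def ca_permits(r: KeyRestrictions, amount: float, purpose: str,
               ships_to_billing: bool) -> bool:
    """CA-side enforcement of the restrictions the subscriber selected."""
    if r.per_transaction_limit is not None and amount > r.per_transaction_limit:
        return False
    if r.delivery_to_billing_only and not ships_to_billing:
        return False
    if r.allowed_purposes and purpose not in r.allowed_purposes:
        return False
    return True
```

The design point is that every field defaults to "no restriction," so a subscriber opts in to exactly the combination of limits she wants.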
I've been told that this isn't technologically feasible today. I don't know if that's true. Assume that it is. Nothing stops us from writing rules for the short term, today, and better rules that will automatically replace the first set, 5 years from now. For example, we can allocate risk today in ways that put some liability burdens on CA's, that will disappear as soon as they adopt the new risk management features.
Will The Market Protect Us?
I'm hesitant to rely on "the market" to guarantee availability of these or any other loss control features. We have absolutely no assurance that CA's and other vendors will go out of their way to improve customer security when the customer bears all the risk of a breach of security. Competition might result in this, but we can't rely on that:
"In some countries, banks are responsible for the risks associated with new technologies. . . . the Federal Reserve then passed regulations that require U.S. banks to refund all disputed electronic transactions unless they can prove fraud by the customer. Since then, many U.S. ATM cash dispensers have had video cameras installed."
"In Britain, the courts have not yet been so demanding; despite a parliamentary commission that found the personal identification number (PIN) system was insecure, bankers simply deny that their systems can ever be at fault. Customers who complain about 'phantom withdrawals' are told that they must be lying, or mistaken, or that they must have been defrauded by their friends or relatives. This has led to a string of court cases in the U.K. . . ."
"The three main causes of phantom withdrawals did not involve cryptography at all: they were program bugs, postal interception of cards, and thefts by bank staff. . . ."
"It is well known that it is difficult to get an error rate below 1 in 10,000 on large, heterogeneous transaction processing systems such as ATM networks; yet, before the British litigation started, the government minister responsible for Britain's banking industry was claiming an error rate of 1 in 1.5 million!
Under pressure from lawyers, this claim was trimmed to 1 in 250,000, then 1 in 100,000, and most recently to 1 in 34,000. . . ."
"British banks dismiss about 1% of their staff every year for disciplinary reasons, and many of these firings are for petty thefts in which ATMs can easily be involved. There is a moral hazard here: staff know that many ATM-related thefts go undetected because of the policy of denying that they are even possible."
Security methods are speed bumps on the fraud superhighway.
In some ways, public key encryption is remarkably secure. In other ways, we can identify clear risks. If the primary method of authenticating a document is a digital signature, then I recommend that the liability rules reflect who actually controls the risk:
If you create a system that lets a person manage and limit her own risk, it is fairer to allocate that risk to her, and she will probably perceive the system as fair. If you create a system that puts technological risk management in the hands of a third party, it is unfair and unsound (doesn't further the improvement of security) to allocate unlimited liability to that person.
Thank you for your attention to this Comment.
/s/ Cem Kaner, J.D., Ph.D.