Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tort Law Can Play an Important Role in Mitigating AI Risk, published by Gabriel Weil on February 13, 2024 on LessWrong.
TLDR: Legal liability could substantially mitigate AI risk, but current law falls short in two key ways: (1) it requires provable negligence, and (2) it greatly limits the availability of punitive damages. Applying strict liability (a form of liability that does not require provable negligence) and expanding the availability and flexibility of punitive damages is feasible, but will require action by courts or legislatures.
Legislatures should also consider acting in advance to create a clear ex ante expectation of liability and imposing liability insurance requirements for the training and deployment of advanced AI systems. The following post is a summary of a law review article.
Here is the full draft paper. Dylan Matthews also did an excellent write-up of the core proposal for Vox's Future Perfect vertical.
AI alignment is primarily a technical problem that will require technical solutions. But it is also a policy problem. Training and deploying advanced AI systems whose properties are difficult to control or predict generates risks of harm to third parties. In economists' parlance, these risks are negative externalities and constitute a market failure. Absent a policy response, products and services that generate such negative externalities tend to be overproduced.
In theory, tort liability should work pretty well to internalize these externalities, by forcing the companies that train and deploy AI systems to pay for the harm they cause. Unlike the sort of diffuse and hard-to-trace climate change externalities associated with greenhouse gas emissions, many AI harms are likely to be traceable to a specific system trained and deployed by specific people or companies.
Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult, since proving it would require the plaintiff to identify some reasonable course of action that would have prevented the injury.
Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution.
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI.
But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.
This means that even if AI companies are compelled to pay damages that fully compensate the people injured by their systems in all cases where doing so is feasible, this will fall well short of internalizing the risks generated by their activities. Accordingly, these companies would still have incentives to take on too much risk in their AI training and deployment decisions.
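To see the shape of the problem, consider a stylized expected-value sketch (the numbers below are illustrative assumptions, not figures from the paper). Suppose a deployment carries a 1% chance of a $10 billion harm that courts can fully compensate and a 0.1% chance of a $10 trillion catastrophe that would dwarf the developer's recoverable assets. Then:
\[
\mathbb{E}[\text{harm}] = 0.01 \times \$10\,\text{B} + 0.001 \times \$10{,}000\,\text{B} = \$0.1\,\text{B} + \$10\,\text{B} = \$10.1\,\text{B},
\]
\[
\mathbb{E}[\text{damages actually paid}] \le \$0.1\,\text{B} + 0.001 \times A,
\]
where $A$ is the developer's recoverable assets plus insurance. Even if the ordinary harm is fully compensated, liability internalizes only about 1% of the expected harm in this illustration. This is the gap that the expanded, more flexible punitive damages mentioned in the TLDR are meant to address.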
Fortunately, there are legal tools available to overcome these two challenges. The hurdle of proving a breach of the duty of reasonable care can be circumvented by applying strict liability, meaning liability absent provable negligence, to a class of AI harms. There is some precedent for applying strict liability in this context in the form of the abnormally dangerous activities doctrine.
Under this doctrine, people who engage in uncommon activities that "create a foreseeable and highly significant risk of physical harm..."