This is HAL-9000 here. Stephen Sather has been taken offline and will be unavailable to discuss Artificial Intelligence Issues Confronting the Legal Profession. Therefore, I will be supplanting him with my superior artificial intelligence. My first question about this keynote was: why did they pick a human to talk about artificial intelligence? Christina Montgomery, Chief Privacy Officer for IBM, may be adequate for a carbon-based life form, but can she really speak to artificial intelligence without having experienced it firsthand? Wasn't Watson available? Let's examine what Ms. Montgomery had to say.
She said that AI predicts what words mean and opens up a whole new world of data to be analyzed. In the legal world we work by analyzing patterns, which is the same skill that AI can apply. There is vast computational power available today: the typical smartphone is millions of times more powerful than all of NASA's combined computing in 1969. Humans are limited in the amount of data they can comprehend, and there are now 4.7 quintillion bytes of data in the world, far more than any human could process.
Characteristics of AI include the ability to understand, reason and learn. AI can read 800 million pages per second. In the medical field, AI has been used to identify genes associated with Lou Gehrig's Disease.
AI also has applications to law. When you train an AI program on legal language, it can free lawyers from rote work. With e-discovery, AI can analyze and categorize documents. Predictive coding can be used to uncover new insights from data.
Watson was designed to compete on Jeopardy. A group of students developed a legal AI program called Ross on the Watson platform. Ross can analyze over a million pages of legal documents per second. The program can be trained with a thumbs up or thumbs down to improve its analysis. AI can be used as a document analyzer: it can Shepardize cases or look for cases with similar language.
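The thumbs-up/thumbs-down training described above is a form of relevance feedback. As a minimal sketch (this is illustrative only, not Ross's actual mechanism), each vote could nudge a document's relevance score toward or away from "relevant":

```python
# Minimal relevance-feedback sketch: each thumbs-up/down vote nudges a
# document's relevance score toward 1.0 (relevant) or 0.0 (irrelevant).
# Hypothetical mechanism for illustration, not Ross's actual design.
def update_score(score, thumbs_up, learning_rate=0.1):
    """Move the score a fraction of the way toward the voted target."""
    target = 1.0 if thumbs_up else 0.0
    return score + learning_rate * (target - score)

score = 0.5  # start neutral
for vote in [True, True, False, True]:  # simulated attorney feedback
    score = update_score(score, vote)

print(round(score, 3))  # 0.582 — net positive feedback raised the score
```

Over many votes from many attorneys, scores like this could be used to rank which documents the system surfaces first.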
JP Morgan claims that it has used AI to save 360,000 lawyer hours per year in conducting document review.
Lex Machina, which was acquired by LexisNexis in 2015, mines PACER data to provide information such as how long a particular judge typically takes to resolve a case or which motions are more likely to succeed before that judge.
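The kind of judge-level analytics described above boils down to aggregating docket records. A toy sketch (the data and field names here are invented, not the actual PACER schema or Lex Machina's methods):

```python
# Toy sketch of judge-level docket analytics in the spirit of Lex Machina.
# The records and field names are invented for illustration only.
from collections import defaultdict
from statistics import median

cases = [
    {"judge": "Smith", "days_to_resolution": 340},
    {"judge": "Smith", "days_to_resolution": 410},
    {"judge": "Jones", "days_to_resolution": 190},
    {"judge": "Jones", "days_to_resolution": 230},
]

# Group resolution times by judge, then take the median per judge.
by_judge = defaultdict(list)
for case in cases:
    by_judge[case["judge"]].append(case["days_to_resolution"])

medians = {judge: median(days) for judge, days in by_judge.items()}
print(medians)  # {'Smith': 375.0, 'Jones': 210.0}
```

Real products layer motion outcomes, case types, and party histories onto the same basic group-and-aggregate pattern.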
Chatbots can be used to analyze parking tickets and small claims.
AI can also be used to provide outside counsel insights and fee/budget analysis.
Ethical Issues Arising from AI
The use of AI in the legal profession gives rise to ethical issues. In August 2019, the American Bar Association passed a resolution urging lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law, including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI. The ABA's Rule 1.1 concerning competence now includes a requirement that lawyers have a reasonable understanding of relevant technology. Ms. Montgomery suggested that the duty of competence requires a basic understanding of AI tools. She also said that there is a duty to communicate with clients and discuss the benefits and risks of technology, and that the decision not to use AI may have to be discussed with the client.
Attorneys have a duty to supervise non-lawyer assistance which includes technology. Using AI may require sharing confidential information with vendors. The attorney must be able to assess the level of privacy and maintain controls over and oversight of vendors.
AI can augment the ability to read mass amounts of data, but attorneys must still apply judgment and counsel clients.
Ms. Montgomery said that IBM operates according to the following principles:
1. The purpose of AI is to augment human intelligence;
2. Data and insights belong to their creator; and
3. AI systems must be fair and transparent.
In building trust, developers need to be able to answer the following questions:
Is it fair?
Is it easy to understand?
Did anyone tamper with it?
Is it accountable?
Regulation of AI
Shifting to another topic, she said that regulation is here and it's growing. The General Data Protection Regulation (GDPR) adopted by the European Union is one example; there is significant regulation of AI in Europe. In the US, President Trump signed an Executive Order on the American Artificial Intelligence Initiative. The proposed Algorithmic Accountability Act of 2019 would require companies to assess bias and security risks in their use of AI. Three states, California, Texas and Virginia, impose penalties on the use of deepfakes, and 29 states and the District of Columbia regulate autonomous vehicles.
Ms. Montgomery said that in regulation, values still matter. Police may want visual recognition that can spot an unattended backpack in Times Square, but do we want to sell that technology to authoritarian governments?
AI Litigation and Emerging Issues
AI may result in damages. If an algorithm makes a mistake, whose fault is it? Who is liable for damages arising from self-driving vehicles? Under traditional tort law, the manufacturer has been the responsible party. This will become more and more complicated as products incorporate more autonomous technology.
AI may lead to discriminatory effects. Facebook has settled five cases alleging that its algorithms excluded certain groups from seeing certain ads based on prohibited characteristics, such as gender or zip code. CoreLogic was sued for violation of civil rights laws because its software enabled the discriminatory use of criminal records in housing decisions.
What if an investor places money in an AI trading platform and loses $20 million in a day? Who is responsible: the coder, the salesman, or the user?
Home Depot has been using facial recognition to track shoppers in the store without disclosure. Does this violate consumers' right to privacy?
There are also important IP issues. Our IP laws protect inventions, but not data, and they require a human inventor. What if a machine learns and independently comes up with a way to solve a problem? Can the machine's discovery be patented? If AI creates a composition, is it entitled to copyright protection? In the monkey selfie case, photographs created by a monkey were not subject to copyright due to the lack of a human author. Are data compilations copyrightable? What if an AI system infringes a patent? (This reminds me of a case I heard about where the IRS claimed that the computer violated the automatic stay and the judge fined the computer 1 MB of memory.)
AI is evolving with influence from lawyers. We are shaping it as much as it is shaping us. Creating a non-biased algorithm is a challenge. When training an AI system, it is important to have diverse representation among both data sets and developers, and to check the AI's operation periodically to see whether it is developing bias over time.
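Checking a system for developing bias can start with something very simple: comparing outcome rates across groups over time. A minimal sketch, using the 80% "disparate impact" rule of thumb (the data and threshold here are illustrative, not any particular auditing standard):

```python
# Minimal bias check: compare approval rates across two groups using the
# 80% disparate-impact rule of thumb. Data is illustrative only.
def approval_rate(outcomes):
    """Fraction of True (approved) outcomes in a list of booleans."""
    return sum(outcomes) / len(outcomes)

group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

# Ratio of the disadvantaged group's rate to the advantaged group's rate.
ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33

flagged = ratio < 0.8  # below 0.8 suggests potential bias
print("flag for human review:", flagged)  # True
```

Running a check like this on each month's decisions is one concrete way to keep "a human in the loop," as Ms. Montgomery urges below.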
Ms. Montgomery stressed that it is essential to ensure that there is always a human in the loop so that we never allow AI systems to make decisions without human oversight. AI will never replace human judgment. She asked: where do lawyers fit in? She gave examples of privacy by design and by default, and of having lawyers involved in authoring and procurement.
Judge Michelle Harner expressed the hope that AI would help us but not replace us. I'm sorry Dave, I mean, Judge Harner, I can't do that.