During the last few years, the use of artificial intelligence by law firms has surged. Though traditionally slow to adopt change, the field of law is nevertheless coming around to acknowledging the tremendous power of Artificial Intelligence (AI), sometimes known as cognitive computing. More lawyers are recognizing that the technology is a formidable tool in the quest to serve their clients.
There are two primary categories of Artificial Intelligence. Soft AI (also known as weak or narrow AI) can be said to use computational intelligence: machines analyze enormous amounts of data and arrive at a solution to a problem using high-end computing power and complex algorithms. IBM's chess champion Deep Blue was a soft AI machine. With soft AI, tedious and less intellectually challenging tasks can be delegated to the machine, freeing up lawyers to pursue more demanding issues or focus on litigation.
Hard AI, sometimes referred to as strong or full AI, or artificial general intelligence, is the stuff of films, where machines are taught to think like humans. Hard AI machines can think with the intelligence of humans and even exceed a human's intellectual capacity. A hard AI machine can perform such tasks as "reasoning, planning, learning, vision and natural language conversations on any subject," according to Julie Sobowale in an article in the ABA Journal.
With the introduction of any new technology in the legal industry, attorneys' duties and ethics must be considered. In the often-cited scenario, who is liable for the self-driving vehicle that crashes due to its own faulty system? Who should have been monitoring the vehicle to ensure that it would not crash and cause harm or injury to people or property?
Comment 8 to ABA Model Rule 1.1 (Competence) speaks to attorneys maintaining competence even as new technology emerges: "To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject."
Just as attorneys were expected to learn how to use email and keyboards, they are now expected to learn enough about AI to be able to help their clients. While no one expects attorneys to be programmers or truly understand how the car engine works, attorneys are expected to know how to drive that car and respect the rules of the road.
To extend the analogy, lawyers must rely on mechanics and engineers to make sure the machines are running at full efficiency. This raises the question of confidentiality and privilege. An attorney must ensure that a client's information remains secure. That security cannot be compromised when the attorney shares client information with AI tools, whether through third-party vendors or a network-based tool that could be breached. The highest level of security must be maintained.
Comment 3 to the ABA Model Rule 5.3 was amended in 2012 to take into account situations where it is necessary for attorneys to rely on vendors. “When using such services outside the firm, a lawyer must make reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations.”
While a lawyer often delegates certain tasks to an assistant or paralegal, not all tasks are suited to AI, and it is the duty of the attorney to recognize the difference. AI might be able to distill huge amounts of data into a few cases, but it is still the attorney's duty to decide what is relevant or irrelevant to the issue.
Issues of diversity have also come up with AI. An AI system may exclude members of one or more demographic groups, or even encode biases against a group. "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," said John Giannandrea, Apple's head of Machine Learning and AI strategy efforts. It is important that lawyers know what training data was used to teach the AI system and be aware of hidden biases.
Legal chatbots have given guidance to those who may not be able to afford an attorney or just want a quick, no-fuss answer to a simple question. These tools are used in routine matters such as fighting parking tickets or drafting simple agreements. While chatbots are quite helpful in bridging the justice gap, they raise the question: Is this practicing law without a license? Ethics lawyer Megan Zavieh believes that since there are currently no laws or rules addressing this particular question, "we have to look at the spirit of the rules, and balance protecting the public with allowing for innovation in the delivery of legal services."
In making a decision to use a particular AI machine, a lawyer must know enough about the benefits and risks of the machine to make a sound decision. Again, no one expects the lawyer to be a programmer with intimate details of how the machine works. However, she must do her due diligence in finding a trusted developer and know enough about the technology to ask the right questions.
As with all pioneering technology, the kinks have to be ironed out. Artificial Intelligence is a tremendous step forward in the practice of law. On the human side, lawyers can be more engaged in the activities for which they became lawyers: less time spent on mundane tasks, more time spent on the business of groundbreaking law and helping clients. And that is the bottom line of technological progress – helping people get the job done.