Op-Ed: What In-House Counsel Needs to Know About Generative AI


By Francisco Morales Barron


As generative AI technology becomes increasingly integrated into the legal profession, in-house counsel face a range of new considerations.

From ensuring confidentiality to achieving cost-effective workflows, both the risk and potential of generative AI (GAI) are vast. With that in mind, these technologies must be approached with both cautious optimism and a clear understanding of the limitations and risks.

Drawing on my discussions with clients and across practice groups at Vinson &amp; Elkins, here are a few considerations to help in-house counsel navigate the complexities of generative AI.

Confidentiality and Security Concerns

Even the most secure AI systems carry risks around confidentiality and privacy. One of the primary issues in-house counsel must be aware of is the security of the information they are putting into the system. General commercial AI tools may not meet stringent confidentiality standards, may feed proprietary information into the model itself, or may allow inputs to be reviewed by the GAI provider. How the uploaded information is processed could also have implications with respect to attorney-client privilege.

For this reason, it’s advisable to use only licensed AI tools and, importantly, to understand the tools’ terms of use. Licensed tools are more likely to offer the levels of data security and privacy needed to avoid breaches of confidentiality and privilege and to safeguard compliance with professional standards.

Optimizing AI Queries for Better Results

The quality of the input provided directly influences the effectiveness of generative AI. Machine learning and large language models are designed to produce better results with well-formed prompts. Before using any model, lawyers should define their purpose and objectives, crafting precise questions with context that aligns with their goals.

Lawyers can adjust several parameters (such as context, requested output, tone, and audience) to better prompt the LLM, which typically produces a better outcome. Working with AI, however, involves some trial and error; it is a collaborative, iterative experience.

If you don’t like the response the tool gives you, ask it another way, or aim to dig deeper. With a little work, you can get a lot accomplished. Legal departments should spend time training their attorneys on both the best use cases for these tools and proper prompting techniques.
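To make the parameters above concrete, here is a minimal, hypothetical sketch of how a structured prompt might be assembled before it is sent to an LLM. The field names and the sample clause-review scenario are illustrative assumptions, not a prescribed format; any tool-specific API is omitted.

```python
# Hypothetical sketch: assemble a well-formed prompt from the four
# parameters discussed above (context, task/output, tone, audience).
# The scenario and wording are illustrative only.

def build_prompt(context: str, task: str, tone: str, audience: str) -> str:
    """Combine the four prompting parameters into one structured prompt."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Tone: {tone}\n"
        f"Audience: {audience}"
    )

prompt = build_prompt(
    context="Reviewing an indemnification clause in a SaaS agreement.",
    task="Summarize the clause and flag any one-sided obligations.",
    tone="Concise and formal.",
    audience="In-house counsel with limited time.",
)
print(prompt)
```

Spelling out each parameter this way, rather than asking a single open-ended question, tends to narrow the model’s output toward the intended audience and purpose, and it gives the lawyer obvious levers to adjust when iterating on an unsatisfactory response.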

Cost Efficiency and Scalability

Generative AI offers corporate legal departments a range of tools that can enhance cost-effectiveness. Many legal departments look to AI to manage increasing workloads without expanding headcount, allowing them to scale their internal operations more efficiently. By leveraging AI, in-house counsel can handle higher volumes of work and manage fluctuating workflows with greater flexibility. This scalability is especially valuable for legal teams operating under tight budgets.

Key Functionalities of Generative AI for Legal Teams

Generative AI tools offer diverse functionalities that can be tailored to support specific tasks within a legal department. Vinson & Elkins has a taskforce in place to study various subscription-based generative AI platforms. Some of the more popular use-cases we’ve seen thus far include:

· Legal Research: AI can assist with legal research, rapidly finding relevant case law, statutes, and regulations.

· Document Drafting: Generative AI tools can generate initial drafts of legal clauses and simple documents, providing a starting point for attorneys.

· Contract Analysis: AI can help summarize contract provisions, compare contract clauses, and identify potential issues.

· Due Diligence: By analyzing large volumes of documents efficiently, AI can aid in due diligence processes.

· Litigation Support and Document Management: AI tools can enhance litigation support, organize and manage documents effectively, and allow quicker access to relevant files.

However, despite these functionalities, GAI continues to produce flawed output, so, as we instruct our associates, it is imperative to validate everything created by AI.

Integration Challenges: Implementation and Compliance

Integrating generative AI into an existing legal framework isn’t always straightforward. Existing compliance and document review processes may need reevaluation to align with the new technology. As with any new skill set, legal professionals should be trained to use these tools effectively, and there should be clear adoption plans to ease implementation. Without a well-structured approach, integrating generative AI can introduce inefficiencies rather than improve productivity. Legal departments should adopt AI policies and procedures (addressing scope, oversight, and quality-control measures), and these policies should be reassessed periodically to ensure they remain effective and compliant.

The SEC, among other regulators, is beginning to scrutinize corporate claims about the use and sophistication of AI technologies. Legal departments should be vigilant that company statements on AI capabilities and usage are substantiated and factual. Similarly, if a company is using AI tools to provide its services and products, legal counsel should stay abreast of applicable regulatory regimes, including data privacy regulation, consumer protection laws, and regulations specifically addressing AI (such as the EU AI Act).

Legal and Ethical Responsibilities

Although generative AI is a relatively new tool for legal professionals, existing ethical obligations still provide guardrails. Lawyers must adhere to their duty of competence: they must not accept or continue employment in a legal matter they know to be beyond their competence.

This means verifying AI-generated information, understanding the tool’s limitations, and not relying on AI alone for decisions. Generative AI can miss legal nuances essential to a sound work product, so it should be used as a tool for informed decision-making rather than a replacement for professional judgment. Similarly, the duty of confidentiality requires attorneys not to reveal information relating to the representation of a client.

Again, it is imperative that law firms and attorneys understand how their inputs are being processed by GAI tools. On July 29, 2024, the ABA Standing Committee on Ethics & Professional Responsibility issued Formal Opinion 512, Generative Artificial Intelligence Tools. The opinion offers a compilation of ethics guidance coordinating with the Model Rules of Professional Conduct. Additionally, outside counsel should take careful note of clients’ guidelines on use of generative AI tools.

Ensuring Accuracy and Reliability

Accuracy remains a concern with generative AI, as large language models, as we alluded to above, can sometimes produce inaccurate, misleading, or incomplete responses. Moreover, LLMs might not be trained on relevant documents or might be outdated. This inconsistency can be problematic, as it could lead to incorrect legal advice or uninformed decisions. While AI can provide a useful starting point, it’s essential to double-check all AI-generated outputs. In-house teams should always verify the information’s sources, context, and reliability before relying on it.

There is no doubt a future for generative artificial intelligence in the legal industry, and we can’t ignore it. These tools have the potential to transform in-house legal work by boosting productivity, enhancing cost-efficiency, and enabling scalability. Similarly, for outside counsel, they have the potential to free lawyers from rote work and enable them to provide more creative and personal service to their clients. However, all lawyers at outside law firms and in-house legal departments must be cautious, ensuring they address confidentiality, verify the quality of output, and maintain compliance throughout the adoption process.

By maintaining a balance between innovation and professional responsibility, lawyers can harness the benefits of AI while upholding the high business, ethical and technological standards expected of them.

Francisco Morales Barron is an M&A and Private Equity partner in New York at Vinson & Elkins and member of the firm’s artificial intelligence taskforce. He will be teaching a seminar in Spring 2025 on generative AI in Corporate Law at Penn Carey Law.

Contact:

CorpGov

Editor@CorpGov.com
