When used properly, AI makes almost everything more efficient. Your colleagues know this—and they aren’t shy about trying new tools.
For legal teams, an associate general counsel (AGC) at a Fortune 500 company frames the challenge this way:
“Everyone has a computer at their desk, and they’re all somehow tapping into some sort of AI system through their own computer and using it in their work. If there are no guardrails or guidance on what is good or proper use of AI and how you can use it in your job, that’s a problem.”
Without the right policies, AI introduces data protection and confidentiality issues. But with the right guardrails, it unlocks tremendous efficiencies.
Legal’s role in supporting safe adoption and maximizing the benefits can’t be overstated.
And although the velocity of adoption can feel disorienting, legal must organize a response.
In this guide, we share ideas on building AI governance policies, including insight for staying agile and responsible despite regulatory uncertainty.
Avoid AI policy analysis paralysis
With AI regulation still unclear, legal leaders usually respond in one of two ways: severe limits on use or passive inaction while waiting for guidance. Prudence lies somewhere in between.
EY notes that generative AI, in particular, doesn’t have to be an either/or decision. Legal teams can help their companies use AI while simultaneously mitigating its inherent risks. Doing so, however, requires the right strategic approach.
In a 2024 webinar LinkSquares hosted with the Association of Corporate Counsel, Andy Lunsford, chief executive officer of BreachRx, offered ideas for combating analysis paralysis. Specifically, he encourages legal teams to see parallels with their existing data privacy practices and repurpose the work they’ve already done to manage privacy without constraining operations.
“Your AI rules of the road can mirror what you’ve done with data privacy, particularly around impact assessments and use of data in your organization,” shares Lunsford. “You just need to be a bit more comprehensive given the potential IP concerns raised by feeding company or customer data into tools.”
Start with dialogue. Then, define and decide.
While the (security) buck stops with legal, leaders can’t make decisions in a vacuum.
Danielle Sheer, chief legal and compliance officer at Commvault, suggests leaders start with curiosity and canvassing stakeholders: “GCs can play an integral, proactive role in this effort by asking critical questions, facilitating essential conversations, being an advocate for the customer, and bringing teams back to the key question of ‘What are we doing, and why?’”
For example, she notes the importance of meeting with product teams to discuss the AI developments they’re considering. Also, ask for customers’ opinions. Explore what concerns them and learn what they expect from their vendors. This feedback will shape ideas for governance and controls—and your future product roadmap.
Legal must also align on risk tolerance with executive leaders; understand what the C-suite believes the company must avoid and what is acceptable risk.
While socializing ideas within your business, weigh them against what legal truly needs to solve for. Every business needs to address fundamentals such as:
- transparency about how data are collected, used, and stored
- intellectual property rights and managing IP concerns
- customer consent management
According to Gartner, these discussions should inform your eventual guidance and will help legal craft an effective policy that accounts for risk tolerance, use cases and restrictions, decision rights, and disclosure obligations.
Don’t know where to start? Test, learn, fine-tune—and repeat.
There’s an inspirational idiom legal teams can embrace here: you don’t have to be great to start, but you do have to start to be great. Expect everything around AI to evolve, including regulatory frameworks, product capabilities, and your own clients’ expectations.
As such, your usage and governance policies will also change. This is why panelists from a recent webinar encourage legal teams to at least start somewhere.
Your job as a legal leader is to embrace the duty of supervision. No one else will, and no one else knows how. But take heart: few organizations have a playbook for this. Any work to develop guidelines, procedures, and standards for AI implementation, or data management and compliance controls, gives you a head start.
From our AI webinar, Colleen Matthews, product marketing manager at LinkSquares, and BreachRx’s Lunsford offered legal leaders some tactical first steps:
- Review existing data privacy and security policies and procedures
- Identify overlapping areas or considerations that should be incorporated into the AI governance policy
- Verify what gaps exist in how you use (or intend to use) AI
- Establish guidelines on data usage, retention, and security measures that apply to both AI systems and other data processing activities
- Understand how AI systems will be used, and how they may impact individual privacy or access to sensitive company data; align this with your existing privacy disclosures and security policies
- Designate roles and responsibilities for monitoring AI usage and compliance, integrating this work with your existing privacy and security teams
However you begin, emphasize the factors that future regulation will likely enforce, including ensuring data accuracy, addressing data or algorithm biases, upholding client confidentiality, and avoiding conflicts of interest.
The paths to governance may be unclear, and it’s difficult to forecast the long-term effects of AI. However, bias and privacy are arguably the most sensitive and important risks to address. There are already emerging requirements asking companies to consider the bias, explainability, and transparency of decisions made by their technology. So, this is a smart area to establish oversight or controls.
As things change, regularly review and update both the AI governance policy and privacy policies to address evolving regulations and best practices.
Target current regulatory standards—even if not directly relevant to you today.
The presence of high-profile but fluid regulatory initiatives makes AI a risk that legal leaders feel pressure to mitigate, even without a clear picture of what to do.
But lawyers and legal ops teams don’t need to wait until a comprehensive framework for their jurisdiction lands before preparing policies. For instance, the newly established EU AI Act may serve as a blueprint for the US and other non-European jurisdictions.
Webinar panelists agreed that the eventual path of AI regulation will mimic the arc of GDPR: Europe leads, then other developed markets follow. Likewise, L-Suite notes that the EU AI Act has such far-reaching effects that anything companies eventually do with AI will likely be touched by this legislation anyway. As such, it’s a worthwhile starting point.
Closing thoughts
Establishing AI governance isn’t about curbing exploration and innovation. Factor Law describes legal’s task as a chance to promote safety as a means of accelerating, rather than negating, AI’s impact.
For the foreseeable future, AI’s implications for legal departments and operations functions will remain fluid and change rapidly. Thorough assessments of use cases, alongside attention to data bias and privacy, will be essential to embracing generative AI safely and effectively.
So ask the questions others are afraid to ask, and have the conversations no one else will. You’ll learn what controls make sense and which use cases are low or high risk. Remember, the goal isn’t to build a perfect policy (as if one even exists) but to find the right balance between productivity and protection. Of course, legal must stay abreast of developing regulations, tracking both federal AI activity and a growing number of tailored state-level changes.
Still, don’t forget this is an opportunity to display curiosity, imagination, and boldness—a daunting but exciting challenge.
About LinkSquares
LinkSquares helps legal department leaders drive innovation, reduce backlog, lower costs, increase revenue, and minimize loss. If you’re ready to adopt the most effective solutions for the legal function—and harness cutting-edge AI to improve every aspect of your department—then contact LinkSquares today.