Victoria Robertson
Partner, Trowers & Hamlins
From noise to signal: a calm framework for GCs navigating AI
The market is loud. Your job is to be calm.
The AI market is deafening right now. Every conference, every inbox, every LinkedIn scroll brings another promise of AI transformation. For general counsel, that relentless noise creates a very practical problem: when everything claims to be essential, nothing stands out. It is not that there is too little technology. There is too little clarity.
Most legal teams already have access to some form of AI-backed technology, whether that is document automation, search tools, contract analysis or a productivity platform of some kind. AI is not hypothetical any more. So the question has moved on. It is no longer “should we adopt AI?” but “how do we adopt it safely, proportionately and in a way we can actually explain to the board?” That is not a technology question. It is a judgement call, and it lands squarely on the GC’s desk.
Budget realities make this harder. Six-figure commitments, long implementation timelines, opaque pricing — none of that sits comfortably alongside stretched teams and a dozen competing priorities. So “value” cannot mean the shiniest product on the market. It has to mean measurable improvements in risk, speed or quality, achieved without creating problems that surface six months later. In a market this noisy, calm evaluation is not a luxury. It is a leadership discipline.
What matters most to GCs right now
In conversations with legal teams, four themes keep coming up. They are worth spelling out, because together they describe what meaningful progress actually looks like.
First, fitness for purpose beats feature breadth. GCs care less about what a tool can theoretically do and more about what it will reliably do in their environment, with their people, on their risk profile. A slick demo rarely answers the question that matters most: will this actually work for us? And it is worth remembering that AI used for drafting assistance raises very different issues from AI that shapes decisions about people. Lumping them together creates confusion — and, sometimes, unnecessary alarm.
Second, AI is not a legal-only issue, even if it often lands there. The risk frequently enters through other doors first — recruitment systems, performance tools, procurement platforms, customer-facing applications — all potentially using AI before the legal team even knows about it. That means adoption has to connect with enterprise controls, reporting lines and accountability structures. A standalone legal initiative that depends on people informally flagging things will not cut it.
Third, governance and ethics have stopped being abstract. GCs are rightly wary of “quick and dirty” deployment — the kind where human oversight exists on paper but not in practice. Token review steps, unexplained outputs and unclear accountability are increasingly hard to defend when something goes wrong. And things do go wrong.
Finally, cost realism matters too. There is real appetite for a pragmatic path that builds on existing tools, uses lighter-weight solutions and adopts in stages rather than committing to headline platforms whose complexity and expense can quietly outweigh their benefits.
A simple way to cut through the noise
When every proposal sounds urgent, a short set of disciplined questions can restore some clarity. The idea is straightforward: move from use case to outcome, and let the answers do the filtering.
Start with the problem. What exactly is this tool meant to solve? Drafting support, document review, contract portfolio analysis and workflow triage all travel under the AI banner, but they behave very differently. The sharper the definition, the easier it is to judge whether the response is proportionate.
Then think about data. What information touches the system, and what must never go near it? Personal data triggers data protection obligations. Certain inferences — particularly around health or behaviour — may bring heightened requirements. If nobody can clearly explain the data journey, that tells you something already.
Human involvement has to be real, not decorative. Reviewers need the authority, the time and the information to challenge what the system produces. Their interventions should be visible and auditable. And someone — a named person, not a committee — must be clearly accountable for the output.
Ask how the tool sits within the wider organisation. Does it feed into decisions made elsewhere? Is it covered by existing risk, audit and compliance frameworks, or does it sit in a silo? In practice, enterprise alignment is often what determines whether governance actually works or just looks good on a slide.
Be honest about cost. Licensing is rarely the full picture. Implementation, training, monitoring, insurance, ongoing review — it all adds up. A modest tool with strong controls will often deliver more value than an expensive platform without them.
Define success early. What should look different after ninety days? Faster turnaround, fewer escalations, better consistency, reduced risk — any of those can be valid. But if you cannot describe success simply, the initiative may be driven more by hype than by need.
Where judgement still matters
As AI becomes ubiquitous, the differentiator is no longer access to tools. It is the quality of the decisions around their use. This is where external advisers can still earn their keep — without resorting to inflated claims about what they offer.
One useful contribution is translation — and I do not mean languages. Vendor materials are often heavy on technical assurance and light on practical risk. Turning that into something a GC can test against regulatory, employment and governance realities helps separate genuine confidence from marketing.
Another is acting as a governance partner. Policies only work if they reflect how people actually behave. Designing guardrails that anticipate real workflows, assign clear accountability and build in review cycles turns good intentions into something operational.
Contracting support matters too. Many AI risks crystallise in supplier relationships. Bias testing obligations, audit rights, transparency commitments, security standards, exit provisions — these are not box-ticking exercises. They are how you retain control over systems you do not own.
There is also room for pragmatic delivery. AI-supported approaches to high-volume questions or repetitive analysis can improve consistency and speed, provided there is proper supervision. The point is not to replace judgement but to focus it where it counts most.
The emphasis throughout is on outcomes and defensibility — not on being faster or cheaper for its own sake.
A closing thought
AI overwhelm is a rational response to a crowded market. There are, frankly, too many options, too many claims and too much pressure to move quickly, and that can push teams towards decisions that feel uncomfortable six months later. The advantage does not lie in adopting first. It lies in choosing well.
Clarity comes from grounding decisions in real use cases, credible controls and an honest view of what things actually cost. When the market is this noisy, going back to those fundamentals — and to advisers who understand your business, not just the technology — is the quieter, more defensible path.