With a ‘new’ U.S. administration promising looser AI oversight, legal tech companies are wrestling with questions about the future of regulation. But focusing on whether a change in government means less regulation over the use of legal tech misses something fundamental: if your legal tech company depends on government regulation to ensure your AI solutions are accurate and ethical, you’re probably in the wrong business. AI is just a tool – what matters is how we use it.
This isn’t about dismissing the importance of regulation entirely. But when you’re building legal tech solutions, the real safeguard isn’t going to come from external rules; it will come from deep domain expertise. And the reality is that such expertise isn’t anchored in having good engineers or powerful AI models.
Beyond Code: The Limits of Pure Engineering
AI engineers can write amazing code and build powerful systems, but most have never practiced law. Look at what happens when you’re reviewing a complex contract at 2 AM – you’re not just looking for patterns in the text, you’re looking for specific legal implications. We now know AI can process the document; that’s no longer the challenge. The challenge is whether it understands what actually matters in its response to a practicing lawyer.
Data is essentially useless without expert interpretation to validate whether algorithms are returning meaningful results. When you’re building AI for contract review, you need lawyers who can look at the output and identify when something’s off. It’s not enough to know the system is working as designed – you need to know it’s working the way a lawyer needs it to work. That means looking at which clauses and definitions the AI used to construct its answer and evaluating whether they’re actually relevant. This gap between technical performance and legal accuracy demands expert validation at every stage.
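To make that concrete, here’s a minimal sketch of what recording that provenance might look like. Everything here is hypothetical and illustrative – the point is simply that the clauses and definitions behind an answer must be captured somewhere a lawyer can pass judgment on them:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CitedSource:
    """A clause or definition the AI relied on, plus a lawyer's verdict on it."""
    clause_id: str                    # e.g. "12.3(b)" in the contract
    excerpt: str                      # the text the model actually drew on
    relevant: Optional[bool] = None   # set by a practising lawyer, never by the system

@dataclass
class ReviewedAnswer:
    question: str
    ai_answer: str
    sources: list[CitedSource] = field(default_factory=list)

    def validated(self) -> bool:
        # An answer passes only once every cited source has been judged relevant;
        # an answer with no traceable sources cannot pass at all.
        return bool(self.sources) and all(s.relevant for s in self.sources)
```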
Domain Validation as the Most Important Part of AI Use
Validation is where domain expertise becomes critical. Our process examines factors that standard technical metrics can’t capture: the selection of relevant clauses and subclauses, the appropriate application of definitions in context, the preservation of legal relationships between document sections, and the professional credibility of the analysis.
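One way to picture those factors is as an explicit review rubric. The sketch below is illustrative rather than a description of any real tooling; the field names simply mirror the four factors above:

```python
from dataclasses import dataclass

@dataclass
class ValidationRubric:
    """One expert review of a single AI output, scored on the four factors."""
    clause_selection_ok: bool        # were the relevant clauses and subclauses chosen?
    definitions_in_context_ok: bool  # were defined terms applied correctly in context?
    cross_references_ok: bool        # are relationships between sections preserved?
    professionally_credible: bool    # would a practising lawyer sign off on this?
    reviewer_notes: str = ""

    def passes(self) -> bool:
        # A single failed factor fails the output; there is no partial credit.
        return all([
            self.clause_selection_ok,
            self.definitions_in_context_ok,
            self.cross_references_ok,
            self.professionally_credible,
        ])
```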
You need people sitting down and methodically checking outputs, looking at the retrieval embeddings, and asking why certain sections are being flagged while others aren’t. When we talk about summarisation tools, you need lawyers who can verify that the summaries are capturing the truly critical components of legal documents. Reading the answer and saying it’s about 70 or 80% right is only the first step – understanding the legal reasoning behind those conclusions is the key.
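For the embedding check in particular, the exercise can be as simple as re-ranking document sections by similarity to the query and reading down the list, so a reviewer sees not only what was flagged but which near-misses weren’t. A rough sketch, assuming the retrieval pipeline already exposes its embeddings as vectors:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def explain_retrieval(
    query_vec: np.ndarray,
    sections: list[tuple[str, np.ndarray]],  # (section_id, embedding) pairs
    top_k: int = 5,
) -> list[tuple[str, float]]:
    """Rank contract sections by similarity to the query, so a reviewer can
    ask why the top sections were flagged and why the runners-up were not."""
    scored = sorted(
        ((sec_id, cosine(query_vec, vec)) for sec_id, vec in sections),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for sec_id, score in scored[:top_k]:
        print(f"{sec_id}: similarity {score:.3f}")
    return scored
```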
There’s a hidden risk we need to talk about too – I call it the 99% trap. When a system proves reliable 99 times out of 100, humans naturally come to trust it blindly. Especially under time pressure, you’re ready to sign off, relying on those 99 previous successes to lead the way. That 100th case could prove catastrophic. This is particularly dangerous for junior lawyers who rely heavily on AI tools from the start of their careers.
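The arithmetic behind the trap is unforgiving. Even granting 99% reliability per matter – and, generously, assuming failures are independent – a busy practice runs into the bad case almost surely:

```python
# Probability of at least one failure across n matters,
# assuming each output is independently correct 99% of the time.
for n in (10, 100, 500):
    p_fail = 1 - 0.99 ** n
    print(f"{n} matters: {p_fail:.0%} chance of at least one bad output")
# 10 matters: ~10%; 100 matters: ~63%; 500 matters: ~99%
```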
Legal reasoning requires specific critical thinking skills developed through experience – understanding how to interpret clauses in context, recognizing subtle implications, and questioning whether apparent meanings hold up under scrutiny. Consider how experienced lawyers approach a contract: they question implications, consider context, evaluate interpretations, and anticipate disputes. This questioning mindset comes from years of seeing how seemingly straightforward language can lead to unexpected consequences.
Building True Legal Tech Partnerships
In the next few years, domain experts are likely to become as valuable as software engineers in the development process. You can find plenty of engineers to build ideas, but without deep legal expertise informing that development, you’re likely to end up with a solution looking for a problem rather than solving real-world legal challenges.
We, as an industry, need to create robust partnerships between human expertise and AI capabilities. This means developing validation processes that complete the workflow: AI’s processing power churns through the data; domain experts, in this case lawyers, reason over and validate the results. That’s what creates solutions that can enhance legal reasoning without endangering it in the process.
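Sketched as code – with hypothetical `ai_draft` and `expert_review` functions standing in for whatever a real stack uses – the shape of that partnership is a pipeline with a mandatory human gate:

```python
from typing import Callable

def review_pipeline(
    documents: list[str],
    ai_draft: Callable[[str], str],             # the model: fast, tireless, unvalidated
    expert_review: Callable[[str, str], bool],  # the lawyer: slower, scarce, decisive
) -> list[tuple[str, str]]:
    """AI drafts every document; nothing ships until a domain expert signs off."""
    approved = []
    for doc in documents:
        draft = ai_draft(doc)
        if expert_review(doc, draft):  # the human gate, not an optional step
            approved.append((doc, draft))
        # rejected drafts go back for rework rather than silently passing through
    return approved
```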
While everyone’s debating the implications of AI deregulation, the truth remains unchanged: responsible legal tech development comes down to having people who deeply understand the law working alongside those who understand the technology. No amount of external regulation can replace that fundamental partnership. Domain experts who understand both technical capabilities and legal requirements serve as the true regulators of legal technology.