Legal AI can draft your documents, analyze your data, and accelerate your workflows, but it can’t replace the empathy you feel when someone is calling your firm on the worst day of their life.
For years the narrative has stayed consistent—faster is better. Automate intake. Automate responses. Automate first contact. A singular talking point that seemed almost automated in and of itself. There is no doubt that such thinking made sense in certain areas across all practices of law. But as AI adoption has accelerated, firms have started to feel its unintended cost.
There was an ever-growing consequence of removing humanity from devastating moments that truly demanded it.
When someone reaches out to a law firm, it’s often in moments of amplified emotion. And while you expect your firm to be calm, organized, and thinking clearly, the raw reality is that many clients are, in those moments, the exact opposite.
They may be injured, grieving, scared, or overwhelmed. In those moments, efficiency alone is not enough. Being heard, understood, and supported is.
This critically human point is where AI’s limits start to show.
AI is powerful at pattern recognition, data processing, and endless repetition. What it struggles with, and likely always will, is emotional context. It can’t sense hesitation in someone’s voice or pain in their eyes, and it can’t respond to desperation with genuine reassurance. When these crucial moments are automated, humanity itself is lost.
The next phase of AI adoption in legal isn’t about stalling innovation; rather, it’s about placing it more intentionally.
This means rethinking where AI belongs and where it doesn’t. It means recognizing that emotionally sensitive interactions, like client intake, first calls, and moments of distress, are not just workflows; they’re real, genuine moments where trust must be built and preserved.
Firms are beginning to realize that technology should support and enhance human judgment, not replace it. Systems should give teams control over their data, their processes, and their client experiences, rather than forcing one-size-fits-all automation into places it doesn’t belong.
This critical line of thought doesn’t reject AI; it matures it.
Recently Matt Lautz, CEO of Neostella, shared this perspective as part of Law.com’s Legal Tech’s Predictions for Artificial Intelligence in 2026, contributing to a broader discussion about where AI is headed and where it needs to step back.
You can read the full Law.com feature and Matt’s thoughts here.