To drive AI adoption in legal, you have to make time to play
Shana Simmons
Chief Legal Officer at Zendesk
Last updated: December 16, 2025
A few months ago, I came across a LinkedIn post by one of our lead AI product managers, Mirza Besirovic, and it really resonated with me. He argued that companies are struggling to adopt AI not because of licensing costs or compliance hurdles, but because everyone is simply spread too thin.
To paraphrase his insight: The rapid pace of everything in the past few years has erased that magical “Friday afternoon” time for learning and exploration.
We see this across the legal industry. We are all drowning in tactical work – reviewing marketing copy, redlining routine contracts, answering the same compliance questions – and we know AI can help. But moving from “we should use AI” to actually building AI-leveraged workflows isn’t plug-and-play. It requires time that most in-house teams believe they don’t have, creating a drag on implementation.
We can mandate AI usage all we want, but if we expect our teams to internalize complex new skills through “LinkedIn osmosis” or weekend tinkering, we are setting them up to fail.
So, my leaders and I decided to try something different. We stopped asking the team to find the time. Instead, we cleared the calendar, paused the non-urgent work, and hosted our first Legal AI Hackathon.
Experimentation is our new R&D
The goal was simple: set aside dedicated time to experiment, with no idea too big or too small. But strategically, this was about establishing a culture of “Legal R&D”. Just as product teams have research and development, legal teams need a disciplined process of testing hypotheses and learning from controlled experimentation to create new strategies.
There is a misconception that to leverage AI effectively, you need to be a prompt engineer or a data scientist. The reality, as we discovered, is that the modern AI toolkit, including enterprise tools like ChatGPT, Gemini, and Zendesk’s own AI-powered app builder, democratizes the ability to build. You no longer need a computer science degree to create a functional workflow; you just need to understand the problem you are trying to solve.
To make this successful, we needed a few specific ingredients:
- Leadership support: This was the most critical factor. We had to provide psychological safety, showing the team that their day-to-day work could wait so they could focus on the future.
- Tech-savvy guides: We identified team members who had already experimented to serve as mentors. This empowered our people to become “insight archaeologists,” helping others dig into the tools to find value rather than just reviewing outputs.
- The right toolkit: We ensured everyone had access to enterprise-grade tools like ChatGPT (Agent Builder), Gemini (Gems), and our own AI-powered app builder.
- Cross-functional partners: We invited colleagues from other departments to show us what they’ve built and to serve as judges. This turned legal operations into a strategic nerve center, connecting us with the broader enterprise transformation.
Seeing AI’s potential in action
The outcomes of the Hackathon went far beyond the specific tools we built. We also saw a massive acceleration in AI savviness. For many, AI can feel abstract. But once they got their hands on the tools, the lightbulbs went on. They saw the potential of AI as a capacity multiplier, one that allows us to expand our reach beyond previous resource constraints.

We walked away with actual, deployable productivity tools designed to make us more efficient and provide our internal clients with a better experience. As one team member noted, “There was plenty of testing, learning, and iteration in our team, with the solution evolving into something that could be useful and portable across AI platforms.”
Governance requires understanding
There is a deeper imperative for this hands-on approach that goes beyond efficiency. We often view AI governance as a mandate that sits with a specific council or officer, but true stewardship belongs to everyone who touches the technology.
Responsible AI is a collective duty carried by every employee and every user. However, we cannot effectively govern a machine we do not understand. If legal teams are to serve as true strategic advisors, capable of spotting risks and defining necessary guardrails, we must be more than distant observers. We must know the product as intimately as the engineers who build it and the employees who use it.
Amplifying human creativity and connection
Perhaps the most unexpected benefit was the impact on our culture. Zendesk has a global legal team with offices on five continents and remote employees in a dozen countries. Many of us have worked together for years without ever meeting in person.
Collaborating on a project that wasn’t specifically “work-related” allowed people to bond in a way that is challenging to do over standard Zoom calls. It reinforced that while AI is technological, the transformation is fundamentally human.
The imperative to build
The legal profession is changing. We can no longer rely solely on legal acumen; we must also possess the technological fluency to scale that acumen across the enterprise.
The consequence of not making time to learn – and play – is stagnation. As one of our Hackathon participants said, “We may not have hit a hole-in-one today, but we’ve definitely made it to the green… AI itself will also keep improving, so as we learn, so will it.”
If you are struggling to drive AI adoption, stop waiting for your team to find the time. Give them permission, and a mandate, to play, and carve out the time for them. The choice isn’t whether this transformation will happen; it’s already underway. Our choice is whether we will shape it or be shaped by it.
