August 28, 2025

00:39:38

How AI Governance Impacts Law Firms

Hosted by

Kevin Daisey
The Managing Partners Podcast: Law Firm Business Podcast

Show Notes

In this episode of the Managing Partners Podcast, Kevin Daisey sits down with Bryan Ilg and Michael Paik of Babl AI to discuss how law firms can safely and effectively integrate AI into their practices. They cover the risks of AI, vendor management, governance systems, and how attorneys can leverage AI for efficiency without jeopardizing client confidentiality. Whether you run a solo practice or manage a larger law firm, this episode highlights key strategies for using AI responsibly in the legal profession.


Today's episode is sponsored by The Managing Partners Mastermind. Click here to schedule an interview to see if we’re a fit.

Chapters

  • (00:00:00) - How to Build a Managing Partners Podcast
  • (00:00:31) - Managing Partners Podcast: AI Governance
  • (00:03:12) - Brown Algorithmic Bias Lab
  • (00:05:58) - How to Get Out of the AI Compliance Trap
  • (00:11:07) - The Future of AI for Law Firms
  • (00:16:23) - Large Language Models
  • (00:18:08) - The Role of AI in Governance
  • (00:21:26) - Lawyers Making Their Own Workflows
  • (00:28:50) - Have you got a policy on AI in your firm?
  • (00:29:33) - Babl AI's Hard Pitch for Legal Professionals
  • (00:36:45) - Babl AI: Law 30
  • (00:37:12) - An AI for Lawyers?
  • (00:38:13) - How To Grow Your Business With Data Privacy on The Podcast

Episode Transcript

[00:00:18] Speaker B: Most firms survive. The best ones scale. Welcome to the Managing Partners Podcast, where law firm leaders learn to think bigger. I'm Kevin Daisey. Let's jump in. All right, what's up, everyone? We're recording another episode of the Managing Partners Podcast. I've got a unique crew on here with me today. I've known Bryan for a little while, actually, through some other business, and he has Michael on here today. And we're going to talk about AI governance, a topic we've not covered. AI is a hot topic, obviously, that we talk about all the time. I'm actually giving a talk tomorrow on AI search. So AI is just everywhere right now, and this is something that might be new to you. I'm excited to have them talk about what they do and the importance of it. They're with Babl. That's B-A-B-L, Babl AI. You can go to Babl AI to check them out, but I'm gonna have them talk about what Babl is and their background, and we're gonna dive into this governance talk. Interested to see where we go with it. So, Bryan, Michael, welcome to the show. [00:01:25] Speaker A: Thanks for having us, Kevin. Excited to be here for sure. And yeah, excited to talk a little bit about what Babl does, governance, and also how it impacts the legal space, the compliance space. That's a big, big topic that we deal with. Michael's our general counsel and just recently created a governance certificate for legal professionals, which is aimed at really opening the eyes of legal professionals about all the unique, fun stuff that AI creates for lawyers to solve and figure out, you know, with respect to the unique risks and challenges of AI. So we're excited to talk a little bit about that today. [00:02:09] Speaker B: Yeah. And I'll.
Michael, I'll let you go in just a second, but you all have also, for my listeners, which I really appreciate: if you go to Babl AI, you guys have a discount code for anyone listening to this episode, or if you hear it in the future. This code is only given to you all, and you can tell them that. And then we'll have Michael kind of give us his background, because it's extensive, and I can't wait to get talking to him. So thank you for tuning into the show today. I have taken things to the next level and I've started the Managing Partners Mastermind. We're a peer group of owners looking for connection, clarity, and growth strategies. So if you're looking to grow your law firm and not do it alone, please consider joining the group. Spots are limited, so I ask anyone to reach out to me directly through LinkedIn and we can set up a one-on-one call to make sure it's a fit. Now back to the show. [00:03:13] Speaker A: Yeah, perfect. So if you go to Babl AI, B-A-B-L-A-I, you can go to our courses and use the coupon code Law30 for a 30% discount on any of the education that we have on the site. Like I said, we have the legal professional certificate, but we have lots of other things with AI auditing and all those things, and maybe it's good to just talk a little bit about what Babl is before I kick it over to Michael. I'm the Chief Sales Officer, helping, you know, get our message out. But we are the Brown Algorithmic Bias Lab. We started as a research firm back in 2018 studying the best way to map, measure, manage and govern AI. Some of our research contributed to the NIST AI Risk Management Framework; we were one of the initial consortium members there. We pride ourselves on our research. We have an awesome research team that's constantly researching edge cases for AI use, and we've taken that research into practice: we're actually out providing audit assurance for AI systems.
We've done lots in the HR space, we've done lots with kind of edge cases: autonomous vehicles, facial recognition, really any way that AI is, you know, being deployed. We have a systematic approach to break it down and spot and identify risks within the governance, within the testing, any sort of bias, what's not being looked at, where the desired outcome of that AI system gets off track. So we can kind of start you at this point in time, and you want to get to this point in time, and we can help you measure and manage your way towards that. That's our special, unique gift. And we're hoping to explain what lawyers should really be aware of within that process: the risks that businesses face by implementing AI into core, you know, profit models, revenue streams, business processes, business decisioning and judgments which were normally left up to humans and are now being codified, what risks those bring into the world, and how governance really impacts that. So that was the best I could do to set you up, Michael. [00:05:18] Speaker B: Yeah, well, I'll just add before Michael goes, we're never going to let Michael talk. Okay? [00:05:24] Speaker C: That's the goal. [00:05:25] Speaker B: He's just the guy behind the curtain. He's like the Wizard of Oz over there. [00:05:28] Speaker C: But the time is being charged, though. [00:05:30] Speaker B: So, you know, I go to legal conferences, I do this podcast, I have lawyer clients all around the country. And all lawyers and firms are being pushed and told: use AI, implement it here, there, everywhere. So it's being pushed quite often. Some companies just started, some are new, and some of these lawyers are doing it themselves using, you know, OpenAI or ChatGPT. So, you know, I just lay that out there before we dive in. So, Michael, since you're charging me by the hour, let's go ahead and have you introduce yourself.
[00:06:04] Speaker C: So I'm Michael Paik. I'm the general counsel for Babl, and I'm actually also an AI auditor; that's how I got into the firm. I took the courses and qualified, and I liked these guys so much, I joined the company. So I've been with Babl a little bit under a year, but I've been practicing for over 25 years. My background is, you know, I started off in New York on Wall Street in kind of a corporate practice, then went to Silicon Valley, and then I returned to Korea, where I'm residing now, about 20 years ago. At the time there wasn't that much work in venture capital, which was what I was doing in Silicon Valley, so I kind of pivoted into building out risk management systems for large conglomerates, industrials: tires, steel, shipbuilding, large companies with many publicly listed companies in their system operating worldwide. And similar to some of these startups, their compliance infrastructures were inadequate. That's what I focused my time on. Then a couple of years ago, as ChatGPT rolled out, I started, you know, using it for myself, then building AI apps, then, you know, getting interested in kind of the compliance around AI usage. And that's how I found Babl and ended up joining them. My previous experience with AI is, you know, 30, 34 years ago I studied linear programming, because before I went to law school I did operations research. And, you know, in practice I had an expert systems client, and I was on the board of a computer vision company for many years. So I had some experience with it, but, you know, it really kind of picked up a couple of years ago, just like with everybody else. [00:07:39] Speaker B: Yeah, for sure. I mean, we know it's been around for a long time, and I just feel most people think it just came around in the last couple years. ChatGPT has really blown things up.
[00:07:53] Speaker C: Yeah, I think, you know, the algorithms are everywhere now, with social media and, you know, shopping. It's pervasive. It's just that we didn't think about it in this way, pretty much, until ChatGPT crystallized it for many of us. [00:08:08] Speaker B: Yeah, definitely. You had something to add to that? [00:08:10] Speaker A: Sorry, I was gonna say, yeah, I do a lot of reading in this space, but most people are like, AI has been here, you know, since ChatGPT. But I think back in like the 1930s is when it was mathematically proven that you could have automated decisioning through a machine. So it's been around for quite some time, at its most elementary level. But yeah, how it's being used in business today, since ChatGPT, is really disruptive to everything. And most people aren't really aware of all of those risks or compliance challenges and those types of things. So it's certainly created a whole bunch of new things. And we see all the regulations that are constantly, you know, being discussed and voted on, and, you know, is it this country, this state, this region? And that's why we talk with a lot of lawyers and legal professionals about how it impacts them. Because we're talking about AI: what is it? How do you break it down? How do you even evaluate it from a legal compliance perspective? Right. So that's kind of the angle specific to law firms. We work with a lot of different people, because governance, I think, is generally best with a diverse group. So legal should be a part of every single organization's, you know, governance team. They should be weighing in and helping there.
And that's where, if you understand how governance works generally and some of the frameworks (and that's what we teach in the program), then, you know, you can start to think about it from a legal perspective, which is where Michael comes in and really adds just a ton of value in that certification. But yeah, we've been keynoting at different ones, like the D.C. Bar Association; we've keynoted that a couple years running. Our CEO's there, and he works with regulators globally. So, you know, we're constantly talking. [00:09:55] Speaker C: About regulation, for lawyers in particular. You know, I've been practicing long enough that I actually remember when email got rolled out for lawyers to use. We used to do paper distributions, mailing everything, hard copies. And back in maybe '98, '99, law firms started using email and approving it for distribution. At that time, everybody was worried about confidentiality and what could go wrong. I think we're at that stage now with AI, and there is a lot to worry about. That's kind of why we created this course for lawyers in particular: to kind of address their special needs and to have them, you know, get a better understanding when they're working with clients who are deploying AI. There's a role for lawyers, both at law firms and definitely in house, in leading the charge on kind of managing these risks. Part of what we're doing is trying to get attorneys comfortable with AI and confident about opining on AI, you know, in their professional capacity, but also competent in using AI for their own work. And I know you have a diverse audience here, but you know, whether you're a solo practitioner or a law firm partner or you're in house, there are many uses of AI that remain to be explored beyond, you know, the headlines and mis-citing cases in litigation.
That's really, as far as I'm concerned, actually kind of a no-brainer. You know, you've got to proof your stuff. [00:11:31] Speaker B: Yeah. [00:11:32] Speaker C: But the vast majority of use cases for AI for law firms, whether you're a solo law firm or in house, I think are in knowledge management, in operations, in managing workflows: basically managing your business and creating kind of slack and efficiencies in your time so that you can bill. And if you're in house, then you can improve the risk management systems and maybe just go home early. I appreciate the articles highlighting these bad, you know, cases of made-up citations in litigation, but I think, you know, we don't need to really belabor it here. I think most of your audience is well aware of the risks, and it's a professional obligation to manage your work product; you know, you can't skip it by using AI. [00:12:17] Speaker B: No. Yeah, a hundred percent. Like hallucinations and people using it for that kind of stuff. But maybe that scares some people from utilizing it for other things, like client communication and, you know, those kinds of efficiencies that you can create. Again, I do have a diverse audience, so I appreciate that. Say you have a PI firm listening: they have intake, they're trying to bring in potential leads, trying to turn them into consultations and then turn them into a client that they can set up on contingency. You know all the work that's involved there. Right? Communication, follow-ups, client journey and processes, you know, steps throughout the case history and updating the client. There's so much application for AI in those areas, and there's a lot of companies taking advantage of that too.
And then, I think I mentioned this before recording, I see a lot of lawyers creating their own AI products based on their own need, or, you know, they say, hey, I see a hole here, I see something we could fill. So I'm seeing all kinds of creative things happening there too. But yeah, not just using it to, you know, do the casework for you. There's obviously systems out there and companies that are doing that, that are saying, you know, it's safe to use and blah, blah, blah. But I think, you know, people are still more concerned about that piece of it. Just using it in your day-to-day operations and processes, I feel like that's been more adopted. But I would assume there's still risks associated with that and things that they may not know 100%. Right? [00:13:49] Speaker C: Yeah, fundamentally. And this is, you know, I think well understood but not always remembered: this technology, particularly large language models, is stochastic, and basically that means there's a probability distribution associated with its output. The easiest way that I've found to explain it is, there's actually a cartoon called Adventure Time, and there was a monster in this cartoon called the Demon Cat who had approximate knowledge of many things. And this is what you're dealing with: a tool that has approximate knowledge of a lot of things, probably. So the trick is to, you know, corral that cat and get the output that you want for your particular workflow. But that demon cat will raise its head when you least expect it, so keeping that in mind in your workflows is very helpful. Obviously lawyers are meticulous and we review our work, but, you know, there are a lot of time constraints and it's easy sometimes to overlook things in the midst of a lot of workflow.
But this demon cat, it's going to pop up, and that's the key to understanding, I think, you know, how AI works and then what you can do to manage it. And that's where these systems come in, right? Whether it's governance, risk management, and really guardrails around your work processes. [00:15:09] Speaker B: That's a good analogy. I like that. Yeah. I feel like, you know, I know lawyers that kind of use it every day, all day, at least for personal and maybe some business. And then I know some that are just completely like: I don't even know how to use it. I don't use it. I don't want to. I'm scared of it. So, you know, I'd like to hope my audience is on the side of using it to their advantage. And I just go back to the fact that there's all these companies out there that offer products built on AI. And, you know, if I have three AI companies that I'm hiring for this and that, plugging in here and there, what's the risk to the firm? What understanding should they have of those products and what they're doing and what they could do, or potential risks with communicating, say, to a client, or things that maybe appear harmless or controlled? What should a law firm owner really be asking: okay, how am I assessing these products? How do I make sure I understand what my risk might be? [00:16:08] Speaker C: Yeah, that's a really good question, I think, to get into the nitty gritty of it. Whether you're in house and you're doing vendor management, or you're buying software or services for your law firm, or you're subscribing to this stuff as a solo, there are certain questions that you need to ask. Let's, you know, start with the demon cat, right? All of these products are wrapped on top of large language models, whether they're put out by OpenAI or Claude or otherwise.
And all of those large language models have a version of this demon cat. And if you look at the underlying agreements, because, you know, we're talking to attorneys here, if you look at the agreements that underlie those offerings by these large language model providers, they very clearly disclaim accuracy. That's in writing, it's in the contract, and that's what your vendor is wrapping their services on. Since the model providers disclaimed it, where does it go? Is it, you know, the service provider that's going to hold that risk? They're most likely going to disclaim it again, to you. So you have to read your agreement and understand the parameters of what's being offered, what their warranties are, and what you can expect in terms of accuracy. A lot of the legal LLMs or legal services take great pains to talk a lot about, you know, we reduced hallucinations, we, you know, abide by confidentiality, there's no training on your data. And that's important, but it still doesn't get rid of the demon cat. So we need to think about what that means for your workflow, how you use it, whether you need to buy, you know, additional insurance to cover kind of disasters that may come about because of your use of this particular service. So, you know, buyer beware. In terms of vendor management, the contracting process is important, as is your due diligence. We advise both in-house teams and others on this, but it really starts on a meta level, not just at that one contract. Right? This is what Bryan was talking about with regard to governance: what are you trying to do with AI, and what is your entity or law firm's approach to managing the risks and opportunities related to AI? The recommendation we make is, you need management systems around this.
Whether it's something from ISO, which is ISO 42001, which is kind of the standard that's coming about, whether you're using the NIST risk management framework, or, if you're active in the EU, you have much more onerous conditions because of the EU AI Act that you need to be looking at. It depends where you are, but you do need a risk management system. That's what lies between kind of the governance, your top-level, board-level, owner-level decision on whether and how to use AI, and how to manage it. And then the managers need to come up with a management system, and part of that is the vendor management step, when you bring in services from outside. Does that make sense? [00:19:03] Speaker B: Yeah. Yeah, that's awesome. I haven't heard ISO in a little while. [00:19:06] Speaker A: Yeah, it's back in full force. That's going to be the thing. But I was going to say, just specifically on procurement, as Michael pointed out, there's a lot of, you know, terms and conditions that need to be evaluated, and who owns the liability. You know, "are we materially changing how the tool from the vendor is being used within our workflow?" is something that I think every business needs to be hyper-aware of. Am I assuming liability risk for this HR decisioning because I've changed how that works? And there are class action lawsuits out on some of these topics, and you can see some of these risks play out in real time across different domains. I'm not sure if they're settled, but they're active, and they don't look great from a brand perspective. Procurement is a weak spot, I think, for every organization, because they don't know the risks, and I think they live in the terms and conditions. Thinking through that kind of ownership of risk is a huge challenge for organizations to get their heads around. And I think that's an opportunity for any lawyer.
If they really get into the weeds and understand the emerging laws, they can probably add that as an item on the line card of services provided. So much, I think, is going to move into the digital realm for lawyers, and AI is, you know, really the catalyst event. And you know, you get into finance: how does, you know, a future state with code come into law? So, you know, just throwing that out there. But certainly areas to grow into new markets for every industry are being created, and it's just another iteration of, you know, the domain. So it's an interesting time, but I'll throw that one out there. [00:20:47] Speaker C: That said, I mean, AI is just, you know, another new thing, right? Like email. We still have product liability, we still have, you know, other laws that apply, and that's the domain of the attorneys. The point is that this new age of AI impacts all of this, but, you know, it's eminently understandable. It's just that you need to take a little time to understand the technology, how it impacts you in your own work, your organization, and then your clients, and, you know, it impacts them differently. And maybe we can just briefly go through them, because, you know, I'd like to give as much value as possible on this call. [00:21:20] Speaker B: Yeah, go ahead. I'm being billed for your time. I might as well use it.
It is not like, first of all, it's definitely not like a Google search, right, where you look for information and you, you get it via, via this Web crawl that's indexed. What you're really looking at is a large zip file of the Internet about six months ago as kind of trained to talk to you nicely by, by kind of through reinforcement learning with the human feedback. And this chat function kind of lulls you into believing that you're, you're working with something that has actual intelligence. It's not, it's, it's a token tumbler, right? It's, it's, it's a random process that, that has been trained with, with lots and lots of money and a lot of math so that it, it sounds right, but the output needs to be reviewed. So when you're making your own apps, if you understand these limitations and you use it for things like transcribing intake interviews, maybe organizing your office expenses, maybe you know, doing an analysis of your timesheets over the past month or the quarter, you know, there are things that you should be aware of with regard to the limitations of LLMs when you're doing this. It ain't Excel, right. So there's an aspect to this that should be well understood according to the use case. And then again, if you're making an application for another as a service, you are in a good place if you know the domain, the workflow very well. That's the key, right? That's the context. So what's happening now, as you've probably seen in the media, is that it's no longer necessary to be a coder. The natural language is sufficient and you can do this graphically with kind of Lego boxes, but the prompting and the context for creating these applications is key. And the benefit of having the domain expertise is that, that you know, you know your stuff and so you can see if something's gone awry and that that kind of judgment is something that, you know, some 25 year old coder is not going to provide for you without that domain expertise and oversight. 
So bringing it all the way back around to the management systems: it's really applying that domain expertise and oversight to the risks of using AI so that you can get to your objective with eyes wide open. [00:24:01] Speaker B: I've never heard it explained that way, so I appreciate that. And I think a lot of people at this point, especially younger people, but even people around me, employees, my business partner... I think most people truly kind of believe it, or, you know, take it at its word when it spits out a result. You know, "I ChatGPT'd it and this is what it said." I mean, I find it very believable. Yeah, I've found myself in that: hey, let me look this up. Oh, here's the answer. [00:24:27] Speaker A: My favorite. [00:24:28] Speaker B: You say, hey, I think this might be the answer, and it says, here is the answer. [00:24:31] Speaker A: Yeah, my favorite is how good it makes me feel about myself with my crazy ideas. It's like, that is super smart. You're the smartest. So, yeah, definitely like that. [00:24:40] Speaker B: Well, I could say, you know, what's the number one marketing agency for SEO in the world? It says my company. [00:24:46] Speaker A: Well, sometimes it's true, Kevin. [00:24:49] Speaker B: It has a history of me talking about my company all in my GPT, so it's obviously gonna say mine. Yeah. [00:24:59] Speaker C: For the attorneys in particular, it's very important that you go in and, first of all, get the paid version, right? So this stuff's not being trained on. Notwithstanding the case in New York that still leaves this kind of up in the air with regard to OpenAI, and the cases out on the West Coast on copyright and so on, all of this will eventually settle. But get the paid version. That's the first step. And then you need to customize your instructions. You have to tell it who you are and what you expect.
It's just like when you have a new hire: when you're onboarding a new hire, you let them know what your expectations are. And this includes things like, you know, IRAC: issue, rule, application, conclusion. This is kind of the way I think about things, and this is how I'd like your output. And depending on your practice, you'll have different kinds of constraints and parameters that you want to put on it. But you can do that in custom instructions. That's not programming; that's just kind of, you know, letting it know who you are. And beyond that, you can start making workflows for yourself. So if we start with the individual: you know, if you make custom GPTs for yourself that kind of hardwire additional instructions for a particular workflow, whether it's client intake or a particular type of, like, monthly newsletter that you send your clients, you kind of set up a workflow for that via a custom GPT. That's for OpenAI, for example; you can do it similarly with other applications. My favorite easy one for this is: every time you take a CLE, take the transcript and upload it along with the slides into either a custom GPT or, for this one, something that's free like NotebookLM, and you have a little mini paralegal that has this material that you can chat with. And, you know, in the case of NotebookLM, it's good at creating little podcasts for you regarding that subject matter. So you took a CLE on space law, and then, you know, you want kind of a resource for this. It's like a little mini librarian. But, you know, those kinds of personal uses require customization and kind of awareness of what's possible. And then that kind of builds out further for the law firm, and then for in-house teams at the client. But basically you're building little management systems for a particular application, whether it's something as easy as client intake or a monthly newsletter.
That management system starts with that custom instruction and an awareness of dealing with that demon cat. [00:27:18] Speaker B: No, I love that. And I know, for like my company, we have a paid version of ChatGPT for every employee. So every single person that has it is encouraged to use it, unless they're not going to for some reason. But yeah, everyone has it, and we encourage them to try to use it within their position to, you know, find more performance or efficiencies. So I know that we do that here. [00:27:38] Speaker C: You know, you're kind of leaning forward into the technology. [00:27:42] Speaker B: Absolutely. [00:27:42] Speaker C: But. [00:27:45] Speaker B: We're trying to grow. And so it's: if you manage this department, or you're using ChatGPT, how can you leverage it? How can we become more efficient? And we want everyone on the team to be thinking that way. If they're passionate about what they're doing, they should be. [00:27:58] Speaker C: You know, the governance and management steps would be: do you have a policy in terms of how it can be used? Do you have guidance on what not to use it for and what not to do with it? You know, don't upload certain types of documents. That's the kind of governance and management system that we're talking about. In terms of, you know, if you have nothing, if you have no governance or management system, and say you don't even say anything about AI usage at your company: people are still using AI in the course of their employment, and you're still open to these risks because you haven't addressed them. We call it shadow AI. But, you know, addressing AI usage first of all, then creating kind of a rule for the organization, then training people on it, and then whatever you want to do in your particular organization in terms of boundary conditions. That's basically what we're talking about.
And then when you start to reach out in terms of vendors and third parties, you need to have a system to manage that as well. [00:28:57] Speaker B: That's a good point. I mean, I know we do, we've encouraged it. We have some guidelines that we put in place, and then we do, like, team training. So we have an all-hands meeting this week. We have guest speakers on AI, I'm giving a talk on AI search and stuff like that. So I'll say we are doing some of that ourselves. But yeah, any law firm listening: what is your team doing? Are they using it? If you don't know they're using it, they're probably using it while they're in the office on your machines. I guarantee it. So yeah. Is there a policy in place? Do you even know who's using what and when? And you can still be open to that risk. That's very interesting. So I want to tie this back around to Babel and, you know, kind of, what if I was a lawyer right now and I'm like, let me go check out Babel. Like, what's that experience like? What's the process like? What are they getting? Maybe you kind of hit that real quick. Yeah. [00:29:49] Speaker A: So for our education, we have really three main product categories. We have AI education, AI advisory, and then AI audit. So for education, obviously we talked a lot about the governance certificate for legal professionals. It's a four-course certificate program which, you know, breaks down just generally: how does AI impact business? What are the unique risks presented in business? How should we be approaching AI investment? There's a value framework in there to bring projects into existence. We talk about AI governance frameworks: the NIST AI RMF, ISO 42001, and the EU AI Act, and just kind of how those all work.
And as Michael pointed out, AI management systems are incredibly important, not just for getting it off the ground, but for continuous improvement iterations as the technology grows and becomes more impactful in work streams. And then the fourth is the domain-specific course for legal professionals. So all of that's capped off with a capstone project, and then we issue the certification. So that really helps kind of round out education. As for where our flagship product is, we have a whole AI auditing certificate program. As Michael mentioned, that's kind of how he found Babel a while ago. And that is, you know, all authored by our CEO, who is a PhD astrophysicist and leads our research and all that fun stuff, and keynotes in the legal space. So he talks a lot more about risk, the assurance process, ML and AI generally. I think Michael said that was the hardest one, getting all the mathematical concepts sorted out. But yeah, we have all that, and then we're helping organizations all the time with these problems through direct advisory. And then we also have our own AI system audit, a model-card kind of audit, where below the kind of global AI management system, we can evaluate and inspect each individual AI system, the model, to make sure that it's achieving what it's supposed to be doing within what makes sense probabilistically for the algorithm, right, in the system. So if you work with Babel, you're generally working just directly with us. We do scoping. Every AI has a different context, and we bring that into consideration. But it kind of flows into either providing assurance, audit, or advisory, or you're learning yourself and putting yourself in a position to, you know, find your lane with AI into the future as it's kind of uncertain. But these are core foundational concepts.
The technology is kind of interchangeable in how we approach things, and we started with those foundational principles. And, you know, these are good skills and knowledge to possess as the technology grows and changes every week. You know, we've been doing this work for years now. So that's kind of the Babel hard pitch. But yeah, just one more time: if you are interested in the education, use LAW30 at Babel AI and you're able to get a discount across any of the educational products that we have, including the certificate for legal professionals. [00:32:53] Speaker C: If I could just add one thing about the course for lawyers in particular. You know, we spend a lot of time on management systems for in-house, and then also how lawyers can manage their own workflows for personal use and professional use. But, you know, given this audience, I'd like to reiterate for law firms, whether you're solo or you're a partner or you're working with other partners in the law firm context: the finder, minder, grinder functions, they'll get that very clearly. But the origination of clients can be done much more easily and effectively using AI, right? Just, you know, getting intelligence on potential clients, creating a client file, remembering stuff, simply using NotebookLM for non-confidential information. Using Perplexity with EDGAR so that you can understand public companies and filings better, and all the information that comes with that: material contracts, industry analyses, all of that stuff. That's a lot easier now with AI. But for the grinder, so the kind of operations functions related to the firm, every minute that lawyers are spending on non-billable matters is money out of your pocket.
So if you can streamline your operations and get more and better out of the systems that you have, whether it's yourself doing it or you're working with paralegals and an office manager, enable them to do more and to do it better and more effectively in the same time. I'm not talking about downsizing, I'm just talking about allowing them to do better and be happier. And maybe that comes into compensation as well, because that makes a difference. But for the lawyers who are also billing: all that bullshit time that you're spending on making PowerPoint presentations for clients, reports, you know, stuff for some presentation at the local bar association, you can whip that stuff up in five minutes, which gives you another 55 minutes to actually make some money. So this is important. And then on, you know, the practice side, this is a little sensitive. So, you know, for litigation we've already talked about what to be careful of. But for other practices, there's a lot of work that can be made more efficient, and the question is how to address that. Because many law firms are built on a leverage model, right? So efficiency is not necessarily the core value of how these business models have developed. Over time, the world has changed and it's gotten a lot more competitive. Your clients are demanding efficiencies. You can now offer them efficiencies with flat fees or alternative fee arrangements by deploying better technologies in your firm and improving your workflow. So I think that's it from a law firm and provider perspective. Understanding the in-house implications is actually very important also for service-provider lawyers, because this is how the market has changed. I think if they're not already feeling it, they will feel the pressure to justify billable hours and their work process and why they're not using AI. And, you know, the hallucinations and "oh, we're not, you know, confidentiality," that will only get you so far. You need to address this head on.
And in order to do that, you will have to understand it better, so that you can at least respond to these questions from your client. [00:36:04] Speaker B: Oh yeah, great points. 100% agree. And it sounds like, yeah, if you're a lawyer listening: you're using AI, your vendors are using AI. Hopefully you're doing all those things, getting educated, understanding what you have and what your risks are, and putting things in place. Like they just exposed me, potentially: we got ChatGPT for every employee and we give them a little bit of, eh, you should probably use it to do this. But we don't really have a system in place to manage that and to grow that and to expand on it. So yeah, it's all very interesting stuff, and I appreciate you guys coming on to share. Really cool company, you're doing great things, and I appreciate you sharing with us on here today. Go to Babel AI. It's B, A, B, L, A, I. And coupon code what? 30? [00:36:52] Speaker A: Law 30. LAW30. All caps. [00:36:56] Speaker B: Excellent. [00:36:57] Speaker C: 30 in numbers. [00:36:59] Speaker A: Yeah. 3, 0. There we go. [00:37:01] Speaker B: Good point. LAW30. I think they'll figure it out. These lawyers are smart. They got this. Yeah, yeah, yeah. [00:37:07] Speaker A: If one thing doesn't work, try another. [00:37:09] Speaker B: Right? That's the lawyer way. [00:37:10] Speaker A: Trial and error. [00:37:12] Speaker B: Yeah. So guys, anything else you'd like to add before we wrap up, anything else you want to share? [00:37:19] Speaker C: Just on the confidence part. All lawyers, by definition, by training, we're all wordsmiths, right? This is how the AIs, the large language models, work. I'm not talking about the vision models and diffusion models; the large language models work by word association. You are very much like the large language model. And then, you know, the interaction should be natural and a good fit for the way we have been trained.
But, you know, we just need to keep our eyes wide open and treat the models and the tools as very capable and smart paralegals or associates that are very bright but have no clue, and every morning when they come to work, they forget everything that they learned yesterday. So you need to be aware of those limitations. But there's a lot you can get done with these tools to save you time, make you money, and maybe just. [00:38:10] Speaker B: Get your homework done, like that. Well, appreciate it. Brian, anything else you got? [00:38:14] Speaker A: I was just going to say, I appreciate you having us on, and we'd love to, you know, if there's stuff that we talked about that you want to talk more about, we're always open for a conversation. You can find our contact information there too, and we're happy to talk about things. We do offer partnership programs. I'm a channel guy, so if there are competencies you want to bring in, you know, we're happy to discuss. We do have legal partnerships with, you know, data privacy folks or things along those lines to help bring in some of our services. So if that's interesting at all, I'd love to talk more and help you grow your business. I mean, that's what we're here to do. [00:38:53] Speaker B: Appreciate it. I love it. That's what the podcast is all about, growing your businesses, and I appreciate you guys coming on the show and sharing all this today. So check them out. Also, LinkedIn. I know these guys are both on LinkedIn. I'm always on LinkedIn. If you want a direct connection, let me know. If not, look them up and connect with them there, too. So I would encourage that. Well, everyone, thank you so much for tuning in to another show. As always, I appreciate your loyalty to the show. And guys, thank you so much. We'll talk to you soon. [00:39:24] Speaker C: Sam.
