
Copyright Risk in Financial Services and the Rise of Responsible AI – with Lauren Tulloch of CCC
The AI in Business Podcast • Daniel Faggella (Emerj)

What You'll Learn
- Financial firms are rapidly adopting AI, but may not fully understand the copyright implications of using external content to develop and refine these systems
- Employees widely use AI tools in their day-to-day work, which can lead to unauthorized use of copyrighted materials
- Copyright risks can expose firms to regulatory scrutiny, litigation, and reputational damage if not properly addressed
- Incorporating copyright considerations into responsible AI frameworks is critical, through employee education, clear policies, and partnerships with content providers
- Collective licensing solutions can help financial firms leverage a wide range of content while staying compliant with copyright
Episode Chapters
Introduction
Overview of the copyright risks facing financial services firms as they adopt AI systems
Copyright Risks in AI Development
Risks around using external copyrighted content to fine-tune and develop AI systems
Copyright Risks in Employee AI Usage
Risks from employees using AI tools to access and leverage copyrighted materials
Incorporating Copyright into Responsible AI
Strategies for financial firms to proactively address copyright concerns in their AI adoption
Partnerships and Licensing Solutions
How financial firms can work with content providers and licensing organizations to stay copyright compliant
AI Summary
This episode discusses the copyright risks that financial services firms face as they increasingly adopt and use AI systems. The key risks arise from two main areas: 1) the development and fine-tuning of AI systems that leverage external copyrighted content, and 2) the day-to-day use of AI tools by employees that may involve unauthorized use of copyrighted materials. The episode highlights the urgent need for financial firms to proactively incorporate copyright considerations into their responsible AI frameworks, through employee education, clear policies and escalation processes, and partnerships with content providers and licensing organizations.
Key Points
1. Financial firms are rapidly adopting AI, but may not fully understand the copyright implications of using external content to develop and refine these systems
2. Employees widely use AI tools in their day-to-day work, which can lead to unauthorized use of copyrighted materials
3. Copyright risks can expose firms to regulatory scrutiny, litigation, and reputational damage if not properly addressed
4. Incorporating copyright considerations into responsible AI frameworks is critical, through employee education, clear policies, and partnerships with content providers
5. Collective licensing solutions can help financial firms leverage a wide range of content while staying compliant with copyright
Topics Discussed
- Copyright risk
- Responsible AI
- AI adoption in financial services
- Employee use of AI tools
- Copyright licensing and partnerships
Frequently Asked Questions
What is "Copyright Risk in Financial Services and the Rise of Responsible AI – with Lauren Tulloch of CCC" about?
This episode discusses the copyright risks that financial services firms face as they increasingly adopt and use AI systems. The key risks arise from two main areas: 1) the development and fine-tuning of AI systems that leverage external copyrighted content, and 2) the day-to-day use of AI tools by employees that may involve unauthorized use of copyrighted materials. The episode highlights the urgent need for financial firms to proactively incorporate copyright considerations into their responsible AI frameworks, through employee education, clear policies and escalation processes, and partnerships with content providers and licensing organizations.
What topics are discussed in this episode?
This episode covers the following topics: Copyright risk, Responsible AI, AI adoption in financial services, Employee use of AI tools, Copyright licensing and partnerships.
What is key insight #1 from this episode?
Financial firms are rapidly adopting AI, but may not fully understand the copyright implications of using external content to develop and refine these systems
What is key insight #2 from this episode?
Employees widely use AI tools in their day-to-day work, which can lead to unauthorized use of copyrighted materials
What is key insight #3 from this episode?
Copyright risks can expose firms to regulatory scrutiny, litigation, and reputational damage if not properly addressed
What is key insight #4 from this episode?
Incorporating copyright considerations into responsible AI frameworks is critical, through employee education, clear policies, and partnerships with content providers
Who should listen to this episode?
This episode is recommended for anyone interested in Copyright risk, Responsible AI, AI adoption in financial services, and those who want to stay updated on the latest developments in AI and technology.
Episode Description
Today's guest is Lauren Tulloch, Vice President and Managing Director at CCC (Copyright Clearance Center). CCC provides collective copyright licensing services for corporate and academic users of copyrighted materials, and, as one can imagine, the advent of AI has exposed a large number of businesses to copyright risks they've never considered before. Today, Lauren joins us to discuss where copyright exposure arises in financial services, from the growth of AI development to more commonplace employee use. With well over a decade at the company, Lauren dives into the urgent need for proactive copyright strategies in financial services, ensuring firms avoid litigation, regulatory scrutiny, and reputational damage, all while maximizing the value of AI. This episode is sponsored by CCC. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
Full Transcript
Welcome, everyone, to the AI in Business podcast. I'm Matthew DeMello, Editorial Director here at Emerj AI Research. Today's guest is Lauren Tulloch, Vice President and Managing Director at Copyright Clearance Center, or CCC. CCC provides collective copyright licensing services for corporate and academic users of copyrighted materials, and as one can imagine, the advent of AI has exposed a huge number of businesses to copyright risks they've scarcely considered before. Today, Lauren joins us to discuss where copyright exposure arises in financial services, from the growth of AI development to more commonplace employee use. With well over a decade at the company, Lauren dives into the urgent need for proactive copyright strategies in financial services, ensuring firms avoid litigation, regulatory scrutiny, and reputational damage, all while maximizing the value of AI. Today's episode is sponsored by CCC, but first, are you driving AI transformation at your organization? Or maybe you're guiding critical decisions on AI investments, strategy, or deployment? If so, the AI in Business podcast wants to hear from you. Each year, Emerj AI Research features hundreds of executive thought leaders, everyone from the CIO of Goldman Sachs to the head of AI at Raytheon and AI pioneers like Yoshua Bengio. With nearly a million annual listeners, AI in Business is the go-to destination for enterprise leaders navigating real-world AI adoption. You don't need to be an engineer or a technical expert to be on the show. If you're involved in AI implementation, decision-making, or strategy within your company, this is your opportunity to share your insights with a global audience of your peers. If you believe you can help other leaders move the needle on AI ROI, visit Emerj.com and fill out our Thought Leader submission form. That's Emerj.com, and click on Be an Expert. You can also click the link in the description of today's show on your preferred podcast platform.
That's Emerj.com slash expert one. Again, that's Emerj.com slash expert one. Without further ado, here's our conversation with Lauren. Lauren, welcome to the program. It's great having you. Thanks for having me. Looking forward to our conversation. Absolutely. As we talked about in episode one with Rohani, we're starting to see many forms of AI really being brought through all kinds of enterprises. And this is presenting a lot of challenges, from client onboarding to investment insights. The legal implications of how that AI is trained and used are only beginning to surface. Copyright concerns are no longer hypothetical. Firms are publishing and operating in ways that could carry significant risk, particularly when content and tools are powered by unvetted or AI-generated materials. We're talking today about what FinServ leaders need to know about copyright risk and how proactive, responsible strategies can help them stay ahead of litigation and regulation. But just to start out, Lauren, what are you seeing as the copyright risks for financial firms, the risks they may not see coming? So I think there are a couple of key categories of risk, depending on whether you're talking about the development of AI systems or the day-to-day use of them. So if we start with the development of AI systems, financial firms could be building their own systems. They could be licensing AI systems built by other companies. Or perhaps, what I feel like we're hearing most often when we talk to our clients, is that companies are licensing an AI system built by another company, but then they are doing additional work to fine-tune that system with specialty content so that it's fit for purpose. So typically, the content that is used for fine-tuning is owned. Some of that is owned by the organization itself. So it's their own policies, procedures, their own intellectual property.
But then most often, there's also expert content that's published outside of the organization that needs to be used in order to really help this system do what the client needs it to do. And here's where you have the first potential area of risk when it comes to copyright, because if you have, for example, enterprise subscriptions to journals, to news, to specialty publications, to market analysis, those agreements don't necessarily allow for the use of content with AI systems. Those are secondary use rights that are typically not included in just a regular subscription agreement. So the very real risk there is that the technical teams that are working on refining these AI tools to really help the organization get the most value out of them might not understand the copyright terms of the subscriptions or other licenses that they have in place. And then there is the second, big category of risk, which is the day-to-day use of these systems by employees within financial firms. And these are not necessarily the employees that are building or refining the AI systems like the ones I was talking about. These are just the day-to-day employees who are using these tools in their everyday work. So we've seen a lot of survey results. Some we've done ourselves at CCC, but other firms have done surveying in this area as well. And we're seeing that nearly eight out of 10 employees in firms are using AI tools, and only 21% roughly are saying they rarely or never use these tools. So when you have that level of adoption, you are inputting copyrighted content into these systems as well. And so that's another area of risk. So you're actually potentially using these systems to, say, for example, summarize content from a series of articles, things like that. And we're naming that risk at this point. Tell us a little bit, maybe, about what's on the books for what directly comes at banks that get in trouble in these areas.
What's the regulatory pressure like, and what degrees of risk in the examples that we've cited are being put in the way of financial institutions? Yeah, I mean, I think that some of the urgency risks are really around the extent and the speed of AI adoption within financial services firms. So what we're seeing is that when you compare across all different markets, financial firms are a bit ahead of the game when it comes to adopting and rolling out these AI tools, but they're absolutely reliant on the use of external information in order to help them accomplish their goals, right? To do the forecasting that they're looking to do and things like that. And then you're going to see all sorts of regulatory risk, right? Because if you're not relying on reliable information, then you're putting your firm at risk. Either you're not relying on published content, so you're probably not getting the value that you should be getting out of the AI tools, or you are relying on published content. And if you're not getting the appropriate licensing in place, then you're at risk of, you know, the publisher of that content coming after you and saying, hey, you don't have the rights to use this content in the way that you are using it. You've only licensed basically read-only access for your employees, but you've created an entire system leveraging our content. Does that make sense? Right, right. So it does. And knowing that financial institutions in particular need to be sensitive and knowledgeable in how they're deploying these responsibility frameworks, especially given that it brings a lot of advantages to their employees' work to use maybe those open shadow AI tools that we talked about in the first episode. What are the best ways for particularly banking leaders in financial institutions to think about incorporating copyright-safe thinking into their responsible AI frameworks?
Yeah, I think the first and most critical thing is just to make sure that the copyright element is not an afterthought, but is actually a foundational element of a responsible AI framework or program. And I don't think that's actually very difficult to do, because fundamentally, based on all the information that's been out there in the world about these AI systems, most employees understand at a basic level what large language models are and how they work. They understand that these systems require content in order to work effectively. So introducing the concept of ownership of those works is actually not that difficult. The fact that somebody owns the content that has been used to make these systems work the way they should work is actually fairly straightforward. And then you can build on that with your education around, okay, what are the appropriate practices? Most of these frameworks already cover things like appropriate practices for confidential or sensitive information, or trade secrets, things like that, right? Things that you would not necessarily want put into systems. The same level of scrutiny can be applied to: does our organization own this content, or does someone else own this content? If someone else owns this content, do we have the appropriate permissions and licenses in place to be able to leverage this content with AI systems? Do you think financial institutions especially are a space where there's a built-in need for them to be seeking out partners for being able to detect that copyright language, especially given how large these areas are and how easy it is for employees to be engaging in that shadow AI? Yeah, I think there are a number of different ways. For financial firms, there are going to be organizations that they license content from, ones that they have direct and strong relationships with.
And then there are opportunities to work with aggregators and with collective licensing organizations to leverage solutions that bring together content and licenses across a wide range of content types and publishers. Yeah, absolutely. Just in terms of building that copyright-conscious AI adoption process, what should the priorities be just as they're getting started, especially as engaging employees where they are now becomes more important as AI is getting into everyone's hands? Yeah. So I think there are a couple of things. I guess, going back to what we were already talking about, I would say education, education, education. There are a few elements to training employees. There are sort of the copyright basics, in terms of why is this important, what does it mean, what's happening with these systems. But also education on just process and policy within the organization. So things like escalation paths for employees when they have a question about what they can and can't do with an AI system. If they aren't sure whether or not they're allowed to do something, or whether content has been appropriately licensed, who do they reach out to? What is the process to answer their questions? Things as simple as that are things that often get overlooked, but can really help in mitigating the risk for the financial firm. If people know who to ask when they have a question, then you can start to identify the gaps in employee understanding, and then you can further build out your policies and procedures. Right.
And even from a compliance standpoint, it really feels like that missing piece alongside what we do all the time for SOC and planning and different kinds of certificates and privacy certifications, things of that nature. And the training doesn't really seem that much more complicated. It does kind of raise what goes back to our first episode: why haven't we had this all along? And I'm very relieved to say, from my background in music and technology, it is the Napster rulings of the early aughts. So interesting to pull these things apart after writing about them for so long. But Lauren, thank you so much for being with us this week. It's a pleasure having you. Yeah, thanks for having me. Before we close, I think there were a number of instructive insights from our conversation with Lauren Tulloch, Vice President and Managing Director at CCC. Here are three we'd like to summarize before we close things out on today's show. First, copyright risk in AI isn't just a legal detail. It touches how systems are built and how employees use them every day. Second, most enterprise content subscriptions don't automatically grant rights for AI training, so overlooking licensing can expose firms to real compliance and litigation risk. And finally, building copyright awareness into responsible AI frameworks through employee education, escalation processes, and partnerships with licensing organizations is essential for financial institutions to adopt AI responsibly. Interested in putting your AI product in front of household names in the Fortune 500? Connect directly with enterprise leaders at market-leading companies. Emerj can position your brand where enterprise decision makers turn for insight, research, and guidance. Visit Emerj.com slash sponsor for more information. Again, that's E-M-E-R-J dot com slash S-P-O-N-S-O-R.
If you enjoyed or benefited from the insights of today's episode, consider leaving us a review on Apple Podcasts and let us know what you learned, found helpful, or just liked most about the show. Also, don't forget to follow us on X, formerly known as Twitter, at Emerj, and that's spelled, again, E-M-E-R-J, as well as our LinkedIn page. I'm your host, at least for today, Matthew DeMello, Editorial Director here at Emerj AI Research. On behalf of Daniel Faggella, our CEO and head of research, as well as the rest of the team here at Emerj, thanks so much for joining us today, and we'll catch you next time on the AI in Business podcast. Bye.
Related Episodes

Governing AI for Fraud, Compliance, and Automation at Scale - with Naveen Kumar of TD Bank
The AI in Business Podcast
22m

Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer
The AI in Business Podcast
35m

Why Granular Visibility and Data Control Determines AI Success in Financial Services - with Chris Joynt of Securiti
The AI in Business Podcast
30m

Rethinking Clinical Trials with Faster AI-Driven Decision Making - with Shefali Kakar of Novartis
The AI in Business Podcast
20m

Human-Centered Innovation Driving Better Nurse Experiences - with Umesh Rustogi of Microsoft
The AI in Business Podcast
27m

The Biggest Cybersecurity Challenges Facing Regulated and Mid-Market Sectors - with Cody Barrow of EclecticIQ
The AI in Business Podcast
18m