Gradient Dissent

Snowflake’s CEO Sridhar Ramaswamy on 700+ LLM enterprise use cases

Thursday, October 10, 2024 · 55m

What You'll Learn

  • Ramaswamy's path from IC engineer to executive leader at companies like Google, Neeva, and Snowflake
  • His approach to leadership, which balances technical expertise with people management and customer focus
  • The cultural changes he is driving at Snowflake, including increased intensity, excellence, and virtuosity
  • The need for Snowflake to move from a market-leading product to developing new AI-powered offerings that require rapid iteration and cross-functional collaboration
  • Ramaswamy's analytical and data-driven approach to measuring success and making decisions

AI Summary

In this episode, Sridhar Ramaswamy, the CEO of Snowflake, discusses his career journey from an IC engineer at Google to leading large teams and organizations. He shares his leadership style, which combines a strong technical background with a focus on people management and customer relationships. Ramaswamy emphasizes the importance of intensity, excellence, and virtuosity in the Snowflake culture, as the company transitions from a market-leading product to developing new AI-powered offerings that require rapid iteration and cross-functional collaboration.

Key Points

  1. Ramaswamy's path from IC engineer to executive leader at companies like Google, Neeva, and Snowflake
  2. His approach to leadership, which balances technical expertise with people management and customer focus
  3. The cultural changes he is driving at Snowflake, including increased intensity, excellence, and virtuosity
  4. The need for Snowflake to move from a market-leading product to developing new AI-powered offerings that require rapid iteration and cross-functional collaboration
  5. Ramaswamy's analytical and data-driven approach to measuring success and making decisions

Topics Discussed

#Leadership #Company culture #Product development #AI and machine learning #Data-driven decision making

Frequently Asked Questions

What is "Snowflake’s CEO Sridhar Ramaswamy on 700+ LLM enterprise use cases" about?

In this episode, Sridhar Ramaswamy, the CEO of Snowflake, discusses his career journey from an IC engineer at Google to leading large teams and organizations. He shares his leadership style, which combines a strong technical background with a focus on people management and customer relationships. Ramaswamy emphasizes the importance of intensity, excellence, and virtuosity in the Snowflake culture, as the company transitions from a market-leading product to developing new AI-powered offerings that require rapid iteration and cross-functional collaboration.

What topics are discussed in this episode?

This episode covers the following topics: Leadership, Company culture, Product development, AI and machine learning, Data-driven decision making.

What is key insight #1 from this episode?

Ramaswamy's path from IC engineer to executive leader at companies like Google, Neeva, and Snowflake

What is key insight #2 from this episode?

His approach to leadership, which balances technical expertise with people management and customer focus

What is key insight #3 from this episode?

The cultural changes he is driving at Snowflake, including increased intensity, excellence, and virtuosity

What is key insight #4 from this episode?

The need for Snowflake to move from a market-leading product to developing new AI-powered offerings that require rapid iteration and cross-functional collaboration

Who should listen to this episode?

This episode is recommended for anyone interested in Leadership, Company culture, Product development, and those who want to stay updated on the latest developments in AI and technology.

Episode Description

In this episode of Gradient Dissent, Snowflake CEO Sridhar Ramaswamy joins host Lukas Biewald to explore how AI is transforming enterprise data strategies. They discuss Sridhar's journey from Google to Snowflake, diving into the evolving role of foundation models, Snowflake's AI strategy, and the challenges of scaling AI in business. Sridhar also shares his thoughts on leadership, rapid iteration, and creating meaningful AI solutions for enterprise clients. Tune in to discover how Snowflake is driving innovation in the AI and data space.

Connect with Sridhar Ramaswamy: https://www.linkedin.com/in/sridhar-ramaswamy/

Follow Weights & Biases:
https://twitter.com/weights_biases
https://www.linkedin.com/company/wandb

Join the Weights & Biases Discord Server: https://discord.gg/CkZKRNnaf3

Full Transcript

You're listening to Gradient Dissent, a show about making machine learning work in the real world. I'm your host, Lukas Biewald.

Today's guest, Sridhar Ramaswamy, is someone I've been looking forward to interviewing for a long time. He's currently the CEO of Snowflake. But before that, I actually met him in his role as founder of Neeva, a next-gen search company. And before that, he was the leader of AdWords at Google, a division that generated over $100 billion of revenue yearly. He started at Google as an IC engineer and quickly rose to prominence, and then did the same thing at Snowflake. So he's not only an ML engineer, he's also a founder and executive, and he brings all these perspectives to bear in this conversation, where we talk about everything from the future of foundation models to his leadership style to Snowflake's AI strategy. I hope you enjoy this conversation as much as I enjoyed it.

First of all, thanks again for joining us today. I know you're super busy. In learning about you, I found that you had such a fascinating path: from basically an IC engineer at Google, not even that long ago, to running the vast majority of Google's revenue, to starting your own ad-free search engine, to getting acquired by Snowflake, and then running Snowflake. And one of the questions that came to mind for me is, it seemed like over and over at Google, you were picked to move into bigger and bigger roles. And then again, at Snowflake, you were picked to run the company. What do you think you're doing in your career that instills so much confidence in people to hand you these bigger leadership roles over and over?

This has happened three times. I was at a company called Epiphany, where I started as an engineer and then led most of the engineering, about 100-something people there. And then I went back to being an IC at Google and grew that team. And then at Snowflake, for the first nine months, famously, I had literally no one reporting to me. I used to joke to people that I was a minister without portfolio. I was SVP of AI with zero reports. But I would roughly say that I love being part of groups where the company, the group, is making a big difference, both to itself and to its customers. I was a reluctant leader, but what I have really gotten to learn and love is being able to put technology, a great team, and customer needs together into something that creates magic. And I didn't come into Snowflake thinking that I would become the CEO. It was more the ending point for the Neeva journey. But I got to learn more about the customers, more about the company. The space is one that's deeply familiar to me. And part of the benefit that you get from leading a large team is soft leadership. If you're SVP of AI with zero reports, that's what it takes to move teams. It was a good test. And so I think it is this affinity for combining technology and people with customer problems that energizes me. And look, all of us progress in our careers when people see more in us than we can. And I am eternally grateful to lots of people, Alan Eustace, Larry, Sergey, and here, of course, people like Frank and Benoit, for taking that chance on me.

So that's a good segue to another question that comes to mind for me, which is that I've met Frank a few times and read his book, and I think he comes across as the most sales-oriented CEO you could ever find. There's so much to learn from him, but he's very loud and aggressive.
The second you meet him, he's kind of coming at you, and he's got a lot of fire in his belly. And you seem like one of the most engineering-oriented CEOs that I've met. I mean, you're detail-oriented and you're kind of understated. In fact, you're a Weights & Biases user, which is a real honor. I think you might be the person running the largest company with an actual Weights & Biases account with real runs in it. So you clearly are willing to go back into the details, and you came up through engineering. And I was kind of curious, stepping into a role as a totally different kind of leader, what kinds of things did you want to keep and what kinds of things did you want to change?

I mean, yes, I'm detail-oriented. But to me, the people aspect of leadership is really important. As a leader, you have to inspire. As a leader, you have to, in my opinion, lead by example. And so, yes, I have an engineering background and I love tinkering with products. But I've also done probably 250 customer calls over the past nine-ish months. Why? Because I know I'm the face of the company. And it's important that all of our customers feel like there is an actual person that is going to back them up with everything that we can. And so I would say there are different styles of leadership. Frank was an amazing leader. I am just as pushy, probably more pushy in private. I'm also analytical in a way that I'm pretty comfortable with. For example, on the topic of sales, I bring partly engineering, partly science. I've had conversations now with Chris Degnan, our CRO, about how we really should measure processes and entities like accounts. And I introduced the idea of Boolean metrics: how do you describe the state of a world so that you can then describe the state of 5,000 entities in your world? It's a way of thinking about numbers. And so I think it is that combination of people skill and, yes, a background rooted in analytics and engineering that makes me who I am. Honestly, I'm very comfortable with who I am as well.

That's great. Are there any parts of the culture that you're trying to be intentional about changing right now, or when you stepped into the role?

A few things. One, and I'm very open about it, is the amount of intensity that we bring to bear. I am unapologetic about work and the need to work hard to succeed. It's not a dirty word. I also tell people that it is not necessarily in contradiction with things like needing to look after yourself and needing to look after your family. And so that intensity of work across the board is something that I very much emphasize. Again, being an engineer, I'll sort of caution and say, don't infer counterfactuals from what I'm saying. But the intensity matters. Excellence in everything that we do. There is a tendency for teams to declare themselves to be engineering teams or sales teams and say other stuff doesn't matter. As a leader, I find that to be a pretty bad thing to do, because you demotivate large portions of your company. And so I demand just as much from the HR team as I do from the engineering team. I'm also stressing virtuosity. By that, I mean the ability to make changes, to try things out, and be deliberate about when something is not working so you can actually course-correct on the way. Lots of successful companies know only one way forward. It's like a script.
They just keep going. They pick a topic and, come what may, they go at it. I'm trying to get the team to calibrate more for what is success, what is excellence, and what happens if, for any number of reasons, you're failing. How do you actually deal with it? Do you double down? Do you walk away? I want to make sure that we have things like that in our vocabulary. You can't walk back from everything, and here is where things like what's a trapdoor versus what is something that you can walk right back out of come in. That's the kind of thinking that I'm trying to instill in the place. If I were to summarize, I would say I'd like to see greater drive, greater virtuosity in just getting stuff done, combined with a realistic view of where are we very successful, where are we not, where do we need to double down, where do we need to walk away. Hopefully that gives you a little bit of an idea of how I'm thinking about what I want to see in this team.

Where do you think Frank would be different in emphasis? Because that actually sounds a lot like the book that he wrote.

Frank, for example, by his own admission, thought of product strategy as different from sales strategy. It made sense at a time when Snowflake was such a world-leading product that you didn't need to touch it. It was perfect, but that was six years ago. And we're doing a lot of new things. We'll talk about it eventually. We're doing a lot of new things in AI. It's new for Snowflake. Many of the customers we talk to are genuinely surprised when I talk about what we can do with AI on top of Snowflake. What this means for the company is, and as you know, AI is a fledgling field. The science is still being born, regardless of what we think, in terms of how you produce a reliable application that uses AI. So everything is new. And so I stress much more of a culture of rapid iteration, yes, from the engineering and product teams, but also a much tighter relationship between the engineering and product teams and the go-to-market teams, the marketing teams, the sales teams. In fact, one of the pretty definitive changes that I've made is, you can call them whatever, cross-functional teams. We call them war rooms here because, you know, Frank liked martial analogies. And so we have a war room for AI. We have a war room for data engineering. And I'm stressing much more of a culture of: we need to work holistically together. And then finally, as I said, I bring a much more analytical approach to how does one measure activity? How does one measure states of accounts? What is healthy? What is not? And again, that is from the lens of how do we make sure that the 3,000-person team that we have in sales is effective? But I would say the big delta, the big change, is that Snowflake has to move from we have a product that is ten years ahead or five years ahead of everybody else, to we are creating entirely new product lines and rapid iteration is very necessary.

I'm curious if you've come across the Paul Graham essay on founder mode that everybody's talking about. You are a founder and an executive; do you have an opinion on it?

No. I ran a 10,000-person team, and anyone that has worked with me in that context will inevitably tell you, this is an intense person. I would think of founder mode more as a personality trait rather than a positional, where-are-you-within-a-company trait.
I recognize, foundationally, for example, that we are all lucky to be in a world of technology that is so rapidly expanding. I mean, think about it: if you and I were starting a bank, they only grow at certain rates. There's not that much you can do about it. You have to hustle pretty hard unless you have a special in. And until about 15 years ago, you and I would not have dreamt of starting a car company. But in tech, for example, you need a certain attitude that is all about creating opportunity, seizing opportunity, visualizing the future, then letting reality correct that future, and being intense and intentional every day, every week about how you get work done. So absolutely, I get founder mode. I think it has less to do with founders and more to do with the kinds of people that thrive in changing, rapidly growing environments.

I think all of those things could be applied to, I don't know, successful generals. But do you agree with the idea that it's more effective to sweat the details and get involved in all your direct reports' activities, versus giving people an outcome and giving them space to hit that outcome?

Those are all situational. I don't believe in hard and fast rules about it. And I'm also very, very cognizant of not mistaking my activity for outcomes. At the end of the day, I'm one person. Lots of people, founders included, get into a hero mentality of I am the only savior for the company. Well, we are 7,500 people. It would be absurd for me to believe in nonsense like that. And so absolutely the details matter, but levels of abstraction also matter. We wrote OKRs for the company for the first time. Why? Because I wanted that level of abstraction. I also wanted clear accountability. There are names next to those OKRs. And just like I feel pressure for delivering revenue and growth for Snowflake, I want people at Snowflake to feel pressure about, hey, we've got to hit these revenue targets, or we've got to ship this product. And so I wouldn't get carried away with detail for detail's sake. You need to use a variety of techniques, especially when you're dealing with a 7,500-person company sitting in, gosh, I think 30-plus countries. And so you really need a suite of tactics to be successful.

So one of the really interesting initiatives from Snowflake that was very visible to us and our customers was launching a foundation model, Arctic. I was a little surprised that Snowflake could undertake the expense and the effort required to launch a cutting-edge foundation model. Can you talk about what's the benefit to Snowflake and why you decided to build your own foundation model?

Well, first of all, I think by now we accept that different kinds of AI models are going to have fairly large influence on tech at multiple levels. At a simple level, with existing technology, not even GPT-5 or 6, think about language models being a bridge between unstructured data, like spoken text, and structured data. I tell people that there used to be basically a team that would pretty much work on when the weather one-box should trigger on Google search. Like, when should you show the weather one-box? It's more complicated than it sounds, because people ask things like: weather yesterday, weather tomorrow, weather a week from now, Providence, rainy weather. It is complicated. But all of a sudden, problems like that can be solved with language models very effectively.
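To make that bridge concrete, here is a minimal sketch of the weather-query idea, assuming only some prompt-in, text-out completion function (the llm argument below is a stand-in, not any specific product's API):

```python
import json

def parse_weather_query(text: str, llm) -> dict:
    # llm: any function that takes a prompt string and returns a string.
    # One LLM call turns a free-form query into structured fields.
    prompt = (
        "Return only JSON with keys intent ('weather' or 'other'), "
        f"location, and date for this query: {text!r}"
    )
    return json.loads(llm(prompt))

# With a suitable llm, hypothetically:
# parse_weather_query("rainy weather Providence tomorrow", llm)
# -> {"intent": "weather", "location": "Providence", "date": "tomorrow"}
```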
So I think of AI in that sense as foundational technology, as this bridge between structured and unstructured data, and this goes pretty deep. You can now go from images to characteristics of images very, very quickly. Or you can use a company like landing.ai and say, huh, I can train a model to detect manufacturing defects in pictures pretty easily. So I think that part people are beginning to get. To me, where things get really, really interesting is planning and tool use. If you can now take the foundational capability that you have of going from text to structured data and say, I can now use that to call APIs, all of a sudden the language model can plan and do additional things. That's early, but you can do a lot with it. So that's the reason why we said we need to be good at this technology. As you know, we are going through a pretty uncertain phase. I would say starting roughly two-something years ago, from June 2022, when GPT-4 was being used internally, to now, that's been the state of the art. So late last year, it looked very much like many teams could catch up to GPT-4. That's when we invested in it. But then we also realized that there was going to be benefit for us. If you're going to do SQL generation, these models could help. If we wanted to ship a model as part of Snowflake's AI layer, these models could help. We also got, it's not just the training people, we also got a bunch of inference people, who are very, very important in how we run these systems at runtime as part of Snowflake. So these were all the reasons why we said we need to have a foundational capability. The thing that I tell people is, in the 2000s, if you wanted to be an internet company, you had to know distributed computing. Remember, there was no Hadoop, there was no Spark, but you needed to know distributed computing. In 2023, I think having a level of expertise with how foundation models work, how you can tinker with them, how you can play with them, how you can make changes to them, is a foundational skill for any serious data company. On the other hand, as I tell my team and all our customers, we don't pretend we are OpenAI. We don't pretend that we can invest $10 billion into training the next model. And so there's very much a find-your-place, figure-out-the-maximum-points-of-leverage mindset. But the simplest answer to your question is, I think of this as a basic technology that any company that is serious about data and computation, like we are as a data platform, needs to be good at.

But it's kind of interesting, right? A lot of data companies would probably agree with everything you said, but then they wouldn't build their own model. They might use Meta's Llama or Mistral or something like that, figuring, let someone else bear that big expense of building the model and we'll become experts in using the model. So I feel like you made an interesting, different choice. And I'm wondering why that is.

Look, we did invest in our own model. It is something that we could absolutely do. And I'll also be honest, there is a little bit of, you need to show you're credible in the space, without getting carried away about it being the end-all. Absolutely, we have partnerships with the folks that you talk about. We have a partnership with Mistral. We have a partnership with Reka.
All of those things are true, but having the in-house capability is also very, very helpful because, as I said, the kind of people that understand how models are trained often are the ones that understand how you can do post-training on them. Remember, it's a suite of different skills. Maybe one day you're very focused on how you train a foundation model from scratch; maybe another day you are much more into, how do I build some really good post-training pipelines that perhaps we can collaborate with Meta on, about the problems that matter to both of us. There is an amount of flexibility that you get from having these skills.

And the Arctic model itself, did you design it specifically for the text-to-SQL case that I know is really important to you, or did you design it as a general-purpose system?

It was a general-purpose system, and it's really in post-training that we focused on the two most important applications that we care about a lot, which are retrieval and things like cited answers. And yes, absolutely, SQL generation, including Snowflake-specific SQL; those were things that we addressed in post-training. We continue to have interest in things like API calling. But the core model itself was trained to be a good general-purpose model. And Lukas, you know this better than others: we don't know how to break models apart into their components. You and I would love a model that does all of the thinking and the planning of GPT-4 without the kind of cost that comes with it. We just don't quite know how to do that.

Okay, well, I found another interview with you where I think you mentioned, and tell me if this is right, that there are 700 Gen AI use cases inside of, or making their way to production inside of, Snowflake?

The number's a lot larger now.

Yes, absolutely. I mean, I guess there's different ways to count that, but that seemed like an astonishingly large number, even to someone like me who thinks there are tons of use cases. Can you talk about how that works and what the biggest buckets of use cases are for you?

Yeah, yeah. I would put them into a few different buckets.

Well, wait, what's the number now? If it's a lot larger, where would you place it?

You know, it's several hundred applications in production, over a thousand applications that are in implementation.

Wow.

These projects go a little bit faster because some of them are easy, and I'll talk about the easy part in a second. And I would describe AI at Snowflake, which is obviously built on top of the data foundation that we have, where our mission very much is to be the enterprise lakehouse, where we have a ton of data, either ingested into Snowflake or sitting on cloud storage and accessed with things like the Iceberg format. That's the foundation. There's a ton of data in Snowflake. The first thing that we did was we wanted to make it easy to use the transformative powers of language models, which is the ability to go from unstructured to structured, things like that. We said we want to make it broadly available. That we called Cortex AI. It's a model garden that goes with every Snowflake deployment, not rocket science. On the other hand, it is deeply integrated into every aspect of Snowflake so that anyone that writes SQL can now call these models. We provide a bunch of basically syntactic sugar for certain kinds of language model uses, whether it is summarization, translation, sentiment detection. You can just think of it as a function call.
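A sketch of what that function-call framing looks like in practice, via the Python connector. The product_reviews table and its columns are invented for illustration; SNOWFLAKE.CORTEX.SENTIMENT, SUMMARIZE, and TRANSLATE are the documented Cortex SQL functions at the time of this episode, though names and signatures may evolve:

```python
import os
import snowflake.connector

# Connect with credentials from the environment (account details assumed).
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
cur = conn.cursor()

# Each Cortex call is just another SQL function applied to a column.
cur.execute("""
    SELECT review_id,
           SNOWFLAKE.CORTEX.SENTIMENT(review_text)             AS sentiment,
           SNOWFLAKE.CORTEX.SUMMARIZE(review_text)             AS summary,
           SNOWFLAKE.CORTEX.TRANSLATE(review_text, 'de', 'en') AS english
    FROM product_reviews
    LIMIT 10
""")
for review_id, sentiment, summary, english in cur.fetchall():
    print(review_id, sentiment, summary[:60], english[:60])
```

The "run this every six hours on all new things" pipeline he goes on to describe would then be a matter of wrapping a statement like this in a scheduled Snowflake task rather than standing up any new service.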
If you can write SQL, you can do these things. So there is a class of use cases that you can simply think of as data transformation use cases. All of us have inevitably worked on software engineering projects that are the equivalent of, I need to ingest these documents, or perhaps pictures, and I need to extract this kind of structured data. That is as simple as setting up a pipeline in Snowflake, which is pretty much: you write a piece of SQL and you say, please run this every six hours on all new things. So that's one class of applications, which is basically data transformations. It's our bread and butter. And that's the benefit of Cortex AI. All of a sudden, using a language model, you don't need to do anything additional if you know how to use Snowflake, how to write SQL. You've got that done. We've also introduced additional, I call them, primitives. One is search, which, as you know, is the basis for RAG applications. You combine that with the language model and a rapid prototyping environment called Streamlit, and you can write a chatbot in five minutes. And, you know, I downloaded, I think it was a Kaggle dataset, yesterday morning, because I keep trying these things every now and then. And I had a chatbot ready in 10 minutes. You download the dataset, you stick it into a table, you create the search index, fire up a Streamlit app, and off you go.

Did you publish it to the Streamlit public cloud? Could we post a link to it?

Actually, people were super tickled that I actually write Streamlit apps, because I was like, you know, it's a big deal, so clearly I should try it out. But to me, that's the power of Snowflake. Our take is that anyone that wants to build a corpus-backed chatbot is able to do that with a minimum of fuss. You don't need to know search. You don't need to know vector indices. You don't need to know which model you're using. We want to make that easy out of the box. And sure enough, there are classes of chatbot applications that people use. They can also turn into question-answering applications. If you now imagine that you are searching over a set of documents that keep coming in, and you have a question you always want answered, that's fine. You can use the search index to generate answers for it. We also have something called Cortex Analyst, which is more of a structured data product. The idea is that people, for a while now, since the advent of BI tools, have wanted to give business users direct access to data, so they don't have to go through the priests that data analysts are and the tools that BI offers. Those are pretty hard, cumbersome, and if you want something different, you have to go back to your priest and ask them how you're supposed to get at it. So companies like Bayer are experimenting with what we call Cortex Analyst, which is a text-to-SQL product. Our secret sauce in all of these is we want to stress high reliability. We are pretty gung-ho about publishing the precision of all of the products we create, because we think part of the disservice that the AI community is doing to itself is not being clear about when you can create a reliable application and when something is a toy. I tell people that we had better not give Scarpelli, our CFO, a talk-to-your-data product that is not reliable. Not if he's going to be asking questions about how much revenue Snowflake made. But the fun thing about all of this is, if you ship these primitives, people end up composing applications.
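For a flavor of how small such a composition can be, here is a hypothetical sketch in the spirit of the ten-minute chatbot described above. The docs table is invented, the retrieval is deliberately naive (a real version would use a search service rather than substring matching), and the model name is illustrative:

```python
import os
import snowflake.connector
import streamlit as st

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)

def retrieve(question: str) -> str:
    # Stand-in for a proper search index: naive substring match
    # against a hypothetical docs(content) table.
    cur = conn.cursor()
    cur.execute(
        "SELECT content FROM docs "
        "WHERE CONTAINS(LOWER(content), LOWER(%s)) LIMIT 3",
        (question,),
    )
    return "\n\n".join(row[0] for row in cur.fetchall())

def answer(question: str) -> str:
    # Classic RAG step: retrieved context plus the question, one LLM call.
    context = retrieve(question)
    cur = conn.cursor()
    cur.execute(
        "SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large', %s)",
        (f"Using only this context:\n{context}\n\nAnswer: {question}",),
    )
    return cur.fetchone()[0]

st.title("Corpus-backed chatbot")
question = st.text_input("Ask a question about the corpus")
if question:
    st.write(answer(question))
```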
They're like, okay, here's the little data store that's going to have a bunch of questions, and depending on the answer to that question, I'm going to use your structured data to go look up something specific. So there are a lot of hybrid apps that people are creating. But the broad primitives that are driving these use cases are: use AI as an addition to SQL so that it becomes a data transformation tool, make things like cited chatbots really, really easy to create out of the box, and give people the primitives for structured data access. And there's a lot of magic when you give people creativity. Even within Snowflake, people are coming up with use cases where I go, huh, I wouldn't have thought of that.

Like what? I mean, when you talk about 700 to 1,000 use cases, are these internal use cases?

These are things our customers are implementing.

Oh, these are customer-implemented use cases.

Customers are implementing them. They're different. These are the ones we record. Sometimes, with customers, we roughly know that they want to get started on a project and we help them get going. They're under no obligation to tell us exactly what they do. It's their data. It's their system. They do stuff. These are things that we know we have talked to them about, where they want POCs and things like that.

I see. So are there any that you can talk about? Because I think people are so hungry for these use cases. You talked about a theme of unstructured-to-structured data, which absolutely we see tons of. You talked about chat applications and analyzing data through a conversational interface, and yes, we see that. But are there any areas where you could get a little more specific about how people are using these things?

So in terms of concrete examples, the talk-to-your-data application is from Bayer. With Analyst, we basically try to simplify the problem of SQL generation so you can have a highly reliable application, and Bayer is an example of somebody that is giving business users access to data. Another example that I can talk about is from Elevance, which is a big healthcare provider that is using chatbots basically to answer questions about plans. They supply healthcare plans for a ton of companies and have over 150,000 admin users for these plans. And as you know, just being able to figure out, if a new request comes in, which plan it is about, or whether a particular treatment is covered or not, can get pretty, pretty detailed. And what Elevance is doing is basically indexing all of the plan information for their, I forget the number, thousands of customers and hundreds of thousands of users, and simply making it easy for their call centers to be able to answer questions quickly and effectively. Siemens has a similar project, but that one is for manuals. They make hundreds of thousands of devices that they ship everywhere. And if you want to go repair something, as I'm sure you've found if you've ever tried to repair your washing machine, you need to find a super specific manual before you can do anything. These are some of the examples of people that are using our tech, building applications on top of it. We've also had hedge funds, I forget the name, that are basically using Cortex AI, the SQL layer, to analyze streams of reports. They basically dump a bunch of PDFs, or a bunch of coverage text, into a table that's accessible from Snowflake.
And then you can have a series of questions, similar to Google Alerts, except that you can get cited answers to these questions. And there are people that have those kinds of use cases as well. I will stress the flexibility, the operator-like model, and the composability. That's what makes these kinds of applications possible. In fact, our vision is, if you think of a declarative language like SQL now having access to this LLM functionality, imagine a world in which you can also say, hey, run this piece of SQL over all of the images that are sitting in this particular stage, in this particular table, and run this kind of transformation for me. We have not built that. But to me, that's the kind of thing that is going to make AI very, very broadly usable, because you're not building fancy applications. You're composing things in a pretty easy way. And as you can imagine, you can also wire up a UI that lets you create these pipelines, like many of our partners do.

Yeah, I mean, I also love these kinds of practical, basic AI applications. They get me really excited too. And in every interview that I found with you, you talk about reliability, and you talked about reliability earlier. It seems like a really amazing value prop, because that clearly is a thing that holds this technology back from so many companies. But I'm kind of curious how you approach reliability, because from our perspective, one difference in Gen AI versus ML is customers aren't even typically measuring performance. I think ML has been unreliable for so long that everyone working in ML knows you need to make these benchmarks and measure accuracy and gradually increase it. But that's not natural for someone who's doing the kinds of applications that you're talking about. And I do think 100% reliability, I don't know, is that real? 100% reliability is not a thing. So how do you approach that? And how do you make customers feel like your solutions are reliable without disappointing them?

It's a good question. And sort of my first-level answer is at the level of AI. You know, I obviously run into friends, doctor friends, for example. They're like, oh my God, this is amazing, or, it'll replace my job, or, oh, come on, it can't possibly match a human. And what I try to tell them is, listen, all of us have the tools to be able to evaluate these kinds of things. You can ask very simple, even medical, questions like, what's the accuracy of this diagnostic, if you're going to use it in a medical context? They have a vocabulary for that. It's not like interpreting radiographs is 100% science, even when done by humans. They get it wrong all the time. They miss stuff all the time. And so I think basically having an engineer's mentality to how you approach software is also true for AI applications. The first thing I try to do in all these conversations, and our teams try to do, is say there are trade-offs involved. You need to have a basic understanding of what right and wrong is, and stop thinking of models as oracles that are just going to give answers. And then if you talk about chatbots, for example, the thing that I tell people is, if you were to create an internal search engine for a particular use case, say you wanted to look through insurance plans, you would say, okay, absolutely, here's a list of 200 queries that I am going to test this search on.
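The harness he is describing can be tiny. A minimal sketch, with all queries and names invented, where the headline number is just the fraction of a fixed query list judged satisfactory:

```python
# A fixed, curated list of test queries, kept stable across iterations.
test_queries = [
    "is physiotherapy covered under plan A?",
    "what is the deductible for plan B?",
    # ... a couple hundred of these in practice
]

def is_satisfactory(query: str, answer: str) -> bool:
    # In practice a recorded human judgment (or an LLM-as-judge);
    # stubbed here because the labeling source is up to you.
    raise NotImplementedError

def evaluate(app) -> float:
    # app: any question -> answer callable (chatbot, search, text-to-SQL).
    # Returns the fraction of test queries with satisfactory answers.
    hits = sum(is_satisfactory(q, app(q)) for q in test_queries)
    return hits / len(test_queries)
```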
And yes, you can get into more complicated NDCG metrics and stuff like that, but you can answer the basic question of, did you deliver satisfactory answers for a fraction of your questions? And that's the methodology that we have taken. We have acquired a company called TruEra that we are in the process of integrating with Snowflake. How do we measure something like that? By the way, this is just like software engineering testing. This is the bread and butter that you live on: you need to be able to actually experiment and measure what different systems do. Similarly, when it comes to Cortex Analyst, we tell people there are precision-recall trade-offs involved here. The simpler you make the schema, and the clearer the descriptions that you give for the business metrics, the better the system will do. And absolutely, you should write down a list of your top 100 questions that you want to ask this thing and have an analyst vet it and say where we get things right, where we get things wrong. By the way, this process can actually become an aid in making these systems better. If you have a set of positive examples and a set of negative examples, we are then able to take them and say, if a new question comes in and is very much like a positive example that we already saw, you can begin to get into the real science of, okay, should you answer this question or not? Or if it is a dead ringer for a question you know you should not answer, you can refuse to answer that question. It is that iterative methodology that I think is important. And honestly, I think that is what is going to make people deploy applications that they can believe in.

What else do you think holds back your customers' adoption of these LLM applications and features?

You know, now comes the hard part. This is all cool stuff, it's all cool tech, but at the end of the day, the thing that I tell all our customers is, please make sure you are thinking about business value. Please make sure you're thinking about what is making money for you, what is saving money for you. At the end of the day, those are the things that matter. And that work is really hard. As I said, the average Snowflake project takes three to six months to get done from start to finish. And that's the sweat that we are going through in terms of working with customers: identifying situations in which AI can make a difference and deliver tangible value for the business. And as I said, the good news about a lot of this stuff is none of it requires a GPT-5. And to me, that value creation is the slow part, but it's very much there.

How far do you think Snowflake should go in terms of delivering complete, packaged GenAI solutions to customers? Do you think it's your role to give composable pieces? You mentioned certain types of search, and obviously Streamlit is a wonderful canvas to build things on top of. How far up the stack would you go to make something more complete that customers could use?

No, I don't think there is a hard and fast rule that we have about how we think about this. Absolutely, we are a data platform. Our strength is in creating technology that is broadly applicable. We work with a number of partners in different areas. People build security systems on Snowflake. They build composable customer data platforms.
They build observability platforms. And obviously, many concrete use cases like AML or fraud detection. We are very much a platform, but when it comes to the primitives that we provide, all of us are on this journey of discovering what the right primitives are. For example, with search, which is one of the things we have worked on and is close to GA, the thing that we are realizing is that having a SharePoint connector that is then hooked up to search, so that you can stand up a REST endpoint for searching over all of these documents, is actually super valuable. And the next delta in that is, well, you really need to figure out a way to obey the ACLs of these documents. Like in Drive, for example, people will say, yes, I want you to index all of the Drive documents and give interactive access to them via a chat interface, but we need to make sure that you obey the ACLs. Is that part of the platform? Is that an application? That's actually a question that we are still debating in terms of where it should live. But we don't really have religion about where this boundary stops. We work with a number of folks. We want to be good partners to all of the folks that we work with. And we think very much in terms of what are horizontals that apply to many different verticals, to many different countries. But outside of that, I don't think anyone should be thinking about drawing lines for what they should do and what they should not do. It's just too early to call those things.

My sense is, and maybe this is wrong, but my sense is that other kinds of companies in your space, like Databricks or Microsoft, are kind of more willing to compete with their partners and more willing to add on things, whereas Snowflake has kind of stayed more contained in the data layer. Is that accurate?

With Microsoft, for example, you know, they have Office, and if you think about it, that is one of the biggest outlets for AI. No one wants to click on 20 different links; they want that answer. And so I think the situation is different. And obviously Windows as a platform has a long and checkered history of what it is and what it is not. Every company has to find its way. We've been pretty happy with our approach to how we do platforms, and we have wonderful partners that are extending Snowflake, like Kumo with predictive analytics, or RelationalAI with graph SQL and graph databases, or Landing AI, which I talked about earlier, Andrew Ng's company that's doing vision models. It's a journey. But when it comes to things like AI boundaries, my take is it is really, really too early to say where the platform ends and where the application begins just yet.

Where do you think we are in the AI hype cycle? I feel like inside of Silicon Valley right now, there's a lot of sense that maybe there's some kind of disappointment. Man, we've tried all these things; there's like a culling coming. Sometimes when I go outside of Silicon Valley...

A culling coming, or the culling has happened?

You think the culling has happened? Okay. Yeah. I think that the volume of investment in AI is still pretty high. And when you go to enterprises, it does seem like they're still ramping up their AI efforts right now. That's my experience, but you're actually...

Enterprises are, absolutely. They are slowly but surely ramping up their AI investments as, honestly, the tech also gets simpler.
The idea of needing to extract a bunch of documents, put them into a vector store, figure out what model you're going to use, and stand up a new binary to actually host your chatbot: that's a lot of work. The technology is getting simpler. And so I think that's the reason you will see broader adoption.

Do you think there will continue to be more and more foundation models? Are you one of those people that imagines a world where there are thousands of them for different use cases, or do you think it will narrow in the next year or two?

I'll put it into three categories. One is, what do I want as a person? That's number one. Number two is, what is the current conventional wisdom? And number three, I think, is just as possible; it's just kind of boring and no one wants to talk about it. Okay, first...

Wait, sorry, what's number three?

You know, an equally likely possibility, which is sort of boring and no one wants to talk about it.

Oh, I see, I see. You know, I think it's great that you separate those things, because I always ask people on this podcast, and I think people conflate what they want and what they think is going to happen; the two always match in everybody's answer. It's like, surprisingly, everyone believes in the trend that actually benefits their specific business. So I'm excited to hear your measured response here.

I have a quote for that. And the quote is: the best religion to believe in is the one that's aligned with your business interest.

Right, right, exactly.

What do I want? I want a world in which there is healthy competition, call it at least dozens of companies in language models and AI models. And the reason I say that is, you know, monopolies monopolize. It's just how stuff works. Any ecosystem in which there are two or three players reaches a point of stagnation. Think of toothpaste: there are lots of people that make it.

It's interesting you're saying that as the person that ran Google Ads for a long time.

I also left Google Ads and started a company to compete with Google Search, because I said there are not enough search companies. I can at least argue that I live my religion.

There you go.

So I think that's a healthy thing. That's the outcome that I really want to happen. Because I think a world in which there are three companies making foundation models, let's face it, that's just a tax. That's the practical aspect of how those things end up working. So that's what I want as a person. I think current conventional wisdom is that, oh my God, there's going to be a GPT-5, and there are going to be basically three players that can invest that kind of money, which is OpenAI and Anthropic and Google. And maybe there'll be one or two others that catch up. I think the third, boring outcome that we really don't want to talk about, because it's kind of boring, is: yes, GPT-5 comes out; it is marginally better, but not like a thousand times better. But there are cool new things. Even GPT-4o did things that GPT-4 did not. You know, kudos to OpenAI for consistently pushing the boundary of what is possible. And a bunch of other companies grind and claw their way to parity. And then we take half a step more and, you know, rinse, lather, repeat. I actually think that the third one is the likely outcome.

Nice.
I got a hot take from Sridhar. I love it. Okay. What do you think about agents? I mean, there's clearly a bunch of agent companies with astonishing valuations. Adept, I mentioned, would be an early one that maybe wasn't so successful. I'm not sure. But do you think agents are relevant today? Do you think agents will be relevant a year from now?

I think there's value in the concept. If you think of it as, I don't know, a souped-up Google Alert that can do interesting things, actually take actions on your behalf, that's kind of a cool thing, well within the realm of existing technology. This is why I put such an emphasis on what it takes to have reliable primitives. I'll tell you an agent that I'd love to see; you know, we've built prototypes for this. I look at revenue reports every morning before 5 a.m., which is when I start work. I've told my team, you need to tell me how much money we made yesterday; I need to get a report that says that, and I look at it every morning. And sometimes something is off with it. I usually have to go poke at it, go look at some dashboards, then figure out where it is going. Most of the time, it's fairly rote work: okay, if this happens, then you need to go do these other things. Now imagine if I could just codify a bunch of rules, just in text: okay, here's the report, please look at this. And if these metrics are off in this fashion, these are the drill-downs that I want you to do. And depending on the drill-downs, you might need to look some more. It's not that hard to describe the process. Now, if this could all be turned into, you can call it an agent, an analyst agent, an autonomous analyst. It's not going to solve every problem for you. But say you could solve 60% of the work, which is unique every time, by the way, so it's not like you can write a trivial program to do this, because it would have all of these different cases. If you can encapsulate this very nicely and say, okay, instead of just the report, you get the report plus a notebook that has done the initial analysis, so you can actually go look at the data, and you or somebody on your team can pick up from there, that is actually moving work forward in a pretty meaningful way for how you think about analysis. Is that an agent? It is beginning to be an agent. To me, the thing that people have trouble with is, if you put highly unreliable systems together, you pretty much get into a world where errors multiply and the whole thing is just very unreliable overall. The more reliable you can make the core components, whether it's search or getting structured data right, the more likely it is that people can develop meaningful applications, meaningful, call them agents, pieces of software that can take a set of actions that are useful and contribute towards a goal. The core concept makes a whole lot of sense. The devil's in the details; hopefully not like self-driving cars, where, yes, the intellectual path is clear, but the details get significantly in the way. We need to develop a vocabulary for how to describe agents, what a reliable agent is, and how one moves progress forward. But I think the core idea has a lot of merit.
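One plausible shape for that rules-in-text analyst loop, sketched here with invented rules and stand-in run_sql and llm callables:

```python
# Everything below (rules, step budget, the callables) is hypothetical.
RULES = """\
1. If daily revenue is down more than 5% vs. the 7-day average,
   break the drop down by region.
2. If one region explains most of the drop, break that region down by product.
3. Otherwise, stop and flag the report for a human analyst.
"""

def autonomous_analyst(report: str, run_sql, llm, max_steps: int = 5) -> list:
    # Loop: show the LLM the rules plus everything found so far, let it
    # either emit the next drill-down query or stop, and keep the trail
    # (which could be rendered as a notebook for a human to pick up).
    trail = [report]
    for _ in range(max_steps):
        prompt = (
            f"Rules:\n{RULES}\nFindings so far:\n" + "\n".join(trail) +
            "\nReply with the next SQL drill-down query, or the word STOP."
        )
        step = llm(prompt)
        if "STOP" in step.upper():
            break
        trail.append(str(run_sql(step)))  # run the drill-down, record result
    return trail
```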
It's really interesting. I mean, I almost feel like if you can write your own Streamlit apps, it seems like you should be able to cobble together an agent that asks questions about your revenue report. Have you tried that? Does it break down somewhere?

I have not tried it. You know, obviously with things like our company revenue metrics, you have to be careful about how broadly we make them available. So we are working on things like having Cortex Analyst work on the data, and then the analytical parts absolutely will come after that. As I said, it's not that hard to describe a recipe for how you debug problems in these spaces, but you need the generality of language models to make these recipes work.

Okay, so last question. When you look forward into the next 18 months, there seem to be so many tailwinds behind GenAI and so many reasons to get excited about it. And you all are obviously really well positioned. What are the things that worry you in terms of your GenAI strategy? I see there could be high-profile enterprise failures. It could be that it's impossible to get the technology reliable enough to get real adoption. What keeps you up at night when you think about Snowflake's GenAI approach?

I mean, as I said earlier, we are not the hyperscalers. We are not one of the big three when it comes to AI. A real fear that I have, combined with a little bit of excitement about what is possible, is a GPT-5 that is indeed way better in planning and in what it can do than anything else out there. We have good partnerships, but any sort of unipolar world is one that we should be concerned about, as should other data companies that don't have access to this. And I worry less about spectacular failures with AI. That's just not the approach that we recommend for our customers. It is very much: be incremental, start with customer value, don't make giant investments before that. It's the same as how I would tell people to approach building a new system, which is that it needs to be built iteratively. So I worry a little bit less about that part.

Awesome. Thanks so much for taking the time. This was a real treat, getting to talk to you about all this.

Thank you. Thank you, Lukas. It was great to chat. Loved your questions.

Awesome. Thank you. Thanks so much for listening to this episode of Gradient Dissent. Please stay tuned for future episodes. Thank you.
