DATA SECRETS Podcast
Tales of business leaders uncovering insight from their data to drive growth and profits. Data Secrets is a true crime style business podcast hosted by Nathan Settembrini and produced by Allegro Analytics. The video version is available on YouTube and Spotify.
The AI Interpreter: Data Secrets with Sairam Sundaresan (Ep 007)
In this episode of The DATA SECRETS Podcast, I sit down with Sairam Sundaresan, Engineering Leader and AI expert, formerly at Qualcomm and Intel. We dig into the real-world data challenges and surprising insights behind building safe, scalable AI systems for autonomous cars and how those lessons translate to any business using data to unlock value.
Sai pulls back the curtain on:
- How teams structure and collaborate to create reliable perception for vehicles
- The complexity of collecting, cleaning, and curating diverse, real-world data (e.g., snowstorms in Colorado vs. perfect San Francisco weather)
- What happens when the data tells you something completely counterintuitive, and how unexpected “secrets” hide in machine learning models
- A real business case where fixing tiny data details meant huge leaps in product success
- The horror story of a “data leak” between training and validation sets and what it teaches about trusting your models in production
- Why accuracy is an overrated metric, and what KPIs actually drive value and curiosity on high-performing teams
- How AI is changing the role (but not the importance) of data analysts in business
- Practical advice for anyone looking to upskill and leverage AI in their data workflows, starting with hands-on projects and tool experimentation
Whether you’re an engineer, data analyst, or business leader, Sai’s stories and advice reveal what it really takes to turn messy, real-world data into business impact. Plus, get a sneak peek at his upcoming book, AI for the Rest of Us, designed to bridge the gap between technical teams and business strategy.
Chapters:
00:00 Understanding Machines and Humans
03:16 Autonomous Driving Software Development
09:13 From Toy Data to Real Insights
10:50 Unexpected Model Bias Lying in the Shadows
14:16 Early Depth Camera Scanning Challenges
17:18 Data Leak Derails Model Performance
20:56 AI Explained for Non-Experts
25:08 Vanity Metrics in Accuracy
27:26 User Behavior Insights
31:12 Evolving Role of Data Analysts
33:08 Master AI Through Practice
36:12 Using AI to Solve Problems
Connect with Sairam:
🔗 Gradient Ascent Newsletter: https://newsletter.artofsaience.com/
🔗 LinkedIn: https://www.linkedin.com/in/sairam-sundaresan/
📖 Order Sai's Book: 'AI for the Rest of Us'
If you want a ringside seat on how business leaders crack the code, uncover the truth in their data, and translate insights into action, this is one episode you don’t want to miss.
Follow the DATA SECRETS Podcast
📬 Get episode recaps & bonus insights at allegroanalytics.com/podcast
🕺🏻 Connect with Nathan on LinkedIn
📺 Watch every episode on our YouTube Channel
Nathan Settembrini [00:00:02]:
Your data has secrets, secrets that could change everything if you only knew where to look. Welcome to the Data Secrets podcast, where we uncover how business leaders crack the code, find truth in their data, and turn insight into action. Today's guest is Sairam Sundaresan. He's an AI engineer. He's worked at Qualcomm and Intel, and now he's an engineering manager working on autonomous vehicles. And he's about to release a book. Sai, welcome to the show.
Sairam Sundaresan [00:00:41]:
Thank you, Nathan. It's great to be here.
Nathan Settembrini [00:00:43]:
Why don't you tell us a little bit about you and your story and what you're up to today?
Sairam Sundaresan [00:00:49]:
For me, this journey in AI has been 15 years long so far, and I've sort of seen the evolution from traditional machine learning, as we used to call it. I kind of sound like a dinosaur. And now you have these modern neural networks. So it's been an amazing journey where I've gotten to see different inflection points, where the technology has changed so much, and along the way I've just learned a lot and worked with some incredible people.
Nathan Settembrini [00:01:14]:
So.
Sairam Sundaresan [00:01:16]:
That's been my journey. And about myself, I'm an engineer at heart, and I love problem solving. And more than that, I actually love to teach. So that's how I started writing and explaining concepts, first to an audience in my newsletter, and now hopefully to a larger audience through the book. So, yeah, it's been an amazing journey so far.
Nathan Settembrini [00:01:39]:
That's amazing. What would you say inspires you to do what you do?
Sairam Sundaresan [00:01:44]:
At first, it was to learn and understand how things work. I think that's always been the central pull for me, just uncovering how something works behind the scenes. It's always given me tremendous joy. And when I realized that you could actually teach machines to understand the world around us, that was just incredible. And I wanted to learn all about that. And now I sort of take that in the reverse direction, where I'm trying to help humans understand machines through my writing. So it's kind of been an interesting twist in that sense.
Nathan Settembrini [00:02:19]:
So you're like the interpreter?
Sairam Sundaresan [00:02:22]:
The translator, yes.
Nathan Settembrini [00:02:24]:
Like, all right, machines, here's what the people want. And then, all right, people, here's how you understand the machines. Pretty much. That's awesome. All right, let's talk a little bit about your current role, leading a team of engineers building software that helps a car understand the world around it. How do you organize work to make that happen? How is your team structured? How do you engage with the rest of the company? I've worked in business intelligence and data analytics for a long time, but on the product side, using data and leveraging AI, how do you guys do that in your current situation?
Sairam Sundaresan [00:03:16]:
Well, at the outset, it's a lot of hands that come together to make a software release, especially in the autonomous sector, because safety and security are paramount and there are a lot of moving parts: right from capturing data that needs to be cleaned, curated, processed, and stored, then for the algorithm teams to work on it and train the model, then make sure the model is doing what it's supposed to, and then for the model to be optimized and shipped into the vehicle. So in terms of the structure, there are a lot of different units or departments, if you'd like to call them that, where each department focuses on a certain aspect of the pipeline. I lead the algorithm side, so my focus is on making sure the team is using the data that we get and training a model that solves autonomous perception, where there are different things a car needs to worry about. Right? The car has to know if there are obstacles along the way. Those obstacles could be pedestrians, it could be a traffic cone, it could be a pillar, it could be another car. And then it also needs to worry about ground markings, like is it a turn, is it a stop sign painted on the road? Or conversely, the signs on the side of the road. And then of course there's this notion of path planning, where you have to figure out the route from where the person gets off to where the car needs to go and park itself. So a lot of moving parts, and definitely not a one-person job. So in that sense, I'm really blessed to be working with a highly talented team, and there are a lot of folks who contribute towards the success of these products. So from a structure perspective, that's the high-level structure, but happy to give.
Nathan Settembrini [00:05:15]:
Yeah, no, that's. I have so many questions.
Sairam Sundaresan [00:05:18]:
Okay, that's always good.
Nathan Settembrini [00:05:22]:
The first thing that comes to mind: you mentioned source data, collecting, cleaning. Is that from cars? So, you know, I just got back from San Francisco and you see the Zoox cars. They have the little buses, but they also have other ones with a ton of sensors all over them and a human in the driver's seat going around. Is that where you're collecting the data from? Are you buying data from some of those companies as well and sourcing that?
Sairam Sundaresan [00:05:51]:
Purely for certain reasons I can't reveal.
Nathan Settembrini [00:05:54]:
Sure.
Sairam Sundaresan [00:05:54]:
Some of these aspects, but what I can say is that, you know, the data is collected from a very diverse set of places and conditions, so that at the end of the day.
Nathan Settembrini [00:06:04]:
Right.
Sairam Sundaresan [00:06:04]:
I mean, if you think about it from the user's perspective, we can't tell them you can only drive here in this particular area. They've bought a car and they have the freedom to drive it wherever roads are there. And therefore we need to make sure our data has the coverage of different types of scenarios, different types of weather conditions, different types of seasons, all kinds of things. So there's a lot that goes into planning what goes into the data set.
Nathan Settembrini [00:06:34]:
Yeah, it's funny, I spend a lot of time in San Francisco and it is, you know, the weather there is almost always perfect. You know, there's some.
Sairam Sundaresan [00:06:45]:
And then suddenly it drops by 10, 20 degrees once you're very close to the bridge.
Nathan Settembrini [00:06:51]:
Yeah, yeah, yeah. But I live in Colorado, and in Colorado, you know, it could be June and we get a random snowstorm, right? And so I feel like a lot of times these autonomous vehicles are being developed in perfect conditions. I also drive a Tesla, right? And it's all camera based. And so as soon as it starts snowing, it's blind, effectively, and it's like, oh, I can't do cruise control anymore, sorry. It's like, what? So I think that's the challenge of designing something in a place where the conditions are a certain way. And maybe that's very intentional, right? Like, we're doing a very hard thing and therefore we need a controlled environment to start from, and then later we'll teach it how to drive in the mountains and in the snow. I don't know. But that's just such a fascinating world.
Sairam Sundaresan [00:07:48]:
It's a very challenging problem, no doubt. And at the end of the day, you want the driver and the people around the vehicle to be safe, right? That's the most important thing. And if that means that you say you can't use the vehicle in some very adverse conditions, then in that context that might be the right call. So, yeah, it's a very challenging problem.
Nathan Settembrini [00:08:16]:
Yeah. And so I imagine on the algorithms team, you're also considering the ethical implications. Like, if there's a baby and an old person in the middle of the road and I have to swerve and hit one of them, which one do I hit?
Sairam Sundaresan [00:08:33]:
Oh, that's a very tricky question, and one I think I will swerve away from.
Nathan Settembrini [00:08:40]:
Fair enough. Fair enough. Okay, so let's get into some of your data secret stories. Let's just start with this first question here. What was an insight that surprised you when you first started digging into data? Or maybe not the first time, but when you were at a new company and you started digging into the data. What was a secret that you uncovered?
Sairam Sundaresan [00:09:13]:
I think the most interesting insight came when I transitioned from school into the industry and started looking at data. Because in school you have toy data sets which are for the most part well curated, and they're there to teach you a particular concept. But the moment you step into the industry, you aren't spoon-fed anymore, and you have to do the cleaning and the curating, and you have to spend time with the data. And the longer you spend, the more insights you uncover. I feel like maybe with the deep learning side of things these days, there isn't as much emphasis on that as there was previously with traditional machine learning, where you had to do feature engineering and extract features before you were able to train these models. And to be fair, I'm glad that I was there during that time, because it forced me to really spend a lot of time with the data. And often you find hidden patterns: you might be thinking the model is definitely going to focus on some particular color or some particular part of the text, but then you realize it's actually focusing on some background thing that just happens to be there all the time. Uncovering stuff like that has been very, very interesting. So, yeah, the basic lesson was: spend more time with the data. The more you do, the more you learn.
Nathan Settembrini [00:10:40]:
Yeah. Was there ever a moment when the data contradicted what you believed to be true or instinctually made sense to you?
Sairam Sundaresan [00:10:50]:
Yeah. So let me give you an example. Imagine that you're trying to detect pedestrians and you have a very diverse data set, and everything's good: you've cleaned it, you've made sure there's no class imbalance, all of that stuff. And you think, okay, the model's going to focus on things that are vertical, mostly because people are walking in the data set a lot, or it's going to focus on bright colors, or on thinner objects. But then there was this very weird thing I saw where it was focusing on transitions, light transitions, which was super weird: whenever there was a transition from light to dark or dark to light, the model was somehow focusing on that particular aspect. Which was very strange, because we hadn't told it to. Obviously, we don't tell the model what to learn. But that was a dominant feature, not the limbs or the head or the other aspects that you would typically expect a model to focus on. And we found out that it was because of how the camera was positioned: between frames, the shadows moved a lot more than the person did, at a very weird angle. And so the model was thinking, okay, I have to focus on that, because it correlates with a person being next to the shadow. That's how we made sense of it. And it contradicted everything I thought the model would be focusing on, which is the obvious stuff.
Nathan Settembrini [00:12:38]:
That's interesting. You said something there: obviously, you don't tell the model what to learn. That's not obvious to me.
Sairam Sundaresan [00:12:51]:
No, what I meant to say is: think about traditional software versus machine learning. In traditional software, we program the rules. We explicitly say, if this happens, then do this. It's deterministic. Exactly. And in machine learning, we say: this is the input, this is what I expect as output, now you go figure out the rules. So we don't tell it what to learn; it figures out the rules. We can hint, like in feature engineering, where we're saying, really focus on these particular aspects, I think they're very important for you. But with neural networks, it's much more that the model is trying to figure these out from a very large volume of data.
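[Editor's note: Sai's contrast between explicit rules and learned rules can be sketched in a few lines of Python. Everything here is hypothetical and purely for illustration: the feature names, thresholds, and toy "learning" routine are invented, not Sai's actual system.]

```python
# Traditional software: we program the rule explicitly and deterministically.
def rule_based_is_pedestrian(height_m, speed_mps):
    return 1.0 < height_m < 2.2 and speed_mps < 4.0

# "Machine learning" in miniature: given (feature, label) pairs,
# find the feature threshold that best separates the labels.
# We never state the rule; the data determines it.
def learn_threshold(examples):
    best_t, best_correct = None, -1
    for t in sorted(x for x, _ in examples):
        correct = sum((x >= t) == bool(y) for x, y in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Labeled data: feature = object height in meters, label = 1 if pedestrian.
data = [(0.3, 0), (0.5, 0), (0.9, 0), (1.5, 1), (1.7, 1), (1.8, 1)]
threshold = learn_threshold(data)  # the "rule" the model figured out: 1.5
```

The same asymmetry Sai describes shows up even at this toy scale: the programmer wrote the first function, but only the data wrote the second one's decision boundary.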
Nathan Settembrini [00:13:34]:
Interesting. What about business impact? We're always focused on that: the data tells us something, and by itself that's not super valuable, but if you go take that and do something with it, that's where the value comes from. So was there a time when that happened, where the data surfaced something and you were able to go make a real-world business impact?
Sairam Sundaresan [00:14:03]:
I can give you a sort of partial story because I can't reveal too many details, but I will give You a partial story.
Nathan Settembrini [00:14:11]:
You're Mr. Secret Sai.
Sairam Sundaresan [00:14:15]:
It's okay.
Nathan Settembrini [00:14:15]:
It's okay.
Sairam Sundaresan [00:14:16]:
So there was a project where we were working on scanning people and objects, and this was around the time frame of the original Kinect, if that means anything to anyone. That might be dating myself. But this was the time when your so-called depth cameras were this big, and there was no way for them to fit inside a phone or a tablet. And we had developed a very interesting scanning algorithm that was working on device, not on the cloud, and that was a big selling point. But the challenge we had was the data. The camera has its own white balancing and color correction technology, and every device that a real-world person would use has a different algorithm baked in, depending on who's manufacturing the phone or the tablet. And one of the things I hadn't accounted for was the fact that when you apply those corrections, your scan needs to be skinned, in graphics terms. You would first create a 3D mesh of whatever you're scanning, which gives you the volume, the structure, and then you'd paint it with the images, right? You'd add color and texture to the 3D mesh so that it looks like a person or an object, using the images from the scan. But if the images all have different auto white balance and different color corrections, you're going to see the seams, you're going to see all of the issues. And so that turned into a very interesting business use case, because fixing that problem was completely tangential to the actual scanning. If you think about the actual value, it's making the scanning faster and more accurate. But from a perception perspective, for somebody who's using the scanner, you want them to be able to say, ah, that looks exactly like the object or the person. And this was ruining that experience. So from that perspective it was a tangential problem, but fixing it, by looking at the data and understanding what was going on, added business value. And that's a story that hopefully conveys that.
Nathan Settembrini [00:16:44]:
That's interesting. Yeah. All right, let's pivot to the dark side of data secrets: the data horror story. Ideally this is a story where something went wrong. We were trusting this data, but oh my gosh, it was sourced from the wrong place, or there was a wrong assumption made, or all of production went down because of a mistake. What was your data horror story?
Sairam Sundaresan [00:17:18]:
So one that immediately comes to mind is a case of data leakage. We had trained an algorithm; we had spent weeks tuning it and making sure that all the KPIs were good. The model was generalizing well, and it was doing well on the validation set, which is very important for us. And we ensured that the validation set had decent coverage, and we saw the KPIs, we saw the visuals, and everything made sense. And so the model went out into the world, and our world came crashing down, because all of a sudden the model was detecting things that weren't there in the real world, and it wasn't detecting things that were actually there, and it didn't make any sense. We were racking our brains, and it turns out that through an automated process earlier in the pipeline, some of the data in the training set had found its way into the validation set. And it was very, very difficult to find because of the sheer volume of data we were using. But that gave us a false sense of security that our model was good, and we had spent weeks optimizing it to that data set. Thankfully, what we did was a rollback, and we also had to do a retraining, because you can't use those weights anymore: your validation set already contains part of the training set, so you need to do a clean reset and start over. That cost us a lot of time, but it was a lesson well learned, because especially with larger models, you're looking at weeks, not days. And for a customer, for a business use case, that's a lot of time. It's like, you know, measure twice, cut once. That was the real horror story I learned.
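[Editor's note: the leak Sai describes is the kind of thing a cheap automated guardrail can catch before weeks of training are spent. Here is a minimal sketch; the function and sample IDs are hypothetical, and in a real pipeline you would often compare content hashes rather than IDs, since the same frame can re-enter the pipeline under a new ID.]

```python
def check_split_leak(train_ids, val_ids):
    """Fail fast if any sample appears in both the training and validation splits."""
    leaked = set(train_ids) & set(val_ids)
    if leaked:
        sample = sorted(leaked)[:5]  # surface a few offenders for debugging
        raise ValueError(f"{len(leaked)} samples leaked into validation, e.g. {sample}")

# Clean split: passes silently.
check_split_leak(["frame_001", "frame_002"], ["frame_100", "frame_101"])

# Leaked split: raises before any training time is wasted.
try:
    check_split_leak(["frame_001", "frame_002"], ["frame_002", "frame_101"])
    leak_caught = False
except ValueError:
    leak_caught = True
```

Run as a gate at the start of every training job, a check like this turns a weeks-long "measure twice, cut once" failure into an immediate pipeline error.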
Nathan Settembrini [00:19:29]:
Wow. Yeah. I was trying to think of, like, a metaphor for that. It's like if you were teaching a kid to read and.
Sairam Sundaresan [00:19:36]:
And you gave them the question paper and said, learn all this, and then they show up.
Nathan Settembrini [00:19:42]:
Yeah. Learn all these words with, ah, sounds. And then, you know, then you give them a test and they're like, oh, yeah, I can read all the words.
Sairam Sundaresan [00:19:50]:
Yeah.
Nathan Settembrini [00:19:50]:
And it's like, great, all right, now go read the dictionary.
Sairam Sundaresan [00:19:53]:
And it's like, exactly.
Nathan Settembrini [00:19:55]:
Never learned how to read these S sounds.
Sairam Sundaresan [00:19:57]:
Yeah.
Nathan Settembrini [00:19:57]:
So that's. That's interesting. Okay. It just seems like the world is moving so, so fast, and it's just accelerating.
Sairam Sundaresan [00:20:06]:
It is.
Nathan Settembrini [00:20:07]:
We're in. We're in such a different world than we were two years ago.
Sairam Sundaresan [00:20:11]:
It feels that way, doesn't it? I mean, now we can't imagine how we would function without one of these chatbots helping us with writing emails or answering questions and whatnot. And yet, just three years ago, you would have to do the grunt work and go on Wikipedia or elsewhere to find answers.
Nathan Settembrini [00:20:33]:
Yeah. ChatGPT is my best friend, my therapist, and my business coach all in one. Yeah. So that's a great pivot toward your book. You're releasing a book called AI for the Rest of Us. Could you give our listeners a sense of the premise of the book, who it's for, and what you're hoping for people who read it?
Sairam Sundaresan [00:20:56]:
Sure. So the book is intended primarily for three sets of people. The first set are the people in and around the tech industry who don't come from a software background or don't have a deeply mathematical background. Generally, what I've noticed is that AI books either approach the subject from code, so people who are good at programming instantly understand the concepts because code is the teacher, or they come from a very strong mathematical background, so you have these textbooks with equations, which are the language in which these ideas are communicated. But there is a vast number of people, the rest of us, who are in the middle: who neither code nor come from a mathematical background. And I wanted something there that doesn't dumb things down and at the same time explains how AI works behind the scenes, how all these algorithms work, from the basics up to ChatGPT and Midjourney, in plain English, in a way that allows you to make smarter decisions. Because if you think about the people who are setting the strategy and the vision in different companies, they need to be speaking the same language as their engineering teams and have a basic sense of AI literacy. It's not like they need to fine-tune a language model. It's great if they can, but that's not a prerequisite. Instead, they should be able to explain why they are choosing a particular path for a product: this is the way we want to build this company forward. The second set of people are those engineering teams themselves, who are frustrated from trying to explain to their bosses, and their bosses' bosses, and their bosses' bosses' bosses, why something can't be done and why AI is not a fix-all for all of those things. 
And this book would then be a translator for them, to explain things in ways that senior leadership can understand and connect with. And the third set is people who are just curious about how AI is going to impact their lives and what it means, and who want to understand its strengths and limitations. That way you're able to make informed decisions, learn how to leverage it for your benefit, and just keep up with the times, as it were. So that's the point of the book.
Nathan Settembrini [00:23:28]:
That's awesome. Yeah, so it's kind of a translation layer for super technical people to be able to talk to the business and for business people to understand how to talk to the AI engineers.
Sairam Sundaresan [00:23:39]:
Exactly.
Nathan Settembrini [00:23:40]:
Yeah. That's awesome. And it comes out later in October 2025.
Sairam Sundaresan [00:23:46]:
Yeah, it's releasing October 30th, so it's available for pre-order. Yeah, very excited to get it into the hands of people and see what they think.
Nathan Settembrini [00:23:55]:
Awesome. Yeah. And we'll have a link to the Amazon listing in the show notes for folks who want to check it out. All right, we're going to pivot to our KPI Corner, our Metric Mystery Theater. This is our lightning round where we talk about KPIs and metrics that really matter. And Sai, I'm fascinated to hear the KPIs that you've observed. So let's start with the fundamental KPI in your book.
Sairam Sundaresan [00:24:28]:
Rather, the KPI I would say people shouldn't focus on is accuracy, if you ask me for one fundamental KPI, because it can be very misleading. In the book there are quite a few KPIs, depending on the topic, because we cover everything from computer vision to natural language to recommendation systems, so there's a whole variety. But I'd actually say the one KPI you shouldn't over-index on is accuracy.
Nathan Settembrini [00:24:55]:
Okay. So you should not over-index on accuracy. So it's okay to be... you know, that's kind of how I live my life as well. So that's perfect. What about a vanity metric?
Sairam Sundaresan [00:25:08]:
Accuracy. Jokes aside, I would say it's anything that makes your model or your product look good but is actually hiding the truth. And oftentimes that's accuracy. Take an example: let's say, in a medical case, you're trying to determine whether somebody has a block in their heart or not. And let's say you have 100 examples: 90 examples of "has a block" and 10 examples of "does not have a block." If your model just guesses "has a block," it has 90% accuracy. That's a vanity metric, and it's hiding the truth that your model is really, really terrible at detecting when someone does not have a block.
Nathan Settembrini [00:26:08]:
Yeah, so that's almost like you need a counter metric. Like, how good is it at detecting "not a block"? Right, right.
Sairam Sundaresan [00:26:21]:
And class-wise metrics. That's the thing: you have to get into the details. A vanity metric is anything that gives you that false sense of security that everything's rosy and good.
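[Editor's note: Sai's heart-block example is easy to reproduce. This is a small illustrative sketch, with toy data matching the 90/10 split he describes, showing why overall accuracy can be a vanity number while a class-wise metric exposes the failure.]

```python
def accuracy(y_true, y_pred):
    # Fraction of all predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, cls):
    # Class-wise recall: of all true members of `cls`, how many did we catch?
    relevant = [(t, p) for t, p in zip(y_true, y_pred) if t == cls]
    return sum(t == p for t, p in relevant) / len(relevant)

# 90 patients with a block (label 1), 10 without (label 0).
y_true = [1] * 90 + [0] * 10
y_pred = [1] * 100  # a "model" that always guesses "has a block"

acc = accuracy(y_true, y_pred)            # 0.9: looks great on paper
rec_no_block = recall(y_true, y_pred, 0)  # 0.0: the failure accuracy hides
```

The counter metric Nathan asks for is exactly this per-class recall: the all-positive guesser scores 90% accuracy but catches zero of the "no block" cases.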
Nathan Settembrini [00:26:34]:
Okay, so on the other side of that, what's a more impactful KPI that would be worth tracking?
Sairam Sundaresan [00:26:41]:
I'll give you one that I use for my team, and it's questions per week. I usually see how many questions the team asks me, and if they aren't asking as many questions, I feel like the curiosity and interest has somewhat died down. And if they're asking me a lot of questions, it means they're motivated and looking forward. So that's a very good signal. Yeah, that's a KPI I love to track.
Nathan Settembrini [00:27:14]:
I like that. Yeah, it's like your curiosity index. What about the weirdest KPI?
Sairam Sundaresan [00:27:26]:
I would say behavior change, because what I'd actually like to see is: whenever you release some kind of feature, how is the user's behavior changing with it? Are they adopting it? Are they using it more? Are they trying to break it, play around with it? It's difficult to quantify, but it gives you a lot of signal on whether you're building something they're excited about, or whether it's just "my competitor has it, so here's my version of that thing." So, behavior change. I don't know, that's weird.
Nathan Settembrini [00:28:14]:
Yeah. Yeah. But I think it's important. Especially if the goal is behavior change, then measuring...
Sairam Sundaresan [00:28:25]:
That is, are they. Is the behavior changing?
Nathan Settembrini [00:28:28]:
But also, is it the right behavior? Right? So, like, I used to run a little company, a lead generation kind of thing, and we would incent our people: hey, whenever you book a meeting for your clients, we'll pay you X dollars. Right? And there was no counter metric, like, how good is that meeting, things like that. And so the behavior changed toward, yeah, we're booking lots of meetings. And then you go look at the quality of those meetings, and some of them were even fake. I had to fire somebody because...
Sairam Sundaresan [00:29:08]:
Wow.
Nathan Settembrini [00:29:09]:
She was gaming the system. And yeah, it was. That was hard.
Sairam Sundaresan [00:29:14]:
But yeah, behavior changes. There's actually a very weird story that might tie into this. There was a time when there were a lot of cobras in a country, in a village, and the kingdom wanted to get rid of the cobras, because there were just way too many of them. So they said, for every cobra that you bring, we will reward you with something. And eventually people started gaming the system, because they started breeding cobras.
Nathan Settembrini [00:29:54]:
So. No.
Sairam Sundaresan [00:29:56]:
So the population actually soared. Counterproductive, so to say.
Nathan Settembrini [00:30:02]:
That's crazy. Yeah, they had that problem in Florida with pythons, but I never heard of people breeding pythons to get more bounty. What about your North Star metric? Do you have a good one? Basically, the thing that keeps you centered and focused and moving in the right direction.
Sairam Sundaresan [00:30:21]:
I would say clarity. Especially when it comes to teaching, the thing that I really look for is: is the message getting through without being distorted? I'm optimizing for understanding, basically. And it's the same when I'm speaking with my teams, with my stakeholders, and everyone else: is the message getting through? And so clarity is my North Star metric.
Nathan Settembrini [00:30:51]:
Awesome. Cool. I'm really curious where or how you see data analytics, kind of like business analytics, being impacted by this world of AI. Be my translator.
Sairam Sundaresan [00:31:12]:
I actually think data analytics will still be important, but the role of a data analyst will change; it will evolve. At the end of the day, we're in this weird place where data is the lifeblood of most of these AI models, and we're at a point where being able to extract insights from the data, not for the model's understanding but for our own, is becoming just as important. Because if we're feeding these models data and we don't have a clue about the biases in the data, or the imbalance, or the information that's encoded deep within; if we don't have a clear idea of what it is, how the data is structured, and how we can present it as a story, not just as numbers; I feel like that's going to put us in a bad place. The models are going to find hidden patterns and everything, but if we aren't understanding them so that we can course-correct, then it's going to be a problem. So the role, I think, will evolve, but the need won't change. It's going to be more important. That's my opinion.
Nathan Settembrini [00:32:36]:
Okay, and what advice would you have for somebody who is, let's say, a data engineer or a data visualization person who's amazing at Tableau or Power BI or tools like that? Apart from reading your book, which would be step one, what other things should they go learn? What general direction would you point them in? Things to search on Google or YouTube, or ask ChatGPT?
Sairam Sundaresan [00:33:08]:
I think the answer I give to most people is just spend time with AI and play with the tools that are available, because there is no real user manual for most of these AI tools. We're sort of writing it as we go, finding interesting ways to interact with them and uncovering ways to get to what we're looking for faster. The more time you spend with them, the better you'll be at leveraging them for whatever your occupation may be. Even just doing that puts you at an advantage compared to others who may not be as actively using these tools and interacting with the technology. So that would be my suggestion. And of course, YouTube tutorials and blogs are all fantastic resources, but instead of collecting a lot of resources and then wondering where to start, I would just say pick a tool that you really like or are curious about and spend a lot of time with it.
Nathan Settembrini [00:34:16]:
Nice. Yeah. Looking at tools like n8n, these kinds of workflows look very similar to tools like Alteryx or SSIS: they're canvas-based, drag and drop. Data comes in, we do some stuff, data goes out. And so I would encourage folks, obviously everyone's using ChatGPT, but I think you can't stop there, right? You need to push beyond to some of the other tools out there and get your hands dirty. And I'm preaching to myself, right? But I think learning any new tool, whether you're learning Python or n8n or Lovable or any of these vibe coding things, you need a project, a real-world thing: hey, I want to do this, how can AI help me do it? And I have a good friend who is toying with that right now. He built a voice agent. Well, he didn't build it himself; he's using a voice agent tool to do a business task. It's, hey, we need to make a phone call to a particular person, and in today's world it's a real person picking up the phone, dialing the person, and checking on the order or whatever. He built something that will auto-dial the person. He called me on it, and I talked to the person, the.
Sairam Sundaresan [00:35:57]:
The agent. Oh, nice.
Nathan Settembrini [00:35:59]:
And I tried to throw her for a loop. I said, oh, you know, I'm going to be late, I had to stop at Buc-ee's and get some food. And she's like, okay, well, what time will you arrive? I was like, wow, this is amazing.
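The canvas-style tools Nathan mentions (n8n, Alteryx, SSIS) all share the same "data comes in, we do some stuff, data goes out" shape. As a minimal sketch of that pattern in plain Python, with invented records and field names purely for illustration:

```python
# Minimal sketch of the "data comes in, we do some stuff, data goes out"
# pattern behind canvas tools like n8n, Alteryx, or SSIS.
# The records and field names here are invented for illustration.

def extract(raw_rows):
    """'Data comes in': pull records from a source, dropping empty ones."""
    return [row for row in raw_rows if row is not None]

def transform(rows):
    """'We do some stuff': enrich each record with a computed field."""
    return [{**row, "total": row["qty"] * row["price"]} for row in rows]

def load(rows):
    """'Data goes out': hand results to the next node (here, just return them)."""
    return rows

raw = [{"qty": 2, "price": 5.0}, None, {"qty": 1, "price": 3.5}]
result = load(transform(extract(raw)))
```

Each function plays the role of one node on the canvas; chaining them is the equivalent of drawing the arrows between nodes.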
Sairam Sundaresan [00:36:12]:
No, you hit on an excellent point there. Right? As the saying goes, scratch your own itch. Pick anything you spend a lot of manual effort on and just note down the steps: first I do this, then I do this. Then try to see how you could use AI to do those exact steps, and use a tool, like your friend did, that is easy and something you can spend time on. Eventually you'll see how the dots connect. That's a really great way to, one, solve a problem that's bothering you and, two, actually learn how to use AI and get better.
Nathan Settembrini [00:36:54]:
Yeah. Awesome. Well, Sai, let's land the plane here. If people wanted to learn more about you, or your newsletter, what would you encourage them to connect with you on or go do?
Sairam Sundaresan [00:37:11]:
So my newsletter is called Gradient Ascent, and it's on Substack. It's read by a whole bunch of people across tech firms in Silicon Valley and worldwide, which I'm very grateful for. I try to demystify AI in very simple terms and help you keep up to date as much as possible with this crazy technology. I'm also on LinkedIn; you can find me by just searching for my name, and on X and all the other socials.
Nathan Settembrini [00:37:47]:
Well, I really enjoyed the conversation, and thanks for being on the podcast.
Sairam Sundaresan [00:37:52]:
My pleasure. And thanks for your time, Nathan. Really enjoyed it.
Nathan Settembrini [00:37:55]:
Awesome. All right, everybody out there, the truth is out there. You just got to keep searching for it. Keep sleuthing. See you out there.