Episode 9

Designing for Cognitive Load in Patient Communication

44 min

Dr. Vamsi Ithapu, who has spent his career studying how people process information and building technology that reduces cognitive overload, brings that perspective into healthcare and makes a compelling case for why patient communication should be designed with cognitive load in mind.

Featured Guests

Dr. Vamsi Krishna Ithapu, Research Scientist, Meta Reality Labs


AMRIT KIRPALANI

Dr. Vamsi Ithapu has spent his career studying how people process information and building technology that reduces cognitive overload. His work is grounded in a deceptively simple question: how much information can people actually process in critical moments?

Episode Contents

  • 0:00 Introduction to NovaNav
  • 0:58 Introduction to Dr. Ithapu
  • 4:38 From Alzheimer's research to Meta Reality Labs
  • 5:52 Why "AI" is an overloaded term
  • 6:03 The three functions of AI: prediction, reasoning, and planning
  • 7:40 What AI is actually good at today, and where it still struggles
  • 10:16 What hospital executives should ask AI vendors
  • 11:42 Why personalization in healthcare is different from consumer tech
  • 18:49 Where LLMs are genuinely useful in clinical settings
  • 19:45 Hallucinations, opacity, and why failure is hard to detect
  • 25:57 Discharge instructions and cognitive overload
  • 29:40 "Minimum viable information" for patients going home
  • 32:17 What it takes to move from research to real-world product
  • 33:10 Four product lessons healthcare can borrow from Meta
  • 41:23 Why not every healthcare problem needs an AI solution
  • 47:26 The "North Star" framework for recovery and care design
  • 48:11 Closing

Key Takeaways

During his PhD at the University of Wisconsin-Madison, Dr. Ithapu built predictive models for Alzheimer's disease. Later, at Meta Reality Labs, he led teams building systems designed to reduce cognitive overload for millions of users. This conversation explores why discharge instructions so often fail, what healthcare gets wrong about information delivery, and how systems can be built around what patients are actually able to absorb in vulnerable moments.

Transcript

Hello. Hey Lisa, how's it going? How are you? Good, good.

So it's Vamsi, and Krishna is your middle name, but how do you say your last name? I'm so sorry. No, no, it's all good. It's Ithapu. Ithapu, oh, that's easy.

Nice to connect with you again. Yes, nice. How do you know Amrit? I'll let you speak for yourself, I suppose.

A couple of months ago, well, I've been actively looking at and following things in AI for healthcare and wellness for a long time, but nothing really concrete. It just so happened that during the fall of '25, I was going through something in life, you know, with friends and family, and a lot of discussions about patient care kept coming back to me for some reason. And Amrit just messaged me out of the blue on LinkedIn.

And we just made an instant connection right away. The problem he's solving just resonated very well. Right, right. That was awesome.

You know, not that I know Amrit so, so well, but I know where some people come from, and he just seems a little bit outside the healthcare norm, you know. And it's pretty tremendous that you work for Meta. I was just like, how did this happen? I know he's well-connected because he knows a lot of people, so I shouldn't be surprised, but I was just curious.

Yeah, I was surprised as well. But I guess that's the thing, you know, I can't put a finger on it and say, oh, this is how we met. The mind space just matched. All right.

Okay. Welcome to NovaNav's podcast, The Surgical Journey. And today we've got a really exciting guest, Vamsi Ithapu. Nice to meet you.

And thank you for being here. So for our audience, we've had the opportunity to really get into AI and technology with Vamsi, and we're going to have a really interesting discussion. So today on The Surgical Journey, I have a guest who comes from a world most of us in healthcare don't get to peek inside very often.

Dr. Vamsi Ithapu is a research scientist, manager, and technical lead at Meta Reality Labs, where he leads a team of researchers building technology that adapts to individual users, reduces cognitive overload, and improves how people communicate and connect. But here's what makes this conversation especially relevant for our audience.

Vamsi didn't start in big tech. He earned his PhD at the University of Wisconsin-Madison, building predictive models for Alzheimer's disease and designing clinical trials. Healthcare was really where he started. He took that foundation into Meta, where he spent years figuring out how to take complex technology and turn it into something people actually use in their daily lives. So welcome, Vamsi, to The Surgical Journey.

Thanks, Lisa. The introduction is very humbling, and thanks for having me here. I'm sure it's going to be a fun conversation. Yeah, I'm looking forward to it.

So as I mentioned in my intro, you started your career building predictive models for Alzheimer's disease, and now you lead various research efforts at Meta Reality Labs. For a healthcare executive or clinician who hears AI every day but isn't sure what's real versus hype in what they're being promised by vendors, how do you think about the difference between technology that actually works and technology that's just a buzzword or a trend?

You know what I mean? No, absolutely. I think this is a great question to start with. AI is surely an overloaded term.

I think we all agree on that. I like to call this suite of technologies intelligent automatons, as opposed to artificial intelligence, because really that's what they are. They're smart automatons. I mean, fundamentally, AI is really a suite of systems that learn patterns, that do pattern recognition.

And they really enable three things. And you can order them in different ways, different complexities, but they really do three things. Prediction, reasoning, and planning. So they predict stuff.

They learn patterns and predict things. They can reason about the underlying physics, math, and science of things. And they can plan things in the future, which builds on top of prediction. So anything that does these three things in the best possible manner is loosely referred to as AI.

Of course, the current innovation in AI that we're all hearing a lot about started, I would say, towards the early 2000s, when there were large investments in three areas, really: large datasets; the ability to build complex algorithms and optimizations that can handle nonlinear patterns, so a lot of innovation in math, really; and the ability to actually build compute, which we can use to train and build these AI systems.

So I would actually say AI systems that predict, reason, and plan, they sit on these three pillars as foundation. Very large datasets, large compute, and concrete core algorithmic innovation over the past two and a half decades or so. I mean, there's been a lot of work in the past in the context of statistical machine learning, signal processing. These are really the mother fields that drove AI.

But when we refer to AI, this is loosely what it is. And to the point of what's buzz and what actually works, there's a hierarchy. The simplest way to think about this is that AI systems today are very, very good at informational and basic low-level tasks: detection, classification, information retrieval, things like that.
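To make that "prediction" tier concrete, here is a minimal pattern-recognition sketch in Python: a toy one-nearest-neighbor classifier. The data points and labels are invented purely for illustration; no real system works off four examples.

```python
import math

# Toy pattern recognition: label a new point with the label of its single
# most similar past example (1-nearest-neighbor). This is the "prediction"
# tier in miniature: learn patterns from past data, then predict.
# All data points and labels below are invented for illustration.
history = [
    ((1.0, 1.0), "low risk"),
    ((1.2, 0.8), "low risk"),
    ((5.0, 5.5), "high risk"),
    ((6.0, 5.0), "high risk"),
]

def predict(point):
    """Return the label of the nearest past example."""
    nearest = min(history, key=lambda example: math.dist(point, example[0]))
    return nearest[1]

print(predict((1.1, 0.9)))  # "low risk"
print(predict((5.5, 5.2)))  # "high risk"
```

Real predictive systems replace the literal distance lookup with learned models over very large datasets, but the shape of the task, past patterns in, prediction out, is the same.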

They're okay at reasoning tasks, meaning they have an ability to reason out some things, like what's happening in a video. Can you summarize a 20-minute video from a couple of frames? But they struggle with more complicated aspects, like, okay, can you reason out why specific types of disease patterns are recurring in a particular patient? Can you reason out the underlying mechanism of changing weather patterns?

This is more complicated. We're not there yet. And planning is a very hard thing. AI still has a lot more to do there.

I'm happy to further talk about it later. But we're in the beginning stages of how to do efficient goal planning and action planning with AIs. LLMs are really the way to go, but there's a lot more to this space. So what a great answer.

And I'm just gonna take a step back. I heard you explain this prior to us meeting today, when I met you in another group conversation, and I was pretty much blown away by your point, and we'll get to it a little bit later, about patient data and just how difficult that is. I'm not gonna say this right, but it's about how we look at the data; we'll get there. I think that's what made me start thinking differently, because you were so specific and so cautious about how we need to look at clinical patient data in big datasets, right?

If I remember the conversation correctly, and how that needed to be thought out or coded or developed, right? I'm saying it secondhand, basically. So I will get to that, but I love the three, really: the prediction, the reasoning, and the planning, and then the large datasets, the large compute, and the core algorithmic innovation. So those are the parameters; that's where AI lies.

So that's an interesting insight. So as clinicians or hospital executives, they can really come back and say, wait a minute, explain to me how your product does one of those three things, in those three areas, you know? Because I hear it all the time, like, oh, we have an overlay of AI. But now I'm going to start asking, well, where does it fall? Yeah, yeah, yeah.

What's your data? You know, do you have compute? These are really the Lego blocks, right? Yeah, thank you.

That's great. I think that's a good foundation for everyone to think about and discuss. So my next question is, your team builds technology that adapts to individual users in real time. So it's not one-size-fits-all, as we talked about earlier, but it's personalized to what the specific person needs.

So in healthcare, we talk about personalized care a lot, but it's not delivered at scale. So have you figured out things in what you're doing now that maybe healthcare hasn't adopted yet, or should be thinking about? Yeah, that's a good question. Personalization is an inherently subjective thing.

So sure, there are generalized, common patterns to what that means from a user's perspective, but there is also a lot of nuance associated with it. One way to think about this is maybe to do a compare and contrast. We can take entertainment and general well-being, the general ability to do actions and carry on with daily life, those kinds of tasks, versus healthcare and wellness. We take these two buckets, right?

For many of the things that my team and I, and generally the tech industry, do, the idea of personalization is: okay, are there things that the user who's using my product likes, that they may attend to? Can I grab their attention for some period of time? Are there things that are potentially of interest to the user?

So this is very much exploratory, choice-driven, agency-driven personalization, which is very different from personalization in healthcare and wellness, because there the questions we're asking are, for example: what is the list of care instructions, or discharge instructions, that most appropriately suits my need right now, as of now? It's not about the list of instructions that I prefer, that are personalizable to me; that list can change again in a span of two to three days. So one answer doesn't fit all, even for myself, from the point of view of a patient, right?

So the main difference here is, first of all, the cost of failure: the risk associated with failure of personalization in the general, let's just call it entertainment, industry is different from the cost of failure in healthcare and wellness. That's the first main thing. The second aspect is that we have the ability to add structure but at the same time personalize. In scenarios in healthcare and wellness, it's really about their life.

So if we take some agency away but give a bit more structure, as long as the structure is not overly complex, as long as it's low friction, there is still a way to address personalization. On the other hand, in entertainment or in general, like, I'm going to use my glasses, or I'm going to look for new contacts on Facebook, taking agency away feels like the system is not allowing me to do what I want. So there is a different way to look at personalization in healthcare and wellness, where we provide more structure but still personalize.

We limit the agency from a patient's point of view. But on the other hand, for general use cases, you get to do whatever you want. You have this list of tools, and we'll figure out patterns. So it's a bit looser, less defined in some form.

So there's a big difference. And to your original question, like, have you figured it out? I don't think anybody figured out personalization. It's just implicitly ill-defined.

Yeah, no, I love how you bucketed it, though, and how to get closer to it, right? By taking away some of the agency. And so as we're looking at things on the NovaNav side, like you said, the instructions patients get as they leave the hospital, right? Now we can help manage the post-surgical side.

Those instructions, to some degree, or actually to a large degree, could change over the next several days, depending on that patient, on very personalized decisions they make or some kind of outcome. But if everything is going somewhat as planned, then, and I'll use it in our context with NovaNav, the patient can really check a box. They have less agency, they're in our model or framework, and then we can personalize it.

So I think that's what I heard you say. So that's maybe as close as we can get to it right now. I agree. With the exception of maybe if there's a problem, you know, where we have to rescue and recover.

So it's urgent and then out to the physician or a clinician. Then, I mean, it's a different thing. I mean, there is a degree of personalization associated with the underlying task as well. For instance, if you're talking about discharge instructions and care, yes, as I was saying, there's a structure and there's a way we can probably approach it.

But if it is more about the nuance associated with, you know, am I getting the feeling that I'm recovering well? How do you capture that? For that, we go back to the typical daily life, well-being, general connectedness aspect of personalization. There, the book is wide open.

There are no rules written yet. And I think it's equally a hard problem. It's as hard as solving the general agency problem, general subjectivity problem. It gets into a bit of a philosophical territory as well.

Right. So I'm going to throw a couple of things at you, only a little different. We're going to get through like all of our questions, but you mentioned LLMs, right? Can you just talk a little bit more about that?

Since you did mention it, how could healthcare be thinking about LLMs? Yeah, I think, fundamentally, the power of an LLM is really in, okay, I'll try to address this in two ways. One is what they are doing and what they are not doing. So what are they doing?

At the core, an LLM is really a token prediction machine. Just to expand on that: it's basically looking at a set of words from the recent past, let's just say a few dozen words. It could be longer than that, but just for argument's sake, it's looking at a couple of words and then predicting the next possible word. Right.

So language is such an amazing thing that it abstracts out a lot of noise from just the general content. Like if I look at an image, there's like millions of pixels in this image, but really if this is an image of a beach where there's sunrise, there's really like two or three things in there. There's this calmness, there's sun and ocean and probably sand and it's peaceful. That's good enough, right?

You can actually concoct an image of a beach just based on those words. That's what language is good at, and that's what LLMs build on. Language already abstracts away all the noise; it captures the right kind of contextual and content information in the tightest possible representation.

That's the basis of language, and we are building models to predict patterns in that language. So if we have a very, very large dataset, the entire internet, these models will have looked at all possible combinations of words, all possible patterns, and so they predict the best possible next word. It's essentially a really, really large lookup table that would have given you the answer you'd have gotten anyway if you did the Google search yourself. So from an information-gathering, summarization, and description point of view, they're fantastic. They're doing just the right thing, because we communicate in language.
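As a loose illustration of that "token prediction machine" and "lookup table" idea, here is a toy bigram predictor in Python. This is a sketch only, over a tiny made-up corpus; a real LLM learns a compressed neural approximation of such patterns over vastly more context, not a literal table.

```python
from collections import Counter, defaultdict

# Toy illustration (not how a real LLM works): a bigram "next-token
# predictor" built as a literal lookup table over a tiny invented corpus.
corpus = "the beach was calm the sun was warm the ocean was calm".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequently seen word after `word`, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("was"))  # "calm" (seen twice after "was", vs. "warm" once)
```

With enough data, "most frequent continuation" starts to look like fluent language, which is exactly why these models are strong at retrieval and summarization and weak at guarantees.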

What they're not doing: first of all, it's really hard to interpret when something fails, when an LLM fails. It's a freakishly complicated, blurry, opaque box. Second, they have a tendency to hallucinate. That's what you were talking about, where it fails.

That's the conversation I was listening in on, when you were talking about this. Where do you figure out the failure? Yeah. I mean, in a way, it's really like self-assurance.

You can think of it as a self-assurance mechanism. They have the ability to convince themselves to give a very confident answer with low clarity. So they may not have the right kind of data, they may not have looked at that kind of data, but in a hypothetical scenario, they could put two words together and concoct a scenario which, from a basic physics and real-life point of view, might not even exist.

And they do this very, very confidently. This is bad. So from the point of view of healthcare and wellness, this is really bad, right? So, and I know I'm going in a bit of a circle here, but we shouldn't be thinking about utilizing LLMs and their core capability for tasks in healthcare and wellness beyond an informational, retrieval, general-instructions point of view.

There needs to be a lot more done in the space of: what is the task from a health and well-being point of view? Am I working with someone, a patient, to make sure that the next 15 or 20 days of their life are taken care of somehow? What does that mean? There is a component of what the user likes, what the patient likes, what they've seen, what they're going through.

There's a lot of nuance associated with their medical history, potentially a list of things that the doctors are suggesting to do, biophysical signals, general sensing and perception. This is a lot of information. So there's a lot of reasoning, a lot of planning, a lot of long-term continual learning. This is something that LLMs have not been able to do. I mean, they're getting there; there's a lot of progress happening, but we don't know the best way to do it.

So we can cut the problem in that way. So if it is very much like instructional, descriptive, retrieval, just a representation, summary point of view, yes, they're really good at it and we should use their capacity. If it's more than that, we have to be careful. Yeah, no, I think you did.

I actually think this was very good. You went a lot deeper, so thank you. So LLMs: if we think about hospitals on the business side, reducing admin burden, or like you said, maybe revenue cycle or supply chain, or retrieving information or data, that's a perfect application right there.

So I liked how you segmented it, versus, I think everyone has more of the angst on the clinical side, because that is so different. But I think you did a really beautiful job of explaining what it does and what it doesn't do. I do have a fun question. I don't know whether to ask you now or continue, since we have a couple more questions, but I'm just going to ask: what are your thoughts about vibe coding?

Just generally speaking, as somebody who has a PhD and, you know, is so super smart and really knows AI. It's got to give you some angst, right? All of us playing around with vibe coding. No, I think people who are much, much deeper experts than me have given a straightforward answer for this.

And I tend to think that these are really good tools. So vibe coding is really good in separating the thought processes associated with the code versus writing the actual code. Right. Right.

So I like to think of this example about vibe coding. I have a four-year-old daughter. She's beginning to phrase long sentences, and all that's fine. But it's clear that there is something in her head that she wants to say, and she's finding the correct words to represent it. The ability to efficiently and correctly use syntax to represent it shouldn't derail the thought process.

Right. So you could still be thinking, explaining, okay, I want to say this. So I want to code up a puzzle for Sudoku. I know how Sudoku works.

I can explain it in English. But I don't know the exact syntaxes, exact language nuances associated with it. That is where vibe coding is really useful. So you just talk to the, you're a system architect.

You're architecting the solution. The tool is building the code. You can talk to it. You can interact with it.

It will break; you'll break it. Over time, you'll start building a relationship with this tool. I think that's fine.

We still have a lot to do, a lot to learn. I mean, Claude and OpenAI have done amazing things. Many other people are doing amazing things in this space, but it's a good place to be right now as a coder. Yeah, it's just fun.

I mean, just as a side note, I was showing my son, he's 11, just kind of showing him vibe coding because I really do feel both my kids should be, AI should be their future, right? Some version of that, but I was showing him how he can create his own game just by, you know, vibe coding, just let's create our own. And so we created this really fun game. It was kind of tennis, kind of, you know, just something just kind of fun, almost like, I don't know how old you are, but if you remember Pong back in the day, we just had some fun.

But I actually think it's like, to your point, it begins the relationship, right? I mean, just even for young kids to understand, but then I think obviously they've got to go way beyond that. So thank you for having fun with me for a minute. I know it's something very basic.

Although I think people are still learning about vibe coding now. So, one of the biggest problems your work solves is cognitive overload: helping people process the right information at the right time without being overwhelmed. Surgical patients go home with a stack of discharge instructions.

They're expected to manage their own recovery, or with their family. What can healthcare learn from your team about delivering the information patients actually need? Because I think there are some really interesting parallels there. Yeah, I mean, cognitive load is one of those terms that's very similar to AI.

It has a lot of depth associated with it, right? And, as you might have gotten the sense from my previous answers, I try to use examples. So one way we can understand the key features associated with cognitive overload, and reducing it, is by looking at its manifestation in the context of perception and working memory.

So when we are saying cognitive overload, first of all, what is the sensing load? You know, are you using your eyes and ears or are you using a multi-sensory device? Are we talking about just language? Are we talking about something more, right?

So what is the sensing load? Could you have done a better, smarter way to sense and process the information, first of all? And second, what is the working memory associated with it? How much demand are you putting on the working memory?

And this is from a user point of view. So what I mean by this is, let's take an example of the discharge instruction. So what we are saying is, I have a list of things. That I'm presenting to the patient.

So we are asking them to, first of all, sense and perceive and just go through this list. So there is a sensing component to it. They're reading it. They're hearing.

Maybe they just take in a description, or they have some notes written. That's all the sensing component. There is a load associated with that.

Do I need to read, like, a 20-page document, or sit through five hours of video or audio? I'm just giving examples here. And then there is working memory. How much are we demanding from the patient?

Like, what are the demands on understanding? From the point of view of addressing it: what is the minimal amount of information that we intend the patient to understand? And what is the, let's just say, lowest-friction way to deliver that information? That's really at the core.

So instead of directly building a theoretical model of cognitive overload for a specific device and targeting that, it might just be that we're better off working at a bit of a lower level, by directly saying: okay, for this particular task, discharge instructions, with this understanding of the cognitive load on the user, what is the minimal amount of information that I as a doctor want my patient to understand? And what is the best way I can represent it? Representation here simply means the format in which you're delivering the instruction, right? From an AI's point of view, both of these are representation learning problems.

And both of these are actually solvable with the kind of tools that we have now. A lot of this can be done, like organizing information in a hierarchical manner, best possible manner. This is essentially the bread and butter of these, you know, major tech solutions, assistants. We've done this very well in the past as well, like there's been classical methods called decision trees and, you know, hierarchical information, databases, et cetera.

We know a lot about this: hierarchical information processing and representation. And on delivering information, in the past decade or so there's been the idea of using voice assistants, audio assistants, visual assistants, and, for example, watches and EMG sensors in the right possible way to send alerts and notifications, to minimize the content that you're transmitting and sharing with the user while making it most effective. There's been a lot of interesting work in this space over the past decade or so, maybe longer. So we can tackle the question of what the best way to deliver the information is, and we can consider the patient's constraints in some form.

Let's say if they're hearing impaired, use visual delivery; or if they're visually impaired, use hearing assistants. Or maybe they're a morning person versus an evening person, so you know when they're active, when they're actually focusing, right? Most coders, you know, are active at like 1am at night. Each of us has our times of most active states.
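One way to picture this "minimum viable information" idea is a tiered structure, where the patient sees only what is relevant right now. Here is a hypothetical sketch in Python; the instructions, trigger names, and structure are invented for illustration and are not from any clinical system:

```python
# Sketch of tiered "minimum viable information" delivery (all names and
# instructions below are hypothetical): the patient always sees a short
# top tier; deeper detail is surfaced only when a condition makes it
# relevant, instead of handing over twenty pages up front.
discharge_plan = {
    "always_show": [
        "Take pain medication every 6 hours with food.",
        "Keep the incision dry for 48 hours.",
    ],
    "show_if": {
        "fever_over_38C": ["Call the clinic today at the number on page 1."],
        "redness_spreading": ["Go to the ER; bring your discharge summary."],
    },
}

def instructions_for(symptoms):
    """Return only the instructions relevant right now."""
    shown = list(discharge_plan["always_show"])
    for symptom in symptoms:
        shown += discharge_plan["show_if"].get(symptom, [])
    return shown

# A patient with no symptoms sees two lines, not the whole packet.
print(len(instructions_for([])))                  # 2
print(len(instructions_for(["fever_over_38C"])))  # 3
```

The design choice mirrors the transcript's point: the sensing load (two lines to read) and working-memory demand stay low, while the full plan is still there when a trigger fires.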

So we can use all that information and deliver it in the best possible manner. So I just want to break out that point that's really great. So really your example for the discharge instruction. So it's maybe we're just giving them too much.

We're trying to tell them everything. We're worrying more about, you know, risk avoidance, because what if we didn't say everything? The way we say it, we're overloading them, when in reality we need to ask: what's the minimum information that they need to know? They probably would do better with the minimum information. Because, I know I mentioned family on this podcast, but unfortunately in the last couple of months I've been through various ER visits and so on.

I've had those discharge instructions, and really, it's a lot. And I'm in healthcare. So that's a great point; I never thought of it that way. Maybe it's a redesign of that, or maybe it's us at NovaNav looking at taking the discharge instructions and working with our clients to make them better for the patients, number one, and then the lowest friction, of course, like you said, meaning in what way a wearable alerts them, in whatever way suits them.

That's really, that's great. I love it. Another great example. Thank you.

So I have two more questions, and I hope you don't mind staying for both; I didn't want to cut one short. So, you know, at Meta you've seen what it takes to move from a research paper to a product that millions of people use every day, right? Millions and millions.

So healthcare is sitting on mountains of data, right? We have so much data, but we're struggling to turn it into changed outcomes and really true insights. So what's that gap? What would you tell a hospital executive about what it takes to operationalize that kind of technology? Because we've really struggled with mountains of data in different places, even though we've gotten better, I think, at bringing it together.

I still don't think we're where we should be with using the data. And I mean, this isn't a critical question, really, right? I mean, the tech industry, for instance, dealt with a lot of great innovation in AI in the early 2000s, and it took some time for industry research labs even to build on top of it and make it into a format that's productizable.

And now we are seeing a lot of products. So I think one has to just jump into the water and start swimming. And of course, there are a lot of tools; new technology introduction as a concept has been well studied all the way from, I believe, the Apollo program.

They had multiple stages, 10 stages, I think, in terms of going from something on a whiteboard, or a blackboard at the time, to, you know, landing on the moon. So I think there's a lot to learn from precedent. But really, I'll try to answer this question from my own experience over the past decade and a half or so. I think I've really learned four things that would be useful.

One caveat, by the way: when we address this point about research to product, we've already done some research. That's where we started. We have a sense of the problem we're solving, and there's some academic research or some R&D that we started with.

And I'm starting from that point. So from that vantage point, taking a research paper to a product, the first thing really is: for the problem we're trying to solve, the problem we have some early R&D for, what is the most important metric or performance indicator for success? Because from a research point of view, you can always concoct, for lack of a better way of saying it, metrics. And they're good.

Metrics are good; even hackish, cooked-up metrics are good. But beyond a point, they don't really reflect the underlying use case and the user value associated with the potential product you want to build. So I guess that's the most important thing: for the tech you have in mind, for the problem you have in mind, what is your measure of success?

It's very dependent on the specific task and issue at hand. In the example you were talking about, with hospitals and execs: if the hospital is serving mostly non-English-speaking individuals, you want the discharge instructions delivered in a way they actually understand. You don't want to have to teach them English in order to teach them patient care.

So there are nuances like that we can take into account when building the right metric and performance indicator.

I just want to mention one thing. I think that's a great place to begin, beyond knowing the problem, because sometimes we try to tackle so many things.

So asking, what is the most important measure of success, that one thing. It's a great point, because we get big data and we think, okay, here are these ten things to solve for, right? Yep, yep, yep. Absolutely.
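That point about tying the metric to the population you actually serve can be sketched in a few lines of Python. Everything below, the records, the languages, the comprehension flags, is invented for illustration; it only shows how an aggregate score can hide failure on exactly the subgroup that defines success:

```python
# Sketch: an overall comprehension metric can mask failure on the subgroup
# that actually defines product success (e.g. non-English speakers).
# All data below is made up for illustration.

def accuracy(records):
    """Fraction of records where the patient understood the instructions."""
    return sum(r["understood"] for r in records) / len(records)

patients = (
    [{"language": "en", "understood": True}] * 90    # English speakers: 90/90
    + [{"language": "es", "understood": True}] * 2   # Spanish speakers: 2/10
    + [{"language": "es", "understood": False}] * 8
)

overall = accuracy(patients)
spanish = accuracy([p for p in patients if p["language"] == "es"])

print(f"overall comprehension: {overall:.2f}")           # 0.92 looks healthy
print(f"Spanish-speaking comprehension: {spanish:.2f}")  # 0.20 is the real signal
```

The headline number says the product works; the stratified number says it fails the very group the hospital serves.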

So I know you've got more on your list. Sorry. That's good. This is good.

Metrics are always something that keeps me up at night, let's say. And then there are two interrelated aspects to this question. One is, in the wild, real-world data is messy. It's noisy.

It's all over the place. There can't be a better example of this than EHR, right? Electronic health records and what we have done with them, "we" meaning the tech industry generally, and broadly over the past half century, really.

Where it was and where it is right now, it's a really, really complex thing. So it's not possible to understand every nuance and aspect of the data you're handling. But there has to be a reasonably comprehensive description of the data that you have. And you have to have some sense of: what are my edge cases?

What is the kind of noise in the data I'm dealing with? Because what happens is, you could have built your R&D on clean research datasets where things work well, and as soon as you start throwing those tools at real-world data, nothing works. And in most cases, you cannot interpret why things are failing, because the models are very complex.

They are not interpretable. So now we go in a circle: I have to start introducing noisy data into the research. That's fine.

That's good. But at the same time, that will sort of dilute the progress that we may have made on research and we go back and forth. This is a really complicated issue to deal with. So one way to break this is what is your gold dataset?

What is your noisy dataset? And on what kind of data do you expect the tool, the model, the system to do well? That understanding of the data helps us curate the R&D that comes out and target solutions to a specific population. Maybe we don't have one solution for all of the patients visiting the hospital.
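One way to act on that gold-versus-noisy distinction is to report the same system's score on each split separately. A minimal sketch, with a toy threshold rule and made-up temperature readings standing in for a real model and real clinical data:

```python
# Sketch: evaluate the same "model" on a curated gold set and on a noisy set,
# so degradation in the wild is visible as a number instead of a surprise.
# The classifier, readings, and noise model are hypothetical stand-ins.
import random

def classify(temp_c):
    """Toy rule: flag fever above 38 C."""
    return temp_c > 38.0

def evaluate(dataset):
    correct = sum(classify(x) == label for x, label in dataset)
    return correct / len(dataset)

rng = random.Random(0)
gold = [(t, t > 38.0) for t in (36.5, 37.0, 38.5, 39.2, 37.8, 40.1)]
# Noisy set: same ground truth, but readings corrupted by simulated sensor error.
noisy = [(t + rng.gauss(0, 1.0), t > 38.0) for t, _ in gold]

print(f"gold accuracy:  {evaluate(gold):.2f}")   # 1.00 on clean data
print(f"noisy accuracy: {evaluate(noisy):.2f}")  # quantifies fragility under noise
```

Tracking both numbers over time is what lets a team decide whether to harden the model or narrow the target population, rather than discovering the gap after deployment.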

Maybe we're targeting a smaller group. That's okay to begin with, as long as you understand the nature of the data you're dealing with. Understanding the data is really, really critical. On a related note, there's one thing that is very specific to AI.

AI doesn't solve everything. It's a nice tool, a very foundational innovation. But there are real-world, in-the-wild datasets, problems, and scenarios where non-AI solutions might just work: classical solutions, simple parametric solutions where everything is interpretable, a clean combination of if-then statements.

So it's important to keep the door open to a non-AI solution space that can challenge and benchmark the AI solution space. By doing that, we understand exactly where the capability of the AI R&D lands and how to leverage it in the best possible manner.
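A minimal sketch of that benchmarking idea, with a hypothetical if-then readmission rule and an invented four-patient cohort; a real evaluation would use a proper held-out dataset and put an actual model on the other side of the comparison:

```python
# Sketch: keep an interpretable if-then baseline alongside any AI model and
# benchmark both on the same data. Rules, features, and records are invented.

def rule_baseline(patient):
    """Fully interpretable readmission flag: plain if-then statements."""
    if patient["prior_admissions"] >= 2:
        return True
    if patient["age"] >= 75 and patient["lives_alone"]:
        return True
    return False

def benchmark(predict, dataset):
    hits = sum(predict(p) == p["readmitted"] for p in dataset)
    return hits / len(dataset)

cohort = [
    {"age": 80, "lives_alone": True,  "prior_admissions": 0, "readmitted": True},
    {"age": 55, "lives_alone": False, "prior_admissions": 3, "readmitted": True},
    {"age": 40, "lives_alone": True,  "prior_admissions": 0, "readmitted": False},
    {"age": 67, "lives_alone": False, "prior_admissions": 1, "readmitted": False},
]

score = benchmark(rule_baseline, cohort)
print(f"rule baseline accuracy: {score:.2f}")
# An AI model would have to beat this number to justify its added opacity:
# benchmark(ai_model.predict, cohort)
```

If the opaque model can't clearly beat the rules, the rules win by default: they can be read, audited, and explained to a clinician.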

And if it's just overcomplicating things, we don't use it; we go with a different type of solution. I used EHR as an example earlier because once health records started becoming electronic, there was a lot of database innovation that anchored it, and that leaked into it.

That was great early on. But now that same database innovation, with lots of interconnected threads, has become more complex to handle. And we have all of our providers spending their time typing, right? Because they need to talk to the database.

And the database is such a big, freaking complicated thing that it demands that kind of attention from them. Could we have gotten away with a different type of database? I don't know.

At this point, thinking about it retrospectively isn't helpful. But in the future, we could make the same mistake with AI.

You know, it's the first time I've heard it put the way you've said it, which is true: AI is not going to solve everything. We don't think of it that way, or at least maybe I didn't, that there are non-AI solutions we could use.

And that's really such an interesting and factual way to look at it. With data, I think we're trying to make everything AI, when in reality, to your point, it may not be an AI solution we should take. I think that's a big takeaway. I was at a conference, and I forget which CIO it was, but he was from a large health system.

And he was saying that back in the day with big data, everybody ran to big data and all these projects, and basically everything just failed, and we're doing the same thing with AI. Versus having, as you just said, these micro projects and micro groups, specific patients, and getting those wins there. But I've never heard anybody say, and I love it, that it may not be an AI solution. I think that's really a great point.

I mean, even in my early days, when I was working with Alzheimer's disease researchers, there was so much knowledge to be gained just by sitting and talking with a clinician who had interacted with dementia subjects, right? There is a ton of institutional memory in their heads. If only I could have captured that in the most effective possible manner; I kept thinking that as I was building my research.

And we've done some interesting work there. We did benchmark these large-scale generative models against simple parametric models. Yes, there are cases where they fail. There are cases where we saw success as well.

So I think this is a really important nuance that, as you were saying earlier, gets overlooked because of the AI buzz.

Yeah, I love it. I'm so glad you brought that up in that way. I think it's another one.

So, you know: think differently. Last question for you, and thank you for staying a lot longer. Looking ahead, where do you see technology having the most meaningful impact on patient experience?

Not the back office or the billing, some of the things we spoke about a little on the admin side, but the actual experience the patient has before, or maybe even after, they interact with their care team.

Yeah, it's a little bit related to what I was saying earlier about representations. Because you brought up vibe coding, I'll stick to that example. The thing with vibe coding is that it separates the syntax from the architecture.

It separates the engineering from the architecting. Both are important; it's not that one is a lower class that doesn't compare to the other. Both are equally important and necessary, but the ability to distinguish between them gives us the right way to approach the problem at hand. So when it comes to something like patient experience, the syntax shouldn't play a role.

It's about the architecting. It's about the experience I want my patient to have when they walk out of the clinic today and, let's say, when I meet them again two months from now. As an example again: how do I expect them to be, or what is the ideal recovery that I want to see in them? Now, if we can phrase that, if we can describe that, there is a persona associated with it, right?

You have a list of patients you can potentially cluster, group them into different categories, and you have a persona associated with the ideal recovery for each of those categories. Now, this has nothing to do with the way you deliver the instructions, the represented instruction. That's all syntax. That can come separately.
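That separation, personas and ideal-recovery targets on one side, delivery mechanics on the other, might look like the following in code. The persona rules, thresholds, and recovery targets are all invented for illustration; a real system might cluster on richer features, but the structure is the point:

```python
# Sketch: the "architecture" is the mapping from patient -> persona -> ideal
# recovery. How instructions get delivered (app, wearable, printout) is
# "syntax" and lives elsewhere. All categories below are hypothetical.

IDEAL_RECOVERY = {
    "low_support":  "walking unassisted at the 2-month follow-up",
    "mid_support":  "walking with an aid, pain controlled, at follow-up",
    "high_support": "stable vitals, home-care visits in place, at follow-up",
}

def persona(patient):
    """Assign a recovery persona from simple, interpretable features."""
    if patient["age"] < 65 and patient["comorbidities"] == 0:
        return "low_support"
    if patient["comorbidities"] <= 2:
        return "mid_support"
    return "high_support"

patients = [
    {"id": "A", "age": 52, "comorbidities": 0},
    {"id": "B", "age": 71, "comorbidities": 1},
    {"id": "C", "age": 68, "comorbidities": 4},
]

for p in patients:
    group = persona(p)
    print(p["id"], group, "->", IDEAL_RECOVERY[group])
```

Because the delivery layer is kept out of this mapping, it can be swapped or tuned over time without touching the definition of what a good recovery looks like.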

That can stand on its own. So the way I see the big thing, to your point about where technology has the most meaningful impact, is this: I can clearly articulate, for the kind of persona my patient is, this is the ideal recovery, and here is what they should be feeling like and looking like when I next interact with them.

And if that's not the case, then you're doing something wrong. If it is the case, how do you build your system, your technology, and the rest of the syntax to approach this meaningful impact and gradually build it over time? There are, of course, nuances here. There's continual learning, adaptive learning, that we need to do on these syntaxes.

You need to tinker with the approach over time. There's handholding, there's coaching, sure. But it's all driven by the actual architecture of how you want the recovery to manifest, basically.

Okay.

I mean, this has been an amazing discussion, but boy, you blew my mind a little bit here, because it's true, right? In marketing or sales, sometimes we're like, well, what's the persona? Who's the ideal customer? What's all this?

But you've just reframed something, and it's really brilliant, right? So, doctor: what's the ideal outcome? How should that patient be, feel, and look? And if you build toward that, well, I don't even know if anyone's thinking of it in that context, right?

If we start thinking from that end state, the absolutely ideal one, I wonder what we would do differently in the whole process. That means a lot of different things. If that really is the end outcome and the true definition of patient experience, the patient's best outcome, how they feel and how they look, boy, it would be amazing if that were the most meaningful impact, right? Because it would solve, I think, everything.

Yeah. I mean, yeah, I agree. That's what technology is for, right? These are tools.

We can use technology to get our minds entangled in webs, or we can use it to clear a path. So I see a lot of promise, not just in AI, but in these interactive computational systems generally that we've been building over, again, maybe the last half century or so. And this is really the thing: what is the ideal place I want to be?

And what is the ideal person? I'm using the word "person" for lack of a better way of phrasing it. But really, there is a North Star that every provider, every doctor, every clinician and physician has in mind, because they have a recipe for what good health is, right? I think capturing that and walking backward from it is really critical.

Yeah, I love that. What's the North Star? And then building that experience for the patient. I think that's the best place for us to wrap up this podcast.

But Vamsi, thank you so much. What a great, great interview. You gave us so many things to dig into. And I believe the healthcare listeners have probably learned a whole lot.

They're going to go back and either redesign, or have different discussions, or even just think differently. So Vamsi, thank you for your time.

It was a fun chat. Thanks, Lisa.

The questions were great as well. I mean, they got me thinking.

Okay, good. Well, I may have to twist your arm and have you come back again.

But thank you. We look forward to that. Thanks, Lisa.