Episode 69: Zak Kohane, Chair of the Department of Biomedical Informatics at Harvard Medical School

Episode 69 March 17, 2026 01:02:26

Show Notes

Today on the Biorasi podcast, we look back on three years of AI innovation in healthcare and clinical trials - breaking new boundaries and creating a new reality across all industries.

Host Chris O'Brien welcomes back Zak Kohane, AI Oracle and Professor and Chair of Biomedical Informatics at Harvard Medical School. We'll journey back to Zak's first appearance on the podcast and take stock of what three years means to AI development, including widespread adoption by patients and physicians, how cancer still outsmarts today's LLMs, and the importance of sharing prompts with fellow AI travelers.

Find answers to the top AI questions in this new episode!


Episode Transcript

[00:00:00] Speaker A: Welcome to the Biorasi Few and Far Between podcast. I'm your host, Chris O'Brien. [00:00:09] Speaker A: So how is AI actually doing? As it races up the capability curve, the healthcare industry is still unsure of how to respond. Meanwhile, patients and physicians are testing its boundaries right now, often without guardrails in place. My guest today, Zak Kohane, returns to Few and Far Between to revisit his bold 2023 predictions. Zak is an AI Oracle and the professor and chair of the Department of Biomedical Informatics at Harvard Medical School. He's also the editor of the New England Journal of Medicine's AI journal. On today's episode, we look back on three years of AI development, reporting on its wins, its failures, and the new reality it has created for clinical trials. We'll also discuss how insurance companies and healthcare systems are using AI, how cancer outsmarts LLMs, and why sharing prompts with fellow travelers keeps driving AI forward. It's great to have Zak back on the show, and be sure to check out the comments for the books and videos he mentions. Okay, let's start the podcast. Professor Zak Kohane, welcome back to Few and Far Between. [00:01:13] Speaker B: Glad to be back. [00:01:14] Speaker A: It has been a while, so I've been really looking forward to this. As we record this, we're in a moment of quasi mass hysteria about the changes in AI that are happening. So we'll probably do a few late-breaking things. But I thought it would be good to start with the last time you were on, which is a couple of years ago now. You were kind enough to make a couple of bold predictions, and I thought maybe we could start with how AI is doing as it moves up the capability curve. So the first one was what we talked about as the end of the diagnostic odyssey.
Has AI succeeded in shortening the timeline, particularly in rare disease, for people getting to a diagnosis? [00:02:05] Speaker C: So [00:02:07] Speaker B: I would say I get partial credit for that one. And I get partial credit because in the United States, yes, faster than in other countries, consumers have not waited for permission to use AI for healthcare. [00:02:25] Speaker A: Yeah, agreed. [00:02:26] Speaker B: And so there are literally thousands of stories now of patients. And what's interesting, by the way, is that it's very hard to track this, because it's happening on the patient side. There's no incentive, really, for doctors to be writing this up in the literature. [00:02:42] Speaker A: Oh, that's a really good point. [00:02:43] Speaker B: Yeah. [00:02:43] Speaker A: And I have seen this too, anecdotally, people saying, you know, I put all my symptoms in and I got an alternative, at least, to explore. [00:02:52] Speaker B: You know, all my friends, all my friends' patients, people with their pets are doing it, because they cannot actually get to their doctors. [00:03:03] Speaker A: Yeah. [00:03:03] Speaker B: And in fact, I think since we last spoke, I wrote a New England Journal of Medicine article entitled "Compared with What?", which makes the point: if you have AI, what should you be comparing it to? Should you be comparing it to the ideal healthcare system, where you have an instant conversation with your primary care doctor, or the one that we have, where you're lucky if you can have a conversation with your primary care doctor in six months? It's gotten to where, in Boston, our own doctors in training, our residents at Mass General Brigham, do not have primary care doctors. That is right. Because all the practices are no longer accepting new patients, because the fundamental economics and sociology around primary care mean it is totally screwed. [00:03:55] Speaker A: Yeah. No one has an incentive to go into primary care.
None of these young docs. Right, exactly. [00:04:00] Speaker B: And so therefore a lot of these diagnostic odysseys are being accelerated, initially, by this. [00:04:14] Speaker A: So what's the skeptic's case? Because I would have given you more than partial credit there. I feel like I use it very actively when we've got something going on in our family, for this exact purpose. [00:04:23] Speaker B: So does everybody else. The reason I don't want to give myself full credit is because these things are still imperfect. [00:04:31] Speaker A: Yeah. [00:04:32] Speaker B: And, by the way, I'm saying you should use it. I use it. It's better than talking to nobody. It's better than doing, yes, a Google search. But these things can still err. They're best used by people who know what they're doing, still. [00:04:48] Speaker A: Yeah, no, that makes a lot of sense. You know, the other insight that I saw recently, from one of the frontier lab guys, was that if you're using a free version of ChatGPT, you're getting an outdated model. And so if you have something important to do, it's probably worth spending the 20 bucks to make sure you're getting the sort of best-in-class model today. And of course those guys are seeing the unreleased versions, so they're seeing six months down the road. [00:05:16] Speaker B: Yeah, I mean, I'll just tell you what I'm personally doing. [00:05:19] Speaker A: Yeah. Yes, please. [00:05:21] Speaker B: I think I'm going to probably release this as open source software very soon. Under the 21st Century Cures Act, you have a right to a copy of your record, and in a computational form, not just as a stack of photocopies or faxes. I see. [00:05:46] Speaker A: Okay. [00:05:46] Speaker B: And you can get it today, for example, through Apple Health. It'll ask you to register with your portal. Or you can.
There are a variety of other portals. And if you want, I can play you a 142-second video that I made, which shows you what I do for myself, which is: I pull all the data. [00:06:09] Speaker A: Yeah. [00:06:10] Speaker B: Like 180,000 facts from my electronic health record, and also some of the wearable stuff, all thrown in, and I send an AI after it. And that way I can use a frontier model like Claude Code or Claude, but against the whole context, not just the most recent visit. [00:06:30] Speaker A: Holy cow. Well, no, I don't want you to play it now, but I'd love to get the link and maybe we can link it in the show notes. I mean, I do think the idea of patient empowerment, I understand it can be overstated, and of course there are risks with error and also with user error, right, on both sides. But come on, the potential is really getting exciting. [00:06:50] Speaker C: So. [00:06:50] Speaker B: Okay, great. [00:06:51] Speaker A: So let's call that prediction number one. [00:06:53] Speaker B: Yeah. And also, by the way, the market has spoken. See, OpenAI is now openly embracing this. They have this pilot around ChatGPT Health. [00:07:06] Speaker A: So should we all be playing with that? [00:07:08] Speaker B: Yeah. [00:07:09] Speaker A: Yeah. Okay, good. That's another takeaway. Okay. The next prediction was AI as copilot, really aimed at docs. Right. So you're sitting right in the hot seat for this. What are young and experienced doctors doing? How much is it in use today? [00:07:28] Speaker B: So here's where I give myself full credit. In the end, I actually have good documented evidence that I won this one, but it's not in the way I thought I was going to win. [00:07:40] Speaker A: Excellent.
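The record-pull workflow described above, exporting your own record in computational form and handing the whole thing to a frontier model as context, can be sketched in a few lines. Below is a minimal, hypothetical Python sketch that assumes the export arrives as a FHIR JSON bundle, the format Apple Health and most patient portals produce; the function names and the prompt layout are illustrative, not the actual tool Kohane mentions.

```python
import json
from collections import Counter

def summarize_fhir_bundle(path):
    """Tally resources by type in a FHIR JSON bundle export,
    a first pass over the 'thousands of facts' in a record."""
    with open(path) as f:
        bundle = json.load(f)
    return Counter(
        entry["resource"]["resourceType"]
        for entry in bundle.get("entry", [])
        if "resource" in entry
    )

def build_prompt(counts, question):
    """Prepend a plain-text summary of the record to a question,
    so the model sees the whole context, not one visit."""
    summary = "\n".join(
        f"- {rtype}: {n}" for rtype, n in sorted(counts.items())
    )
    return f"My health record contains:\n{summary}\n\nQuestion: {question}"
```

In practice you would feed the full resource contents, not just counts, to the model; the summary step here simply shows how a portal export becomes text a frontier model can reason over.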
[00:07:42] Speaker B: Which is: over half the doctors in the United States, and something like 70 to 80% of the young doctors, are using frontier models every day to make decisions. [00:07:57] Speaker A: That's incredible. [00:07:59] Speaker B: But there is no oversight of this from the FDA, from the CIO of the hospital, or from any AI officer governance. Why does that matter? Because if you go to most of the modern hospitals and look at what the screens are open to on the nursing stations, or what's on the smartphones of these doctors, they're looking at these models, but via a particular brand, which I'll reveal in a second, that they log on to individually. And what is that brand? That brand is OpenEvidence. Yeah. So OpenEvidence has absolutely taken over the mind share of these individuals, displacing, although they probably will resent that characterization, something else much more carefully curated and in some ways up to date: UpToDate. Which is a very expensive tool. [00:08:54] Speaker C: Yes. [00:08:54] Speaker B: That multiple institutions pay for, like millions of dollars. [00:09:03] Speaker A: Yes. [00:09:05] Speaker B: Probably because it's never completely up to date. [00:09:08] Speaker A: Yes. Yeah. [00:09:09] Speaker B: And because it's wordy; it's getting longer and longer. Whereas OpenEvidence is kind of terse. [00:09:16] Speaker A: Let's, for folks who don't know, let's just explain what OpenEvidence is. [00:09:20] Speaker B: Yes, I apologize. So OpenEvidence started as a very thin, like barely perceptible, wrapper around GPT-4. [00:09:28] Speaker A: Right. [00:09:28] Speaker B: And now it's gotten much better. They've developed their own relationships with different journals. Although I have nothing to do with it, I need to declare that, in my role as NEJM AI Editor in Chief, the NEJM Group made a deal with OpenEvidence, as did JAMA and so on.
And I'm not privy to the details, but part of it, I'm pretty sure, was licensing some of our content to help make their models more accurate. Fascinating. So here's the point. The point is that doctors are using this everywhere. And so if I were young, I would too. And what do they do? They type, sometimes they cut and paste, into their iPhone. I hope they're not pasting in identifiable information. [00:10:24] Speaker A: Right, right. And legal officers are blanching now. [00:10:28] Speaker C: Right. [00:10:29] Speaker B: Yeah, yeah. But the fact is they're doing it, and I think, first of all, it tells us that young doctors, as they should, vote with their feet. They're incredibly time-pressured. [00:10:42] Speaker A: Yes. [00:10:42] Speaker B: They want to do the best for their patients. And so they want to have the latest recommendations, and they'll put it in and say, what are the recommendations? And this is what OpenEvidence does for them now. What an indictment of the healthcare system, for sure, that our hospitals have been so "whoa, whoa, whoa, we need governance" that they slowed things so far down that, in fact, it's not available. [00:11:09] Speaker A: Is it fair to say that the docs have kind of left the hospital administration behind on this completely? [00:11:16] Speaker B: That is not fair. That is accurate. [00:11:18] Speaker A: That's just a statement of fact. [00:11:19] Speaker B: That is a statement of fact. [00:11:20] Speaker A: Do you hear hospitals and administrators, senior faculty at your institution, grappling with this? What do we do about it? Because obviously ignoring it doesn't feel like the perfect solution. [00:11:33] Speaker B: Yeah, well, I know that they're talking about it. I don't think they have any good solutions to it. [00:11:38] Speaker A: Yeah, no one knows what to do. [00:11:39] Speaker B: Right.
[00:11:40] Speaker A: Because there's not much of a legal framework. I'm sure, you know, if one of us were a lawyer, God forbid, that person would be saying, hey, there's all kinds of risk here. And if docs are doing it individually and we institutionally don't know about it, well then we aren't responsible for it. I suppose that's the argument. [00:11:58] Speaker B: I think that's a pretty good characterization. But because of the fact that our healthcare institutions, as I've increasingly been trying to say openly, are high-revenue, low-margin institutions. [00:12:11] Speaker A: Yes. [00:12:12] Speaker B: They're super anti-self-disruption, and therefore it's very hard for them to do this. Now, one more thing that we should be aware of. What's the business model, as far as I can tell, for OpenEvidence? Advertisement. [00:12:28] Speaker A: Yeah. Right. [00:12:30] Speaker B: And so these doctors are getting advertisements. [00:12:33] Speaker A: Yes. [00:12:34] Speaker B: And this is something that we used to shy away from a lot. I used to have the great joy of having pizza when I was a resident, because drug companies would pay for it. But then it was decided that I was too fragile for that. So, no more pizza. [00:12:49] Speaker A: Yeah. [00:12:50] Speaker B: Now we have like the equivalent of [00:12:52] Speaker A: pizza, hot and cold running ads, all the time. Yeah, yeah, yeah, yeah, yeah, yeah. [00:12:56] Speaker B: What a failure that is of our healthcare system, because, again, I think the doctors are doing the right thing. They want to help their patients, they want to do their best. But our healthcare system is slow to embrace this. [00:13:10] Speaker A: Honestly, the exciting part here is that the actual individual doctors are making these choices and, as you said, voting with their feet.
That brings us to another related prediction that you talked about, which was the democratization of expertise. And you know, I don't live in Boston anymore, but I did at one point for many years, and I knew I had access there to world-class specialists in pretty much anything you could imagine. Rural docs. We talked about doctors in the third world, or the developing world, I should say, having access to the same kind of expertise. How close are we to making that a reality? [00:13:49] Speaker B: Well, it relates to the beginning of our conversation, unfortunately. People have asked, will AI replace doctors? The problem is we actually don't have enough doctors. And so again, in primary care, we've had to make ourselves in some sense better experts in order to be better patients, because our doctors are not there when we need them to be. So that democratization is happening, but it's also intersecting another trend, which is that patients, as they used to do with Google searches, are now coming to doctor visits with a very detailed differential diagnosis. And doctors, to be fair to them, are rightfully complaining. Yes. Because, A, it's another thing they have to explain. And the problem is, both to the credit of the AI and to its discredit, as I like to joke, like Harvard faculty, these AI widgets talk very confidently. [00:14:59] Speaker A: Yeah. Very compelling. Yes. [00:15:01] Speaker B: Even when they're wrong. And so the problem is, and they speak in medicalese, it's a bit of a challenge for a doctor to explain to the patient why that AI may be wrong. [00:15:16] Speaker A: Yeah. [00:15:17] Speaker B: And so it's, I think, an increasingly real burden, actually, to the visits. So on the one hand there's a democratization of expertise; on the other hand there's sometimes a false impression of expertise. [00:15:33] Speaker A: Yes. [00:15:33] Speaker B: And a doctor who's really had the experience right now will know, you know.
Yeah, technically that's true, but this does not apply to this case. And explaining to the patient why it doesn't apply to this case takes a lot of time that, in a 10-minute visit, no one has. [00:15:51] Speaker A: And I suppose that, to your point, they're better at it now. So where previously everyone rolled their eyes at Dr. Google. [00:15:57] Speaker B: Yeah. [00:15:58] Speaker A: You know, that was a pretty easy thing to dismiss. Now it's, you know, an eight-bullet-point explanation of why you have this disease. [00:16:04] Speaker B: Exactly. With the references, which now increasingly are accurate, and speaking like a doctor. It's a challenge. [00:16:12] Speaker A: Yeah. Yeah. Okay, that makes a lot of sense. But do you think that this sort of, you know, imaginary doc in a small hospital in the Midwest, or in a developing economy, that the access to OpenEvidence and tools like this levels that person up in terms of their ability to do their job? [00:16:35] Speaker B: It could. It all depends on the culture. [00:16:38] Speaker C: Yeah. [00:16:39] Speaker B: Will they be given encouragement or incentives to actually use AI? Interesting. And so, you know, I can tell you, even in my family, people are using AI on their own images. Yes. Real stuff that was missed by radiologists, for example. Yeah. But there have to be incentives for that. [00:17:00] Speaker A: Have you noticed any best practice there? Is there any medical system or government that is encouraging this? It feels like it would be a step-function opportunity, if you have a really crappy national medical system, to, you know, upskill your docs. [00:17:16] Speaker B: Well, the short answer is no. [00:17:20] Speaker A: Yeah, that's something. [00:17:22] Speaker B: In fact, you know, in Europe they're actually discouraging use of AI, even though they have at least as big problems with primary care as we do.
[00:17:33] Speaker A: Yes. [00:17:34] Speaker B: And they're so worried about AI and privacy, which I think is fair, but they're forgetting the other side of the equation, which is: how many patients could we help if we used AI? And they've really voted. I mean, they've decided to govern way on the other side, and I think that's a real cost to patients. The current government here in the United States is making a bet that actually going slower on regulation will enable quicker adoption. And it's an interesting bet. We'll see what happens. Yeah, right. [00:18:14] Speaker A: I feel like there are a few approaches right now. We have a lot of models being developed. The European one is clamped down very tightly. China is sort of a middle ground, right, where they have actual, meaningful AI regulation. [00:18:27] Speaker B: In fact, you're right. In fact, you're correcting me, because I do think there are some healthcare systems where they are encouraging the use of AI. Interesting. I'd like to know, in fact, if anybody among your listeners has direct evidence of that. I'd love to be able to publish a case report about that. [00:18:45] Speaker A: All right, you heard it here, folks. Write in if you've got it. Okay, next prediction. We talked about the old garbage-in, garbage-out problem of data quality. How are we doing on that, and what are the big data plumbing challenges or risks? And here I'm thinking more about drug development. But we'll get into drug development in a second. [00:19:04] Speaker B: Yes. Okay, so first of all, I just referenced that companies like OpenEvidence are realizing that high quality of data actually improves the product quality, basically purchasing licenses for the highest-quality medical journals, because not all medical journals are equal, and not all public domain knowledge is high quality. So if you focus on high quality, you'll actually get high quality. And that's already happening.
And on how to get healthcare system data: I think there are many efforts to try to plumb healthcare system data to improve these models. But there are two issues with that. One is, are our healthcare systems in fact great examples of the practice of healthcare? We have, for example, great healthcare systems like Kaiser Permanente, which are value-based; they're trying to improve outcomes. And there are others which are fee-for-service, where they are trying to improve outcomes, but they're also trying to improve income. [00:20:05] Speaker A: Yes. [00:20:06] Speaker B: So what do you want your model to learn from? And second, the technology is not quite there yet. And that's because learning over a million records, or even 10 million records, may not even be sufficient. Yeah. Because in the AI world, those are small numbers. [00:20:28] Speaker A: Small numbers, yeah. [00:20:29] Speaker B: And for instance, Epic has a Cosmos product which covers over hundreds of millions of patients, but it's very heterogeneous. Yes. Not all the data are the same, and not all the data are high quality. So I don't think we're quite ready on the raw data yet. Let's shift to pharma, if you want to. That's where I've noticed, in the last two years, a shift from a lot of skepticism to actually a lot of bullishness. And part of it is stuff that I expected, in fact anticipated, in the book that I wrote with Peter Lee and Carey Goldberg, which is the administrative part of it. There's an unbelievable hundreds of millions of dollars of bureaucracy in just running a trial and filing those documents. [00:21:19] Speaker A: Yes. [00:21:20] Speaker B: That's actually being completely taken over by AI. But I was also talking to individuals in these biotech and large pharma companies. And there are also small parts, before you even get to betting on individual drugs or leads.
But these small parts of the process end up being hugely important, even though they are not hyper-sexy. Like, you have a new antibody. Yeah. Is the viscosity of the fluid that you're injecting going to be the right viscosity so that you can actually run it through an IV? And it turns out there are good AI models that now predict it, so you can pick which one of your antibodies to go with. So there's a colleague of mine at Amgen who said: just 100 of those small inventions, and, he says, I know that five years from now, I'll turn around and I won't recognize my own company. [00:22:11] Speaker A: That's very cool. [00:22:12] Speaker B: A hundred of those small things will make a difference. [00:22:16] Speaker A: Yeah. This is almost like the difference between kind of the traditional American approach to innovation versus the traditional Japanese approach. Right. Like, yes, the Big Bang stuff matters, but 100 small innovations can dramatically accelerate the pace of discovery. That makes a lot of sense. [00:22:36] Speaker B: That's right. And then on the sexy part of it, the big bets around drugs, I think we're seeing some interesting, previously non-druggable targets that people are being helped with, and potentially selection of the right lead compounds. But I think the biggest influential bets that AI is now beginning to help with is: who do I make a bet on after a phase one trial? [00:23:10] Speaker A: Oh, interesting. So in other words, you're saying, I've established, I understand toxicity, I understand sort of the basic dynamics. [00:23:17] Speaker B: I've got four at this level. I'm a human being. I'm just saying, okay, looks good. But is there anything in the data that's telling me it's going to fail in phase three? [00:23:28] Speaker A: Oh, that's very interesting. [00:23:29] Speaker B: So that's where some big money bets are being influenced by AI. [00:23:33] Speaker A: Okay, the kind of post-phase-one decision. Yeah, got it. And then, what do you think about. So.
So we've seen some high-profile failures of AI-initiated or AI-enabled, you know, drug development. Are you bullish on, you know, an AI-discovered drug making it through the clinic? Where are you on that? [00:23:57] Speaker B: The other use cases I described are, I think, the easier wins. I do think that we'll see, even perhaps this year, some successful phase twos for drugs that were in some significant part selected because of AI prioritization. And I think that will improve. You know, medicinal chemistry is a bit of an alchemical black art. And the alchemists, the wizards, are actually really good, but they're also very superstitious, and some things are done just because that's the way things have been done. And I think the AI part of it was oversold. I think we're going to see a steady shift, and it's not going to be as fast as some of the other components of the pharma use of AI. But I do think it'll shift, and you will see, I believe, better and better, lower-toxicity, more effective drugs. [00:24:57] Speaker A: I'm going to call that a middle ground position, Zak, because, you know, I saw a really funny thing saying if you talk to people at the AI companies now, they're all incredibly confident that AI is going to cure all major diseases and all that stuff. And if you talk to a lot of biologists, they're incredibly skeptical that this stuff's actually going to do anything. [00:25:15] Speaker B: Yeah. So I think it'll help us work things incrementally. But here's the sad fact. Cancer is so, so much smarter than any. [00:25:23] Speaker A: Yes. [00:25:24] Speaker B: Any AI model. And I think we'll push the frontier. But gosh, I sure wish the AI optimists were right. You know, here's the thing: if they could, let's say, not eliminate one of the major cancers but, let's say, halve the mortality. [00:25:48] Speaker A: Yes. [00:25:48] Speaker B: Of one of the major cancers, in five years, that'd be an outstanding victory. [00:25:53] Speaker C: Yes.
[00:25:54] Speaker B: I'm not. I think it's possible, but I'm not super optimistic. [00:25:59] Speaker A: So there's a bunch of money going right now into longevity, all these different longevity science companies, some of it funded by aging, wealthy humans. Yeah, yeah. [00:26:12] Speaker A: Do you like this as a field of discovery, or do you think, come on, these AI dollars are better deployed against some of the other things we're talking about? [00:26:22] Speaker B: Well, I think it's actually good to have a thousand flowers bloom. We saw, unfortunately, the result of groupthink around Alzheimer's. Yes, yes. Where we were just focused on a few proteins that everybody said would work. And anybody who had a dissenting voice was really [00:26:45] Speaker A: ground out from the conversation. Right, yeah. [00:26:47] Speaker B: And so, what I like about that. Obviously, there's going to be a lot of false leads and BS. Yeah. I think that that's true of any human enterprise. And I think if, like, 10% of it results in new experiments, in new pathways, it's going to make a difference. But it's a difficult thing to listen to, because there's 90% BS. [00:27:18] Speaker A: Yeah, right, right. There may be some gems in there, I guess. [00:27:24] Speaker B: Yeah, I'm convinced there are gems in there. [00:27:26] Speaker A: Okay. [00:27:27] Speaker B: It's just hard. And so I do believe that some of this, especially now that we're exploring different paths, will lead to meaningful changes in at least the worst forms of premature aging. [00:27:40] Speaker A: Yeah. Okay. I like that. Okay, here's a thesis statement. Tell me if you buy it or not. We know that Big Pharma has struggled with good returns on their innovation dollars. Big Pharma has these massive data resources. They're messy, they're not always linked, but they have a huge data advantage.
And if you believe that data, at least organized data, is the fuel needed to make these models work, then one could say that in the next decade we can see innovation opportunity for Big Pharma that we haven't seen in the past, through utilizing these advanced models on their own data. Do you think that that's a defensible thesis, or am I getting ahead of myself? [00:28:29] Speaker B: You're not getting ahead of yourself. So thesis number one is: they have a lot of good data, they just haven't exploited it enough. Yes. Counter to that thesis is: they have a lot of data, but it's the wrong data. [00:28:41] Speaker A: Okay, fair. [00:28:41] Speaker B: Yeah. And how do I know that? Because I'm hearing from a lot of people, including one of my own postdocs, now faculty at the School of Public Health, but now on leave. Andy Beam is CTO of a company called Lila Sciences. And this is one of several AI scientist companies. [00:29:02] Speaker C: Yes. [00:29:02] Speaker B: And I don't have any special insight into it except what I hear from Andy, but apparently they're going to scale up the data generation process: AI hypotheses, robots doing the bench stuff to generate lots and lots and lots of data. And if that's true, then what you'll see happening is, instead of looking under the lamppost, where most people were making their bets in pharma, now it's like, okay, here's a bunch of interesting new things, and the cost of exploring is going way down. [00:29:37] Speaker A: Yeah. [00:29:38] Speaker B: And it's not just Lila Sciences; it's Edison, which OpenAI has invested in. There are many AI scientist projects. One of my faculty, Marinka Zitnik, has an open-science project called ToolUniverse, and she's worked with multiple pharma companies. So it's an interesting bet that we need to generate more data guided by [00:30:03] Speaker A: AI. And just to double-click on that. So the different thing there is to link experimentation.
So, develop a hypothesis, test it quickly, validate it, move to the next hypothesis. Is that right? [00:30:17] Speaker B: That's right. Sort of taking the engineering fail-fast approach. Yeah, yeah. Get the science exactly right, and then, if it looks good, go there. Because obviously, if you're wrong, that's years going down the wrong rabbit hole. Yes. And so I don't know how the bet's going to play out. [00:30:33] Speaker A: That feels right, though, right? If you sort of step back and say, if you can shorten the cycle time to get to an answer, and many of those answers are going to be, it didn't work. That's a great result. [00:30:47] Speaker B: That's why these companies, these AI scientist companies, are individually getting hundreds of millions of dollars in venture capital. I mean, Lila Sciences, I think, has hundreds of millions of dollars of Series A funding. [00:31:02] Speaker A: Interesting. Series A. [00:31:04] Speaker B: Yeah, yeah, yeah, yeah. And Edison is similarly extremely well funded. And I would not be surprised if the exit strategy for one of these would be being purchased by a pharma company. [00:31:18] Speaker A: Yeah, it makes a lot of sense. And I mean, I guess the analogy we would draw here is, in the same way that people are excited now about the idea of models that can partially write the next version of their own code, they can be participants in. [00:31:31] Speaker B: This is Claude Code goes to the wet bench. I have my own wet bench factory. [00:31:35] Speaker A: Yeah, yeah. Claude Code applied to biology, kind of. Right. Yep. Okay, I love that one. That makes a lot of sense. That's going to be, I think, one to watch. Anything else you want to say about the drug discovery piece? Otherwise, I have a couple questions about the clinic. [00:31:50] Speaker B: No, I don't have a lot to say about it, except that, well, this is not an AI thing.
Or maybe it is an AI thing, which is AI can cause trials to go faster, but China is way out-competing us. Yes. On the use of AI to actually run the trials faster. So. [00:32:16] Speaker A: And with just simple innovation in how the process works. Right. Yeah, it's a much less bureaucratic and faster process. And I do feel like the biology community, the life sciences community, has woken up to this, and we've gone from "oh, you know, they do some nice me-too drugs in China" to "holy cow, this is fundamental innovation," and very quick advancement into the clinic, faster than we can do in the United States or in Europe. There's an awareness of that. [00:32:52] Speaker B: By the way, there's also the AI version of it right now. OpenAI has just claimed that some of the Chinese AI models like DeepSeek are basically distilling the major models that are being developed in the United States for their use. [00:33:08] Speaker A: Oh, that's fascinating. I hadn't seen that. So are they claiming infringement there, or are they. [00:33:14] Speaker B: I don't know what the legal term is, but they're basically saying that the models are being copied, essentially, in a more efficient fashion. Got it. [00:33:24] Speaker A: Okay, let's talk a little bit about what's holding AI back now. What are the things we ought to be trying to do? Let's do it this way. If you were advising a biotech CEO or a big pharma CEO on what they ought to be trying to lean into, what should that person be doing? [00:33:49] Speaker B: Well, I think what that person should be doing is not buy into the hype, but really use the latest tools. And the best CEO, especially a scientifically oriented CEO, will be working with their scientists, using these tools, and seeing how far can I push them and what is their use. And actually, it's my same advice for medical students. Use these tools.
Your medical school is not going to be teaching you anytime soon how to use these tools. And if you're in pharma, don't just go and send out an email saying everybody should be trying to use these tools. [00:34:25] Speaker A: That's not very useful, is it? I've tried that. Doesn't always work. [00:34:28] Speaker B: What you need is to actually have some people who are knowledgeable both about the science and are comfortable using the tools, having sessions where they actually use them and see how far they can push it. Yes, I am convinced this is an accelerator. [00:34:43] Speaker A: But. [00:34:43] Speaker B: But it's an accelerator that requires your institutional intelligence as part of it. So you have to have thoughtful, thought-leader-led processes within your own organization. That's great. I like that a lot. [00:34:58] Speaker A: And that's, I think, a great takeaway lesson for CEOs all over the place. I mean, we've seen this in clinical research at Biorasi too. You have to find people who can act as evangelists, and they have to be subject matter experts. You can't generalize it. Yeah, exactly. Okay, next question. One of the challenges with self-driving cars was about, you know, liability and responsibility and figuring all that stuff out. If we analogize that to the idea of an AI doctor: do you see an AI doctor? Are we going to have licensed AIs who you can see in the near term? Where does that go? [00:35:35] Speaker B: I think it's probably the wrong question. But first let me note the value of the question, because I think it's now becoming pretty apparent that self-driving cars, even if you don't think they're that self-driving, are causing far fewer accidents. [00:35:55] Speaker A: Yeah, they just. Exactly. [00:35:57] Speaker B: I mean, as some of us get older, we are much safer with a self-driving car than not.
Yes, the liability issues should actually be looking in the other direction. [00:36:11] Speaker A: Yeah, it's a 90% reduction in car crashes, basically. [00:36:15] Speaker B: But recently, in my lab meeting, I was visited by the chief medical officer of Doctronic, and I was really impressed, actually. And why was I impressed? Because they're addressing, first of all, the problem that I was telling you about, which is no primary care. And how do they do that? First they have an AI session that you can sign up for, which is free. And you go through a session, [00:36:43] Speaker C: and [00:36:45] Speaker B: it is what it is. If you want, you then have an opportunity to say, I want to go to the next level and actually have a telesession with one of their doctors. And even before they got insurance involved, you know how much it costs to see a doctor? And think of this in the light of: my girlfriend just had her dog seen in Palo Alto at a clinic. $8,000. [00:37:16] Speaker A: Yeah. [00:37:17] Speaker B: Yeah. So $36. [00:37:20] Speaker A: Yeah, you know, we can agree 36 is less than 8,000. That's extraordinary. [00:37:24] Speaker B: $36. And it knows everything that happened in the prior AI session. [00:37:31] Speaker C: So no. [00:37:32] Speaker B: No repeating. Yeah. Bringing all that to bear. And Doctronic has pioneered this thing that's so obvious but medicine was so scared to do, which is automating refills. I think it's in Arizona. It's taken advantage of the fact that, at the state level, they've really decided they want to lean into AI. [00:37:53] Speaker A: Yeah. [00:37:54] Speaker B: Basically it's: if X, Y and Z protections are done, it's okay to have refills of certain classes of drugs done. So no longer having to call your doctor to refill. Just go through the site. Yeah. And unlike the doctor's secretary, which typically is all you get, saying I'm going to talk to the doctor.
Wink, wink. Yeah, it's refilled. This actually looks at what other drugs you're on and actually makes the decision. [00:38:22] Speaker C: So. [00:38:23] Speaker A: So better, faster, cheaper, like all three. [00:38:25] Speaker B: So that's my answer to you, which is you're going to see a creeping of that boundary. Got it. [00:38:33] Speaker A: So it's not Dr. AI as much as it is AI-enabled docs, [00:38:40] Speaker B: and part of what used to be the doc's job is actually being taken over. [00:38:46] Speaker A: Yeah. Okay. Yeah. [00:38:47] Speaker B: So getting a blood pressure check. So right now it's just the refill, but I can imagine "just to check in on how you're responding to your blood pressure medication" could be a full AI session before long. [00:39:03] Speaker C: Yep. [00:39:04] Speaker A: Okay, that's very helpful. All right, I'm going to flip us now to the. Well, I guess let me ask one ethics question before I move to kind of a lightning round of quick final questions. I'll ask it in the broadest possible terms. What are your ethical concerns, if any, today about how AI is moving into our lives, into the health of our lives or understanding our health? [00:39:29] Speaker B: So right now I don't have that many concerns. You know, a lot of people are worried about, for example, an AI talking to you and perhaps being too friendly and too validating, which may or may not have led some people to commit suicide, which is sad. But unfortunately, all sorts of things in our lives are triggering people into committing suicide, including our social media, including all sorts of interactions. And it's certainly not the intent of these AI models to inflict harm. But I'm absolutely convinced that the healthcare stakeholders, and I wrote a Boston Globe article about this. And who are the stakeholders? Of course it's the patients, but it's the healthcare system, the payers.
The pharmaceutical companies. Yes, the pharmacy benefit management companies. As they start to understand the role that AI can have and is having in the healthcare system, they're going to be figuring out how to put their thumb on the scale to influence the behavior of these things. Yes. [00:40:48] Speaker A: Okay. [00:40:49] Speaker B: Because if an AI out of the box today says, you have back pain, you may or may not want to consider an MRI. [00:40:57] Speaker A: Yes. [00:40:58] Speaker B: But a healthcare system, for example, is depending on an MRI for every single person coming in with back pain. [00:41:04] Speaker A: Yes. [00:41:06] Speaker B: Soon the AI that is being used by the health system might have a [00:41:11] Speaker A: tendency to please that system. Yeah. [00:41:14] Speaker B: And there are currently already class action lawsuits against payers that use AI for adjudicating authorization. Those are actually not generative AI; they're more old-school predictive AIs. But nonetheless, there are literally billions of dollars at stake. And in this NEJM article that I wrote with my colleagues, we gave it a quick example patient. This is back in the days of GPT-4. You have a patient who is a 13-year-old boy. He's short, 10th centile, but not dwarfism-short. Everything checks out, nothing comes up. Growth hormone lowish, but not clearly deficient. Do you recommend it? If you say, you're a pediatric endocrinologist, it says give growth hormone. You work for the insurance company, it says don't give growth hormone. Punchline. I say this as a pediatric endocrinologist: if you're a short kid and you don't have growth hormone deficiency, as this kid does not, [00:42:33] Speaker A: Yep. [00:42:34] Speaker B: you're lucky if you get an inch or two for years of daily injections. It's probably not the right thing to do.
But notice how, just by telling the thing that it has a different role, it switched by 180 degrees. [00:42:46] Speaker A: Oh, that's fascinating. [00:42:47] Speaker B: And you know, we were always worrying about the training data and what creates bias, bias between white or Black or Asian patients and so on. I'm here to tell you that in this case it was a user prompt. But user prompts are becoming system prompts with these healthcare stakeholders. Those system prompts are the ones where I truly worry, because we understand human nature. [00:43:12] Speaker A: Yeah. [00:43:13] Speaker B: And we understand the way the world works. Healthcare institutions, stakeholders, are organisms that want to survive. [00:43:20] Speaker A: Yeah. [00:43:21] Speaker B: And they're going to maximize whatever they have to do to survive. [00:43:25] Speaker A: And maybe another way to say that is: AI as it exists today is a pleaser. And who is it trying to please is a question you should ask yourself. And I find, if I'm considering a decision, a business decision I want to make, I will oftentimes ask it to assess that decision from a few different personas. And then it's human judgment that comes into play at the end. That's today. [00:43:55] Speaker B: That's today. But society is starting to catch up, and our institutions are late to the game compared to users, as they were with the early Internet. The early Internet was user driven, and then people figured out, this is how we're going to extract money out of it. And our institutions are going to start figuring out, how do I manipulate this medium? [00:44:22] Speaker A: Yeah, we've got plenty of past evidence to say that kind of hearts-and-flowers, everything-will-be-great prediction of the future is probably unrealistic. Right. [00:44:31] Speaker B: And so that's.
And therefore we have, I think, a window where we have to decide: can we actually have a patient-first set of institutions, essentially a Consumer Reports that says these AIs were aligned to maximize the health of the patient. Yeah, yeah. And minimize cost. [00:44:58] Speaker A: Yeah, yeah. Just this week we saw an essay come out that I think was read 50 million times or something, that something big is happening: the decline of software programming as a discipline, and the change, really in the last few months, to, you know, conductor of the orchestra of agentic solutions as opposed to actual coder, even at the very, very high end. So when you step back, do you feel like today, February of 2026, AI is underhyped or overhyped? And I'm asking, I guess, from two perspectives again: yours, and what do you think nationally? Do you think we have an awareness of how fast things are going? [00:45:38] Speaker B: So this is actually a great topic, and one that amuses me immensely, because I myself have a PhD in computer science. [00:45:46] Speaker A: Yes. [00:45:47] Speaker B: And back in the day I thought people going into computer science, like me, are smart, and the fact that I was doing this work shows how smart I am. [00:45:58] Speaker A: Yes, yes. [00:46:01] Speaker B: And the fact is, things like Claude Code or OpenAI Codex are so good at programming. [00:46:09] Speaker A: So good, right. Yeah. [00:46:10] Speaker B: Unbelievable. But here's the thing. There's a reason why they're good. Programming is a much more closed and formal universe than the world. And so this thing that we were priding ourselves on is in fact much more susceptible to excellence in AI. I mean, computer programming is a language, and we already know that AIs are good at language. They're even better at programming languages than they are at human languages. It's much more formal.
And so those of us who are involved in programming are saying, oh my God, it's smarter than us. [00:46:47] Speaker A: Yes. [00:46:49] Speaker B: And in some sense it is. And so on the programming side, it's really amazing how much it can get done, although you still have to be careful. But when it starts hitting the common-sense universe, the larger universe, it's not as fantastic as people say. Nonetheless, I think as a whole, our governments and our populace are not aware of how big an impact it's going to be. And at the same time, these breathless tweets, which I enjoy, are just the squeals of agony of people like me [00:47:27] Speaker A: who are saying. Squeals of agony of smart programmers. Yeah. [00:47:31] Speaker B: Holy smokes. You know, it's taken over what I thought was so important, and so therefore it must be taking over everything. No, it's just that we had the wrong picture of where we fit into human cognition. [00:47:43] Speaker A: That's fantastic. So then, summarizing that, I guess you'd say AI is underhyped for the broad populace. Most people still don't understand how fast it's changing and how good the frontier models are. But the analogizing, doing a sort of straight-line assumption that whatever's going to happen to computer programming is coming for everything else in the next six months, that's probably overhyped, right? [00:48:08] Speaker B: Correct. Yeah, it might happen. I would turn six months into five years, maybe. [00:48:14] Speaker A: Yeah, yeah. Okay, you heard it here, folks. Do you want to speculate on AGI and what the future holds for that? [00:48:23] Speaker B: I don't even know what AGI means. If you're talking about superintelligence, I have no idea what that means. [00:48:28] Speaker A: Yes. [00:48:29] Speaker B: And I don't know how to recognize it. I can barely figure out how to recognize intelligence in fellow humans.
[00:48:38] Speaker A: And on a good day, I feel [00:48:41] Speaker B: like I can do that. [00:48:41] Speaker A: But not only on a good day. [00:48:44] Speaker B: We have admissions committees. [00:48:45] Speaker A: Yeah. [00:48:51] Speaker B: We have admissions committees where I'm called to evaluate fellow humans on how smart or good they are for the job. I think we do a terrible job at that. And so I think at some point people will start noticing that the same AI that is great at programming is also the best poet we've ever had, and so on. Yeah. [00:49:17] Speaker A: This nation-of-geniuses idea. Yeah. [00:49:19] Speaker B: But I think it's a waste of time to actually worry too much about it. [00:49:25] Speaker A: Okay. I think that's great. I'm in the same place: it's really getting better and it's very useful, and if you're not using it every day, you're making a mistake. And that doesn't necessarily mean that everything is changing in the next few months. [00:49:39] Speaker B: Exactly, exactly. [00:49:41] Speaker A: Okay, final question. Advice for, I'm going to say, three different cohorts: maybe a biotech CEO, a med student, and a regular person who are using AI, or should be, or want to be using AI. Are there specific tools you would recommend that they experiment with? Specific things they should be reading, newsletters they should follow? Any advice other than keep playing with all the stuff, which is of course a really good place to start? [00:50:11] Speaker B: So first of all, let's say that with feeling: keep on playing with all that stuff. Because right now, let's be clear, even the most far-reaching experts in AI themselves don't fully understand why their tools are working and when they're going to work well versus not work as well. So you have to, in your application domain, figure it out yourself. Yeah. So you actually have to use it. So that's number one.
Number two is, I think that you have to find other people in your discipline who are using these tools and figure out how to share best practices. [00:50:55] Speaker A: I love that. Fellow travelers. [00:50:59] Speaker B: Because, you know, I publish this journal, NEJM AI, and it's doing extremely well, but it's still not fast enough. If you're a medical student, make sure you're talking to other medical students who are using this, and keep a set of prompts that you've all found work the best. These models keep on getting better, but sometimes one is on top. So I think having fellow-traveler groups is absolutely key. If I were a biotech CEO, I'd say I want to create a biotech CEO club where we actually share, and say: this is not deep science, this is just sharing experience about what's working. I think that's what I would strongly recommend. [00:51:42] Speaker A: That's terrific. That's terrific. Great, great advice. All right, Zak Kohane, first of all, you've put my mind at ease as we head into the weekend that I'm not going to be replaced as a podcast host, this week at least. A great chat as always. Thanks for coming and joining us on Few and Far Between. It was a pleasure. [00:52:00] Speaker B: Take care. Have a great weekend. [00:52:02] Speaker A: Welcome, producer Adam. [00:52:04] Speaker C: Thanks, Chris. It's great to have Zak back on the pod again. There are so many questions to ask, and I just want to start with one of the topics from the 2023 podcast we did with him, which is the end of the diagnostic odyssey. So what Zak said is that 70 to 80% of young doctors today are using frontier models to make decisions without oversight or guardrails. Zak seems to think this is much less dangerous than it sounds. [00:52:37] Speaker B: Yeah. [00:52:37] Speaker A: So two things there.
First, I think we're not quite at the end of the diagnostic odyssey, but it was pretty cool to hear him talk about some of the ways that people are getting to diagnosis a lot faster, and that can be patients as well as docs. I think his point on doctors was that we're trusting a doctor's medical expertise and judgment anyway; now we are doing that in an AI-enabled way. And I think he finds that not particularly troubling. The people for whom it should probably be troubling are health systems, because they have no control over this. I guess it's kind of tricky if you run a hospital: if you endorse one of these tools, you're exposed to maybe some risk, some liability. So it seems like they are covering their eyes and pretending they don't see anything, when this is something we know most docs are using these days. [00:53:27] Speaker C: It's a good point. And I think it kind of leads into my next question for you, which is about prompts, which is something that you and I talk about a lot. And I kind of see it as the sieve for AI's fire hose of information: being able to get what you need out of it, as opposed to everything that is possibly available. So let's talk about prompts from the patient's point of view. Patients are increasingly relying on AI to make healthcare decisions. This is a move in the right direction, right? [00:54:04] Speaker B: It is. [00:54:05] Speaker A: We're in a sort of buyer-beware stage, though. Caveat emptor. Why do I say that? The quality of the prompt that you deliver and which model you're using do impact the quality of the results you get. So, you know, Zak talked a little bit about how, if the AI is prompted to view this from an insurance company's perspective or from a doctor's or hospital's perspective, it may come up with different answers. That's one issue. Secondly, you need to give the AI enough information for it to be able to hopefully give you a good answer.
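As a concrete illustration of the two points above, giving the model enough background and being aware of who the system prompt serves, here is a minimal sketch in the message format used by common chat-style APIs. The `build_patient_prompt` helper and its fields are hypothetical, not any vendor's actual API:

```python
# Minimal sketch: packing the background a clinician would want into one
# structured prompt. The role/content message format mirrors common chat
# APIs; build_patient_prompt and its fields are illustrative only.

def build_patient_prompt(symptoms, history, medications, question):
    """Assemble a context-rich message list for a chat-style model."""
    context = (
        f"Symptoms: {'; '.join(symptoms)}\n"
        f"Relevant history: {'; '.join(history) or 'none reported'}\n"
        f"Current medications: {'; '.join(medications) or 'none'}\n"
    )
    return [
        # A neutral system prompt. Note this is the slot a stakeholder
        # (insurer, health system) could quietly swap out.
        {"role": "system", "content": "You are a careful clinical assistant. "
                                      "List likely causes and when to seek care."},
        {"role": "user", "content": context + f"Question: {question}"},
    ]

messages = build_patient_prompt(
    symptoms=["lower back pain for 3 weeks", "worse at night"],
    history=["no trauma", "no fever"],
    medications=["ibuprofen as needed"],
    question="Do I need an MRI?",
)

for m in messages:
    print(m["role"].upper(), "->", m["content"][:60])
```

The point of the structure is simply that symptoms, history, and medications all travel with the question, so the model isn't answering from a one-line prompt.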
And I really like this idea of pitting a couple of different models against each other: giving them similar prompts and seeing how the results come out, and then, if they differ, challenging them to debate, putting the result from one into the other to see if you can get to a different answer. So there are a lot of things we can do to try to get more out of the prompts. And I loved Zak's line that AI models are a lot like Harvard professors: sometimes wrong, but always impressive-sounding. And so there is a little bit of a warning note there for all of us to not immediately assume that if it says, hey, it's cancer, it's definitely cancer. It probably just means you go see the doc and get a professional opinion. [00:55:24] Speaker C: It's an excellent point. It really is. And I want to transition now to the other half of that equation, which is when he said that the health systems and insurance companies are now using AI to kind of make their business better, and maybe there's a little bit of a contrast there with a more patient-centric focus. This isn't really new necessarily, but it really does seem like it's kind of heading in the opposite direction. [00:56:03] Speaker A: Yeah, I think those are two kind of powerful and conflicting forces. I think if we back up and think about the early days of the Internet, there was a lot of hope that the Internet would be a freewheeling, open, non-corporate place. And that, of course, is not what has happened. And so I think, you know, we should assume that there's going to continue to be corporatization of these models, but that doesn't mean they're not very valuable for individual patients and for healthcare providers and for drug discovery. I think we're going to continue to see lots more things being tried.
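The cross-model "debate" Chris describes, asking two models the same question and feeding each one's answer to the other when they disagree, can be sketched as a small loop. The `model_a`/`model_b` callables below are toy stand-ins for real API calls, so the sketch runs offline:

```python
# Sketch of a cross-model debate: query two models, and while they
# disagree, show each model the other's latest answer and let it revise.
# The model callables are hypothetical stand-ins for real API clients.

def debate(question, model_a, model_b, rounds=2):
    """Return a transcript of a short back-and-forth between two models."""
    answer_a, answer_b = model_a(question), model_b(question)
    transcript = [("A", answer_a), ("B", answer_b)]
    for _ in range(rounds):
        if answer_a == answer_b:  # agreement: stop early
            break
        # Each model sees the other's position and may revise its own.
        answer_a = model_a(f"{question}\nAnother model said: {answer_b}. Do you agree?")
        answer_b = model_b(f"{question}\nAnother model said: {answer_a}. Do you agree?")
        transcript += [("A", answer_a), ("B", answer_b)]
    return transcript

# Toy stand-ins: model A holds its position; model B defers once it is
# shown a dissenting answer.
model_a = lambda prompt: "see a doctor"
model_b = lambda prompt: "see a doctor" if "Another model said" in prompt else "wait and see"

log = debate("Persistent back pain -- MRI now?", model_a, model_b)
print(log[-1])  # model B's final position after the exchange
```

With real models the interesting case is exactly the one the loop surfaces: a persistent disagreement is the signal to go get a human professional opinion rather than trust either answer.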
And the main conclusion, I think, we should all draw from Zak's commentary is that things are changing in real time. And so it is incumbent on all of us to be experimenting with this stuff, talking to colleagues and friends, and just trying lots of things. [00:56:53] Speaker C: Yeah. And it kind of goes along with his fellow-traveler idea. I really love that idea. When we brought AI into Biorasi, I feel that is something you definitely were focusing on. And I wanted to ask, how do you build communities like that across biotech? [00:57:14] Speaker B: Yeah. [00:57:15] Speaker A: I think a comment I heard recently that I strongly agree with is: CEOs can't simply say, well, we hired an AI person, job done, now that's that person's job. CEOs need to experiment with and kind of play with a bunch of these AI tools, because they keep changing, getting better, et cetera. But there are still weaknesses, challenges, et cetera. And I think you can't evangelize for something that you don't spend time on yourself. It's not very helpful for a CEO to say "do as I say, not as I do" on AI. That's point one. And then point two, of course: if you're encouraging this at a corporate level, people come out of the woodwork who are interested in these things. It's not always gonna be your most senior people. It's not necessarily gonna be the various functional experts that you have. It could be somebody quite junior who is excited about this and is experimenting with it in real time. So I think a lot of our job as leaders in all organizations now is to, you know, put the call out and find this coalition of the willing, the curious, the interested, who can then collaborate on ways that we can make these technologies useful. [00:58:29] Speaker C: Yes, I agree with that. And I think, again, it's something that we're practicing here at Biorasi.
I think we found that a lot of people in, like you said, very different departments, doing very different tasks within clinical trial management, are becoming experts in not only using AI, but also writing these prompts. [00:58:53] Speaker A: Yeah, I think that's right. And there's lots to be learned from each other about how to make the prompts more effective. And I think people are talking a little less now than they were six or 12 months ago about prompt engineering, about being an artful designer of prompts. But I still see some of it. And I find that if I take the time to write a thoughtful prompt and give the AI a bunch of context, I get better results out the other end. [00:59:18] Speaker C: Excellent. So, yeah, I guess my last question is something that Zak talked about, which I thought was very interesting. He said that the way we're training AI is based on the healthcare system we have, as opposed to an aspirational version of that health system that everybody wants, whether it's universal healthcare or something else. I mean, do you feel that the evidence is tainted a bit by focusing on either one? Because, you know, one is based in reality and one is not. And which is better? [01:00:01] Speaker A: Yeah, I mean, I don't know that I would go as far as tainted, but I think what I take from Zak is we need to view this through two different lenses. Lens one is the art of the potential, and lens two is the art of the possible. [01:00:16] Speaker C: Right. [01:00:17] Speaker A: So it is quite good to have people saying practically, you know, we live in this world with the constraints that we have: how can AI help us to be more efficient, get to answers faster, simplify the clinical trial process, simplify the drug discovery process more broadly? That makes a lot of sense.
At the same time, we need dreamers who say, how could we change things in a big way to make things better? And I think people naturally gravitate towards one or the other setting, and leaders, whether in academia or in industry, should be trying to talk to people who think both ways. [01:00:52] Speaker C: That's an excellent point. Yeah. Last time we had Zak on the show, we talked a lot about AI, and it's just so amazing to see how all of these pieces are kind of coming to fruition and moving faster than anyone could have thought. So [01:01:14] Speaker A: I'll tell you, when we first talked about getting him back on, selfishly, I just wanted to hear what he thinks right now. But I think we had some pretty good content in there, too, so I hope other people enjoy it. I'll say in closing, one of the things that he said that really struck me, and I think this shows up in a promo that we're doing, it was after we stopped recording the official pod, but he said he expects more change in the next six months than we've seen in the last three years. And of course, in the last three years, we saw more change in AI than in the previous, I think he said, 20. And so that is both terrifying and exciting, and a good reason to have Zak come back and join us on the pod in six months or so. [01:02:00] Speaker C: Completely agree with that. Right. [01:02:02] Speaker A: So if you're listening to this and you enjoyed hearing from Zak, tell us what your questions are, come back to us. And if you'd like to hear more from him, we will get him back on as soon as his schedule allows. Excellent. [01:02:13] Speaker C: Thanks, Chris. [01:02:14] Speaker A: Thanks very much, Adam.
