[00:00:16] Speaker A: Welcome to a special episode of the Few and Far Between podcast. I'm your host, Chris O'Brien. Our guest this week is Dr. Keith Flaherty. Dr. Flaherty is director of clinical research at the Massachusetts General Cancer Center, professor of medicine at Harvard Medical School, and associate physician of hematology and oncology at Mass General Hospital. Dr. Flaherty is celebrated for his research in targeted cancer therapies, with much of his research funded by the National Cancer Institute. In this episode, Dr. Flaherty and I focus our conversation on current challenges in the clinical trial industry. We discuss the need for more crystal ball insights into new therapeutic interventions for oncology patients, the lack of model systems for an estimated 70% of today's cancers, and breakthroughs in procuring, preserving, and interrogating tumor specimens in real time. We also touch on opening the aperture of today's AI platforms even wider to include complex, high-dimensional data sets. With so much to talk about, you may have noticed we ran out of time in this episode. I look forward to having Dr. Flaherty back as a guest very soon. Okay, let's start the podcast.
Dr. Keith Flaherty, welcome to Few and Far Between. I've been looking forward to this conversation.
[00:01:33] Speaker B: Thanks, Chris. Great to have a chance to talk with you.
[00:01:35] Speaker A: So one of the things I think that's exciting, well, there's a few things that are exciting about our chat today, but one of them is to kind of step back and say, from your vantage point today, what do you see as the biggest challenge or challenges in the clinical research industry today?
[00:01:50] Speaker B: Yeah, there are so many. Spoiled for choice.
[00:01:53] Speaker A: Yeah, exactly.
[00:01:54] Speaker B: And maybe we'll get a chance to unpack them somewhat completely. But I guess I would start with kind of a root cause statement, which is that basically we have an upstream problem as we enter clinical development for a novel therapy, which is we fundamentally lack model systems that give us crystal ball insights in terms of which patients by cancer type, or then, more specifically, which patients individually are most likely to benefit from a new therapeutic intervention. And so this is a real problem for us because we're pretty late in the game in terms of developing the concept of precision medicine in oncology, but we're really not in a stronger starting position now than we were 15 years ago. I guess I might even go back a little bit further. And again, it's fundamentally because model systems, and by that I mean immortalized cancer cell lines, patient-derived lines, organoids, conventional xenografts, patient-derived xenografts, the kind of whole progression, if you will. Yes, if you put them all together, my semi-quantitative summary of the state of affairs is that about 20%, and most optimistically 30%, of human cancer is reflected by those models. Wow. That creates a huge problem then, right, in terms of understanding, for a new therapy that might be relevant for eight cancer types based on some kind of preclinical evidence, how do you rank order those? I mean, how do you say, well, this is the most addressable population, most likely to be responsive, again by sort of cancer type or subpopulation within cancer. But then, you know, again, the precision medicine principle is ultimately like, well, what are the determinants of response at the individual patient level? And there we're in an even worse spot. So I would say again, that's the root cause problem, because then we move into humans.
[00:03:40] Speaker A: Keith, before you go there, would you have guessed a decade ago that we wouldn't have made progress there? Or would you talk a little bit about how your view might have evolved on that over time?
[00:03:49] Speaker B: Well, okay, maybe the point for optimism was that certain oncogene-targeted therapies came into existence against targets for certain cancer settings where we had reasonable representation of models available, so that you could start to tell stories about kind of recurrent prospective successes where kind of preclinical models actually pretty well predicted outcomes. But around that, there were other therapeutic types that weren't mutated-activated-oncogene-directed therapeutics and that were failing to translate, in other words, where there was, like, promising preclinical data, but then kind of completely falling apart in terms of any impact in the human population. But I'm actually either partly or largely alluding to the past, ballpark, eight, nine years of immunotherapy investigation after the monumental success of PD-1 antibodies, which were still running the table through the mid-2010s time frame. But as you know, there was this explosion of novel immunotherapy mechanisms. And that's, like, a really powerful example of where model systems just, we learned, were effectively completely impoverished in terms of providing us prospective insights.
[00:05:02] Speaker A: Fascinating. So then we're in this place where we're operating kind of blindly as we head towards the clinic, right? Because the predictive ability, the predictive power of those models is so limited in so many cases.
[00:05:13] Speaker B: Yeah, I'll just expand slightly on that to say there are whole cancer types for which we essentially have almost no models. The immortalized cancer cell lines: even common cancers, like prostate cancer, are incredibly poorly represented in those types of models. Glioblastoma, like a profound unmet need cancer type, where our available model systems are very few and far between. And then maybe just take it above the tumor type level and just say that a real challenge for us has been to try to constitute a model that contains cancer cells, but also the other cells that participate in a tumor, right, that are co-opted, right?
[00:05:49] Speaker A: Yes.
[00:05:49] Speaker B: By cancer. So microenvironment cells like endothelial cells, fibroblasts and the like, having those adequately, quote unquote, participate in a model has been a real challenge, and then the very highest-order challenge is the immune system repertoire. And I'll just kind of rush right to the end of that progression I alluded to before: the field has been scrambling to try to generate sort of immune-reconstituted xenografts, where you take a human tumor and either stem cells or even some more differentiated subset of proto-immune cells, and transplant those into the model as well. Very much of a work in progress, but the idea is to try to fully constitute the tumor in all of its complexity, and the host in all of its complexity. We've learned over the years, putting a tumor into an immunocompromised mouse leaves a substantial part of the picture out of the equation. And that's not only relevant for developing immunotherapies, is what I'm trying to suggest. So, again, I'm framing problems here.
[00:06:42] Speaker A: Maybe we'll shift, we'll get to the fun part. Solutions and progress in a second. But, yeah, let's stay on problems.
[00:06:48] Speaker B: So, any case, that's my kind of root cause part. Let me just jump initially, at least, into the clinical development arena, which is, ballpark, 60% of new drugs are coming from venture-backed biotech companies into clinical trials, into first-in-human clinical trials, the rest coming from large pharma companies. Obviously, the resources are fundamentally different between the two, and the clock is ticking more loudly, kind of, in one environment versus another. And what I'm getting at is it creates a real pressure as you transit through phase one-two. Right. Everything in oncology development these days kind of falls into the bucket of phase one-two, which can grow into a very long journey, even leading to FDA approvals in some instances, and then phase three in specific indications where needed, where the efficacy is not so profound, or where we haven't really homed in on a biomarker-defined subpopulation or the like. In any case, what I'm getting at is that when drugs are now in the clinic, and we, again, are scrambling to try to catch up in terms of this understanding, we face, like, a numbers problem, right? Dose escalation is a modest number of patients treated, staggered over a period of time. It takes you a minimum of twelve, usually 18, months to get through that time frame. And then you transition into the more efficacy-seeking sort of phase two component, and you're really in, when I say you, I mean we as a community, a collective.
[00:08:03] Speaker A: Collective.
[00:08:03] Speaker B: You, yes, those of us who collaborate with companies, and being someone who's founded companies and has kind of sat on both sides of this fence, we are really in a kind of mad scramble to figure out the answers to some of these profound questions, which is like, well, what are the tumor types where this drug really is going to have the greatest potential impact? And if it's the case that we have a line of sight towards molecular characteristics, features that will predict response to therapy, to make those discoveries, confirm them, and then rapidly try to develop a proto-companion diagnostic in time to catch up with the therapy, it's a mad scramble. Right? And again, in venture-backed biotech particularly, it's a real, real challenge. Ultimately, what I'm most complaining about is here we are in 2024. We're almost a quarter century into the era of precision medicine in oncology, and yet we still face this fundamental problem, which is that basically drugs are transiting through early clinical development, and basically final decisions are being made on how they're going to be developed in their kind of definitive registration-enabling mode without us having figured out which patients should get these therapies. And then that means you obviously are limiting probability of success. That's, I guess, problem number one. Problem number two is the drug gets on the market and it's only benefiting a small subpopulation of patients. And I've learned over the course of my career, it's really hard to gain alignment of interests and to pull together the needed resources to do the work in terms of refining the population after the fact, which is why I always go back to the need to problem-solve in this prospective mode, early in or before clinical development, which is, I guess, obviously where I started.
[00:09:45] Speaker A: Keith, will you riff a little bit on that? You mentioned that 60% of drugs that actually reach patients, a stunning number, are coming out through the biotech ecosystem. That's a highly constrained, resource-constrained system that creates some of the pressures you just described. But big pharma has, we know, a sort of disappointing track record on return on R&D dollars invested. You've probably been involved in both sides of this before. Why do you think that's the case?
[00:10:05] Speaker B: Well, I think it's all the same problem, to be totally honest. Right. I think it's the rank order priority, the difficulty in establishing a rank order priority not just for any given novel therapeutic mechanism, but for whole classes of mechanisms. Right. How do we think about increasing understanding of epigenetic dysregulation in cancer or altered cell metabolism in cancer? The immunotherapy piece I've already made some comments about. But in these very large areas where substantial new insights have been gained in terms of potential therapeutic targets, and we begin to see sort of prosecution of those targets, we're in uncharted territory just over and over again, and within one of these classes, or then across classes of novel potential therapy, how do you rank order them? And of course, here I've been focusing my comments largely on predicting efficacy. Obviously, toxicity is the other critical part of the equation.
[00:10:57] Speaker A: Yeah, sure.
[00:10:58] Speaker B: And we lack insights there as well in terms of really understanding what's going to have the best net therapeutic index, like kind of weighing both the efficacy and the safety side. So I'll just kind of transpose all those comments I made before about lacking crystal ball insights. That's the fundamental problem. I mean, if we could make these decisions preclinically, right, and kill concepts early, early, different world. I mean, fundamentally, then the research investment goes way down and you're taking forward only those therapies that have the highest possible probability of success. That's my major point of anguish here, is that we're doing all this work in humans, super expensive, and we're grinding through all of early development and taking drugs with sometimes very marginal evidence of human impact into large studies, which really crank up the research investment, where probability of success is still very much marginal at that point. So just to use kind of a more investment term here, it's like derisking at earlier time points, at the latest in earliest clinical development. But what I'm suggesting is we need to think about intersecting human cancers and therapeutics even steps earlier than that.
[00:12:01] Speaker A: Do you think there's a path to that? So when you sort of think about the case for optimism, do you think more about how do we make the actual trial process more efficient or this candidate selection and targeting process more efficient? I guess it's both.
[00:12:12] Speaker B: But it is, yeah, 100% both. Let me keep going down this rabbit hole, which is not the clinical trial efficiency piece, which I'm going to come to, because I think there's absolutely, positively huge potential there. Unrealized, to be honest. But anyway, let's stick with this more biologic question of, kind of, how can we prioritize and nominate agents at the earliest possible point? My point of optimism here really is, I guess, in two domains. On the preclinical side, it's leveraging a concept that is oftentimes now referred to as ex vivo functional diagnostics. And then there's an in vivo part, which I'll come back to in just a moment. And this is an old concept, as you may recall, in that chemotherapy sensitivity testing was a thing before the dawn of my career, really, like in the decade of the 90s or so. And as I kind of came into the field, I wondered about that history and kind of what was causing people to sort of disparage the approach. In any case, technologies have changed enormously in terms of how we can basically procure, preserve, and then interrogate a tumor specimen, like effectively in real time, like a living cancer specimen. That's what I'm alluding to with this term, ex vivo functional diagnostics. There's a number of academic laboratories, fairly few private-sector entities, that have really kind of put toes in this water or really have dedicated themselves to fleshing out kind of the SOPs and then corroborating or validating that if you take a human tumor explant with all of the complexity, I shouldn't say all, much of the complexity that was in the human environment, and expose those explants to certain types of therapies, including conventional chemotherapy, certainly tumor-cell-intrinsic targeted agents, as in oncogene-targeted therapy, BCL-2 antagonists, and the like, there's more and more evidence that supports the notion that there's a really strong correlation between how the explant responds and how the human being from which that explant came responds to the same therapy. Fascinating. But it bothers me so much that this is such, like, a kind of quiet background exercise, because this is the kind of area that I would like to see more public sector investment in, frankly, like to sort of more fully empower this.
[00:14:11] Speaker A: Yeah, that's fascinating. Right. Because in theory, you're saying this can help us to get better at targeting, which can save, who knows, insert some very large number of millions of dollars of misdirected trial effort. Right. And also time.
[00:14:23] Speaker B: An explant can be treated with dozens of drugs, in certain platforms hundreds of drugs, combinations of drugs. By having that many treatment conditions of the explant, whereas the human can only receive one treatment, you can actually rank order and say, okay, well, your novel therapy comes out around the 25th percentile. Not so impressive. Many more therapies actually score for sensitivity in not just one explant, of course, but like a set of these. And then, of course, part of the beauty of this system is that these platforms can ingest, if you will, multiple tumor types, multiple samples: lung cancer, breast cancer, colorectal cancer. And so you can be doing effectively the ex vivo clinical trial to sort of get this rank order sense of, okay, well, most reliably, it's non-small cell lung cancer that's sensitive to this approach. And so, again, this is where our model systems are failing us. And so I don't like calling this concept or this platform a model. Right. Because it's not derived.
We're not manipulating it over time, propagating it over years. I mean, this is real time, quick.
[00:15:24] Speaker A: Hitting testing modality, right? Yeah.
[00:15:26] Speaker B: I'm not trying to suggest we don't lose something in the process. In other words, tumor microenvironments are complex. They're nutrient poor, they're hypoxic. It's very hard to recapitulate all of that complexity when you take a sample directly to the laboratory. But at least you've still got the complexity of cancer cell heterogeneity, microenvironment features and types, and the infiltrating immune cells. Now, you don't have the whole body's worth of immune system, admittedly, but you have at least some representation of what's infiltrated.
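To make the rank-ordering Dr. Flaherty describes concrete, here is a minimal sketch of the bookkeeping such a platform implies: a candidate agent is scored against a reference drug library in each explant, its percentile rank is computed, and the results are summarized by tumor type. The explant IDs, drug names, and kill fractions below are hypothetical placeholders, not data or an API from any actual platform.

```python
# Minimal sketch: rank a candidate therapy against a reference drug library
# across a panel of ex vivo tumor explants. All identifiers and numbers are
# hypothetical placeholders for illustration only.

from collections import defaultdict
from statistics import median

# explant_id -> (tumor_type, {drug_name: fraction_of_tumor_cells_killed})
explant_results = {
    "EX-001": ("NSCLC",      {"candidate_x": 0.62, "cisplatin": 0.35, "docetaxel": 0.41, "osimertinib": 0.28}),
    "EX-002": ("NSCLC",      {"candidate_x": 0.55, "cisplatin": 0.30, "docetaxel": 0.52, "osimertinib": 0.71}),
    "EX-003": ("colorectal", {"candidate_x": 0.18, "cisplatin": 0.22, "docetaxel": 0.25, "osimertinib": 0.10}),
    "EX-004": ("breast",     {"candidate_x": 0.33, "cisplatin": 0.45, "docetaxel": 0.61, "osimertinib": 0.12}),
}

def percentile_rank(value: float, reference: list[float]) -> float:
    """Fraction of reference drugs the candidate matches or beats (0 to 1)."""
    return sum(1 for r in reference if value >= r) / len(reference)

# Per explant: where does the candidate fall relative to everything else tested?
ranks_by_tumor_type = defaultdict(list)
for explant_id, (tumor_type, kills) in explant_results.items():
    reference = [v for drug, v in kills.items() if drug != "candidate_x"]
    rank = percentile_rank(kills["candidate_x"], reference)
    ranks_by_tumor_type[tumor_type].append(rank)
    print(f"{explant_id} ({tumor_type}): candidate_x beats {rank:.0%} of reference drugs")

# Per tumor type: a crude way to nominate the most sensitive indication first.
for tumor_type, ranks in sorted(ranks_by_tumor_type.items(), key=lambda kv: -median(kv[1])):
    print(f"{tumor_type}: median percentile rank {median(ranks):.0%} across {len(ranks)} explant(s)")
```

Run on the placeholder numbers above, the per-tumor-type summary puts NSCLC at the top, which is the kind of "most reliably sensitive" rank ordering being described.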
[00:15:53] Speaker A: So that's fascinating. Described that way, it sounds like the kind of thing that would obviously make its way from academia into an expectation in the venture community: look, if we're going to fund you, we want to know that you've done this test; it's a small fraction of what you're going to invest in the trial. Why do you think it hasn't sort of become a mainstream way of thinking?
[00:16:09] Speaker B: Yeah, I've asked the question 100 times and pretty much get one response, which is, well, sure, if you show us that it works. In other words, if you show us that, and really the strictest version of that is a prospective example, right, of a therapy that, vetted through this approach, is nominated, particularly for a certain tumor type, and then goes into the clinic and is validated in that way. What I'm suggesting is there's more and more, quote unquote, retrospective data, meaning where we have therapies that exist now kind of vetted in this platform. And like I said, really powerful data sets where you've got the explant coming from a human being who goes on to receive that very same therapy. And in a cohort of such patients, those who don't respond clinically, their explant didn't respond, and obviously response correlating with clinical response. So I would say the "show us that it works" is being demonstrated. But no. I've literally had this conversation with a number of large pharma drug development leaders and then obviously many in the biotech community. No strangers in the Boston-Cambridge area for me, of course. And yeah, basically, I mean, look, it's taking on risk in terms of a novel approach like this. It'll be done when it's considered to be a derisking maneuver. Right now, it's considered to be of uncertain value for prospective prioritization and deprioritization, all of this with existing therapies.
[00:17:27] Speaker A: I'm smiling because I think that one of the things that we see, particularly in biotech, and it's about this resource constraint and the amount of risk that the biotech is taking on with this sort of core question of the company, is this going to result in a drug that's commercializable? Right. So I find oftentimes biotech executives are very conservative on everything else because they're taking a massive amount of risk with the core question.
[00:17:47] Speaker B: That's exactly right. No one's without, let's call it, drug risk, in other words, like being able to create your product, if you will. Dialing in diagnostic risk, as I was alluding to before, in terms of landing upon kind of the parsimonious biomarker that's going to be your proto-companion diagnostic, and then being able to race to get that diagnostic enabled for your registration-enabling trial and have it be able to suffer regulatory scrutiny, it's another aspect of risk. So then here I'm introducing another risk. Again, what I'm getting at is that our responsibility as a publicly funded academic research community is to develop platforms like this that are kind of shared resources for the common good, if you will, for the field. I'm optimistic, actually, that there's enough work that we're going to get there eventually. I've been frustrated by the relative underinvestment, and let me just sneak in this last point. There's a colleague of mine who's at Brigham and Women's Hospital, so, like, our kind of sister hospital in the Harvard system. He came out of Bob Langer's lab at MIT, where in the mid-2010s they developed this micro device the size of a grain of rice that can be implanted into a tumor in a patient, and it has 20 drug-eluting ports on it where drugs are released in a very controlled fashion, like adjacent to each port in the micro device. It can be implanted a couple of days before surgery; then, when surgery is done, one can use some cool technology to kind of ferret out the device and then basically read out cell killing.
[00:19:06] Speaker A: 20 answers. Yeah, exactly right.
[00:19:09] Speaker B: So again, now you've got, in a microscopic fashion, a single patient getting 20 treatments as opposed to just one treatment. And this is the kind of approach, again, that's just received minimal investment and exploration. But we all want the crystal ball; 100% agreement on that point.
[00:19:30] Speaker A: Hi, this is Chris O'Brien, host of Few and Far Between. We'll be right back with this episode in a moment. I personally want to thank you for listening to our podcast. Now in our fourth season, it continues to be an amazing opportunity to speak with some of the top thought leaders in the clinical trials industry. If you're enjoying this episode, please leave us a review on Apple Podcasts. It really helps people discover the podcast. And don't forget to subscribe to Few and Far Between so that you never miss an episode. One last request: know someone with a great story you'd like to hear me interview? Reach out to us at
[email protected] thank you. And now back to the podcast.
Do you see regulators as having an important role here? One of the things we've seen is that with things like adaptive trial design and stuff like that, regulators are sometimes more willing to embrace this in some cases than companies are. Do you think the FDA can shape this dialogue, or do you think it really has to be largely a commercial and scientific conversation between academia and the companies?
[00:20:28] Speaker B: Yeah, well, I'm tempted to answer like 100% the FDA can lead the way here. But the reality is the FDA, they ultimately need someone to show up on their doorstep, sure, proposing one of these approaches. But look, you well know that in the past 15 years, maybe even if I go further back to the dawn of my career in the year 2000, the FDA has really, in oncology, shifted towards this very kind of proactive, looking-ahead-to-where-the-field's-going approach. I mean, software as a diagnostic device, a diagnostic to improve clinical care, would be one such example. Creating the Oncology Center of Excellence so that diagnostics and drugs can be kind of co-considered, as I was alluding to before, when you're kind of racing to the finish line with both a diagnostic in evolution as well as a therapeutic, that's a maneuver that came into being, well, about 15 years ago, that really vastly transformed the ability for those kind of two development timelines to ultimately converge. And then, yeah, I know for sure, as you're saying, adaptive designs. Oh, I left out a point that I wanted to emphasize beyond this kind of ex vivo piece, getting back to the humans and the clinical investigation before we get to kind of scaling and the difficulties we've been facing in terms of scaling clinical investigation. Maybe just one thing to kind of reflect on that's been a real growth area and source of optimism for me, which is to really expand the notion of presurgical neoadjuvant therapy to include investigational treatments. Now, the breast cancer field has been an innovator as a specific discipline, for sure, in this space, but there are multiple tumor types where this same model applies.
And again, it's, well, who's responsible for building the platform, the capacity, right? And once again, publicly funded research institutions are 100% the ones who can and should. And I'm not suggesting it's not happening, I'm just saying it's being underdeveloped and therefore isn't there to handle the throughput of a whole generation of therapeutics. We're kind of catching up, really, in a way, in terms of trying to credential existing therapies, even in the neoadjuvant context.
[00:22:31] Speaker A: How do you think it got going in breast cancer? Was it the advocacy organizations and the investment they made, Susan G. Komen and the like, that were really pushing for an expanded view on that? Or, I wonder how it got going there.
[00:22:43] Speaker B: Yeah, I mean, the I-SPY investigator group was an academic group through and through, came together around this sort of mutual concern in terms of a very narrow pipeline of breast cancer therapeutics at the time, and very frustrated, I think, by some of the concerns I raised before, in terms of the way in which therapies were being introduced into clinical investigation, and where there wasn't an early vetting of therapies, just to ask fundamental questions about whether they're doing their molecular job, like: should we really continue to invest further? And that's where the concept was really, that's its origin story. There's no question the patient advocacy community then had a lot to do with pouring gas on that fire. No doubt, FDA has been centrally involved in the build out of that platform concept, not only within the I-SPY network, but even more broadly in breast cancer neoadjuvant studies. So, yeah, that's a nice example of at first public and then ultimately public-private membership. Now, it's still the case that we're still not seeing wholesale prospective investigation of therapies in small numbers of patients in that type of setting, where each patient can be enormously informative of whether a drug is doing its job, and then kind of the downstream questions from there. So it also maybe reminds me of an important point before we launch into more population-scale investigation, which is kind of the question you hinted at a little while ago. The other area where I think we need to be kind of focusing our attention, in terms of kind of maybe higher-complexity investigation, in a way, is taking a lesson from the neoadjuvant playbook and reflecting on the fact that we're now able to detect microscopic residual disease with increasing proficiency with just better and better diagnostic technology, mostly focusing on circulating tumor DNA, but not only. And as those methods have gotten more and more sensitive, we're now able to qualify what I refer to as a new cancer stage, which is basically patients postoperatively who have persistent disease detectable in blood, but not radiographically. Right. So this is essentially a new cancer stage. And in many cancer types, of course, we have standard adjuvant therapy that we offer blindly to patients in the postoperative setting, not knowing if they have disease or not. But what I'm getting at here is not only knowledge of who's got the problem, admittedly not with complete confidence in terms of false positive and false negative rates being completely ironed out, but here's what I'm coming to. We have the scenario now where patients who have microscopic residual disease present in blood receive standard adjuvant therapy and, after a short period of time, don't clear their MRD; those patients need more therapy. Right? Yes. So we're now framing the concept of, quote, unquote, second-line adjuvant therapy. Well, that's never been an idea before, because we were just.
[00:25:17] Speaker A: That's very interesting.
[00:25:18] Speaker B: Treating an unknown presence of disease. And what I'm getting at by raising that kind of prospect is, once we've gone through the standard therapy, well, then, now we're in a situation where investigational therapy is certainly well justified, in certain cancers sort of justified, at that interval. And you're talking about treating cancer at far less disease burden, less complexity, a lot less of the problem of trying to find the sliver of the cancer population with late-stage disease that has exhausted all standard therapies, is still in good health, and has the interest and alignment kind of with us in terms of trying an investigational therapy. Of course, being at Mass General, and this is true of other major academic centers, we very much focus on finding and collaborating with that segment of the population. But it's a pretty small sliver, is what I'm getting at.
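As a rough illustration of the triage logic behind this "new cancer stage," here is a minimal sketch with hypothetical field names and patients: postoperative patients who are MRD-positive by ctDNA, finish standard adjuvant therapy without clearing their MRD, and have no radiographic recurrence get flagged as candidates for a second-line adjuvant or investigational approach. This is a schematic of the concept only, not a clinical decision rule.

```python
# Schematic of the "second-line adjuvant" triage concept described above.
# Field names, patients, and the rule itself are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class PostOpPatient:
    patient_id: str
    ctdna_detected_post_op: bool        # MRD-positive by blood test after surgery?
    completed_standard_adjuvant: bool   # finished standard adjuvant therapy?
    ctdna_cleared_on_adjuvant: bool     # did ctDNA become undetectable on therapy?
    radiographic_recurrence: bool       # visible disease on imaging?

def second_line_adjuvant_candidate(p: PostOpPatient) -> bool:
    """Flag persistent blood-detectable disease, with no radiographic recurrence,
    after standard adjuvant therapy: the population where an investigational
    'second-line adjuvant' study could be offered."""
    return (
        p.ctdna_detected_post_op
        and p.completed_standard_adjuvant
        and not p.ctdna_cleared_on_adjuvant
        and not p.radiographic_recurrence
    )

cohort = [
    PostOpPatient("PT-01", True,  True,  False, False),  # persistent MRD: candidate
    PostOpPatient("PT-02", True,  True,  True,  False),  # cleared MRD: observe
    PostOpPatient("PT-03", False, True,  True,  False),  # never MRD-positive: observe
    PostOpPatient("PT-04", True,  True,  False, True),   # overt recurrence: different pathway
]

for p in cohort:
    status = "second-line adjuvant candidate" if second_line_adjuvant_candidate(p) else "not a candidate"
    print(f"{p.patient_id}: {status}")
```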
[00:26:07] Speaker A: You're talking about expanding the bullseye, really? Right.
[00:26:09] Speaker B: Exactly. That's exactly right.
[00:26:10] Speaker A: Yeah. That's very cool. Okay, let's go in the direction of population.
[00:26:14] Speaker B: Yeah, well, I think your question before hinted at the issue of whether a partial answer to some of our problems is expanding the base, if you will, in terms of patient populations receiving therapies about which we still have a lot to learn. Investigational therapy means pre-FDA approval, but what I'm just including in this is: what about post approval? Like, when we're still in the perpetual state of wanting to refine the population of patients who receive that therapy, work out, 100%, the issues of rational combinations, which remains a pretty bold term, truly rational combination. And so this is where I would say the work that's been done is encouraging. But I feel like we've seen a bit of a flattening pace of progress in terms of lowering the activation energy for clinical trial participation in a broad population way. So how could we make that work? Well, one is we have to lower the barrier to entry, in terms of we have to bring clinical investigation to patients, as opposed to expecting them to.
[00:27:09] Speaker A: Drag themselves in and travel to research centers.
[00:27:12] Speaker B: That's one issue. The other is we fundamentally have to lower the cost of doing those trials. This is a societal issue for publicly funded research. It's definitely an issue for companies. I mean, right? Perpetual investigation at a price point of $200,000 a patient is just preposterous. Now you're talking about, like, a marketed product where there's demonstrated benefit relative to risk. So, okay, that's good, but we're trying to widen that benefit. Right? That was the point I was raising before in terms of refining the application of these therapies. And so how do we do the math in a way that actually works for any entity, be it large pharma or a venture-backed biotech company, at a price point of $200,000 a patient? So we have to find efficiencies, is what I'm getting at, and it's absolutely evident and obvious that there are platform technologies here. I mean, literally just software that allows one to operate clinical trials in a much more digitally enabled way, and allows patients to be at home and monitored at home. And by that, I mean literally just having them be able to report their symptoms and experience in real time, not as data to support patient-reported outcome analysis a year and a half from now.
[00:28:13] Speaker A: I mean, real time now in the.
[00:28:16] Speaker B: Trial, real-time communication of data, right? Not once a month when we see the patient in clinic. I mean, real-time communication of the data, where more than one set of eyes can see that data, right? Where the care provider, the sponsor, a regulatory authority can see even individual patient data, but obviously at increasingly population scale, so that we're safely exposing patients to therapy where we're still trying to learn something, which can be truly investigational, I'm suggesting, but obviously also for agents that are initially on the market but where we've got a lot more to learn. This making clinical trials more portable, as you well know, became a really popular topic, particularly in the beginning of COVID, right? Yes, because there was a rapid adoption of some tools that people weren't sure they could trust. We kind of snapped into this mode of needing to continue patients on therapy, investigational therapy, including phase one investigational therapy, in the face of early COVID. These are patients with advanced cancers. So we weren't going to just take a break. And we witnessed a pretty inspiring sort of acceleration there. In terms of.
[00:29:13] Speaker A: Definitely agree.
[00:29:14] Speaker B: Yes, and I'm kind of sorry to say that, to my view, and I'm not the only one who would say this, the pendulum has kind of swung back. We're really back to business as usual. It's really frustrating.
[00:29:23] Speaker A: Yeah, there was sort of a "decentralized tools are the future" moment. In fact, maybe the pendulum went a little too far, saying decentralized is everything, and then somehow it went back to "no, they're nothing" from some folks. And I definitely agree, in fact, that this ability to have real-time access to data digitized and shared and reported by patients is better than the occasional visit to the clinic in a lot of cases.
[00:29:43] Speaker B: That's exactly right. What I'm suggesting is that basically there's a right place, right time for super close monitoring: highly specialized centers, a small number of sites, investigating a very new therapy in phase one through the course of phase two, which is a long potential journey well before registration-enabling studies. There's a point with certain therapeutics where we, I think, feel like we can spot the transition, where we'd say, okay, we've nailed down dose and schedule, and the safety profile is workable in the decentralized format. And let's start expanding our N, like I said, at lower cost ultimately. Right. So, where for the same investment, we're gaining a lot more information. And it's basically in this critical phase where working out the biomarker questions is absolutely essential and requires numbers.
This is where, to me, this is a potential game changer. And then going back again to that sort of post-marketing, if you will, setting. Post-marketing is a term I don't like because it implies that there's just this one day before and a day after, pre-marketing and post-marketing. The drugs get initial indications. Right. Sometimes it's only accelerated approval. That's just an initial indication. Right?
[00:30:47] Speaker A: Yes.
[00:30:48] Speaker B: We have so many cancer therapeutics where we have preclinical reason to believe that this is a therapy that actually could be relevant for multiple different subpopulations. And that first one is all well and good, but we want to be able to accelerate the discovery and the application to these other populations. And I'll tell you, we're in an era currently where companies large and small kind of lose interest, kind of get exhausted in terms of their resource allocation, in terms of hunting down every label expansion opportunity.
[00:31:13] Speaker A: Yes.
[00:31:14] Speaker B: And being one who's very interested in sort of the tail of the curve, if you will, these kind of like exceptional responders, a given therapy that just is miraculously effective for a small and ultimately molecularly defined subpopulation of patients. This is ultimate precision medicine. I get it that it's not a huge market opportunity by definition, if you're talking about a very small fraction of the cancer population. But we have to have a system that allows for the discovery of those populations. Right. And this takes me back to publicly funded research. Right. This is where there can be a beautiful interplay between public and private.
[00:31:44] Speaker A: You say there's a societal need, for society wants these cures, and we should be finding ways to fund these opportunities for expansion for drugs that have proven out from a safety standpoint, where we understand dosage. Why wouldn't we be trying to find more applications for them?
[00:31:57] Speaker B: Yeah, 100%. This is where NCI-MATCH was, like, the beginnings of this concept. We drafted that protocol now nine years ago. The study was done in a sprint. I mean, the demand on this trial, the investigator and patient demand, was extraordinary. I've never seen anything like it. I've worked in the cooperative groups my whole life, NCI-funded cooperative groups. I mean, it's just absolutely extraordinary. But the trial was consumed, if you will, in a nanosecond, in a way; we barely started, and it's just a resource limitation. I mean, the NCI was not in a position to redirect all of its clinical investigation investment, all of its research investment, into perpetuating a concept like NCI-MATCH, which is just a multi-arm, in this case, like, dozens-of-arms phase two trial, right? Single-arm, many parallel phase two trials. Had it been a construct that was carried out over a longer time, and obviously able to take on more therapies over time, it's exactly the kind of vehicle that can kind of do this finishing work to hunt down populations and narrow in on their molecular definition and so on for these, let's call them rare cancer types. I mean, rare cancer types people think about as, like, a cancer type by sort of site of origin or by its histologic definition. I'm talking about rare molecular subsets. That's just a biologic reality of cancer. We have many of those. We simply don't have a system right now that attends to matching drugs with those patients, first in investigational mode, obviously, and then ultimately in care.
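A simple way to picture the matching system being described here: assign each patient to a treatment arm based on the molecular alteration found in their tumor, and flag those with no matching arm. The arm names, drugs, and alterations below are hypothetical placeholders, not the actual NCI-MATCH arm list or its eligibility rules.

```python
# Toy sketch of molecular matching in a multi-arm study. Arms, drugs, and
# alterations are invented placeholders for illustration only.

# alteration -> (arm_id, hypothetical drug)
arm_by_alteration = {
    "BRAF V600E":          ("ARM-A", "braf_inhibitor_x"),
    "ERBB2 amplification": ("ARM-B", "her2_antibody_y"),
    "PIK3CA hotspot":      ("ARM-C", "pi3k_inhibitor_z"),
}

patients = [
    {"patient_id": "PT-10", "alterations": ["BRAF V600E", "TP53 loss"]},
    {"patient_id": "PT-11", "alterations": ["PIK3CA hotspot"]},
    {"patient_id": "PT-12", "alterations": ["KRAS G12D"]},  # no matching arm in this toy list
]

for p in patients:
    match = next(
        (arm_by_alteration[a] for a in p["alterations"] if a in arm_by_alteration),
        None,
    )
    if match:
        arm_id, drug = match
        print(f"{p['patient_id']}: assign to {arm_id} ({drug})")
    else:
        print(f"{p['patient_id']}: no matching arm, a candidate for future expansion arms")
```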
[00:33:26] Speaker A: And Keith, the NCI-MATCH trial, it was expensive; obviously, running all those parallel arms is expensive. But did you feel that that did drive down, I don't know, the cost for an answer or something like that?
[00:33:36] Speaker B: Oh, yeah. The price point for an NCI-funded clinical trial, in terms of its true all-in cost of conduct, is massively lower than the full-freight industry cost. And of course, the math is complicated at the site level in terms of partial reimbursement for work done. But then, like at large academic centers, we have core grants that bring money in to underwrite, if you will, the conduct of studies like this anyway. But the all-in cost is a fraction. I would go further to say, tech-enabled and decentralized, that cost.
[00:34:04] Speaker A: Pick another. Yes.
[00:34:06] Speaker B: And then we're looking at a model that is cost sustainable. But if all of this has to happen on the public dime because the model isn't there, even at a reduced cost of investigation price point, well, so be it. This is all right. We do have in cancer, the benefit of a construct where there is a public private kind of participation in this same system. I would argue more so than other therapeutic areas, but we just haven't figured out the balance here. And of course, the problem that publicly funded clinical research takes so long to get off the ground.
[00:34:35] Speaker A: Exactly.
[00:34:35] Speaker B: Kind of becomes irrelevant from a timescale perspective. For many companies, that's a problem too. It seems surmountable to me.
[00:34:42] Speaker A: All right, I'm going to flip us now. We've been at 50,000 feet, we've come down to 10,000 feet or lower with this last blast. Let's get right down to ground level. And so, if you were talking to a first-time biotech CMO or CEO, will you expand for a few minutes here on what kind of advice you would give such a person in oncology who's thinking about some of these challenges?
Someone saying, I'm going to raise a lot of money to go after a target with an unclear answer, or where I'm not clear what the answer is yet, et cetera. What would you tell that person to be thinking about?
[00:35:10] Speaker B: Yeah, well, I mean, I have this conversation just about daily. So, yeah, I would say it's really: package up everything I said before and kind of convert that to a strategy customized to a given agent and the biology it's after, what a priori insights we have from model systems, and then reflect on what are the filters through which we can put this agent. As you're progressing it through preclinical discovery and kind of maturation towards a human therapeutic, as you know, that takes a while. Right. And even in a computational, AI-informed drug discovery future, you're still talking about development time that affords a lot of parallel processing here. Right. So, again, being a founder of biotech companies, and now having kind of had a seat at the table from first conception of creating a program, counting forward from there, there's plenty of time to be sort of gathering the relevant information on emerging platforms for which there's reason to believe, and already emerging evidence, that, relevant to your therapeutic type, there's meaningful derisking that can happen, overall probability-of-success elevation or lowering, but then also the narrowing in on where should you direct this, at least initially, like, what constitutes low-hanging fruit. That is, again, the conversation I have over and over again. I'm not suggesting there's a uniform answer, but everything I said about ex vivo functional diagnostics turns out to have a place here. There's a subset of investigators, including a couple of my super close colleagues at Harvard, who've developed these tumor explant formats where the infiltrating immune cells are there, and where you can demonstrate the same phenomenon I described before more broadly, where in response to a PD-1 antibody, the explant responds, which is to say CD8-positive T cells start killing tumor cells in the explant, and that same patient who receives the PD-1 antibody is a responder, with striking correlation. More kinks to work out there. But this idea of being able to do, quote, unquote, trials like this, I think the field's been stuck on the point that a patient-derived xenograft trial is the only model.
[00:37:12] Speaker A: Yeah.
[00:37:12] Speaker B: And I keep saying, up until the current day, well, show me the evidence that we've got more fidelity in that system than any other system, because I would argue with oncogene-targeted therapies, actually, immortalized cancer cell lines just in the dish ended up doing fine in terms of crystal ball insight, in terms of the spectrum of response and resistance that we ultimately saw in human beings in the vast majority of cases, at least where we had model systems. Now, again, there are whole cancer types that are barely represented there. So that's where, again, we need to fill in the gap ultimately, as I had mentioned. But I'm kind of just dumping those prior comments in a truncated way in response to your question, because basically the conversation I have all the time is: let's use this discovery time. I mean, discovery is a broad term, but as you go from prototherapeutic through to development candidate, and from development candidate you've still got about a year before your IND is filed. So we're talking significant periods of time in which you've got a prototherapeutic to put into these platforms, right. Because you don't have to have the final product in hand to be able to get much or most of the insights from these platforms. But this is to me kind of frustrating in a way, that we still face this notion, that kind of conventional thinking, that, well, the minimum requirement to get into the clinic in an unmet need, life-threatening disease like cancer, is this amount of in vitro data, a mouse model, and then we're off to the races, right, which is.
[00:38:35] Speaker A: There we go.
[00:38:35] Speaker B: Yeah, I'm not interested in minimum. I'm interested in what's the maximum that can be done. And justifiably before we head to the clinic. Before we head to the clinic, because even the maximum investment there is going to be a hell of a lot less expensive than trying to do all this learning in the clinic.
[00:38:50] Speaker A: That's fantastic. Okay, you mentioned AI just 1 minute ago or so, quite a ways into this conversation. Will you talk for a second about what you are excited about in terms of application of AI up and down the clinical development process. And I'm really mostly interested in near term benefits that you think show some promise.
[00:39:06] Speaker B: Well, I think what I was referring to in terms of predictive biomarkers, so like just molecular features, is where I was kind of focusing there. But I think if you elevate it even, or you open the aperture even further, to me, it's the application of AI to very complex, high-dimensional data sets that include digital image data from a pathology specimen, from an H&E specimen, so pathomics, quote unquote; DNA and RNA sequencing, which we're doing routinely, clinically, all day, every day; radiomics; and then all clinical information of every type, including routine laboratories and the like. Digesting all of those data types to derive models that aren't just anchored on RNA sequencing alone but embrace all of that complexity. Our brains can't handle that.
It's a beautiful argument for the use of AI in settings where we've got all of the data types available and a clinically annotated cohort of patients treated in some relatively uniform way, or cohorts in parallel, where you've got meaningful numbers that were treated in a uniform fashion. So that's kind of the current use argument that I would make, that we are more than ready for primetime in terms of pouring those data types into AI model creation. But obviously, there's an enormous, I would say partly realized, investment in AI to basically digest chemoinformatic information and bioinformatic information, to basically be able to understand much more about the complexity of the system that we're trying to perturb with new therapeutics. And there were arguments made about the use of AI in systems biology, certainly ten years ago, in a way that, as I heard those arguments, I thought, like, this doesn't feel ready for primetime. To me, in terms of where the use case is now, I would say we kind of integrate all the points we've discussed so far in terms of kind of a more complete assessment of a therapeutic preclinically, up to and including tumor explants, taking the molecular temperature of a cancer in the complex explants, if you will, and then perturbing the system with therapies. Again, this is all before humans are ever treated, and trying to understand some of the rules. Again, back to the issue of who's most likely to respond and why. To me, this is, again, a magnificent justification of the use of AI tools that are capable of taking very diverse data types into consideration, even just cell imaging. I mean, I've seen, like, yeah, it's extraordinary, the progress.
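As a purely mechanical illustration of "digesting all of those data types," here is a hedged sketch assuming scikit-learn and synthetic placeholder data: feature blocks from several modalities are concatenated per patient and fed into one response-prediction model with cross-validation. The dimensions, features, and labels are made up for illustration; this is not Dr. Flaherty's pipeline or any specific platform's method.

```python
# Minimal sketch of multi-modal feature fusion for response prediction.
# All data here are random placeholders that only demonstrate the plumbing;
# real work would need harmonized inputs, much larger cohorts, and
# modality-specific encoders rather than simple concatenation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 120

# Hypothetical per-patient feature blocks (dimensions are arbitrary).
pathomics = rng.normal(size=(n_patients, 64))   # e.g. features from H&E image tiles
rna_expr  = rng.normal(size=(n_patients, 200))  # e.g. expression of a gene panel
radiomics = rng.normal(size=(n_patients, 32))   # e.g. CT/MRI texture features
clinical  = rng.normal(size=(n_patients, 10))   # e.g. routine labs, age, stage

X = np.hstack([pathomics, rna_expr, radiomics, clinical])  # one fused vector per patient
y = rng.integers(0, 2, size=n_patients)                    # 1 = responded to therapy (placeholder)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC on placeholder data: {scores.mean():.2f}")
```

On random labels the AUC hovers around 0.5, which is the point: the value would come from real, clinically annotated, uniformly treated cohorts, exactly the data-commons problem raised next in the conversation.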
[00:41:23] Speaker A: Right.
[00:41:23] Speaker B: Kind of thing that we just kind of throw out and don't pay attention to, because we get obsessed with the idea that, well, RNA sequencing will tell us the answer to all our problems. So I think we're already in that space. I wish we were in a position, as academic institutions, to have data flow more freely than we are. We unfortunately live in a very constrained environment, and that impacts the private sector in a way. Right. The private sector doesn't have access to information beyond, let's say, patients that they enroll in clinical trials or model systems they expose with their own therapeutic. So we need more of a data commons, for sure, to then be able to really leverage the power.
[00:41:56] Speaker A: Against which to deploy these models. Right. Yeah, that makes a ton of sense. Okay, final question for me, for right now, and again, I'm sort of spoiled for choice, because I think I've gotten to about half of the questions I had for you, but I want to be mindful of your time. So I'm going to ask an unfair question of a scientist because it's sort of a gut feel question. When you sort of look forward over the next five or ten years, are you optimistic or what's your level of optimism about what the state of cancer research is going to look like? On the other side of that, I guess I'm asking, do you think that these new technologies that are coming online, some of the other things that you alluded to here, can lead to more and sustained breakthroughs in the cancers that have been resistant to the treatment types that we have available today? What does your crystal ball say?
[00:42:38] Speaker B: Yeah, no, absolutely.
I guess the way I see it is that we just need to put a toe in the water multiple times in terms of crossing some of the wires that we talked about, and there will be demonstrated, again, kind of unforeseen gains in foresight that then direct how a therapy is developed, and then absolutely the field changes, which is adoption. Not just adoption, but, as I was suggesting, this kind of opening up the floodgates in terms of data types and really leveraging all of these technologies simultaneously.
This has become my absolute focus in my academic work, which is basically just simultaneous analysis with every modality we can possibly manage for every single specimen that comes out of a patient. Right. So, like, no longer operating with these blinders on in terms of platforms. And so we just need a couple of proof points where we weren't looking under the lamppost for the keys. Right?
[00:43:35] Speaker A: Yes.
[00:43:37] Speaker B: Where we make a true unanticipated discovery. And then, to me, I mean, I've just seen it over and over again: you don't need so many individual successes to open up the floodgates. Obviously, we saw that with immunotherapy for sure. And then, lamentably, a whole generation of new agents failed because our preclinical models were insufficient for vetting them. You know, where we started the discussion.
[00:43:55] Speaker A: I liked your "dip a few toes into the pool" image. There's a whole bunch of people standing around that pool who are ready to jump in if we see a little bit of success, I guess. Right? Wonderful. Dr. Keith Flaherty, thank you so much for your time and for joining us today on Few and Far Between. I will sharpen up my list of questions for part two at some point in the future.
[00:44:11] Speaker B: Yeah, sorry to have to drag you into a part two, but I would look forward to that. Thanks, Chris.
[00:44:15] Speaker A: Thanks, Keith.
Thank you for listening to the latest episode of Few and Far Between. Our podcast is now available on Apple Podcasts and other major streaming services. Please take a moment and leave us a user review and rating today. It really helps people discover the podcast, and we read all the comments. Those comments help us to make Few and Far Between better and better. Also, be sure to subscribe to Few and Far Between so that you don't miss a single episode. Got an idea for a future episode? Email us at
[email protected] or contact us on our
[email protected] I'm your host, Chris O'Brien. See you next time.