[00:00:16] Speaker B: Welcome to a special two-part episode of the Few and Far Between podcast. I'm your host, Chris O'Brien. We're back with the second part of our conversation with Yochi Slonim, CEO and co-founder of Anima. In part one, we talked about AI as a collaborator in connecting data to real-world biology. In part two, we'll start a discussion on building a smarter LLM with better reasoning and fewer hallucinations. We'll also get a snapshot of AI evolution, how to beat it (spoiler: you can't), and how giving AI what it's missing most can open up a much bigger world. I hope you enjoy part two of our conversation with Yochi. This is a fun one. Okay, let's start the podcast.
[00:01:03] Speaker A: The change that we are seeing in big pharmas now is that everybody wants to build AI disease models. How is that different from using ChatGPT and asking, give me your best ideas about ALS? Okay. Now everybody can do that, and they do it, they get good ideas, and they use those ideas to find basically what everybody else can find. If you think about it, that's the problem: everybody can find the same things that everybody else can find. Now, it could be good for everybody, we'll find new targets. But most of these ideas are very raw ideas.
And by the way, most of the ideas that you can find in this way are in the old mind. They've been known, they've been tried, they are not working.
[00:01:50] Speaker B: Yeah, yeah, right, right, right.
[00:01:51] Speaker A: So it's not that AI all of a sudden exposes something unusual. You know, our biologists, when we did this over a couple of diseases as test cases, yes, they were surprised by some of the ideas and said, oh, we didn't think about that. But then you go and actually ask ChatGPT to give you the chain of thought and the evidence, and it starts to surface things that you are familiar with. Ah, yes, yes, that was an idea that was tried seven years ago. Actually, ChatGPT doesn't even know it failed in the clinic.
[00:02:20] Speaker B: Yeah, yeah, okay.
[00:02:21] Speaker A: You know, stuff like that.
[00:02:22] Speaker B: Yeah, okay, right.
[00:02:23] Speaker A: So a disease model is a different idea. It's saying, if you are a pharma: I've got all this data from all these experiments that I conducted over years and years, decades, and I'm going to take that, I'm going to feed it into an AI, and I will train it on my data. So that's my advantage over everybody else. And what does this data look like? It looks like: in these cells,
we conducted this experiment and we found that this protein is overexpressed. We found that this pathway is not active. We found that when we knock down that protein, something else happens. So it's facts like this. Each experiment yields a conclusion, and you feed all of that into the model. So the model is learning about the disease not from the public information of all the stuff that was done by other people, but from your experimentation. Yeah.
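To make the idea of feeding experimental conclusions into a model a bit more concrete, here is a minimal sketch of what one such record might look like as structured data before it is serialized into training text. The class name, fields, and example values are hypothetical illustrations, not Anima's or any pharma's actual pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExperimentalFinding:
    """One proprietary experimental conclusion, e.g. from an expression or knockdown assay."""
    cell_type: str
    perturbation: str                # e.g. "siRNA knockdown of gene A"
    readout: str                     # e.g. "protein X overexpressed 2.1x"
    pathway: Optional[str] = None    # pathway annotation, if known
    replicated: bool = False         # unreplicated findings risk teaching the model noise

    def to_training_text(self) -> str:
        """Serialize the finding into a plain-language statement for fine-tuning or retrieval."""
        status = "replicated" if self.replicated else "NOT replicated"
        return (f"In {self.cell_type}, after {self.perturbation}, we observed: "
                f"{self.readout} (pathway: {self.pathway or 'unknown'}; {status}).")

findings = [
    ExperimentalFinding("iPSC-derived motor neurons", "siRNA knockdown of gene A",
                        "reporter activity for pathway B decreased",
                        pathway="pathway B", replicated=True),
]
print("\n".join(f.to_training_text() for f in findings))
```

The `replicated` flag is included because, as the conversation notes, training on unreplicated experiments is exactly how such a model ends up hallucinating.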
[00:03:25] Speaker B: Your proprietary. Yeah. And your proprietary data.
[00:03:28] Speaker A: And there are companies that are actually building these now, and you see this as a big trend in pharmas: they are investing a lot of money in building those front-end LLM training projects. And the training is eventually creating the capability to ask the same question that you would ask anyway, but to get a different answer. Yeah. Yes.
[00:03:51] Speaker B: You're kind of in a different mind. To continue your analogy, you are in...
[00:03:54] Speaker A: Your own private mind, which you never mined.
[00:03:56] Speaker B: You never mined your own private mind.
[00:03:59] Speaker A: Or maybe you have mined it as well. And you are just going to be, again, seeing the things that you already know, or learning that your experiments, which you never replicated, are now basically making your model hallucinate. You trained your model on incorrect data that comes from experiments that you never replicated.
[00:04:19] Speaker B: Yeah. Faulty data.
[00:04:20] Speaker A: So how do you actually do that? But this is what everybody wants to do. So think about it this way: AI disease models are the way that AI will become capable of understanding disease biology. Right. That's the goal.
[00:04:35] Speaker B: Yeah. This makes a lot of sense. It also solves a critical problem. Right. Because we see that, you know, the track record for big pharma on innovation is mixed; the return on a dollar, you know, versus biotech is not as strong. And so the pathway has been pretty consistently to buy biotech companies as a way to accelerate the innovation process. And that's very good for biotech companies. At Biorasi, those are our clients, so we're very happy about that as a pathway. But it makes sense that this is a way for big pharma to utilize a differentiating advantage, which is their data.
[00:05:08] Speaker A: Now the big problem remains, and here is the thing. AI, even when it is doing that: you feed it with the literature, you feed it with what is called the omics, proteomics, genomics, transcriptomics. All these omics are basically experiments that were done, typically in wet labs, using all of these techniques. And what these techniques are doing is counting. Omics is counting things. How many mRNAs are there?
[00:05:37] Speaker B: Yeah.
[00:05:37] Speaker A: How many of this protein? How many of that protein? It's all about counting things. Okay. I'm simplifying this a lot, but eventually this is also the major limitation of models that are trained on omics. The LLM is thinking, let's give it credit that it thinks like us. Thinking meaning raising hypotheses. A hypothesis is: this pathway is involved, or is a major contributor, or is the underlying mechanism in this disease. Now, I need to connect it with some evidence from experiments. So here is my experiment. Experiment number one, I counted all the mRNA. Experiment number two, I counted it by location, spatial information.
[00:06:24] Speaker B: Yeah.
[00:06:25] Speaker A: Now think about this. This is very simple, but it actually brings up a major conflict between these two things. I have a pathway, which is one biological process in the cell. On the other hand, I counted the mRNA. Okay, so how do you bridge these two together? If the count is high, what does that mean for the biological process? It only means something if you could actually count the things that represent your biological process. Whenever you are counting or seeing things that are general over all the processes, you cannot easily infer anything from them about your pathway, which is your hypothesis. Okay. That's a major issue with this idea. Okay, let's compare it. In the world of imaging, there are companies that have been doing imaging of cells, even at very large scale, for 30 years. We've been doing imaging of cells for 30 years. And yet the contribution of imaging to understanding biology has been quite small.
Why is that? Because what we were able to visualize is cellular morphology, which means we are seeing the integration of all the 20,000 processes into an image. And now I'm asking a question about a particular pathway in ALS, and it shows me the cells.
Now, there is a difference in the cells, but how is that difference related to a pathway? What is the pathway that this is actually showing?
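One way to see the gap between counting and pathway-level hypotheses is a naive pathway score that simply averages the counts of genes annotated to that pathway: it only says something when the measured genes actually represent the process, and says nothing at all when they don't. The gene names, counts, and annotation below are invented for illustration.

```python
from statistics import mean
from typing import Optional

# Hypothetical mRNA counts from one omics experiment (counting things).
counts = {"GENE_A": 512, "GENE_B": 48, "GENE_C": 130, "GENE_D": 7}

# Hypothetical annotation: which measured genes represent the process in the hypothesis.
pathway_genes = {"autophagy_pathway": ["GENE_A", "GENE_C"]}

def naive_pathway_score(pathway: str) -> Optional[float]:
    """Average count over genes annotated to the pathway.

    If none of the measured genes represent the pathway, the counts say nothing
    about the hypothesis -- which is the limitation being described.
    """
    measured = [counts[g] for g in pathway_genes.get(pathway, []) if g in counts]
    return mean(measured) if measured else None

print(naive_pathway_score("autophagy_pathway"))   # 321 -- only meaningful if the annotation holds
print(naive_pathway_score("unmeasured_pathway"))  # None -- no evidence either way
```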
[00:08:04] Speaker B: We see these technologies really working with things like tumor identification and stuff like that, because that is cellular morphology. Right. There's relevance there. But I think what you're saying is, for actually knitting together an answer for what causes a disease, not terribly helpful.
[00:08:20] Speaker A: Yes. You can see visual differences to tell you that something is there.
And if you have a marker, a marker that works visually, for example, if you could actually just stain the tumor cells with color like they do in biopsies, basically what you see is the thing that you are asking about. But in the problem area of, I have a hypothesis that a biological process is the one driving a disease, I need to be able to visualize that thing. Okay. Or I need to be able to conduct an experiment that is looking at that thing. And this is very, very hard. So with the idea of training those models on the existing data, eventually the data is mostly counting things. Okay. Now sometimes there is overlap. There are some pieces of data, a smaller part of it, that would correlate to particular biological processes. But if you are asking about a pathway for which no data was collected in the past, the model is now faced with its own thinking process. Is it that process or not? Now it starts to continue the chain of thought. If it is this process, then another process would be connected to it, and it kind of gets caught in that loop of thinking. But it cannot experiment. It needs to do an experiment that will look at this hypothesis and return a result. Then it will continue to generate the next step in the reasoning. So how can AI reason about disease biology if it doesn't have access to the actual biology? That's a very, very fundamental question in this whole idea of applying AI to biology. Now put it in perspective in the visualization space. Okay. If you want a self-driving car, self-driving vehicles have a camera. The camera is visualizing the road that you are going to be driving. It visualizes the reality around you. So as a result of that, it can answer questions. Is there somebody close to me? Now imagine that this was a model that was trained, but it cannot see the road. It was trained on all the roads, but it cannot see where you are right now.
That model does not drive the car.
[00:10:46] Speaker B: Yes, yeah, yeah.
[00:10:48] Speaker A: So this is the same.
It's a very fundamental problem. If you cannot connect AI with sensory capabilities to actually interact with the real world, no amount of training will be enough. So robots that are going to wash your dishes will have to actually touch the dishes.
[00:11:06] Speaker B: Touch the dishes, yeah. Be in the world. Yeah. When you share that insight with, you know, people inside of pharma companies, did they say, yep, that's correct, or did they resist that line of thinking?
[00:11:17] Speaker A: You know, it's interesting. It's interesting because people are not good at imagining things that they never saw before. I mean, they can imagine them, but they cannot actually relate to the question of whether this would be useful. For example, if Steve Jobs had gone and asked people, what do you want? Or however you present the question: would you want a phone that doesn't have a keyboard, but has a screen and you can type on the screen? Most people would say, are you crazy? I want a keyboard. How would I type without a keyboard? No. No, but you will also have the Internet. No, but I don't have a keyboard, so how would I... It's very, very hard for people to do that. However, the idea that seems to be very intuitive and super interesting, and we are talking to pharmas all the time, is this: what if I could extend your LLM, which is on the front end, enabling it to chat with the cells on the back end? How is that different from talking about omics versus visualization? This is what is going to happen now. Your LLM can think about what could be a hypothesis. And when it wants to actually know if this is true or not in the real world, it would talk to an agent, the visual biology experimental agent, which will be talking to the cells.
[00:12:34] Speaker B: Yes.
[00:12:34] Speaker A: Which will conduct the experiment right now.
[00:12:36] Speaker B: Yeah. Okay.
[00:12:37] Speaker A: Between cell types, as many cells as you want. You want a million? It will be a million. 10,000 is enough? It will be 10,000. And it will integrate all the results, and it will reason about them, because it's also an AI model, but it's a visual AI model. So it interprets things visually. Just like the self-driving car is not returning the images; it knows what they mean. It says, oh, this is a traffic light. It recognizes what it is seeing and then it can actually drive.
Not just, oh, there's a traffic light and this is a tree. No, it says, drive through on this road, but be careful of the pedestrians. Okay. So it will give the interpretation. So what we are doing at Anima now is actually to package, I mean, we have packaged already, this technology in an agent. And that agent...
[00:13:34] Speaker B: So that's sort of the missing link, isn't it?
Hi, this is Chris O'Brien, host of Few and Far Between, Conversations from the Frontline of Drug Development. We'll be right back with this episode in a moment. I personally want to thank you all for listening to our podcast. Now in our fifth season, it continues to be an amazing opportunity to speak with some of the top thought leaders in the drug development industry. If you're enjoying this episode, please leave us a review on Apple Podcasts. It really helps people discover the pod. And don't forget to subscribe to Few and Far Between so that you never miss an episode. One last request: know someone with a great story you'd like to hear me interview? Reach out to
[email protected]. Thank you. And now back to the podcast.
The way that I would summarize what you've shared is: we didn't have a full loop. We had hypothesis generation, but we couldn't bring that hypothesis to the real world to validate it. And so the agent enables the AI to validate its hypothesis, positive or negative. It can confirm it, or it can turn out that it was wrong. If it's right, then it can move to the next step in the process. And if it's wrong, it can go back to the virtual drawing board. Is that right?
[00:14:48] Speaker A: It's exactly right. But you can also think about it in a way like this. You know the difference between ChatGPT the way that we saw it 18 months ago, actually two years ago, and now. What has happened over the last two years is actually two, I would say, axes of evolution. The first axis continues:
Let's build a more intelligent model. Better reasoning, less hallucination.
This is continuing toward the smarter model. And the end goal is, they used to call it AGI and now it's superintelligence. Yeah, okay, okay, great.
[00:15:30] Speaker B: Yes, yes.
[00:15:30] Speaker A: But the second axis is multimodal, which means let's add images, let's add voice, let's add video, let's add this, let's add that. So you give sensory capabilities to AI to interact with the world. It can output images, it can input images, it can output audio, it can input audio. Now think about an AI disease model. It needs the ability to see the biology. So this is very fundamental. What this is doing is giving AI the ability to see biology. Now, it is doing it in the way that AI is thinking about biology. Because if I give AI the ability to see the morphology of the cells, it doesn't help, because the AI is saying, I think this pathway is involved in ALS, go and test it, and you come back with images of the cells. So it's saying, no, no, that's not what I asked you about.
[00:16:27] Speaker B: Yeah, that didn't answer my question.
[00:16:29] Speaker A: I need you to visualize this pathway and tell me if it's different between the cells. And it says, no, no, I cannot do that. So you're not helping me. So this idea is very fundamental. We are building the interface between AI and biology. We are giving AI the ability to interact with real biology through another AI agent which is talking to the cells, basically talking to them and understanding what it is seeing. Now, the conversation between these two agents, between the LLM on the front end, which is, let's say, in charge of researching the disease, and the visual biology agent, which is in charge of visually running the experiments...
[00:17:16] Speaker B: Yeah.
[00:17:16] Speaker A: In the cells of the disease, and returning the results. At any scale, with full repeatability, with any number of cells that you want to compare. It is so capable that it can conduct a million experiments in the time it would take to do one wet lab experiment that is not even reproducible. That loop is actually a loop that could lead to a transformation in drug discovery that is very fundamental, which is called autonomous agent drug discovery. How does that work?
[00:17:52] Speaker B: Is there a human in the loop in that model, or is that fully automated?
[00:17:56] Speaker A: How does that look? Look at this loop. Okay, where do you need people in the loop? So let's say that you set it up because you know that you are researching Alzheimer's. So you set it up as a system with the few cell types that are relevant. You also know the biological processes that are possibly in the game. Okay, let's say that you start with 200. And now this conversation between the LLM on the front end and the visual biology agent on the back end looks like this. The LLM is saying, I've got these 30 processes which I suspect are involved in the disease, and I'd like you to check them out. Here they are. Yes, do it over cell types A and B, skip C and D. For this one, do it also in E. And because it's just a quick verification, we just want to see if we are in the right direction, do it only 10,000 times. Okay. The other agent takes this as input, looks at it, and says, listen, I already visualized B as well in certain other experiments. I've got that in my capabilities.
I could do it on the way. Is that okay? Yeah, do it, do it. Okay. As long as it doesn't delay things, do it. So the other one is going and now visualizing it. Imagine that it is moving the microscope, actually doing all of that automatically.
Assuming that the system is set up on those cells and the microscope is already sitting there. Okay, so it's just operating it, returning the result. So far, no people in the loop. Right. Now, let's say that the other one...
[00:19:33] Speaker B: Yep, yep.
[00:19:34] Speaker A: The first one is saying, now that I've seen this, I've got a bunch of other processes that I want to look at. However, I want you to design the experiment with this antibody, which is involved in this process; I want that to be the visualization readout. However, the system does not have this visualization prepared. It may take a week to set it up and to tune it. Somebody will have to do it. Yeah. In a future world you would actually be able to synthesize that automatically, but this is, you know, not worth it, because you could actually have a couple of people trained to do this. Now, obviously, if this is used by a pharma and they do it over time, they will kind of accumulate all the things that are needed, and there will be few of these add-ons here and there.
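A minimal sketch of the two-agent loop being described, assuming a front-end LLM "researcher" and a back-end visual biology "experimenter" exchanging structured requests and interpreted results. The function names, data classes, and the stubbed experiment are all hypothetical; a real system would drive a microscope and a vision model rather than a placeholder.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRequest:
    pathway: str
    cell_types: list[str]
    n_cells: int = 10_000          # quick-verification scale, as in the dialogue above

@dataclass
class ExperimentResult:
    pathway: str
    differs_between_cell_types: bool
    summary: str

def llm_propose(hypotheses: list[str]) -> list[ExperimentRequest]:
    """Front-end LLM 'researcher': turn suspected processes into concrete experiment requests."""
    return [ExperimentRequest(p, cell_types=["A", "B", "E"]) for p in hypotheses]

def visual_agent_run(req: ExperimentRequest) -> ExperimentResult:
    """Back-end visual biology 'experimenter' (stub): images the pathway readout and interprets it.

    A real system would operate the microscope and a vision model; here the outcome is faked.
    """
    differs = req.pathway.endswith(("1", "3"))
    verdict = "differs" if differs else "no difference"
    return ExperimentResult(req.pathway, differs,
                            f"{req.pathway}: {verdict} across {req.cell_types} at n={req.n_cells}")

def discovery_loop(initial_hypotheses: list[str], rounds: int = 2) -> list[ExperimentResult]:
    """Hypothesize -> experiment -> interpret -> refine; no human needed inside a round."""
    hypotheses, results = list(initial_hypotheses), []
    for _ in range(rounds):
        round_results = [visual_agent_run(req) for req in llm_propose(hypotheses)]
        results.extend(round_results)
        # Keep supported hypotheses; a real LLM would also generate new follow-ups here.
        hypotheses = [r.pathway for r in round_results if r.differs_between_cell_types]
    return results

for r in discovery_loop(["pathway_1", "pathway_2", "pathway_3"]):
    print(r.summary)
```

The human steps described above (preparing a new visualization readout, setting up the cells and microscope) sit outside this loop, which is why they only need to happen occasionally.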
[00:20:26] Speaker B: Yeah. The more time goes by, the better it gets and the less new variation there is.
[00:20:31] Speaker A: Yeah. So basically what we're saying, I mean, we've been working with pharma since 2018 and done, you know, big collaborations. And it's a totally different ballgame now, because they are all focused, you know, on this idea of how to bring AI to drug discovery, and this very simple but super powerful idea that says: we give your AI models the ability to interface with the real biology and to experiment. To experiment, not just validate. Validate is once you are already sure: just validate this for me. But actually, this is discovery, yes, in a loop like this. And by the way, there are questions about how to automate this, as you asked. Okay. But this is actually, many times, not the main issue. The main issue is not just how to automate. The main issue is the ability to build a model that actually will understand the disease biology. So it could take more time, but if you build that model, this is your competitive advantage. Yeah, that model actually knows the disease biology and is tuned and can grow in the way that biology works. So it's really very simple to understand conceptually: if you are thinking about biology by using pathways as your unit of information, that's what you want to visualize, that's what you want to test, that's what you want the wet lab to test. But all of a sudden you've got this ability.
Because it's imaging, you can create a loop that is so powerful and so fast and can work on so many things in parallel. It's AI scale on both ends. So the LLM is AI scale in the ability to bring up ideas. It needs a counterparty to talk to that is capable of keeping up.
[00:22:23] Speaker B: Yeah, at scale, right. Incredible.
[00:22:25] Speaker A: At scale. A bunch of guys in the wet lab doing one experiment at a time, coming back after three weeks with one result that cannot be reproduced, or where there is a danger that it's not reproducible, is not AI scale, and it cannot keep up. And you can understand that this doesn't work. Now, many pharmas today, I think, are solving the wrong problem, because they are focused on lab in a loop. What is lab in a loop? Let's automate the lab so that we can keep up. But eventually, yes, for very few experiments you will be able to do it; there are so many different experiments that you need to do. The beautiful thing about visualization is that it is always just one experiment: visualize that thing for me a million times and tell me what you're seeing. So almost anything that you want to check can become a digital experiment at scale.
[00:23:14] Speaker B: That's fascinating. It's incredibly exciting. Okay. When you and I spoke previously, I asked you about advice, lessons learned, et cetera, and you said, ugh, I could spend some time talking about generic stuff, but there's really only one tough question that I think CEOs need to wrestle with right now, and that's about AI. Will you share some thoughts on that here as we close?
[00:23:35] Speaker A: It's exactly as you said. The generic advice you hear: we need to focus, we need to hire the right people, great people, we need to automate our processes. Even around AI, by the way, when people today are thinking about AI, they say we need to bring AI into all parts of our business. Okay. But everybody else is also doing that. So I think that there is only one question, actually. It's not about operational improvements. It's not about excellence in understanding your customers better, you know, all of that. It's only one thing: how are you going to compete with AI? That's the question that almost every company is facing, either directly or down the road.
So if you are on a path where, down the road, you are going to meet ChatGPT, I'd say good luck with that.
[00:24:28] Speaker B: Yes.
[00:24:28] Speaker A: Okay. Now, yeah, it's actually, you know, it's like if you were to sprint against the world champion over 100 meters. Obviously you will lose. Yeah. But let's say that I give you a head start of 10 meters; you will still lose. Right. 20 meters, 30 meters, 40 meters. I mean, we are old dudes, so it's going to be 70 meters, okay? But if we get 70 meters and it's only a 100 meter run, we are going to win against the world champion. Right? Okay, so now let's extend the runway to 400 meters. Okay? Now, obviously the 70 meters is not enough, okay, because the runway is longer. So now we need, based on these proportions, 300 meters? No, no, because if it's longer, we need more, because we have to run a lot more ourselves.
[00:25:17] Speaker B: That's right.
[00:25:18] Speaker A: So we will need now maybe 320 meters. Okay, now here's the thing.
Imagine now that the other guy is ChatGPT. Okay?
[00:25:28] Speaker B: Yes.
[00:25:28] Speaker A: But here is another rule of the game: with every year, it's two times better. It's running two times faster.
[00:25:34] Speaker B: Yes.
[00:25:35] Speaker A: Yeah. So this means that in year one, the 70 meters is enough even for a short 100 meters. But in year two, I need 80 meters or 85 meters. Now if this is 400 meters and we are in year five, I need 395 meters.
Okay. Because he would run 10 times faster in year five, or 20 times faster. Okay, two to the power of five, it's 32 times faster. Which means that if before he did the 400 meters in 50 seconds, he will now do it in about 1.6 seconds. There is... yeah, yeah, yeah.
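The head-start arithmetic written out, under the same assumption stated above (the competitor doubles in speed every year):

```python
baseline_time_s = 50.0    # the competitor's 400 m time today
race_length_m = 400

for year in range(6):
    speedup = 2 ** year                       # doubles every year
    competitor_time = baseline_time_s / speedup
    print(f"year {year}: {speedup:>2}x faster -> {race_length_m} m in {competitor_time:.2f} s")
# year 5: 32x faster -> 400 m in 1.56 s, so no fixed head start keeps you ahead for long
```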
[00:26:14] Speaker B: So you understand the analogy fails. Yeah, it's just too fast.
[00:26:17] Speaker A: The idea that you have an advantage over ChatGPT today is not going to be helpful five years from now. You need your advantage to be almost from another world. You need to bring something that AI doesn't have. That's another way to think about it. Or let me give you another way to think about it: you need to give AI something that it doesn't have, not just have something that AI doesn't have. You need to think about your company. Okay? AI is there. It's: how do I bring something to the party that it doesn't have, and that it's very hard for it to acquire? So where do you find the answer to this? You find the answer to this in data. Can I have access to data that would improve AI if it had access to it, and that it cannot acquire elsewhere? That's interesting, because, you know, AI is like a T. rex. It's going through the woods and it's looking for food. What does it eat? Data. It eats data. Okay. And it also kind of steps on things that it doesn't even know are there. You can be one of those things: stepped on, and it continued. It didn't even know that you were there.
[00:27:25] Speaker B: Right, right. You could be a casualty that it's not even aiming at.
[00:27:28] Speaker A: Yeah. It didn't even know that you are there. But if you've got some food, some real food for it, it will look and consider you. So in our case, by the way, it's visual biology, data about processes from cells. AI needs it if it wants to be good in biology. And it cannot acquire it elsewhere, because it's a different sensory system that lives in the real world and is not easily accessible.
[00:27:50] Speaker B: The other thing that I like about what you're saying: I think there are lots of versions of, can I find a data set that could be valuable to AI? Great. Right. But you're saying this is much more of an iterative process, where AI can gather data that it needs in order to draw a conclusion. That, to me, is the really exciting thing about what you're building.
[00:28:10] Speaker A: I'm saying, as advice: look for a way to stay relevant in the face of AI improving at least 2x every year. If you're looking for that, yeah, it's not about being faster. You cannot grow exponentially.
And AI can continue to grow exponentially. It could be by providing AI with another sense, meaning it would be able to access another dimension, okay, and opening up new applications for it. And you will stay relevant because you've got the way to actually provide the data. Many people don't agree with this, and they say, as long as you are in a niche and you know your customer and you understand the processes and you've built the application... I think that they don't understand that the real issue is this. We used to think that the specialist is better than the generalist. So if you were working on a very specific application in a very specific niche, hyper-specialized, Microsoft or Oracle, you know, will not be working on it, and you will be living in that niche. The problem is this: with the AI models being trained, they make connections. So we are actually exposed to the danger that the generalist becomes better than the specialist in the niche. It's looking at the problem that you are a specialist in solving, but it's making connections to other fields that you know nothing about. And all of a sudden it comes up with a creative solution that is cutting corners on you and bringing new technologies and new ideas to solve the problem in a different way. And all of a sudden it's better than you.
[00:29:50] Speaker B: So the other thing that I think is a paradigm shift that you're suggesting: the current model is, people are thinking, how can I put AI to use for me, you know, for my company, or for me personally, or whatever. And I think you're sort of saying AI is on the path to being so powerful, you know, a T. rex doesn't work for you. You're trying to find a way to be useful to an AI system, where Anima is a very important part of that. But the driver, the conductor of the whole thing, is that front-end...
[00:30:21] Speaker A: LLM. I guess the way to think about LLMs actually is not as applications. ChatGPT is an app on my phone, okay, but actually this is the new computer. This is an operating system that is actually running programs, which are written in the programming language of the future, which is English. It's English. It's the programming language of the future.
[00:30:45] Speaker B: Yeah, the programming language of the future.
[00:30:47] Speaker A: Yeah. And actually it is running programs, which means you give it a task and it starts to use everything that is under the sun as a tool. Say, I need this and I need this, and I need to book a flight. So I go on Google Flights, and then I need to check the price, so I go there, and then I need to book it, but I need to understand how the airport terminal is laid out, will I make the connection? So it takes the airport terminal map, all this stuff. It is kind of running, and it has memory, and it has file systems to draw on, and it can send things back to you. We are not there yet, but it will come: application interfaces on demand. Which means that the user interface that you are used to seeing on your laptop, operating with applications, imagine that as a result of talking to the LLM and asking a question, it brings back a user interface with the data that you are working with. So it's an app on demand. It creates the app when it needs it. So we are going to be in a situation where it's so powerful in solving problems, and the only problems that it cannot solve are problems it does not have access to. It's interesting. I met with a guy recently who is developing the ability for AI to smell.
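A minimal, invented sketch of the "task in English, everything under the sun as a tool" loop described above. The tool registry and the fixed plan here are placeholders standing in for the model's own tool choices, not any real LLM or agent API.

```python
from typing import Callable

# Invented tools; in a real agent these would call external services.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": lambda q: f"3 flights found for '{q}'",
    "check_price":    lambda q: f"cheapest option for '{q}': $420",
    "terminal_map":   lambda q: f"gate-to-gate walk for '{q}': 12 minutes",
    "book_flight":    lambda q: f"booked '{q}', confirmation ABC123",
}

def run_task(task: str) -> list[str]:
    """Toy 'program in English': a fixed plan standing in for the LLM's own tool choices."""
    plan = ["search_flights", "check_price", "terminal_map", "book_flight"]
    return [f"{name}: {TOOLS[name](task)}" for name in plan]

for step in run_task("NYC to Tel Aviv, tight connection"):
    print(step)
```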
[00:31:59] Speaker B: Feels like you're a logical guy to speak with under those circumstances.
[00:32:02] Speaker A: So this is like training a neural network to synthesize smells.
Generative synthesis of smell. So you say, I want a smell that is like this perfume from Armani, but mix it with this and that. But I want it to be a bit woody, you know, and reminding me of sitting by the fire. You know the smell of fire? Yeah, smoky like this. Make it generative. The AI will actually synthesize that smell from a palette of a thousand little bottles and will actually release it into the room.
[00:32:38] Speaker B: So somewhere there's an elderly French perfumier who should be really nervous right now, I guess is what you're saying.
[00:32:44] Speaker A: But think about it, now let's do it in reverse: give the AI the smell, and it will tell you what it is. He's thinking more about the generative side, because it's super, actually.
That's interesting. You think about all these candles, you know, that are making smells, and you buy them. Imagine that you have only one candle and you can tell it which smell you want.
But this is an example of a company that potentially has a chance because you are producing something that AI doesn't have, and you give AI a new sensory system, a new capability.
[00:33:16] Speaker B: A new capability. That's fantastic. I do think this is a mindset shift for how we should all think about AI. Sort of level one thinking is, how can I use AI? Great. You've got to, of course, start there. But thinking about AI as an increasingly powerful... this is not my analogy, but someone said we have met an alien intelligence. AI is effectively an alien intelligence under development. It may not be there fully yet, but it's coming. And from everyone who spends time in this space, I don't hear skepticism about that. So then the question is, how do I relate? How does my company, my business, relate to that thing that keeps improving? And I love this idea of providing something that it needs. That's really powerful. Okay, Yochi Slonim, we're going to end there. Thank you for joining us. What a fascinating conversation. And what a cool company. I'm certainly in the cool category as opposed to...
[00:34:03] Speaker A: But thank you for having me. It was great speaking to you today.
Super interesting.
[00:34:12] Speaker B: Welcome, producer Adam.
[00:34:13] Speaker C: Thanks, Chris. We're back with part two of Yochi Slonim's episode. Yeah, let's start with the big news: you can't beat AI. So I guess my question is, is this really news? Are our industries really still trying to reinvent the wheel?
[00:34:28] Speaker B: I think a lot of people... you know, it's really hard for human brains to think about improvement that's happening at this speed. When something is improving in competency at the rate that AI is improving, where, as I think Yochi said, every year at least these things are getting twice as good, it's hard to extrapolate forward to what a couple of years down the road looks like. And, you know, we see all the time people saying, I'm an AI skeptic, let me tell you about AI's limitations, et cetera. And it's very dangerous, I think, for people to say, well, I played around with ChatGPT or something six months ago, a year ago, it wasn't that great, taking probably no responsibility for the quality of the prompts that they put into the tool even then, and to dismiss it. I think the improvement rate is startling, stunning. And I think that's really one of the points he's making: it's not so much, can you outrun it today? It's, what's it going to look like in the very near future?
[00:35:19] Speaker C: Yeah, you know, it's really true. And speaking, you know, to Yochi's discovery platform for biology, it is an AI agent that allows AI to see and talk to cells. This seems to be like the missing link to AI evolution, the collaboration and the partnership, rather than being the leader.
[00:35:38] Speaker B: Yeah, I think that's right. I think the way to think about this is as a system, and that system has a number of places right now where humans have to be a part of the process for drug development. It's kind of true for everything. And so what we're starting to see is people building tools that can AI-ify, that's a technical term, Adam, components of that process that previously required humans to be involved. And the more that happens, I think, the faster things get. So as a general rule, I really liked his guidance that, you know, hey, try to find a way to help the AI to do more. That's probably not a bad way to think about starting an AI business these days.
[00:36:14] Speaker C: So one of the things that he said that I really liked was that we are on a path where down the road you're going to meet ChatGPT.
How is the clinical trial industry preparing for this? I mean, we've had multiple guests on the show who have talked about AI and healthcare, but is this something that still requires a little bit of a level up or training in order to interact and get an ROI out of this?
[00:36:38] Speaker B: Yeah, I think that's right. I do think he's correct that AI teammates are just around the corner, and AI bosses may not be that far behind. And whether that happens or not, I think if it's possible, we'd all be wise to prepare for it to some degree. How do you prepare for it? Use the tools and get advice. There's tons of advice out there now about how to write better prompts, how to have conversational relationships with ChatGPT. I'm not talking about talking about the weather, but, like, you can get to better and deeper answers if you say, hey, no, I don't want you to just tell me that my idea is great, I want you to critique my idea, or things like that. There are lots of things people can do to get more out of the tools, and that'll keep you closer to the state of the art than otherwise. I think, yes, we're all going to meet ChatGPT down the road.
[00:37:21] Speaker C: So we talked about Yochi's entrepreneurial spirit, which is very important and really helped move him forward in his success. But let's touch on the business methodology. He had mentioned that primarily biopharma is innovating through acquisition of biotechs. Is this true or is this really more of a nuance?
[00:37:40] Speaker B: Yeah, it's largely true.
It's certainly true that some drugs today started out inside of big companies and grew from there. In other words, it's not exclusively true, but the return on M&A dollars and the number of drugs that are coming from acquisition is very high. So I think what he's pointing out is that in order for, as he put it, large pharma to avoid being only the commercialization, regulatory, and manufacturing wing of the pharmaceutical industry, they have to find ways to create value with the tools they have and to leverage advantages that big pharma has over biotechs. Biotechs are so nimble; they move so quickly. And we hear all the time from our clients, many of whom are refugees from big pharma, that, hey, I had a lot of resources, maybe all of the resources I could have wanted, when I was working in a big pharma organization, but our decision making was so slow, so bureaucratic, it was this and that, that we struggled to get stuff advanced. I think he's absolutely correct. I think it's just a factual statement that biotech is the main innovation source for new drugs. And then the question is, you know, will his solution enable big pharma to build their own differentiated path?
[00:38:52] Speaker C: Yeah, true. Okay, so last question. So you and I both know people who are very much pro AI and all of the things it can do, and, on the opposite end of the spectrum, people who are afraid of the singularity. So one of the things, and I know I've asked you this before, is: is there still a danger of companies losing themselves in AI and neglecting the human component? Because it seems like this year alone we've talked to a lot of podcast guests who use AI or use various technology, but still rely on that human component for business success.
[00:39:26] Speaker B: Yeah, look, right now, I think most businesses still require plenty of human stuff. I mean, I'm not aware of many, or any, real businesses that are kind of, you know, one person and a bunch of AI stuff. Maybe there are small businesses that do that, but there's plenty of space for humans still in the loop. It's kind of a crazy statement, though, that we're, like, defending that there's a place for humans, and I think you have to sort of say, for now. I think we should all, and I don't just mean listeners to this podcast or the biopharma industry or even all of business, I think human society needs to grapple with what it will mean if these tools continue to get better, and what the world of work, and actually what the world, will look like in that kind of space. So I don't have answers to that, Adam. I don't really think anybody does. But I do think there are, fortunately, lots of people debating this right now. And I'm a tech optimist, I guess, so I'm hopeful that we'll come up with good answers. But, boy, we probably need to hurry, because things are changing quickly.
[00:40:23] Speaker C: Yes. And I think, you know, again, with our podcast guests, you'll note that they are leveraging both the ChatGPT generation and also what they've learned in biopharma or biotech about people and how important they are. And I guess, like you said, it's just: how are we fitting into this new dynamic? How are ideas and creativity going to survive? And how are we going to move forward and be successful?
[00:40:49] Speaker B: Yeah, I think that's exactly right. And I think the thing we want to encourage everybody to do is engage with the technology, especially if you're a younger person with a whole bunch of years of work ahead of you. It would be a mistake to dismiss these technologies and not engage with them because the world of work is changing. It's really happening already. It's not a complete transformation, but we see better and more interesting tools now than we saw 12 months ago. And I'm sure that's true across maybe most industries. Yeah. Interesting days ahead, Adam.
[00:41:17] Speaker C: All right, well, thanks, Chris.
[00:41:18] Speaker B: Thank you.
Thank you for listening to the latest episode of Few and Far Between, Conversations from the Front Lines of Drug Development. Our podcast is now available on Apple Podcasts and other streaming services. Please take a moment and leave us a user review and rating today. It really helps people discover the podcast, and we read all the comments. Those comments help us make Few and Far Between better and better. Also, be sure to subscribe to Few and Far Between so you don't miss a single episode. Got an idea for a future episode? Email us at fewandfarbetween@biorasi.com or contact us on our website at biorasi.com. I'm your host, Chris O'Brien. See you next time.