[00:00:15] Speaker B: Welcome to a special two-part episode of the Few and Far Between podcast. I'm your host, Chris O'Brien. AI has materialized into every aspect of our lives, but its true calling might be as a catalyst for innovation in clinical research and healthcare.
[00:00:29] Speaker B: My guest today understands the nexus between AI and science and has leveraged his own software skill set into a visual pathway for comparing diseased and healthy human cells. Yochi Slonim is the CEO and co-founder of Anima, a techbio company that has created an AI platform that thinks, sees, and learns biology at the core of disease mechanisms. In part one of this two-part episode, Yochi and I traced the start of his journey, from recording the inner workings of software applications to building a visual biology agent for clinical science. We also zoom in, literally, on finding software bugs in human cells, building neural networks, and knowing that the word "cool" might be your signpost to success. We hope you enjoy part one of this episode, and stay tuned for part two later this month. Okay, let's start the podcast.
Yochi Slonim, welcome to Few and Far Between.
[00:01:29] Speaker C: Thank you for having me.
[00:01:30] Speaker B: So I've been really looking forward to this conversation for a bunch of reasons. First, I love talking to people who have non traditional paths and I would say your path into biotech isn't necessarily traditional. And then of course you're sitting at the nexus of biology and AI in some really exciting and interesting ways. So we're going to talk about the company and we're going to talk about, you know, what's happening in the broader world where those two forces are coming together. But let's start a little bit on you and your background. So how did you get started in business?
[00:02:02] Speaker C: Actually, I started in software. We'll get to that later, because Anima is a kind of hybrid. It grew from a vision that software could actually be applied to biology in a very specific way. But I got started in software. I was a co-founder, many years ago, of a company called Mercury Interactive. That company became the world leader in a category known as automated software testing. I'm simplifying it to: finding the bugs. Okay? And I was the bugs guy. That company was automating the work of people who were testing software. And I was a developer, but developers are always bothered by the bugs.
[00:02:42] Speaker A: Yeah.
[00:02:43] Speaker C: So that company grew pretty far, much farther than I ever imagined, eventually reaching revenues of over $1 billion, until it was acquired by HP for four and a half billion dollars and became the software division of HP.
[00:02:56] Speaker A: Yeah.
[00:02:57] Speaker B: Massive. A massive success. Tell us a little about the initial idea for that company. As you said, it exceeded your expectations, and I imagine it exceeds most people's expectations for what a startup can do. What were the early days like?
[00:03:08] Speaker C: Well, you know, in the early days there was this idea that you could come up with something that would automate a process that was done by people. People were actually sitting and testing software applications by repeatedly executing them. Think about Microsoft Word: how would you go about testing that? You would operate the menu for File Open, you would look at the screen and say, oh, it showed me the dialog. Okay, let me try finding the file. Oh, it didn't open the file. That's a bug. So we kind of invented the idea of recording the user doing this and then repeating it next time on the new version of the software. Now, this was a very simple idea of automation, but already, many years ago, it required being able to interpret what is happening on the screen. It would have to visually understand what is happening on the screen, just like a user does. And we are talking so many years before the invention, or even the idea, of machine learning. So these were hand-crafted models back then of understanding how computer screens show user interfaces and what is happening on the screen. Is it a bug or is it a feature? Is this good behavior or bad behavior? But the interesting thing is that I did a couple of other software companies after that. The one that was actually the highlight of my career, you know what it was? It was when my mother said that she understands what I'm doing.
[00:04:39] Speaker B: Yeah, that can be a high watermark. I understand this.
[00:04:42] Speaker C: That's a high water mark. But that was an idea because I was the bugs guy and we were at Mercury finding the bugs.
[00:04:48] Speaker A: Yes.
[00:04:49] Speaker C: Then developers would have to fix the bugs. And what you see on the screen as a bug is not why the bug happened. You still have to figure that out. So I came up with this idea. That was 2000-and-something. The Internet was happening big time, and there were all these websites starting up: financial trading applications, banks. And they were crashing all the time, all these websites, and especially in the financial sector this was very bothersome for people. You bought the stock, it shows that you bought it. Then you didn't see it. You might have it later in the day in your portfolio. Is it there? It's not there. So I came up with this idea: like the airplanes. When they crash, they have the black box recording, and they put on the news that they are going 10 km deep into the ocean to find the black box.
[00:05:33] Speaker A: Yes.
[00:05:34] Speaker C: Not something so positive to think about. But the idea was that these websites that are crashing and failing are like airplanes. Why don't we have a black box for the Internet?
[00:05:44] Speaker A: Yeah.
[00:05:44] Speaker C: Amazing application. And this was the black box: flight recorders for software applications. That idea became a company called Identify Software. And it was the right time. It was a great, simple idea, the black box. Everybody understood it.
[00:05:58] Speaker B: The analogy is really clear.
[00:06:00] Speaker A: Yep.
[00:06:00] Speaker C: Yeah. And you know, we started the company, we didn't have a product, we had the idea. And then I got a call from this lady from CNN Technology News.
Barbara something was her name, I still remember it. And she said, you know, we have this channel where we present, once a week, 7:00 PM prime time.
[00:06:19] Speaker A: Yeah.
[00:06:19] Speaker C: Technologies that will shape the future of the Internet. "We came across this idea of the black box, and it's so appealing, so easy to understand. Would you be willing to come and speak? We know that you are very busy." Of course I wasn't busy, but I told her, let me see when I have time.
[00:06:35] Speaker A: Yes, exactly. Yeah, I can fit you in.
[00:06:38] Speaker C: I had no customers and nothing. But we got that interview in prime time. And the next day the phone started to ring. "Where is the black box? Wow. How much is the black box?" So that was pretty interesting. And actually, that idea of recording what's happening inside software applications.
[00:06:57] Speaker A: Yeah.
[00:06:58] Speaker C: We eventually sold this company after five years. We got to about 2,000 enterprise customers, and we had revenues of $75 million a year. And we were bought by BMC, a big software company. But that was something that was, again, about the bugs.
[00:07:12] Speaker A: Yeah.
[00:07:12] Speaker C: Now to find the bugs, to record the bugs.
[00:07:14] Speaker B: It's related to your initial idea.
[00:07:16] Speaker C: Yeah, yeah. It's like the other side of it.
[00:07:18] Speaker A: Yeah, yeah.
[00:07:19] Speaker C: And then I came across this question of what am I going to do next? And I wanted to do something meaningful. There were already three or four software companies behind me. And somehow I was meeting with my co-founder, a biologist, and the conversation turned to drug discovery. I didn't know what that meant. Now, we are talking 15 years ago. By now I know; I became a biologist through San Diego University online, and I've done about 30 courses since then. So, a kind of self-trained biologist. But back then I didn't know anything about it.
[00:07:48] Speaker A: Okay.
[00:07:49] Speaker C: And I was kind of thinking about. To connect it to my world.
[00:07:52] Speaker A: Yes.
[00:07:52] Speaker C: And I said, you know, a disease is a bug in the programming of your cells.
[00:07:58] Speaker A: Yes.
[00:07:58] Speaker C: Can we see the bug? Can we record the bug? Can we understand why it's happening? You know, I came from my world: software applications.
[00:08:08] Speaker A: Yeah.
[00:08:08] Speaker B: You applied these ideas.
[00:08:10] Speaker A: Yeah.
[00:08:10] Speaker B: When you said that to people, people must have said, you do not know what you're talking about; this is much more complicated than software. What was the reaction when you first shared the original idea?
[00:08:21] Speaker C: Well, at this point I didn't have the idea yet. Fair enough. It was the problem, actually. Can we do that? Can we record?
Can we see? And then I started to understand that biologists actually cannot see. So think about how they get to understand what's happening in the programming of the cell. Cells also execute. They execute processes, the way software is executed by a processor. And most of the things that cells do are done by small machines called proteins. They are processors, and it's a multiprocessor environment where there are 20,000 of them, all talking to each other, all sending signals to one another. So if you were to look at this as a software application.
[00:09:10] Speaker A: Yes.
[00:09:10] Speaker C: That would be of a complexity that maybe, maybe now, in ChatGPT's data center, between all the clusters of Nvidia computers and all the training of these models, it's probably 1% of the complexity of a single cell. Wow. If not 0.01%.
[00:09:29] Speaker B: So at this point you're thinking, okay, the analogy feels like it holds, but the complexity is enormous, much greater than anything we've dealt with in machine learning.
[00:09:39] Speaker C: Exactly. And I was starting to probe questions, meeting with biologists, and getting interested in: can we actually bring some technology from the software world into biology? What would that look like, and why should we do that? And the thing that occurred to me was very simple: a disease is, eventually, something that went wrong in your cell. After selling that third company, I was actually working with many startups. I had an accelerator called FastForward.me, FFWD.me. We worked with 25 or 30 companies there in the early stages, and quite a few of them were in fintech. And fintech is where you try to deal with complexity of enormous scale and try to predict where the market will go, predict where the stock will go. Predict, predict, predict, based on machine-learning training of a model. Basically, you are building a model for the stock, a model for a market, a virtual stock, if you will. Yeah. And I was exposed to that idea of machine learning many, many years ago, way before it became mainstream, because fintech was the very front end of it. You know, I still remember meeting with this company, Three Sigma. They were training models. They had something like 2,000 quants, PhDs, some of the smartest people in the world, training these models to try to predict the stock market. Pretty successful, actually, at generating tons of money. And I was looking at this and saying, could we do something like this? Let's say that we take a million healthy cells and a million diseased cells, and let's say that we could somehow visualize a thousand biological processes that are the highways of the cell.
[00:11:25] Speaker A: Yeah.
[00:11:26] Speaker C: What are the chances that in a disease, one of these core processes, if we could visualize it, will show a difference between the healthy and the diseased? After all, if something happened, it would reach the highways and show up there. Even if there is a traffic jam or an accident on a side road, it will be visible as a major traffic jam on a highway.
[00:11:53] Speaker A: Yeah. Okay. Okay.
[00:11:55] Speaker C: So it's kind of: let's visualize the highways, compare between the healthy and the diseased cells, and we will actually see what is causing the disease. We will not see it exactly, but then we could zoom in on that process and do it again.
[00:12:10] Speaker A: Yeah.
[00:12:11] Speaker C: If there are 20,000 of them, we'll visualize the first thousand, and then drill down for the next thousand. And a few iterations like this will get to the root cause, to the actual thing that is driving the disease. Okay? And that was a very intriguing thing for me. I said, this could be meaningful. What is going to be needed to build a system like this? Now, my co-founder, she's a biologist, and she said, no, no, we don't do it this way. We actually do it in the wet lab. It's called a wet lab.
[00:12:41] Speaker A: Yep.
[00:12:41] Speaker C: And we sit in the wet lab. And you are talking about dry, digital imaging. Yes, people are doing imaging. They're doing it, but they cannot see the biological processes. They cannot see the biology; they can see what is called morphology. And cellular morphology is like seeing the cells the way that you saw them in a microscope when you were in the fifth grade, you know, or in a high school biology lab.
[00:13:06] Speaker A: Yes.
[00:13:07] Speaker C: You don't see the biology in action. You see the cell, how it looks like. Here's the nucleus, here's the mitochondria.
This thing is called the Golgi. There are things that you can see, but it doesn't tell you anything about what's happening inside. And my idea was: we will visualize what's happening inside the cells and actually compare the biology. Now, this could be super useful at so many stages of the research that a pharmaceutical company conducts. For example, say you want to build a disease model. Today everybody is about AI; we'll talk about that later, probably. But you want to understand why the disease is happening. Okay, so here's the idea. Let's compare the biological processes. Visualize them one by one, as many of them as you want, over as many cell types as you want, over as many cells as you want in each type. And we'll take all these images and feed them into a machine-learning model. Today they are called neural networks. And we build a neural network that is trained by all these examples and is capable of, first, knowing what a biological process looks like, and then actually detecting differences: is it diseased or healthy? Now, it's interesting that this application of machine learning, computer vision, has been the most successful, the most robust, and the first that actually worked, way before large language models, which are so much more difficult. It was basically classification.
[00:14:53] Speaker B: Hi, this is Chris O'Brien, host of Few and Far Between, conversations from the front line of drug development. We'll be right back with this episode in a moment. I personally want to thank you all for listening to our podcast, now in our fifth season. It continues to be an amazing opportunity to speak with some of the top thought leaders in the drug development industry. If you're enjoying this episode, please leave us a review on Apple Podcasts; it really helps people discover the pod. And don't forget to subscribe to Few and Far Between so that you never miss an episode. One last request: know someone with a great story you'd like to hear me interview? Reach out to
[email protected] thank you. And now back to the podcast.
You're saying the first thing that succeeded in applying that technology was visual identification and classification.
[00:15:41] Speaker C: Yes, yes.
[00:15:42] Speaker A: Yeah.
[00:15:42] Speaker C: I said to myself, okay, so this works. And I've seen it actually working in the financial industry, where they were training on movements of stocks, which is a fairly simple data stream. And imaging is actually quite easy. You can take a neural network (the code is just 200 lines; it's about the data), feed it a million images of dogs and a million images of cats, and ask: is it a dog or a cat? And it knows.
[00:16:08] Speaker A: Yeah.
[00:16:08] Speaker C: But if you think about it, what is the rule that will quickly tell you if it's a dog or a cat? Well, it's very hard to find.
[00:16:17] Speaker B: Yes, exactly.
[00:16:19] Speaker C: Like we know. We know because we've seen them, but we. If you need to actually name the rule, there is only one thing that you could possibly try, which is the ears. Like if the ears are standing up. Okay. Then the dog, some of them have their ears dropping down. If the ears are dropping down, it's not a cat.
[00:16:38] Speaker A: Okay.
[00:16:38] Speaker C: Okay. But. But that's the only thing.
[00:16:40] Speaker A: Yeah.
[00:16:41] Speaker C: And this doesn't answer the question, because if they are standing up, then it still could be a cat or a dog.
[00:16:46] Speaker B: And this is why neural networks are such a big jump forward. Right. Is that before that what we were trying to do is say, well, if the head looks like this and the tail looks like that and it walks like this, then it's a cat. And then the problem was that even if we had 100 different rules, that wasn't enough. There were all these exceptions. Right. And then somehow neural networks managed to do that task effectively. So that's already proven technology when you get started. Is that right? So you start thinking, how can I apply this?
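The rules-versus-learning contrast in this exchange can be shown with a deliberately tiny sketch. Everything here is invented for illustration: two made-up features stand in for images, a one-line "ear rule" stands in for hand-written heuristics, and a logistic regression stands in for a real neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dog/cat story. Feature 0 plays the role of
# "ears standing up" and carries almost no signal on its own; feature 1
# stands in for the subtle cues a trained model picks up. Real image
# classification needs a convolutional network, not this sketch.
n = 1000
dogs = rng.normal([0.5, 0.35], 0.15, size=(n, 2))
cats = rng.normal([0.5, 0.65], 0.15, size=(n, 2))
X = np.vstack([dogs, cats])
y = np.array([0] * n + [1] * n)  # 0 = dog, 1 = cat

def ear_rule(x):
    # The single hand-written rule from the conversation:
    # ears up -> guess cat, ears down -> guess dog.
    return 1 if x[0] > 0.5 else 0

rule_acc = np.mean([ear_rule(x) == label for x, label in zip(X, y)])

# A minimal learned classifier: logistic regression by gradient descent
# (features centered so no bias term is needed).
Xc = X - X.mean(axis=0)
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xc @ w)))   # predicted P(cat)
    w -= 0.5 * (Xc.T @ (p - y)) / len(y)  # gradient step

learned_acc = np.mean(((Xc @ w) > 0) == y)
print(f"hand-written rule: {rule_acc:.2f}, learned model: {learned_acc:.2f}")
```

On this synthetic data the ear rule hovers around chance while the learned model finds the combination of cues, which is the point the speakers are making about exceptions slipping past any fixed rule.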
[00:17:12] Speaker C: So, yeah, in principle I was thinking like that. But actually, when Anima got started, we started by trying, as human researchers, biologists, to look at those images, because nobody had ever seen those before. And we started in one of the highways of the cell, maybe the biggest highway: mRNA biology. So we said, let's look at what happens to the mRNA, and let's visualize the processes around mRNA biology in the cell.
[00:17:41] Speaker A: Okay.
[00:17:42] Speaker C: When you look at these images in colors, they show you where the mRNA is and how much mRNA is there. These are things that people could see before. However, we started to look at all the proteins that interact with the mRNA, modify it, change it, regulate it. Now we are talking biology. Now we are not talking static morphology. Because to see the mRNA, there is imaging technology for that. You can even count them; by the way, you don't need imaging to count them, there are other ways. But let's say that instead of counting 20,000 mRNAs, you counted 40,000 of them.
[00:18:21] Speaker A: Yeah.
[00:18:21] Speaker C: Does that tell you what is happening in the disease? No. That's the difference between the biological process.
[00:18:29] Speaker A: Yes.
[00:18:29] Speaker C: To understand what's causing it, and some downstream counter that went wrong.
[00:18:35] Speaker A: Yeah.
[00:18:35] Speaker C: So we were more interested in the biological processes. We wanted to do what we coined, actually much later, as visual biology. We said, we are the visual biology company. So we are going to look at the life cycle of the mRNA, and we are going to look at all the interactions with it by the proteins that regulate it. So we actually get to see the biology. That was the point. Now, back then, we started to look at those images, and it was very tempting for our biologists to look at them and try to hypothesize what is happening.
[00:19:11] Speaker A: Yeah, sure.
[00:19:12] Speaker C: It's so intriguing. So actually, Anima started with a bunch of rules that said: if you see in the image this and this and this and that, but not this and that, then it is this process.
[00:19:23] Speaker A: Okay.
[00:19:24] Speaker C: And we actually became kind of specialists in interpreting those images and writing all these algorithms. But, just like with the dogs and the cats, there is always something that bypasses all the rules.
[00:19:38] Speaker A: Yes.
[00:19:39] Speaker C: Okay. It's like a fish that is small enough to go through the net. And then we started, maybe four years ago, to say: okay, this technology that exists in financials and in image processing, let's apply it to this problem.
Let's train neural networks. And we started with mRNA biology, but over the last four years we trained on over a thousand processes. So mRNA biology is maybe 10% of that. And this became a completely new idea, and actually a very powerful one. The idea is: okay, we can visualize biological processes, not the morphology of the cells, which means that we know what the process is. Now, if you have that, what could you do with it? And this is something that is actually super interesting, especially today. You could find use cases for it from the very beginning of research in a drug discovery program, which is figuring out what is causing the disease.
And then you want to find targets, then validate them, then find compounds, then expand them, then find the lead, then optimize it, and it goes on and on like this. Now, when you think about visual biology as a technology, if you position it along the full preclinical stage of a discovery program, here's what you're going to do. In the very early stages, you could take a large number of biological processes and compare how they look between diseased and healthy cells, in multiple cell types that are known to be involved in the disease. Then you could take the same idea and run through these biological processes, visualizing a deeper view that shows you all the proteins along those pathways and what they're doing. The end stage of this will be a target. So you have visually discovered a target and validated it in millions of cells, across many cell types. Now, you could do many things that are super hard to do in those wet labs.
You have kind of streamlined everything into a single visual modality. If you want, for example, to find out what happens if this target is knocked down, you can knock it down and visually see if the biological process is restored.
So everything becomes visual. It's your visual way to interact with the cells. Now, say you go after a molecule and you've got a bunch of hits. Why don't we compare cells in the presence of each of these hits, and see which one is moving the biological process in the right direction?
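A rough sketch of that last step, ranking hits by how far they move a diseased profile back toward the healthy one, might look like this. The feature vectors, the hit names, and the scoring function are all invented for illustration; they are not Anima's actual representation or method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: each cell population is summarized as a feature
# vector that a trained vision model might produce for one biological process.
healthy = rng.normal(0.0, 0.1, size=8)
diseased = healthy + np.array([1.0, -0.5, 0, 0, 0.8, 0, 0, 0])  # shifted profile

def restoration_score(treated, healthy, diseased):
    """1.0 = profile fully restored to healthy, 0.0 = unchanged, < 0 = worse."""
    d0 = np.linalg.norm(diseased - healthy)
    d1 = np.linalg.norm(treated - healthy)
    return 1.0 - d1 / d0

# Simulated screening: hit_A moves the profile most of the way back,
# hit_B barely moves it, hit_C pushes it further away.
hits = {
    "hit_A": healthy + 0.2 * (diseased - healthy),
    "hit_B": healthy + 0.9 * (diseased - healthy),
    "hit_C": healthy + 1.4 * (diseased - healthy),
}
ranked = sorted(hits, key=lambda h: restoration_score(hits[h], healthy, diseased),
                reverse=True)
print(ranked)  # hit_A first
```

The design choice worth noting is that every hypothesis, a knockdown or a compound, reduces to the same question: did the visual profile move toward healthy?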
[00:22:41] Speaker B: How long does this part take? If you compare this to the traditional way of, you know, kind of advancing something through the process in the wet lab, I'm assuming this must be faster as well. Right? It's obviously much, much more precise.
[00:22:52] Speaker C: It's 100 times faster. It's actually turning biology into a digital experiment instead of running it in the wet lab. Which, by the way, we have at Anima; we have a huge wet lab.
Because there are many things that you still want to be able to do there. But the idea is that you could actually create an interface into real biology through visualization of the biological processes, and you could answer so many questions just by visually seeing things. When you compare it to the wet lab, there is another thing, which is the robustness of the answer. In the wet lab, let's say you suspect that some biological process is the right one, and you invent, or replicate from the literature, a specific experiment to test it. So imagine that for each hypothesis you have, you would need to think: what is my experiment? How am I going to set it up? What's the protocol? What are the reagents and materials that I need? What machines do I need? You go and execute it and come back with the result. And let's say the result says: yes, this biological process is the driver. Somebody will ask, what is our confidence level in the experiment? Oh, it's 99%. Okay, so one in 100 it could be wrong. Let's repeat it. You go and repeat it. It takes another three weeks. You come back: oh, this one did not show the same result. Okay, now we have two: one was right, the other was wrong. Let's do the third one. Is that enough, now that you know you cannot repeat the experiment? Maybe you need to do 50. And each cycle is like this. And if you have a different hypothesis, now it's a different biological process, and you need to think about how to test that one. So it becomes a very hard and inaccurate way of doing this, and the repeatability of the experiments is an issue. With visual biology, what happens is that you go and do the experiment, and no matter what the hypothesis is, it translates into a pathway which we want to visualize. Now let's visualize it in seven different cell types, half a million cells each.
It's a biology experiment that is repeated at scale. Huge scale, yes. Comparing different cells, it comes back with the integration of all these results.
And it says: I've conducted it, I've seen it half a million times in all of these cells. And let me tell you something: out of half a million, 10,000 of those did not show involvement of the pathway, but 490,000 of those showed it. Now you feel much better about this result.
[00:25:37] Speaker B: Much better than one out of 100.
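The numbers in this exchange make the statistical point concrete. A simple normal-approximation confidence interval, treating each cell as an independent observation (which real imaging data would not fully justify, so this is optimistic and purely illustrative), shows why 490,000 out of 500,000 feels so different from 2 runs out of 3:

```python
import math

def proportion_ci(successes, trials, z=1.96):
    """Normal-approximation 95% confidence interval for an observed rate."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

# A wet-lab result repeated a handful of times vs. the half-million-cell
# readout described above (numbers taken from the conversation).
few = proportion_ci(2, 3)                 # 2 of 3 runs agreed
many = proportion_ci(490_000, 500_000)    # 490k of 500k cells showed it
print(f"3 runs:     {few[0]:.2f} to {few[1]:.2f}")
print(f"500k cells: {many[0]:.4f} to {many[1]:.4f}")
```

The three-run interval is so wide it is nearly uninformative, while the half-million-cell interval pins the rate down to a fraction of a percent, which is the robustness argument being made.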
[00:25:39] Speaker C: Yeah. So really, this technology, we call it Pathway Light: the visualization of cellular processes at AI scale.
[00:25:46] Speaker B: One of the questions I have here, and it's always interesting, is about adoption of new ideas. When you show people what you just described, I'm sure some of the reaction is: wow, that's really exciting. But when people are skeptical, are they saying, I don't trust the digital version, I still want to see it in the wet lab? Or is the response generally that this feels legitimate? How do bench scientists respond?
[00:26:10] Speaker C: So the first thing that we are actually seeing is that people say, cool.
[00:26:15] Speaker B: Yeah, cool, right?
[00:26:16] Speaker A: Right. Pretty cool.
[00:26:17] Speaker C: Yeah, it's very cool. By the way, this is interesting, because throughout my career I saw that when you are trying to present an idea, all the feedback you get after people hear the idea falls into two types of answers.
[00:26:33] Speaker A: Okay?
[00:26:34] Speaker C: The cool and the but.
So it's like this.
[00:26:42] Speaker B: I love it. That's some wisdom.
[00:26:42] Speaker A: Yeah, cool.
[00:26:43] Speaker C: That's a good. That's a good sign. Okay, cool. You can build on that. Okay. Yeah, but many times it's like this.
Okay, but, you know, in our company, we do it like this.
[00:26:54] Speaker B: But here.
[00:26:55] Speaker C: But what about this? The moment that you hear the word but, you are in trouble.
[00:26:59] Speaker A: Yeah.
[00:26:59] Speaker B: Okay, fair enough.
[00:27:00] Speaker C: Yeah. The moment that you hear the word cool, you have a chance.
[00:27:03] Speaker B: You're in a good place.
[00:27:04] Speaker A: Yeah.
[00:27:05] Speaker C: What about cool? But.
But still, the first word was cool.
[00:27:12] Speaker B: There was still cool in there. Yeah. Right, right.
[00:27:14] Speaker C: So this is cool. This is cool stuff, everybody's saying. Now, there are many questions around that. One is: can we visualize every process that we're interested in? Like, what can you visualize?
[00:27:27] Speaker A: Yes.
[00:27:27] Speaker C: Now, the key thing is this. We can visualize any process. We have visualized, so far, over a thousand of them, not because those thousand were hand-picked. They were the ones that actually came from projects where our partners, and we partnered with AbbVie, with Lilly, with Takeda, selected these processes. And we have our own pipeline of 20 programs.
So when there was a need to visualize something, we could visualize it. So we kind of scripted the process of how to turn the name of a pathway into a visualization.
[00:27:58] Speaker A: Yes.
[00:27:59] Speaker C: Okay. Which, by the way: if you interact with ChatGPT and you ask it, give me your best 20 ideas for the pathways that are critically important in ALS.
[00:28:11] Speaker A: Yeah.
[00:28:11] Speaker C: It takes only one second, and you get 20 pathways. Now, what is a pathway? It will tell you, in the language of biologists who are subject matter experts: the pathway where this kinase is controlling this protein, which is talking eventually to the mRNA of protein X, is a very promising direction. For example, there is a protein called SOD1, which is the number one researched potential target for ALS.
[00:28:47] Speaker A: Okay.
[00:28:48] Speaker C: And by the way, nobody ever actually was able to validate at scale that this is really something involved in the disease, in the way that I am describing.
Did you see it half a million times?
[00:29:01] Speaker A: Yeah.
[00:29:02] Speaker C: Okay. Not how many publications and papers are talking about it; that's a different issue. Because Nature conducted this exercise a couple of years back, and they found that 70% of the experiments could not be replicated, could not be repeated. So the fact that there are so many people talking about it is only because other people were talking about it.
[00:29:22] Speaker B: That's a self replicating thing in some ways.
[00:29:25] Speaker C: Yeah. Citations. The number of citations of a publication doesn't mean that the publication is correct, because nobody replicates the experiment. By the way, we took SOD1 in the cellular system that is recommended, and we actually visualized it. Guess what? It's true.
[00:29:39] Speaker A: Okay.
[00:29:40] Speaker C: Okay, it's true. But that was not as interesting as finding that there is an interaction with the mRNA of SOD1 by another protein, and this one nobody actually thought about.
[00:29:52] Speaker B: Interesting.
[00:29:53] Speaker C: The visualization actually exposed, through the signature of that mRNA, that something is happening there. Visually, it knew, because it was trained, and it said: that's a modification of the mRNA that I'm familiar with. And it actually found a new idea, a new hypothesis: yes, it's connected to SOD1, but it's related in a different way, to how it is regulated inside the cell. And this is something that nobody knew about. So where does it all tie up today? I would say Anima started like this, and we've been doing this now for 12 years. This idea that started with: let's visualize the bugs in the cells.
[00:30:34] Speaker A: Yeah.
[00:30:35] Speaker C: Okay.
[00:30:35] Speaker B: Hard problem.
[00:30:36] Speaker C: Yeah. Turned out to be a hard problem, but we found a very systematic way of looking at it. And this ties to the heart of the, I would say, where the whole pharmaceutical industry is going. And this is super, super exciting for me because after 12 years, it feels that the game is just starting.
[00:30:57] Speaker A: Yes.
[00:30:57] Speaker C: And it was quite costly to buy the ticket.
[00:31:01] Speaker A: Yeah, right, exactly.
[00:31:02] Speaker C: To that party.
[00:31:03] Speaker B: Exactly.
[00:31:05] Speaker C: But we are sitting in a very, very good position because here is what is happening today. And now we are coming to that word, AI.
Okay. We are an AI discovery platform company. We have Lightning AI.
It's the visual biology discovery platform. This is what sets us apart.
[00:31:24] Speaker A: Yes.
[00:31:24] Speaker C: It's visually seeing biology. Why does it matter? Why is it more interesting today than it was three years ago? So AI is believed, and I'm saying believed because we still have to see the proof, to be transformational for drug discovery.
[00:31:44] Speaker A: Yes.
[00:31:45] Speaker C: Why is that? Because here's what people were doing before when they were looking at how we are going to find drugs for ALS. Okay, so let's read all the publications. Let's get ourselves very much into an expert position about that. And in the publications, there are ideas for targets and pathways. So let's understand all of that. Typically, it looks like this: publication A is saying protein A is controlling protein B. Then you have another publication: protein B is interacting with protein C.
A third publication: protein C regulates the mRNA of protein D.
And that is the chain that leads you to think, aha. So A, at the beginning, is actually regulating D. So now I have an idea of how the whole chain works.
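The chaining Yochi describes here can be pictured as a tiny graph-reachability exercise. A minimal sketch follows, purely my own illustration and not Anima's method: each (hypothetical) publication reports one directed relation between proteins, and walking the chain yields the indirect A-regulates-D hypothesis.

```python
# Each tuple stands in for one hypothetical publication reporting a
# single directed relation: (source, relation, target).
from collections import deque

publications = [
    ("A", "controls", "B"),           # publication 1
    ("B", "interacts_with", "C"),     # publication 2
    ("C", "regulates_mRNA_of", "D"),  # publication 3
]

def downstream_of(start, pubs):
    """Breadth-first walk over the reported relations, returning every
    entity the start protein may indirectly influence."""
    edges = {}
    for src, _rel, dst in pubs:
        edges.setdefault(src, []).append(dst)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream_of("A", publications)))  # ['B', 'C', 'D']
```

The point of the sketch is the tedium Yochi mentions next: a human has to hold every edge in mind at once, while the traversal makes the transitive conclusion mechanical, and a single missing or irreproducible edge silently breaks the chain.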
[00:32:36] Speaker A: Yes.
[00:32:37] Speaker C: But imagine. Imagine how tedious that is, how hard it is. And maybe you have missed some connections.
[00:32:45] Speaker B: As you just pointed out, a lot of the studies are not replicable. So some of those facts in the chain may not be true.
[00:32:50] Speaker C: So AI proved itself right out of the box. People started to talk to ChatGPT and they said, give me 20 ideas about what is causing Alzheimer's. And it read 2 million publications, it read it and it understands it. So it kind of connects the dots and it comes back with 20 ideas, and then you ask it, but why? And it retraces backwards the chain of thought, just like people were doing.
[00:33:16] Speaker A: Yeah.
[00:33:16] Speaker C: So the ability of AI to actually read all this information as a biologist and explain to you the disease biology felt like the light at the end of a 40-year tunnel.
[00:33:30] Speaker A: Yes.
[00:33:31] Speaker C: People were betting big time on that, a hundred percent. So actually, the first application of AI in drug discovery was following exactly what I described. It's more suited to target discovery: to discover the pathways that are operating at the core mechanism of the disease and to map out the proteins.
[00:33:50] Speaker B: I just want to double click on that. What you're saying is that what that version did was accelerate an existing discovery process. It didn't change the process so much. It could read at scale and it could find the needles in the haystack, it could find the connections. But it was still the same kind of literature-based thesis development, is that right?
[00:34:11] Speaker C: Yes. In the very beginning of it, it was exactly like that. And actually, people were astounded, you know, looking at this and saying, wait, this is interesting, I've never thought about that. Yeah, connecting the dots.
[00:34:24] Speaker A: Yes.
[00:34:25] Speaker C: Was kind of the ability of AI to deal with massive amounts of information in its so called brain, to actually keep all that and to connect the dots.
[00:34:36] Speaker A: Yes.
[00:34:36] Speaker C: And most people cannot do it right now, if you think about it. But it's true what you said. I call it the mining problem. It's like the data, the publications, all the data that is out there. Think about it as a mine. Okay. Now you want to mine it. It's not a gold mine, it's a drug mine.
[00:34:53] Speaker A: Yeah.
[00:34:54] Speaker C: You have to mine the targets or the drugs. Okay.
So let's call it the target mine, because they want to fish out the targets from that information. Now it's like a mine where, for 40 years, there were thousands and thousands and thousands of miners manually digging.
[00:35:15] Speaker A: Yes. Yeah.
[00:35:16] Speaker C: Digging in the mine.
[00:35:17] Speaker A: Yeah.
[00:35:18] Speaker C: So. So the idea is this, okay. AI is a new mining technology in the same old mine.
[00:35:26] Speaker A: Yeah. Okay, great.
[00:35:26] Speaker C: Okay. So it's the same data that we had before. Now we can bring mining technology.
[00:35:32] Speaker A: Yes.
[00:35:32] Speaker C: Now this is actually a question, you know. Yes, technology like AI can be very powerful, but there are so many miners over so many years already digging that the question is, will it find only the leftovers? Or will it find something that nobody ever found before? The jury is still out on that, because it throws out many new ideas, but actually none of them has been validated so far as a drug. It takes time to actually validate. But the question is, is AI only going to give you value in better mining in the old mine, or can AI create new mines? Because if you could actually find completely different ways to go about it, that means you need to have data that was not available before.
Because if the data was available before, then it's a question of: is AI better than all the manual effort? And by the way, it was not only manual.
Prior to AI, there were many computational technologies, computational chemistry, computational biology. So algorithmic approaches, not neural networks, but algorithmic, that are trying, you know, to work rule by rule.
[00:36:46] Speaker A: Yes, yeah, yeah, yeah, exactly.
[00:36:48] Speaker C: But eventually, in your software, you already captured 9,000 rules.
So can AI find the one that sneaked through? The small fish that sneaked through the.
[00:36:58] Speaker B: Right, okay, the net? Yeah, yeah, yeah.
[00:37:00] Speaker C: Just like us at Anima. We actually became quite good with those rule-based systems. But when we transitioned to neural networks, we didn't bother to actually go back and see whether we discover more now than we discovered before. It's just knowing that you don't need to deal with all that stuff anymore. You have your neural network and it kind of learns at scale, so it's kind of solving the problem that you don't need to write the code.
[00:37:26] Speaker B: Yeah, okay.
[00:37:27] Speaker C: For all these rules. But the question is, can you discover something that nobody else discovered? So what are the pharmas trying to do today? Every big pharma, and we are seeing this all over the place, they've shifted their priorities completely from, let's say, let's try to find yet another thing that manually we couldn't find.
Okay. Into a very big vision. The vision is, let's use AI to build a disease model.
[00:37:59] Speaker B: Welcome, producer Adam.
[00:38:00] Speaker D: Hi, Chris. How are you? This episode really gave me a new perspective on AI for our conversation today. Let's start with Yochi's journey from software to visual biology. Equating software bugs to diseased human cells is a pretty fresh take, don't you think?
[00:38:16] Speaker B: Yeah, certainly to me. I mean, I think, you know, we've seen plenty of examples of people who've had success in digital technology thinking, like, all right, now I'm going to move to this biology thing and go cure cancer or solve some other massive problem. And oftentimes I think those don't translate very well; biology is frustratingly intractable and complicated. So Yochi's view of applying some of the sort of big ideas that he had from tech seems, actually, to me to be a really smart way to think about this. Yeah, I really liked it.
[00:38:42] Speaker D: And do you think that this could have happened without automation and AI?
[00:38:46] Speaker B: No, I really don't. I think it's a little bit of right idea, right team, right timing, that the overall technology needed to get to where it is now for people to be able to do these things. We often see this with successful companies, that it's a combination of those three things. You know, it's a great team, a compelling idea, but also the environment has to be right for it. There's a famous line from some venture capitalist that early is a synonym for wrong. You know, you launch the right company at the wrong time, it may not turn out to be much. So yeah, I think AI is essential here.
[00:39:15] Speaker D: I think that really flows into his concept of syncing visual biology with real-world biology. It transforms AI into a collaborator instead of just being the end-all solution.
[00:39:27] Speaker B: Yeah, I think that's right. And I found that to be a reasonably mind-blowing way to think about the world. You know, that part of your job is to build things that will solve problems for existing large language models. That kind of puts the large language model in the driver's seat, as opposed to thinking of it as a tool you can use to accomplish things. This is more: how do we help the LLM achieve its goals? In a way, I think that is both disturbing and smart as a way to think about what the path forward looks like.
[00:39:55] Speaker D: Yeah, I do agree with that. So one of the things that I thought was really interesting, and it's not something that I've really seen, is when Jochi was talking about how his idea took off before he had even built a product. How often does something like that happen in clinical research?
[00:40:10] Speaker B: I think that's really rare, in all entrepreneurial ventures. Mostly the entrepreneur has to be an evangelist for their new idea, this thing that they want to create. And this is the great gift that entrepreneurs have, of conceptualizing something that doesn't exist yet. And they may not quite know how to get there. In fact, I think often they don't. And they're sort of, you know, getting people excited about it enough to put money and time and effort into helping to make this vision a reality. In this case, you know, he was on television talking about this thing when it was still mostly a PowerPoint presentation. I guess that's pretty crazy. And I think that's a pretty strong signal from the market. You know, entrepreneurs are always trying to find product-market fit, you know, the right product to solve a problem that customers are willing to pay for. And it's hard to think of many stronger signals than people ringing up and saying, when can I get this product, come talk to us about it.
[00:40:58] Speaker D: I agree. I feel like it really drove and inspired every step of his company moving forward, having this beacon in the background.
[00:41:07] Speaker B: Yeah, I think that's right. You know, there is that old line about how you make your own luck. And so certainly I think he got lucky on the timing here, but it was all of his insight and experience and credibility, and the money that he put in up front to get this off the ground, that's essential to getting to where they are now. It's a really fun story.
[00:41:23] Speaker C: Yeah.
[00:41:24] Speaker D: Yeah. Final point. A lot of our guests that I've noticed lately have gone from tech to biotech. Is it a short trip and an easy trip or is there a major learning curve?
[00:41:34] Speaker B: Yeah, I think be very cautious about generalizing off of Yochi's experience. I think mostly it's a really painful trip, but people get there. Some tech entrepreneurs are doing really exciting things, either in biology directly, by starting biotech companies, or by creating tools that leverage AI and live at this intersection between AI and biology. I think that's exciting and wonderful. I think it would be a mistake to think that's an easy transition. And I do think oftentimes people who've been very successful in building tech companies, well, they come to biology for obvious reasons, right? It's a very compelling, mission-driven thing to work on. If you're working on curing disease, that's, I think, a lot more exciting and motivating than some of the other ways you could spend your time. So we get why people are interested in it. And these are very talented, successful folks who think, hey, my skills ought to translate. Many times they do. It's often a trickier path because biology is so frustratingly complex, as we've seen time and time again.
[00:42:29] Speaker D: Good point, Chris. I'm really excited for everyone to get to the second part of this episode as well. There's some great information there about AI and all of the human senses that it's going to encompass moving forward.
[00:42:40] Speaker B: So, yeah, I really thought this conversation was a mindset shifter for me and, I hope, for other listeners. So, yeah, bring on part two.
[00:42:48] Speaker D: All right.
[00:42:49] Speaker B: Thanks, Adam.
Thank you for listening to the latest episode of Few and Far between conversations from the front lines of drug development. Our podcast is now available on Apple Podcasts and other streaming services. Please take a moment and leave us a user review and rating today and it really helps people discover the podcast and we read all the comments. Those comments help us make Few and Far between better and better. Also, be sure to subscribe to Few and Far between so you don't miss a single episode. Got an idea for a future episode? Email us at fewandfarbetweenirroci.com or contact us on our
[email protected] I'm your host, Chris O'Brien. See you next time.
[00:43:32] Speaker A: Sam.