
Christopher Kent, Reveal Surgical - AI-Enabled Optical Detection | LSI Europe '22

Reveal Surgical's Sentry system uses advanced optical techniques such as Raman spectroscopy, along with sophisticated machine learning algorithms, to identify tissue with diffusely infiltrative cancer in vivo, during surgery.
Speakers
Christopher Kent
President & CEO, Reveal Surgical

Transcription

Christopher Kent  0:04  

Hi, thanks. Yes, I'm Christopher Kent, I'm the founding CEO of Reveal Surgical. We're a Montreal-based company that's been going for about six and a half years, and I'm happy to share our story with you. So what we do, in a nutshell, is bring together the power of AI and machine learning and combine that with the real-time optical biopsy platform that we're developing. And this is really to enable more data-driven surgical approaches to oncology, and ultimately deliver better cancer treatment.

This is our product; this is what we've been working on for the last few years. It's our Sentry system. And it's actually a deceptively simple product. What it does is perform in vivo, real-time, non-destructive Raman spectroscopy measurements directly in the patient. This is an endogenous signal; we don't rely on any contrast agents or drugs. And what this does is it allows us to use the molecular signature of tissues to develop fairly complex diagnostic models that we can seamlessly integrate into existing surgical workflows, and into other tech platforms that are present in the OR. But the great thing about that endogenous signal is it opens the door to a whole range of very customizable data analytics. So by using our machine learning models, we can actually provide actionable insights to the surgeons and in turn support safer, more effective procedures, which ultimately increases the clinical value of that surgery. So by having more effective surgeries, we reduce the overall cancer cell burden at the margin. That in turn allows the chemo and the radiation to really do their thing.

What this looks like in the clinic: this is an actual case from one of our early pilot studies. This is a glioblastoma, or sorry, actually a low-grade glioma resection procedure.
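[Editor's note: the talk describes machine learning models that turn a Raman spectrum into a binary Tumor/Normal readout. Reveal Surgical's actual models are proprietary; the following is only a minimal, hypothetical sketch of that kind of pipeline, using a toy nearest-centroid classifier and made-up three-bin "spectra" in place of real Raman data.]

```python
# Illustrative sketch only: a toy nearest-centroid classifier standing in for
# the proprietary models described in the talk. Real Raman spectra span ~1000
# wavenumber bins; here each "spectrum" is a short list of intensities, and
# the output mirrors the binary Tumor/Normal readout the surgeon sees.

def centroid(spectra):
    """Mean intensity at each wavenumber bin across a set of labeled spectra."""
    n = len(spectra)
    return [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]

def classify(spectrum, tumor_centroid, normal_centroid):
    """Return 'Tumor' or 'Normal' by squared Euclidean distance to each centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    if dist2(spectrum, tumor_centroid) < dist2(spectrum, normal_centroid):
        return "Tumor"
    return "Normal"

# Hypothetical biopsy-labeled training measurements (invented values).
tumor_train  = [[0.9, 0.2, 0.7], [0.8, 0.3, 0.6]]
normal_train = [[0.2, 0.8, 0.1], [0.3, 0.7, 0.2]]

t_c = centroid(tumor_train)
n_c = centroid(normal_train)

print(classify([0.85, 0.25, 0.65], t_c, n_c))  # → Tumor
print(classify([0.25, 0.75, 0.15], t_c, n_c))  # → Normal
```

The design point this illustrates is the simplicity of the readout: whatever model sits behind it, the surgeon only ever sees one of two labels per probe touch, in real time.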
And the standard of care is to rely on the preoperative imaging, the MRI here, to identify the contrast-enhancing region and resect as much of that region as possible based on surgical guidance. So you can see the MRI-defined margin outlined there in red. But where we come in is that once that bulk tumor has been debulked, we can use the Sentry system to take a series of measurements along that conventional margin and use that information in real time, directly in the patient, to safely extend that margin out to where all the readings come back as clean. It's a very straightforward readout. So in real time, they touch the tissue, it says Tumor or it says Normal, and they can then act on that information and push that resection out as far as is safe. And in the resulting Raman-defined margin, you now have maybe roughly double the size of the resection. In this particular case, this was an extra eight and a half grams of cancer-infiltrated tissue, none of which showed up on the preoperative imaging. And this patient is now projected to have 20 to 25 years of progression-free survival. And so that has a huge impact in terms of the overall care journey. But it all starts with the surgery.

So there are a lot of other players in the space of margin detection and cancer detection, but we really stand out from the crowd in a number of ways. I think the most critical factor is the fact that we can be used in real time, directly in vivo, in the surgical cavity; we don't require the removal of any tissue. So if the tissue is healthy, it stays intact. And we can very easily integrate into that workflow. We have a pretty versatile product strategy. Right now our main focus is on open surgery indications, with our lead program being in brain tumors. But we're also developing minimally invasive surgical probes, and laparoscopic probes, as well as endoluminal probes.
Each one of these indications has its own value case that's very unique to the indication. But we've spent a lot of time studying these problems, and we understand where we can close that gap with the millimeter-level precision that we're able to provide. The takeaway from this, though, is that collectively, this represents a massive market opportunity, with hundreds of thousands of oncology procedures being performed worldwide every year.

So focusing in on our brain tumor indication: this is our pilot data. These studies were performed at three different study sites and are now largely completed. We acquired data live in the patient, both in the known tumor region, that's the area highlighted in red, but also in that peripheral region. All the data points were, as I said, acquired directly from the patient. And as soon as the Raman measurement is made, we then take a corresponding biopsy, so we're able to compare the biopsy results to the Raman results. And that's how we build up the models over time. We now have thousands of data points from over 150 different patients. And we've shown accuracies ranging anywhere from 81 or 82% all the way up to 96%, depending on the different tumor types, and I'll show you the models in a second.

A few of the other highlights that came out of our pilot studies: we showed that we were significantly more sensitive to cancer than the fluorescent imaging agents that are available, particularly in those areas beyond the traditional margins. And not only are we finding more cancer, we're also removing more cancer. In the limited cases where the surgeon actually acted on the information, they ended up resecting additional tissue in over 50% of the cases. And that's quite important, because it shows that not only is this information valuable, but it's also actionable. This is data that's all now been presented at various meetings over the last year.
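[Editor's note: the pilot design pairs each Raman prediction with a biopsy ground-truth label, and the talk quotes accuracy, sensitivity, and specificity figures. The sketch below shows how those three metrics are conventionally computed from such paired records; the data is invented for illustration and does not reproduce Reveal's pilot numbers.]

```python
# Hedged sketch: computing accuracy, sensitivity, and specificity from paired
# (Raman prediction, biopsy label) records, as in the pilot study design.
# The pairs below are made up; the 81-96% figures in the talk come from
# Reveal Surgical's own datasets.

def confusion_counts(pairs):
    """pairs: list of (predicted, actual), each 'tumor' or 'normal'."""
    tp = sum(1 for p, a in pairs if p == "tumor" and a == "tumor")
    tn = sum(1 for p, a in pairs if p == "normal" and a == "normal")
    fp = sum(1 for p, a in pairs if p == "tumor" and a == "normal")
    fn = sum(1 for p, a in pairs if p == "normal" and a == "tumor")
    return tp, tn, fp, fn

def metrics(pairs):
    tp, tn, fp, fn = confusion_counts(pairs)
    return {
        "accuracy":    (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),  # fraction of true tumor sites detected
        "specificity": tn / (tn + fp),  # fraction of healthy sites correctly cleared
    }

# Hypothetical paired readings: 8 true positives, 2 false negatives,
# 9 true negatives, 1 false positive (20 measurement sites total).
pairs = ([("tumor", "tumor")] * 8 + [("normal", "tumor")] * 2
         + [("normal", "normal")] * 9 + [("tumor", "normal")] * 1)

m = metrics(pairs)
print(m)  # accuracy 0.85, sensitivity 0.8, specificity 0.9
```

This also makes concrete the distinction drawn later in the talk: a model can keep specificity high (few healthy sites flagged) while its sensitivity drops as tumor cells become sparser far from the tumor bed.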
These are the models that I was referring to. You can see there are subtle differences, both in the features and in the profiles of the spectra, as we look across different tumor types. So there's the meningioma at the top, the brain metastases in the middle, and the glioblastomas at the bottom, but very, very high degrees of separability across all the different tumor types. With the glioblastoma, as you can see, we get slightly less performance. And that's not a decrease in specificity, we still stay quite high in terms of specificity, but because of the very diffusely invasive nature of that cancer, we see a decrease in the sensitivity of the models as we get farther and farther away from that traditional tumor bed. And that's to be expected. But that's also where the highest need is. And so that's actually going to be our first indication as we move into the pivotal study.

So this is a Class III device, subject to a PMA, as we are working directly in the brain. But we have been given Breakthrough Device Designation, we were actually grandfathered into the program when the program was launched. And we've now established our pivotal trial design with input from the FDA. So we're on the verge of launching this study. This is really the next step: we're going to be launching in 10 sites across Europe and North America. And it's a two-part study design, the first part being largely observational, where we provide the final validation of the accuracy of the models. And then the second part, should we hit our performance targets in part one, is really geared to demonstrate safety. So in part two, with 60 patients, the surgeons will receive the data in real time and will be able to extend those resections. And then we track those patients against matched controls from part one, to show that those patients are no worse off for having received the intervention.
In terms of our go-to-market strategy: as I mentioned, the great thing about what we're doing is that it's very easy to integrate this directly into existing technologies that are already in the operating room. And so we've pursued strategic partners that already have a footprint in the field of neurosurgery. We've been working with the neurosurgical group at Medtronic; they've provided a tremendous amount of input. And then in our other programs, we have been working in surgical robotics with Intuitive Surgical.

In terms of our product vision: this is our clinical prototype that has been deployed in 11 hospitals to date. We've wrapped up a lot of the pilot studies and the formative testing. Oh, there we go. We've now developed our actual commercial prototype; we've built two of these systems and are moving into design and engineering V&V. This is what will be deployed in the pivotal trial. And then ultimately, we seek to commercialize this as an add-on to navigation within the surgical suite, where the cart is sold as the razor and the probes are sold as the razor blades. Those are directly navigated, and the data can be integrated into the overall digital OR ecosystem, giving the surgeons a really powerful tool to add molecular inputs to the overall anatomical assessments that they're doing in their cases.

In terms of financing, we've raised 6.8 million to date, and we've had another million dollars in non-dilutive funding. We're here raising a 10 million Series A, and we actually already have a strategic lead for 8.5 of that 10. We're looking to extend the round to 12 million, and we're currently in conversations with different players who want to come in alongside our strategic lead. Thank you very much for your time.

 

 
