The Future of Medical Imaging Technology for Surgery
Every entrepreneur who has tried to get a business off the ground knows how important execution is. Someone else can take your exact same idea, execute on it better than you, and put you out of business. One way to keep other people from using your idea is to have some intellectual property (IP) you can use to defend yourself while you execute like mad, and a good place to look for IP is at top research universities like Stanford. (Here are 1,475 examples of technologies you can license from Stanford.) Universities like Stanford help facilitate the transfer of research from the lab to commercialization by promoting partnerships between academics and industry. One such example is the Stanford Center for Image Systems Engineering (SCIEN).
SCIEN is a partnership between the Stanford School of Engineering and technology companies developing imaging systems with the purpose of getting more technology out of Stanford’s research departments and into commercial use cases. This past week, we attended an event hosted by SCIEN where they showcased some of the latest imaging technologies from both the commercial side and the academic side. Following the event, we summarized some of the work being showcased in our article on The Future of Medical Imaging and Machine Learning. Today, we’re going to look at some of the technological advancements they showcased in the area of surgery.
While being a physician may be the most respected of professions, surgeons take that to the next level. For something as incredibly complex as surgery, those who practice it demonstrate their competency by explaining extremely technical concepts in ways that even an MBA can understand. Take for example the notion of “positive and negative margins” in reference to cutting out a cancerous tumor.
Let’s say you have a golf-ball-sized tumor you need to carve out. In order to do the job properly, you need to cut out a layer of healthy tissue 5 millimeters deep that surrounds the golf ball. If all goes well, what you end up carving out will be a golf-ball-shaped tumor covered in a layer of healthy tissue that’s exactly 5 millimeters deep all around. That’s called “at the margin.” Remarkably, if that layer of tissue surrounding the golf ball is reduced to 2 millimeters, survival rates drop by half.
Having less than 5 millimeters of healthy tissue is not desirable, something surgeons refer to as a “positive margin.” Less than 1 millimeter would be considered a “grossly positive margin.” When a surgery results in a positive margin, there’s a cost associated with that. Usually, it involves blasting the area with chemo, which is costly and unpleasant for the patient. Margins matter, and one way to reduce positive margins is by using imaging techniques.
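To make the terminology concrete, here’s a minimal Python sketch that classifies a resection by the thinnest layer of healthy tissue measured around the excised tumor. The function name and thresholds are illustrative only, taken from the numbers above; real margin criteria vary by cancer type.

```python
def classify_margin(clearance_mm: float) -> str:
    """Classify a resection by the thinnest layer of healthy tissue
    (in millimeters) found around the excised tumor.

    Thresholds are the illustrative ones from the text above;
    real clinical criteria vary by cancer type.
    """
    if clearance_mm >= 5.0:
        return "negative margin"  # at or beyond the desired margin
    if clearance_mm < 1.0:
        return "grossly positive margin"
    return "positive margin"
```

So the golf ball with a 2-millimeter layer of healthy tissue, the one where survival rates drop by half, would come back as a “positive margin.”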
Fluorescence Imaging Techniques
Dr. Eben Rosenthal of the Stanford Cancer Center is working on the use of optical imaging to improve cancer surgeries, and he talked about the most commonly asked question by patients after an operation – “Did you get all the cancer?” The answer is most commonly, “I removed everything I could see.” In many cases, that’s not enough. Despite the advancements made in technology, the incidence of positive margins hasn’t changed over the years.
One technology that’s being used to solve this problem involves injecting a molecular agent that literally lights up the tumor so that it appears fluorescent. The optical properties of these dyes give you 5 millimeters of tissue penetration so you can cut until you see fluorescence, then you know you’re right at the margin. They’ve already entered into IND-enabled human clinical trials for real-time cancer detection. While this technique involves “labeling” through injection, there are also technologies that don’t even require injecting agents into the body. In either case, the end result is the ability to see cancer.
While many people think of surgery as cutting someone open and having a look around, a large number of surgeries take place without actually having to cut people open. A procedure called an “endoscopy” lets surgeons enter your body through one of its natural openings to have a look around or even perform surgical procedures.
An endoscope is a slender tube with a light and camera that allows a surgeon to see inside various parts of the body and even perform surgical procedures. A technique called Fluorescence Lifetime Imaging Microscopy (FLIM) enables tissue fluorescence without the use of molecular agents by using laser pulses to excite the tissue’s natural fluorescent properties. The laser pulses are delivered through fiber optics, which can be built into standalone probes or endoscopes. Surgeons can use FLIM in real-time to distinguish cancerous tissue from healthy tissue, or to guide robots like the da Vinci from Intuitive Surgical. One common application for endoscopic surgery is bladder cancer.
The majority of bladder cancer surgeries are performed using endoscopes. Bladder cancer is the sixth most common cancer and the most expensive cancer to treat from diagnosis to death. One reason it’s so expensive to treat is its high recurrence rate of 50-70%. In order to reduce recurrence, we need to figure out better ways of making sure we get all the tumors out of the bladder.
One way to improve the outcome of surgeries is by knowing in advance where the tumors are located. A technique called “confocal laser endomicroscopy” equips an endoscope with microscopic capabilities, allowing for a much closer look. The amazing ability to examine the inside of a human body using a microscope is also made possible by fluorescent agents, some of which are becoming increasingly sophisticated.
Another workshop presenter was Dr. Suehyun Cho, who is working at a startup called Bionaut Labs to develop a technology that binds a fluorescent molecule to a nanoparticle which can locate and attach to cancer cells. The nanoparticle fluoresces and heats up when illuminated by near-infrared light, letting you see the cancer cells and kill them at the same time. The bladder is a great place to start because you can treat it like a closed system (whatever you put in the bladder stays in the bladder). Joseph Liao, a urologist who was at the workshop, talked about how the holy grail is a system that both identifies (marks) and destroys cancer in one blow. That sort of technology falls under a field of work called “theranostics.”
An emerging area of study that Dr. Rosenthal’s lab has been working on is called theranostics – a combination of therapy and diagnostics – which will bring about a new form of precision medicine. For example, doctors may use one drug to identify cancerous tissue and another drug to then target the tissue that’s been identified as cancerous. His team is proposing the use of CD47 as a theranostic target which can provide automated tumor detection with AI algorithms identifying the cancer in real-time and guiding the surgeon.
In order to train the algorithms, the team took a collection of endoscopy videos, marked out the tumors, then fed them to a neural network. They were then able to demonstrate how cancerous tissues can be identified in real-time by AI algorithms and applied the technique to videos taken from 81 different patients. This ability to identify cancer in real-time is a form of augmented reality that’s also being used in traditional microscopy.
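To give a flavor of how such a training set might be prepared, here’s a hedged Python sketch of one detail that matters when training on video frames: splitting the data by patient rather than by frame, so that near-identical frames from the same endoscopy video don’t leak into both the training and validation sets. The function and data layout are our own illustration, not the actual pipeline used by Dr. Rosenthal’s team.

```python
import random

def patient_level_split(frames, val_fraction=0.2, seed=0):
    """Split annotated video frames into train/validation sets by
    *patient*, not by frame. Frames from the same endoscopy video are
    highly correlated, so a naive frame-level split would leak
    information and inflate validation scores.

    `frames` is a list of (patient_id, frame, mask) tuples; the frame
    and mask payloads are opaque here (they'd be image arrays in a
    real pipeline).
    """
    patients = sorted({pid for pid, _, _ in frames})
    rng = random.Random(seed)          # fixed seed for reproducibility
    rng.shuffle(patients)
    n_val = max(1, int(len(patients) * val_fraction))
    val_ids = set(patients[:n_val])
    train = [f for f in frames if f[0] not in val_ids]
    val = [f for f in frames if f[0] in val_ids]
    return train, val
```

With 81 patients and a 20% validation fraction, roughly 16 patients’ videos would be held out entirely from training.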
Augmented Reality (AR) and Virtual Reality (VR)
AR for Microscopes
Also present at the workshop was Craig Mermel, Product Lead for Pathology at Google AI, who talked about the work they’re doing with subsidiary Verily to create an augmented reality microscope that can detect cancerous tissues in samples of tissue extracted from tumors (also called biopsies). The example they gave was prostate cancer, which relies on something called a Gleason score to help evaluate the prognosis of men with prostate cancer and decide which therapies to use.
By integrating AI algorithms into the microscope’s camera, they’re able to overlay microscopic images with annotations that depict cancerous tissues. The tool has proved to be better than the human eye – 91% sensitivity vs. 73% – and the technology was CE marked in 2018. Since most high-end microscopes already have cameras, it’s believed that 90% of standard clinical microscopes could support this technology as an add-on.
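For readers wondering what “sensitivity” means here: it’s the fraction of actual cancers that get flagged. A tiny Python illustration, where the counts below are hypothetical and chosen only to reproduce the percentages quoted above:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (a.k.a. recall): the fraction of actual positive
    cases that were correctly flagged as positive."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts over 100 cancerous samples, chosen only to
# match the percentages quoted in the text:
algorithm = sensitivity(91, 9)       # the AI misses 9 of 100 cancers
human_eye = sensitivity(73, 27)      # a human misses 27 of 100
```

Note that sensitivity says nothing about false alarms; a full evaluation would also report specificity, which is one reason tools like this assist pathologists rather than replace them.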
Skull Surgeries and AR
Another presenter at the workshop was Dr. Jason Chan who flew all the way in from Hong Kong to talk about how we might be able to advance head and neck surgery using AR. Surgeries that take place within the skull often use “bony landmarks” for guidance. In order to register the patient’s anatomy for these bony landmarks, various methods are used which usually involve waving a wand over the patient’s head for a reading. The process can take a while and it’s a frequent pain-point for surgeons. They’re a necessary evil though, because these “patient registrations” provide a roadmap for the surgeon who needs to navigate inside someone’s head with an endoscope to cut out a tumor while being wary of not snipping nerves and big blood vessels.
Using AR, Dr. Chan is now able to overlay the endoscopic video with actual depictions of nerves and blood vessels.
While promising, such a technology faces challenges. Someone raised the question of how surgeons might feel when seeing these images, and to what extent showing an artery in neon pink is more annoying than helpful. Do these overlays need to be rendered as lifelike images before they become truly useful? The ability to switch the overlays on or off is needed, since not all surgeons will want to use them. In order to convince surgeons that the technology is useful, they’ll need to show a positive patient outcome. This work takes time.
Preoperative Preparation with VR
Another person who spoke about using AR and VR for planning and training was Dr. Raphael Guzman, who suggested that surgeons might give patients VR glasses to preview what the surgery will entail, making them feel more comfortable going into the procedure. Surgeons, meanwhile, can walk around the image in virtual reality and view it from all angles, which makes it easier to interpret the actual images they see during surgery. Dr. Guzman’s work shows that medical residents become much more comfortable after viewing images in VR. One prerequisite for viewing medical imagery this way is something called “medical image segmentation.”
In order to view medical images in VR or otherwise, a process called “medical image segmentation” is used to carve up what’s being displayed so that the viewer can easily differentiate what they’re looking at. Here’s an example.
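Here’s a toy Python sketch of what segmentation produces: a class label for every pixel. Real medical image segmentation uses learned models rather than fixed intensity thresholds; this illustration only shows the output format.

```python
def segment(image, thresholds):
    """Toy segmentation: assign each pixel a class label based on
    which intensity band it falls into. Real medical image
    segmentation uses learned models rather than fixed thresholds;
    this only illustrates the output format (one label per pixel)."""
    def label(value):
        for cls, cutoff in enumerate(thresholds):
            if value < cutoff:
                return cls
        return len(thresholds)
    return [[label(v) for v in row] for row in image]

# A 2x2 "image" split into three intensity classes (0, 1, 2):
labels = segment([[10, 200], [90, 150]], thresholds=[50, 128])
# labels == [[0, 2], [1, 2]]
```

A viewer would then render each label as its own color, which is what produces those nicely colored anatomical sections.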
Carving an image up into nicely colored sections this way is challenging because our bodies are all so different. On multiple occasions during the workshop, people lamented the fact that no technology solution presently exists for “automated medical image segmentation,” something that’s solvable with machine learning but apparently nobody has solved yet. To the MBA who reads this article and runs with that idea, you’re welcome.
Mixed Reality Training
Case Western Reserve University was also at the workshop to show off its use of AR to train students at its new health education campus, which opted to forego the cadaver lab in favor of teaching anatomy using digital methods. The university was the first Microsoft partner to use HoloLens, and its product HoloAnatomy has now been deployed for about a year. Studies showed that the training is equally effective whether you use a cadaver or HoloLens. However, AR delivers about a 60% time savings, and teachers can also quickly figure out who attended class and who didn’t.
The goal of using mixed reality for training is to shorten the time it takes to go from beginner to expert. Pilots have been using simulators to accomplish this for decades, so why have surgeons only recently begun looking at simulations for learning? In a simulation, you are immersed in the experience, so your brain makes new connections as it learns. Medical residents learn much more effectively this way, something that’s also been observed with pilots. It’s all about combining preoperative techniques with intraoperative techniques to improve outcomes. As with all the technologies we’ve discussed in this article, the goal is to level the playing field between a beginner surgeon and a great surgeon.
Precise diagnosis is a prerequisite to precision medicine. Every cancer is different, and we can’t take a one-size-fits-all approach when it comes to how tumors are extracted. No longer is it appropriate to spray-and-pray with chemo because the surgeon had a positive margin for whatever reason. The imaging technologies we’ve looked at extend well beyond using AI to look at X-rays and now include the use of entirely new approaches like theranostics or augmented reality cancer detection which are now possible, thanks to advancements in technology. The mythical “cure for cancer” will eventually manifest itself as a combination of both early detection and precision medicine.