Helping brain surgeons practice with real-time simulation
In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi Seo and Takeo Igarashi state that “Human anatomy is so complex that just visualizing it in traditional ways is insufficient for easy understanding…” To address this problem, Seo has proposed a practical approach to brain surgery using real-time rendering with Unreal Engine.
Now Seo and his team have taken this concept a step further with their 2019 paper Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an application prototype for viewing and manipulating a CG representation of a patient’s brain in real time. As part of the User Interface Research Group, Igarashi Laboratory, Graduate School of Information Science and Technology at The University of Tokyo, Seo and his team are working on a real-time visualization and training application for brain surgery that more accurately portrays the brain’s structure and how it deforms during surgery. The software prototype, made possible with a grant (Grant Number JP18he1602001) from Japan Agency for Medical Research and Development (AMED), helps surgeons visualize a patient’s unique brain structure before, during, and after an operation.
Addressing the challenges of aneurysm surgery
A brain aneurysm, or cerebral aneurysm, is a balloon-like bulge on a brain artery, present in about 3% of the adult population worldwide. An aneurysm can rupture and bleed internally, causing death in about 40% of cases and permanent neurological damage in about 66% of survivors. A ruptured aneurysm is also the most common cause of subarachnoid hemorrhage, a type of stroke.
One of the most effective treatments for an aneurysm is clipping, where a surgeon places a small clip across the neck of the bulge. Clipping prevents further blood flow to the aneurysm and effectively holds the artery closed.
Any clipping procedure involves creating an opening in the skull and entering the brain through at least one sulcus (groove). In the transsylvian approach, the surgeon pulls open the Sylvian fissure, a deep sulcus between the frontal lobe and the temporal lobe of the brain.
Within the Sylvian fissure are several blood vessels connected across the frontal and temporal lobes. To safely open the Sylvian fissure during surgery, neurosurgeons must pull each vessel aside to its dominant region. Choosing the correct direction for each vessel is important, as failure to do so could destabilize the blood vessel or cause a hemorrhage.
When the surgeon can see these vessels directly, making this determination is straightforward. During surgery, however, the visible area is very limited; only partial segments of the blood vessels are visible.
“Neurosurgeons all over the world performing aneurysm surgery want some kind of pre-surgical simulation, practice, or check, because the actual surgical view is very limited and the surgery itself is very difficult,” says Seo. “They also know that the dominant region of each blood vessel is easily predictable if they can see the whole brain and the blood vessels. So many neurosurgeons have wanted to use 3D CG for a long time, but they don’t know how to implement it.”
Creating an app
About two years ago, Seo’s Igarashi Laboratory was asked to collaborate with the Department of Neurosurgery at The University of Tokyo Hospital to develop a CG tool to help surgeons visualize the transsylvian approach in real time, and as realistically as possible.
In their aforementioned paper Real-Time Virtual Brain Aneurysm Clipping Surgery, Seo and his fellow authors propose creating a deformable CG brain from patient data, with algorithms that automatically determine the dominant region for each blood vessel. The model includes automatically synthesized virtual trabeculae (strands of connective tissue) to represent the thin strings that connect the brain and vessels. In the application, the user can “pull” on the brain and the blood vessels to deform and open them at the sulcus, with the visuals updating in real time to show the result.
With a real-time 3D visualization, the surgeon can load a model of a patient’s brain from the individual’s MRI and 3D Rotational Angiography (3DRA) data, look at it from any angle, pull apart the CG sulcus to see inside, and even make the lobes invisible to better see the blood vessels. The user controls everything with simple mouse movements or multi-touch gestures, making the app easy and accessible for surgeons without technical experience.
In developing the application, Seo’s team chose Unreal Engine as the underlying real-time technology because of its graphics and programming tools. “Unreal Engine has powerful mathematical C++ APIs such as FVector, FMath, and UKismetMathLibrary, so we find it to be a suitable platform for research on 3D CG geometry,” says Seo.
Speed was also a factor, given the need for a very fast physics simulation. The real-time app Seo’s team developed runs at 40-50 frames per second, a level of performance the medical industry is unaccustomed to. “Real-time deformation of the brain is a big surprise to people discovering our applications,” says Seo. “The beautiful rendering quality is also very new to the medical field.”
Epic Games is pleased to support this innovative use of real-time rendering. As also evidenced in virtual reality orthopedic surgery, the ability to realistically portray anatomy in real time gives the medical community enhanced methods to train surgeons not only in the practical aspects of surgery, but also in the decision-making process it inevitably includes.
Interested in finding out how you could use Unreal Engine for medical simulation? Get in touch and we’ll be glad to start that conversation.