The Automation and Interventional Medicine Robotics Research Laboratory (AIM Lab) at WPI was founded in 2008 to enhance healthcare through smart medical robotic systems. The lab supports a wide array of undergraduate and graduate student projects related to healthcare cyber-physical systems. The AIM Lab’s core capabilities range from low-level embedded system hardware and software development, to robot design and fabrication, to high-level visualization and control software. A primary focus area is image-guided intervention, in which intraoperative medical imaging (primarily MRI) provides the feedback for “closed loop” surgical interventions.
The AIM Lab has strong collaborations with several clinical sites including UMass Medical School, Albany Medical Center, and Brigham and Women’s Hospital. The AIM Lab is also affiliated with PracticePoint at WPI, an alliance of academic, industry, and healthcare partners formed to advance innovative medical technologies. This unique facility embeds clinical suites in an engineering environment, co-locating advanced manufacturing and test equipment with four point-of-practice healthcare settings: surgical imaging, rehabilitative care, controlled care, and residential suites.
Our surgical robots operate inside the bore of an MRI machine. The MRI both injects noise into electrical sensors and is itself sensitive to electrical noise, and its intense, fast-changing magnetic fields demand careful design of any system that operates in or near it. Because the scanner images in a narrow megahertz-range frequency band, the drive systems must produce clean signals that do not overlap that band. Furthermore, a surgical robot requires tight coordination among all its motion axes so that each path is followed exactly as the surgeon intends.
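The frequency-band constraint above can be illustrated with a short sketch. This is not the AIM Lab's actual tooling; the band values and the check itself are assumptions for illustration (a 3 T scanner images near the proton Larmor frequency of roughly 127.7 MHz):

```python
# Illustrative check: verify that a candidate motor drive frequency and its
# harmonics stay outside the scanner's imaging band. All numeric values here
# are assumptions, not measured system parameters.

def harmonics_clear(drive_hz: float, band_center_hz: float,
                    band_halfwidth_hz: float, n_harmonics: int = 100) -> bool:
    """Return True if no harmonic k*drive_hz (k = 1..n) falls in the band."""
    for k in range(1, n_harmonics + 1):
        if abs(k * drive_hz - band_center_hz) <= band_halfwidth_hz:
            return False
    return True

# A ~40 kHz piezoelectric drive: its first 100 harmonics top out at 4 MHz,
# far below a 127.7 MHz imaging band.
print(harmonics_clear(40e3, 127.7e6, 250e3))
```

In practice such an analysis would also account for switching noise and cable coupling, but the sketch captures the core requirement: keep drive energy out of the imaging band.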
Our first prototypes used a system on module (SOM) that contained the Xilinx Zynq-7030 system on a chip and met most of our needs. However, the lack of an integrated programming environment significantly increased prototyping time. As a result, we redesigned our system around the NI CompactRIO SOM, which is programmed with the developer-friendly graphical FPGA language of NI LabVIEW and exports a C API in easy-to-use header files. This enabled faster software development, so our team could focus on our core value proposition instead of infrastructure work. An established development environment and commercial off-the-shelf (COTS) hardware also enhance our ability to validate the system as we move toward scale-up and commercialization.
How It Works
The control system is composed of five different types of boards: backplane, daughtercard, power input, power distribution, and breakout (Figure 1). This architecture ensures we meet the system’s modularity requirement while maintaining all the required safety features of a medical device.
The backplane connects the entire system and provides a gateway to external control. It is the largest circuit board in the system. The backplane interfaces the CompactRIO SOM to each of the up to 10 daughtercards through an individual set of LVDS SPI lines, a heartbeat, a card detect line, and a card reset line. Gigabit Ethernet over a fiber-optic connection serves as the primary link between the backplane and an external host (Figure 4). The CompactRIO SOM runs the NI Linux Real-Time OS, on which our custom control software runs. A LabVIEW implementation of hardware SPI blocks and a packet parser for each daughtercard connection offloads processing from the ARM core. The use of the LabVIEW FPGA Compile Cloud Service reduced our FPGA image creation time by approximately 25 percent.
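To make the per-daughtercard packet parsing concrete, here is a hedged sketch of framing and parsing a command packet. The real parser is implemented in LabVIEW FPGA logic, and this field layout (address, opcode, length, payload, XOR checksum) is an invented example, not the system's actual wire format:

```python
# Hypothetical daughtercard command frame: the field layout and checksum
# scheme are assumptions for illustration only.
import struct

def encode_packet(addr: int, opcode: int, payload: bytes) -> bytes:
    header = struct.pack(">BBB", addr, opcode, len(payload))
    body = header + payload
    checksum = 0
    for b in body:          # simple XOR checksum over header and payload
        checksum ^= b
    return body + bytes([checksum])

def parse_packet(frame: bytes):
    """Return (addr, opcode, payload), or raise ValueError on a bad frame."""
    if len(frame) < 4:
        raise ValueError("frame too short")
    addr, opcode, length = struct.unpack(">BBB", frame[:3])
    payload = frame[3:3 + length]
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    if len(payload) != length or checksum != frame[-1]:
        raise ValueError("corrupt frame")
    return addr, opcode, payload

frame = encode_packet(addr=3, opcode=0x10, payload=b"\x01\x02")
print(parse_packet(frame))
```

Parsing in the FPGA fabric rather than on the ARM core keeps the real-time loop deterministic; this sketch only shows the kind of validation (length and checksum checks) such a parser performs.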
Each daughtercard attaches to the backplane via PCI Express-style connectors and handles the low-level control of one robot axis. This includes motor actuation, sensing, and encoder-counting functionalities. The number and type of daughtercards can vary depending on the system.
The power input board receives raw power from the power supplies, measures current usage, and switches on and off power for the motor supply rails. This board is a key member of the safety chain. It analyzes the state of each of the heartbeat signals, detects which slots have daughtercards physically present, emergency stops the system, and independently determines whether the switched rails should be powered.
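The interlock decision described above can be summarized in a few lines. This is a hedged sketch of the logic, not the board's actual implementation (which runs in hardware, independently of the software stack); the signal names are assumptions:

```python
# Sketch of the power input board's interlock decision: power the switched
# motor rails only when the e-stop chain is clear and every physically
# present daughtercard has a live heartbeat. Signal names are invented.

def rails_enabled(estop_clear: bool, slots: list) -> bool:
    """slots: one (card_present, heartbeat_ok) pair per backplane slot."""
    if not estop_clear:
        return False
    # Empty slots are ignored; any present card with a dead heartbeat
    # forces the rails off.
    return all(heartbeat_ok for present, heartbeat_ok in slots if present)

slots = [(True, True), (False, False), (True, True)]
print(rails_enabled(True, slots))
```

The key design point survives the simplification: the decision depends only on hardware-observable signals, so the rails can be cut even if the higher-level software misbehaves.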
The power distribution board provides voltage rails to the daughtercards. The robot-dependent breakout board connects one or more daughtercards to the cable that connects to the robot inside the MRI.
This motion control system is the first to be modular enough to work with multiple MRI surgical robots; operators simply replace the daughtercards that drive each axis to match the motor and sensor types of that specific robot. The control system is currently used as a controller for two different MRI-compatible surgical robots: the NeuroRobot (in preclinical trials) to ablate deep brain tumors and the ProstateRobot (a previous variation was used in a clinical trial) to conduct targeted biopsies. See figures 2 and 3, respectively.
Stereotactic neurosurgery is a form of minimally invasive surgery that uses a 3D coordinate frame to target locations inside the brain through a single burr hole. However, the requirement to use preoperative images for surgical planning can lead to errors of up to 20 mm due to brain shift when the actual procedure takes place. The NeuroRobot addresses this issue by remaining in the bore with the patient during MR imaging and aligning itself with the target locations based on interactively updated intraoperative image feedback. The robot has a total of 7 degrees of freedom including insertion and probe rotation. We are experimenting with using the robot to ablate brain tumors using interstitial needle-based high-intensity focused ultrasound transducers; it can also be used for other procedures such as biopsy, electrical stimulation, gene therapy delivery, and brachytherapy, which involves implanting small radioactive seeds near or inside a tumor.
Accurate prostate biopsies are an important step towards the diagnosis of prostate cancer, which is the second leading cause of cancer-related deaths among men in the United States. Although intraoperative imaging is sometimes used in clinical prostate biopsies, manual open-loop insertions suffer from decreased targeting accuracy due to unmodeled needle deflection and target shift. The use of real-time MRI alongside a robot for closed-loop active compensation to steer the needle towards the target can help address this issue. We are also experimenting with virtual fixturing as a way to program the robot to guide the needle around sensitive structures. This adds an extra layer of safety to the procedure.
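The virtual fixturing idea can be sketched in its simplest form: a forbidden region near a sensitive structure, and a filter that cancels any commanded motion into it. The 2D planar geometry and the function below are assumptions for illustration; the clinical implementation is not described here:

```python
# Minimal forbidden-region virtual fixture sketch: the half-plane
# x > boundary_x stands in for a sensitive structure. Geometry and names
# are invented for illustration.

def apply_fixture(v, p, boundary_x):
    """v = (vx, vy) commanded tip velocity; p = (x, y) tip position.
    Cancel only the velocity component that drives the tip into the
    forbidden region, leaving motion along or away from it untouched."""
    vx, vy = v
    x, _ = p
    if x >= boundary_x and vx > 0:   # at the boundary, moving inward
        vx = 0.0
    return (vx, vy)

# At the boundary, the inward component is removed but lateral motion passes.
print(apply_fixture((1.0, 0.5), (10.0, 0.0), boundary_x=10.0))
```

A real fixture would use the patient-specific anatomy from the intraoperative images as the constraint surface, but the safety principle is the same: the constraint is enforced in the controller regardless of the input command.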
The strong magnetic fields in the MRI do not allow for the use of common electrical actuators such as DC motors. Instead, these robots use piezoelectric resonant motors. These actuators have a thin lead zirconate titanate (PZT) ring bonded to a copper stator; when the ring vibrates at resonance, it creates a traveling wave that rotates the rotor through frictional coupling. To control these motors properly, the control system must identify the resonance point and tune the drive frequency around it based on the desired velocity.
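One common way to identify the resonance point is to hill-climb on the measured response while sweeping the drive frequency. The sketch below uses that generic approach with an invented response model and step sizes; it is not the lab's actual tuning algorithm:

```python
# Resonance identification by hill climbing on a measured response curve.
# The Lorentzian model and all numeric parameters are assumptions.

def motor_response(f_hz, f_res=41_500.0, width=400.0):
    """Stand-in for a measured velocity/amplitude curve peaked at f_res."""
    return 1.0 / (1.0 + ((f_hz - f_res) / width) ** 2)

def find_resonance(measure, f_start, step=50.0, min_step=1.0):
    f = f_start
    while step >= min_step:
        # Step toward whichever neighbor gives a larger response.
        best = max((f - step, f, f + step), key=measure)
        if best == f:
            step /= 2.0       # converged at this resolution; refine
        f = best
    return f

f_res = find_resonance(motor_response, f_start=40_000.0)
```

Once the resonance is located, the controller can set the drive frequency at an offset from it to command the desired velocity, re-tuning periodically as the resonance drifts with temperature and load.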
We are analyzing a 30-patient clinical trial with the prostate biopsy robot in assistive mode during which the robot directed the needle and the surgeon inserted it. Following the clinical trials of robot-assisted biopsy, a cooperatively controlled needle driver was developed to mount in place of the manual needle guide. This system accepts a user force input to create an insertion velocity alongside closed-loop autonomous rotational bevel-tip positioning. Validation studies of this system using tissue phantoms inside the MR bore are under way to prepare for preclinical trials. Animal trials for the NeuroRobot are currently under way to validate the overall system prior to human testing.
We will continually improve the control system. Through the use of NI’s integrated development environment, we can focus on adding and improving features without spending time on infrastructure work.
The registered trademark Linux® is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Worcester Polytechnic Institute