Ultrasound-guided spinal navigation

Spinal fusion surgery is a common procedure to treat spinal instability when medication and physical therapy fail. Over the last two decades, the number of annual spinal fusion procedures has increased significantly, with over 413,000 interventions reported in the United States. The surgery consists of rigidly fusing multiple vertebrae using rods and bone grafts to stabilize the spinal column. The rods are fixed to each vertebra using screws implanted within the vertebral pedicles. In open surgery, the posterior part of the vertebra is exposed and the surgeon uses image-guided surgery (IGS) to align the screw trajectory through unexposed anatomy. The current IGS procedure is based on intra-operative 2D fluoroscopy or 3D computed tomography (CT) imaging, which increases operating time, interrupts the surgical workflow and exposes the patient and the operating room personnel to harmful radiation. In this research, we investigate a radiation-free alternative using intra-operative ultrasound (iUS) imaging for spinal navigation. The objectives are:
  • to address the problem of patient-to-preoperative image alignment during spine surgery;
  • to build open-source software that provides the basic functionality needed for pedicle screw navigation;
  • to evaluate the solution in a clinical environment.
Publications:

Augmented Reality for Brain Tumour Surgery

Augmented reality (AR) visualization in image-guided neurosurgery (IGNS) allows a surgeon to see rendered preoperative medical datasets (e.g. MRI/CT) from a navigation system merged with the surgical field of view. Combining the real surgical scene with virtual anatomical models into a comprehensive visualization has the potential to reduce the cognitive burden on the surgeon by removing the need to map preoperative images and surgical plans from the navigation system to the patient. Furthermore, it allows the surgeon to see beyond the visible surface of the patient, directly at the anatomy of interest, which may not be readily visible.

 


Figure: Augmented reality visualizations from our neuronavigation system. The surgeon used AR for craniotomy planning on the skin (A), the bone (B), the dura (C), and also after the craniotomy on the cortex (D). In A, the orange arrow indicates the posterior boundary of the tumour and the blue arrow indicates the planned posterior boundary of the craniotomy that will allow access to the tumour. The yellow arrow shows the medial extent of the tumour, which is also the planned craniotomy margin. In B, the surgeon uses the augmented reality view to trace around the tumour in order to determine the size of the bone flap to be removed. In C, AR is used prior to the opening of the dura and in D the tumour is visualized on the cortex prior to its resection.

Video

Publications

  1. M. Kersten-Oertel, I. J. Gerard, S. Drouin, J. A. Hall, D. L. Collins. “Intraoperative Craniotomy Planning for Brain Tumour Surgery using Augmented Reality”, to be presented at CARS 2016.
  2. I. J. Gerard, M. Kersten-Oertel, S. Drouin, J. A. Hall, K. Petrecca, D. De Nigris, T. Arbel and D. L. Collins. (2016) “Improving Patient Specific Neurosurgical Models with Intraoperative Ultrasound and Augmented Reality Visualizations in a Neuronavigation Environment,” in 4th Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging, LNCS 9401, pp. 1–8. (Best Paper)
  3. Kersten-Oertel, M., Gerard, I. J., Drouin, S., Mok, K., Petrecca, K., & Collins, D. L. (2015) Augmented Reality for Brain Tumour Resections. Int J CARS, 10(1):S260.

Spinal Surgery

Each year more than 20,000 Canadians are treated surgically for lower back pain. Image-guided surgery (IGS) techniques can reduce the rate of complications associated with conventional surgical techniques, which can be as high as 20–30%. Intraoperative imaging can further reduce risk by improving the accuracy and precision of the placement of pedicle screws used for lumbar fixation. This project will investigate the use of intra-operative ultrasound as an inexpensive alternative to intraoperative CT or MRI. We will develop techniques to precisely identify the bony surface of the vertebrae in both CT and ultrasound and use this information to improve the patient-image registration required for image-guided spine surgery. This in turn will improve guidance accuracy and screw-placement precision, resulting in better care for the patient.
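
To make the registration step concrete, the sketch below rigidly aligns bone-surface points extracted from intraoperative ultrasound to a bone surface segmented from CT using an iterative-closest-point (ICP) scheme. This is a minimal illustration, not the project's actual method: the surface-extraction step is assumed to have been done already, and the names `us_points` and `ct_points` are hypothetical.

```python
# Minimal ICP sketch: rigidly align US bone-surface points to a CT bone surface.
# Assumes bone-surface points have already been extracted from both modalities.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping paired points src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def register_us_to_ct(us_points, ct_points, n_iter=50):
    """Iterative closest point: returns the patient(US)-to-CT rigid mapping."""
    tree = cKDTree(ct_points)
    R, t = np.eye(3), np.zeros(3)
    moved = us_points.copy()
    for _ in range(n_iter):
        _, idx = tree.query(moved)                 # nearest CT surface point for each US point
        R, t = best_rigid_transform(us_points, ct_points[idx])
        moved = us_points @ R.T + t
    return R, t
```

In practice the project's methods work slice-to-volume and must cope with partial, noisy ultrasound surfaces, but the same rigid-alignment core applies.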

We have tested the hypothesis that intra-operative ultrasound is a viable, precise and clinically relevant means of improving precision and reducing operating time in image-guided surgery of the spine. Our specific aims were:

  • To develop an automated model-based method to segment vertebrae of the human spine from 3D computed tomography data and ultrasound data.
  • To develop an automated slice-to-volume registration method using intraoperative 2D ultrasound (US) to align a patient’s vertebra to pre-operative CT images to improve the accuracy of image guided surgery and to reduce the time required for registration.
  • To validate and characterize the accuracy of the registration and segmentation methods in vitro using human plastic spine models, swine models and human cadaver specimens.
  • To evaluate the precision and speed of the ultrasound-based registration method in vivo with patients in the context of lumbo-sacral pedicle screw implantation with respect to landmark-based registration in a commercial navigation system.

Increasing the accuracy of the patient-image registration enables the surgeon to improve pedicle screw positioning and thus decrease risks to the spinal cord, nerve roots and blood vessels. Improved precision can also increase instrumentation strength, preventing loosening of misplaced hardware. When used with an anterior approach, the IGS system can facilitate removal of disks or tumors, or positioning of artificial disks. By decreasing the time required for the registration procedure, overall operating time will be reduced, implying shorter muscle retraction times with the potential for reduced post-operative pain and reduced risk of infection for the patient. For the clinical team, shorter operating times will reduce fatigue, and may enable completion of more complex procedures with greater assurance. This work was supported by a CIHR operating grant (PI: Collins, co-PI: Goulet).

References

[1] Fonov VS, Le Troter A, Taso M, De Leener B, Lévêque G, Benhamou M, Sdika M, Benali H, Pradat PF, Collins DL, Callot V, Cohen-Adad J. Framework for integrated MRI average of the spinal cord white and gray matter: The MNI-Poly-AMU template. Neuroimage. 2014 Sep 7;102(Pt 2):817–827

[2] G Forestier, F Lalys, DL Collins, J. Meixensberger, S Wassef, T Neumuth, B Goulet, L Riffaud, P Jannin. Multi-site study of surgical practice in neurosurgery based on Surgical Process Models, Journal of Biomedical Informatics, 46(5), October 2013, Pages 822–829

[3] Yan CX, Goulet B, Chen SJ, Tampieri D, Collins DL. Validation of automated ultrasound-CT registration of vertebrae. Int J Comput Assist Radiol Surg. 2012 Jul;7(4):601–10

[4] Yan, C. X., Goulet, B., Tampieri, D., & Collins, D. L. (2012). Ultrasound-CT registration of vertebrae without reconstruction. International journal of computer assisted radiology and surgery, 7(6), 901–909.

[5] C.X.B. Yan, B. Goulet, J. Pelletier, S.J.S. Chen, D. Tampieri and D.L. Collins, “Towards Accurate, Robust and Practical Ultrasound-CT Registration of Vertebrae for Image-Guided Spine Surgery,” International Journal of Computer Assisted Radiology and Surgery, 2011 Jul;6(4):523–37.

Brain Shift

Since the introduction of the first intraoperative frameless stereotactic navigation device, image guided neurosurgery has become an essential tool for many neurosurgical procedures due to its ability to minimize surgical trauma by allowing for the precise localization of surgical targets. The integration of preoperative image information into a comprehensive patient-specific model enables surgeons to preoperatively evaluate the risks involved and define the most appropriate surgical strategy. Perhaps more importantly, such systems enable surgery of previously inoperable cases by helping to locate safe surgical corridors through IGNS-identified non-critical areas.

For intraoperative use, neuronavigation systems must relate the physical location of the patient to the preoperative models through a patient-to-image mapping. Throughout the intervention, hardware movement, an imperfect initial patient-image mapping, and movement of brain tissue during surgery invalidate this mapping. These sources of inaccuracy, collectively described as ‘brain shift’, reduce the effectiveness of using preoperative patient-specific models intraoperatively. Intraoperative imaging, such as MRI, has been shown to improve the accuracy of tumour resections by extending reliable image guidance further into the procedure. However, such technology is extremely expensive, prolongs surgery, poses logistical challenges during awake surgeries, and is available in only a few centres worldwide. We have developed a neuronavigation platform (IBIS Neuronav) that tracks tissue deformation during surgery using tracked intraoperative ultrasound (iUS) and can accurately align all pre-operative data to the iUS to account for brain shift throughout a surgical intervention.
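
The geometric core of this correction can be pictured as a chain of homogeneous transforms. The sketch below is a simplification, not the IBIS implementation: the matrix names and the composition convention are assumptions chosen for illustration of how a rigid update estimated from MRI-to-iUS registration could be folded into the patient-to-image mapping.

```python
# Simplified transform chain for brain-shift correction (illustrative only).
import numpy as np

# Assumed 4x4 homogeneous matrices:
T_patient_from_mri = np.eye(4)   # initial patient-to-image registration (e.g. landmarks)
T_shift_correction = np.eye(4)   # rigid update estimated by registering preoperative MRI to iUS

def map_preop_to_patient(points_mri):
    """Map preoperative MRI coordinates into patient (tracker) space,
    applying the brain-shift correction before the initial registration."""
    T = T_patient_from_mri @ T_shift_correction
    pts_h = np.hstack([points_mri, np.ones((len(points_mri), 1))])
    return (pts_h @ T.T)[:, :3]
```

Because all preoperative data (images, plans, anatomical models) share the same MRI space, updating this single chain re-aligns them together, which is what allows a platform to account for shift as the intervention progresses.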

 

 

References

[1] I. Gerard and D. L. Collins, “An Analysis of Tracking Error in Image Guided Neurosurgery”, Int. J. Computer Assisted Radiology and Surgery. 2015, Jan 4; 1–10 [Epub ahead of print].

[2] H. Rivaz, D.L. Collins, “Near real-time robust non-rigid registration of volumetric ultrasound images for neurosurgery”, Ultrasound in Medicine and Biology. 2015 Feb; 41(2): 574–587.

[3] H. Rivaz, S.J.S Chen, D.L. Collins, “Automatic Deformable MR-Ultrasound Registration for Image-Guided Neurosurgery”, IEEE Transactions on Medical Imaging. 2015 Feb; 34(2); 366–380.

[4] H. Rivaz, Z. Karimaghaloo, D.L. Collins, “Nonrigid Registration of Ultrasound and MRI Using Contextual Conditioned Mutual Information”, IEEE Trans Med Imag. 2014 Mar;33(3):708–25.

[5] S. Beriault, A. Sadikot, F. Alsubaie, S. Drouin, D.L. Collins, G.B. Pike. “Neuronavigation using susceptibility-weighted venography: application to deep brain stimulation and comparison with gadolinium contrast”, Journal of Neurosurgery. 2014 Jul;121(1):131–41.

[6] L. Mercier, D Araujo, C Haegelen, RF Del Maestro, K Petrecca, DL Collins, “Registering pre- and post-resection 3D ultrasound for improved residual brain tumor localization”, Ultrasound in Medicine and Biology, 2013 Jan;39(1):16–29.

[7] M. Kersten-Oertel, P. Jannin, D.L. Collins, “The State of the Art in Mixed Reality Visualization in Image-Guided Surgery”, Computerized Medical Imaging and Graphics. 2013 Mar;37(2):98–112.

[8] D. De Nigris, D. L. Collins, T. Arbel, “Fast Rigid Registration of Pre-Operative Magnetic Resonance Images to Intra-Operative Ultrasound for Neurosurgery based on High Confidence Gradient Orientations”, Int J Comput Assist Radiol Surg. 2013 July; 8(4): 649–661.

 

Stereotaxic surgery for movement disorders

We have recently developed techniques [1] to create two atlases: a lower-resolution 3D atlas, based on the Schaltenbrand and Wahren print atlas, which was integrated into a stereotactic neurosurgery planning and visualization platform (VIPER); and a higher-resolution 3D atlas derived from a single set of manually segmented histological slices containing nuclei of the basal ganglia, thalamus, basal forebrain, and medial temporal lobe. We have therefore developed, and are continuing to validate, a high-resolution computerized MRI-integrated 3D histological atlas that is useful in functional neurosurgery and for functional and anatomical studies of the human basal ganglia, thalamus, and basal forebrain.

Parkinson’s disease (PD) is a neurodegenerative disorder that impairs motor function. Deep brain stimulation (DBS) is an effective therapy for drug-resistant PD. Accurate placement of the DBS electrode deep in the brain under stereotaxic conditions is key to successful surgery [2]. Accuracy depends on a number of factors, including registration error of the stereotaxic frame, geometric distortion of the MRI scans, and brain tissue shift resulting from cerebrospinal fluid (CSF) leakage, cranial pressure changes, and gravity after the burr hole is opened.

By scanning through acoustic skull windows, transcranial ultrasound can provide non-invasive visualization of internal brain structures (e.g. the midbrain, blood vessels, and certain nuclei such as the substantia nigra) as well as metallic surgical instruments (e.g. the DBS electrode and cannula). We believe that such images can be used to improve stereotaxic accuracy.

In the past decade, we have developed a prototype image-guided neuronavigation system in our research laboratory called IBIS (Interactive Brain Imaging System), which enables the acquisition of intraoperative 2D/3D ultrasound and addresses registration errors caused by brain shift by using the ultrasound data to improve the patient-image alignment. By linking the preoperative MRI, and the corresponding surgical plan, to the transcranial ultrasound with appropriate registration methods, we will enable real-time monitoring of the DBS implantation and improve the safety and accuracy of the procedure. Our goal is to acquire transcranial ultrasound images and evaluate the performance of transcranial ultrasound as an intraoperative imaging modality.
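
As a rough illustration of how such monitoring could work (this is not IBIS code), the sketch below maps an electrode tip detected in transcranial ultrasound into MRI space with an assumed rigid MRI-to-US registration and reports its perpendicular distance from the planned trajectory. All names (`tip_us`, `T_mri_from_us`, the plan points) are hypothetical.

```python
# Illustrative check of how a DBS electrode tip seen in transcranial US could be
# compared with the preoperative plan, assuming a rigid MRI-from-US registration
# T_mri_from_us (4x4 homogeneous matrix) is already available.
import numpy as np

def deviation_from_plan(tip_us, entry_mri, target_mri, T_mri_from_us):
    """Distance (mm) of the US-detected electrode tip from the planned trajectory line."""
    tip_mri = (T_mri_from_us @ np.append(tip_us, 1.0))[:3]   # map tip into MRI space
    d = target_mri - entry_mri
    d = d / np.linalg.norm(d)                                 # unit direction of the planned path
    v = tip_mri - entry_mri
    return np.linalg.norm(v - np.dot(v, d) * d)               # perpendicular distance to the line
```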

This study, as well as the surgical treatment of PD, necessitates the delineation of basal ganglia nuclei morphology. Few automatic volumetric segmentation methods have attempted to identify the key brainstem substructures, including the subthalamic nucleus (STN), substantia nigra (SN), and red nucleus (RN), due to their small size and poor contrast in conventional MRI. We recently developed a technique [3], with Ph.D. student Yiming Xiao, based on a dual-contrast patch-based label fusion method to segment the SN, STN, and RN. Our proposed method outperformed the state-of-the-art single-contrast patch-based method for segmenting brainstem nuclei using a multi-contrast multi-echo FLASH MRI sequence. This method is encouraging, as it provides a promising basis for the treatment and study of PD. This study is supported by the NSERC/CIHR Collaborative Health Research Program.
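
The sketch below illustrates the flavour of the approach for a single voxel: label votes from co-registered templates are weighted by patch similarity computed jointly over two MRI contrasts. It is a deliberately simplified, hypothetical restatement of patch-based label fusion, not the published implementation, and the array names are assumptions.

```python
# Single-voxel sketch of dual-contrast patch-based label fusion (illustrative only).
# `template_patches_c1/c2` hold pre-extracted patches from co-registered templates
# for two contrasts; `template_labels` holds the label at each template patch centre.
import numpy as np

def fuse_label(target_patch_c1, target_patch_c2,
               template_patches_c1, template_patches_c2,
               template_labels, h=0.5):
    """Weighted vote over template patches; similarity combines both contrasts."""
    d1 = np.sum((template_patches_c1 - target_patch_c1) ** 2, axis=1)
    d2 = np.sum((template_patches_c2 - target_patch_c2) ** 2, axis=1)
    w = np.exp(-(d1 + d2) / (h ** 2))            # more similar patches get larger weights
    labels = np.unique(template_labels)
    votes = np.array([w[template_labels == l].sum() for l in labels])
    return labels[np.argmax(votes)]
```

In the full method this vote is computed for every voxel in a region of interest, with the two contrasts contributing complementary boundary information for the small, low-contrast nuclei.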

References

[1] A. F. Sadikot, M. M. Chakravarty, G. Bertrand, V. V Rymar, F. Al-Subaie and D. L. Collins. “Creation of computerized 3D MRI-integrated atlases of the human basal ganglia and thalamus”, Frontiers in Systems Neuroscience, 2011;5:71

[2] Y. Xiao, V.S. Fonov, S. Beriault, F. Al Soubaie, M.M. Chakravarty, A.F. Sadikot, G.B. Pike and D.L. Collins, “Multi-contrast unbiased MRI atlas of a Parkinson’s disease population”, Int J Comput Assist Radiol Surg. 2015 March; 10(3):329–41.

[3] Xiao Y, Fonov VS, Beriault S, Gerard I, Sadikot AF, Pike GB, Collins DL. Patch-based label fusion segmentation of brainstem structures with dual-contrast MRI for Parkinson’s disease. Int J Comput Assist Radiol Surg. 2015 July; 10(7):1029–41

[4] M.M. Chakravarty, G. Bertrand, C. Hodge, A.F. Sadikot, and D.L. Collins, “The creation of a brain atlas for image guided neurosurgery using serial histological data,” NeuroImage. 2006; 30(2): 359–76.

Augmented Reality

In image-guided surgery the surgeon must map pre-operative patient images from the navigation system to the patient on the operating room (OR) table in order to understand the topology and location of the anatomy of interest below the visible surface. This type of spatial mapping is not trivial, is time consuming, and may be prone to error. Using augmented reality (AR), we can register the microscope/camera image to pre-operative patient data in order to help the surgeon understand the topology, location and type of vessels lying below the visible surface of the patient. This may reduce surgical time and increase surgical precision.
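
The geometric heart of such an overlay is a calibrated, tracked camera. The sketch below is a hypothetical example, not our navigation code: it projects anatomical points that have already been mapped into patient/tracker space onto the camera image with a pinhole model, where the intrinsics `K` and the extrinsic matrix `T_cam_from_patient` are assumed to come from camera calibration and optical tracking.

```python
# Illustrative AR projection: 3D anatomy (in patient/tracker space) -> camera pixels.
import numpy as np

K = np.array([[1000.0,    0.0, 320.0],   # assumed pinhole intrinsics (fx, fy, cx, cy)
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
T_cam_from_patient = np.eye(4)            # assumed extrinsics from the tracking system

def project_to_image(points_patient):
    """Return pixel coordinates of 3D points for compositing over the video frame."""
    pts_h = np.hstack([points_patient, np.ones((len(points_patient), 1))])
    pts_cam = (pts_h @ T_cam_from_patient.T)[:, :3]   # into camera coordinates
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]                     # perspective divide
```

The projected contours are then composited over the live video or microscope frame to produce the augmented view.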

Our current projects in this area include:
