Ultrasound and Augmented Reality

For intraoperative use, neuronavigation systems must relate the physical location of the patient to the preoperative models through a patient-to-image mapping. By tracking the patient and a set of specialized surgical tools, this mapping allows a surgeon to point to a specific location on the patient and see the corresponding anatomy on the patient-specific models. However, throughout the intervention, hardware movement, an imperfect initial patient-to-image registration, and movement of brain tissue during surgery invalidate the patient-to-image mapping. These sources of inaccuracy, collectively described as ‘brain shift’, reduce the effectiveness of using preoperative patient-specific models intraoperatively. Additionally, the surgeon is left with the cognitive burden of merging the virtual models of the patient with the visible and invisible physical anatomy.
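The patient-to-image mapping is typically a rigid transform estimated from corresponding landmarks (for example, fiducials touched with a tracked pointer and their positions in the preoperative image). As an illustration of the general idea only, not the specific method used by IBIS or any particular system, a minimal landmark-based rigid registration using the standard Kabsch/SVD solution might look like:

```python
import numpy as np

def rigid_patient_to_image(patient_pts, image_pts):
    """Estimate the rigid (rotation + translation) transform that maps
    tracked patient-space fiducials onto their image-space positions,
    in the least-squares sense (Kabsch/SVD method)."""
    p_c = patient_pts.mean(axis=0)          # patient-space centroid
    i_c = image_pts.mean(axis=0)            # image-space centroid
    H = (patient_pts - p_c).T @ (image_pts - i_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = i_c - R @ p_c
    T = np.eye(4)                           # 4x4 homogeneous transform
    T[:3, :3], T[:3, 3] = R, t
    return T

# Toy example: four fiducials related by a 90-degree rotation about z
# plus a translation; recover the transform and map a pointer tip.
patient_fids = np.array([[0, 0, 0], [100, 0, 0],
                         [0, 100, 0], [0, 0, 100]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
image_fids = patient_fids @ R_true.T + np.array([5.0, 10.0, -2.0])
T = rigid_patient_to_image(patient_fids, image_fids)
tip_image = T @ np.array([50.0, 20.0, 30.0, 1.0])  # pointer tip in image space
```

In practice the fiducial pairs come from the tracking hardware and the image viewer, and the residual error of this fit (fiducial registration error) is one of the accuracies that brain shift subsequently degrades.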

An underlying advantage of IBIS (IBIS Neuronav) is that it supports both individual streams of research and combinations of streams that overcome each other's pitfalls. This is demonstrated by our combination of intraoperative ultrasound (iUS) and AR to improve the accuracy of AR visualizations during tumour neurosurgery. With this combination of technologies, the interpretation difficulties associated with US images are mitigated by detailed AR visualizations, while the accuracy issues associated with AR are corrected through registration of the US images. This improves patient-specific planning intraoperatively, both by prolonging the reliable use of neuronavigation and by improving the understanding of complex three-dimensional medical imaging data, so that surgical strategies can be adapted when necessary.

Figure: The avatar represents the orientation of the patient’s head. The surgical field of view (left); the AR view before US correction, where the tumour seems to conform unnaturally to the surrounding tissue (middle); and the brain-shift-corrected AR view, where the tumour visualization now lines up naturally with the surrounding tissue and can be used for accurate intraoperative planning.

Publications

[1] Gerard, Ian J., Marta Kersten-Oertel, Simon Drouin, Jeffery A. Hall, Kevin Petrecca, Dante De Nigris, Tal Arbel, and D. Louis Collins. “Improving Patient Specific Neurosurgical Models with Intraoperative Ultrasound and Augmented Reality Visualizations in a Neuronavigation Environment.” In Clinical Image-Based Procedures: Translational Research in Medical Imaging, pp. 28–35. Springer International Publishing, 2015.

[2] Gerard, Ian J., Marta Kersten-Oertel, Simon Drouin, Jeffery A. Hall, Kevin Petrecca, Dante De Nigris, Tal Arbel, and D. Louis Collins. “Improving Augmented Reality Tumour Visualization with Intraoperative Ultrasound in Image Guided Neurosurgery: Case Report.” International Journal of Computer Assisted Radiology and Surgery 10(S1):1–312, 2015.


Augmented Reality for Brain Tumour Surgery

Augmented reality (AR) visualization in image-guided neurosurgery (IGNS) allows a surgeon to see rendered preoperative medical datasets (e.g. MRI/CT) from a navigation system merged with the surgical field of view. Combining the real surgical scene with the virtual anatomical models into a comprehensive visualization has the potential of reducing the cognitive burden of the surgeon by removing the need to map preoperative images and surgical plans from the navigation system to the patient. Furthermore, it allows the surgeon to see beyond the visible surface of the patient, directly at the anatomy of interest, which may not be readily visible.


Figure: Augmented reality visualizations from our neuronavigation system. The surgeon used AR for craniotomy planning on the skin (A), the bone (B), the dura (C), and also after the craniotomy on the cortex (D). In A, the orange arrow indicates the posterior boundary of the tumour and the blue arrow indicates the planned posterior boundary of the craniotomy that will allow access to the tumour. The yellow arrow shows the medial extent of the tumour, which is also the planned craniotomy margin. In B, the surgeon uses the augmented reality view to trace around the tumour in order to determine the size of the bone flap to be removed. In C, AR is used prior to the opening of the dura, and in D, the tumour is visualized on the cortex prior to its resection.

Video

Publications

  1. M. Kersten-Oertel, I. J. Gerard, S. Drouin, J. A. Hall, D. L. Collins. “Intraoperative Craniotomy Planning for Brain Tumour Surgery using Augmented Reality”, to be presented at CARS 2016.
  2. I. J. Gerard, M. Kersten-Oertel, S. Drouin, J. A. Hall, K. Petrecca, D. De Nigris, T. Arbel and D. L. Collins. (2016) “Improving Patient Specific Neurosurgical Models with Intraoperative Ultrasound and Augmented Reality Visualizations in a Neuronavigation Environment,” in 4th Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging, LNCS 9401, pp. 1–8. (Best Paper)
  3. Kersten-Oertel, M., Gerard, I. J., Drouin, S., Mok, K., Petrecca, K., & Collins, D. L. (2015) Augmented Reality for Brain Tumour Resections. Int J CARS, 10(1):S260.

Augmented Reality in Neurovascular Surgery

In neurovascular surgery, and in particular surgery for arteriovenous malformations (AVMs), the surgeon must map pre-operative images of the patient to the patient on the operating room (OR) table in order to understand the topology and locations of vessels below the visible surface. This type of spatial mapping is not trivial, is time consuming, and may be prone to error. Using augmented reality (AR), we can register the microscope/camera image to pre-operative patient data in order to aid the surgeon in understanding the topology, location, and type of vessels lying below the surface of the patient. This may reduce surgical time and increase surgical precision. In this project, in addition to studying a mixed reality environment for neurovascular surgery, we will examine and evaluate which visualization techniques provide the best spatial and depth understanding of the vessels beyond the visible surface.

Figure: A: Colour coding of a vascular DS-CTA volume based on blood flow. B: Vessels overlaid on the patient skin prior to draping (left). The AR view is used at this step to help tailor the extent of the craniotomy. On the right, we see vessels overlaid on the cortex prior to resection; here the AR view is used to determine the optimal resection corridor. The blue arrows point to the pink markers that indicate the location of deep feeding arteries. The orange arrow indicates the major arterialized vein, shown as red and not blue. C: Different visualization techniques for combining the live camera image (prior to resection) with the virtual vessels (green, red, blue) are shown. The use of simple alpha-blending between the real and virtual worlds does not provide spatial information (top). More sophisticated techniques, such as modulating transparency in the area of interest, using edges (from the virtual vessels and/or camera image), and using fog, are also applied. D: Based on the virtual information, the surgeon placed a micropad on the brain surface above a virtual marker representing a deep feeding artery to help with the resection approach and vessel localization.
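For reference, the “simple alpha-blending” contrasted in the caption above is a fixed-opacity mix of the live camera frame and the rendered overlay, which is why it conveys no depth ordering on its own. A minimal sketch of such a blend (the function name and float-RGB convention are illustrative assumptions, not the system’s actual rendering code):

```python
import numpy as np

def alpha_blend(camera_rgb, virtual_rgb, virtual_mask, alpha=0.5):
    """Blend a rendered vessel overlay into the live camera frame.

    camera_rgb, virtual_rgb: HxWx3 float arrays with values in [0, 1].
    virtual_mask: HxW boolean array marking pixels covered by the
    rendered virtual vessels; camera pixels elsewhere are untouched.
    """
    out = camera_rgb.copy()
    m = virtual_mask
    # Fixed-opacity mix: every overlay pixel gets the same weight,
    # regardless of how deep the vessel lies below the surface.
    out[m] = (1.0 - alpha) * camera_rgb[m] + alpha * virtual_rgb[m]
    return out

# Toy 2x2 frame: a black camera image with one white "vessel" pixel
# blended in at 50% opacity.
cam = np.zeros((2, 2, 3))
virt = np.ones((2, 2, 3))
mask = np.array([[True, False], [False, False]])
blended = alpha_blend(cam, virt, mask)
```

The more sophisticated techniques mentioned in the caption vary this weight per pixel (e.g. by distance from the area of interest, edge strength, or fog depth) rather than using a single constant alpha.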

Publications

Kersten-Oertel, M., Gerard, I., Drouin, S., Mok, K., Sirhan, D., Sinclair, D. S., & Collins, D. L. (2015). Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. IJCARS, 1–14.

Kersten-Oertel, M., Gerard, I. J., Drouin, S., Mok, K., Sirhan, D., Sinclair, D. S., & Collins, D. L. (2015). Augmented Reality for Specific Neurovascular Surgical Tasks. In Augmented Environments for Computer-Assisted Interventions (pp. 92–103). Springer International Publishing.

M. Kersten-Oertel, I. Gerard, S. Drouin, K. Mok, D. Sirhan, D. Sinclair, D. L. Collins. “Augmented Reality in Neurovascular Surgery: First Experiences.” In Augmented Environments for Computer-Assisted Interventions, Lecture Notes in Computer Science, Volume 8678, pp. 80–89, 2014.
