
Augmented Reality and Minimally Invasive Surgery

Jacques Marescaux, Michele Diana, Luc Soler

Jacques Marescaux, Michele Diana, Luc Soler, IRCAD-IHU, General, Digestive and Endocrine Surgery, University of Strasbourg, France

Correspondence to: Jacques Marescaux, MD, FACS, (Hon) FRCS, (Hon) JSES, IRCAD-IHU, General, Digestive and Endocrine Surgery, University of Strasbourg, France.
jacques.marescaux@ircad.fr
Telephone: +33-3-88-11-90-28
Received: August 20, 2012
Revised: October 23, 2012
Accepted: October 24, 2012
Published online: May 21, 2013

ABSTRACT

Augmented Reality (AR) is the process of superimposing synthetic computer-generated images onto live images. During an operation, AR can serve as a navigation tool that highlights anatomical details for the surgeon. AR may be derived from preoperative images or generated in real time in the operating room. Obtaining AR involves several phases: (1) generation of a virtual patient-specific model; (2) visualization of the model in the operative field; and (3) registration, i.e., the accurate overlay of the 3D model onto the real patient's operative images. In this editorial, we provide a brief overview of the technologies and approaches available for AR in minimally invasive surgery (MIS) and describe the challenges of obtaining an accurate, flexible, patient-specific and real-time anatomical reconstruction.

Key words: Augmented Reality; Virtual Reality; Computer-Assisted Surgery; Image-Guided Surgery

© 2013 The Authors. Published by ACT Publishing Group Ltd.

Marescaux J, Diana M, Soler L. Augmented Reality and Minimally Invasive Surgery. Journal of Gastroenterology and Hepatology Research 2013; 2(5): 555-560 Available from: URL: http://www.ghrnet.org/index.php/joghr/article/view/378

INTRODUCTION

In the Myth of the Cave[1], more than 2000 years ago, the Greek philosopher Plato formulated the theory that reality is a preconditioned phenomenon filtered by our senses: essentially a mirrored copy of a virtual world, the "World of Ideas", which Plato considered the highest form of knowledge, as opposed to the world analyzed through our perception. In the computer era, the philosophical concept behind Plato's allegory has been translated into a variety of fields with the advent of Virtual Reality (VR), also known as the "immersive environment". VR may be defined as a realistic three-dimensional (3D) scenario created by a computer system, in which the user is fully immersed and separated from the real world. The user and the environment may "virtually" interact through specific sensors and effectors, which directly influence the degree of realism of a VR experience. Pushed to the limits of its capabilities, VR should leave the user unable to differentiate the real experience from the virtual one, much like the prisoners of Plato's cave who, having never seen the real world, could not tell what was real and what was not. The user may then live the experience, or be trained, in a risk-free setting.

VR has a number of applications ranging from entertainment to simulator-based training and has become increasingly relevant to the medical and surgical field. Medical imaging is representative of the symbiosis between computer science and medicine, and is a specific area of application for VR. The interpretation of a Computed Tomography (CT) scan or Magnetic Resonance Imaging (MRI) may be enhanced by a VR 3D reconstruction, which enables the user to navigate through the body and perform a virtual exploration. This tool highlights anatomical details that could be underestimated on a conventional image[2]. Additionally, VR may provide a tool for preoperative planning and for surgical procedure simulation. To assist the surgeon intraoperatively, the 3D virtual model created from Digital Imaging and Communication in Medicine (DICOM) format images may be combined with real images to provide an enhanced navigation tool that allows fine details to be discriminated. This combination of live images and synthetic, computer-generated, patient-specific images is known as Augmented Reality (AR)[3]. AR is a promising technology which may help to solve some of the issues related to Minimally Invasive Surgery (MIS), and which underpins the emerging discipline of Image-Guided Surgery (IGS).

AR is a multiple-step process with its own terminology and a range of underlying technologies. We aim to outline the most common pathways used to achieve AR in support of MIS, together with some current clinical applications specific to digestive and endocrine surgery.

Augmented Reality: X-ray vision to support Minimally Invasive Surgery

A radical shift in the practice of surgery has occurred with the advent of minimally invasive endoscopic techniques. The target organ is reached through small skin incisions through which low-profile surgical instruments are introduced. The working space is created by insufflating a controlled flow of carbon dioxide set to maintain a constant pressure. An endoscopic camera is introduced into the body cavity, and the view of the operative field is transferred from the surgeon's direct sight to a magnified, high-definition optical system displayed on a monitor. The reduced surgical insult confers unquestionable benefits to patients. Reduced postoperative pain, shorter hospital stay, earlier return to daily activities, reduced morbidity and improved cosmetic outcomes represent the scientifically proven advantages of this approach compared with conventional open surgery. However, MIS is not intuitive, and specific training with a steep learning curve is necessary to compensate for some unnatural requirements and to become proficient in laparo-endoscopic techniques[4]. The inherent challenges of the MIS approach may be summarized as follows: (1) the hand-eye axis is distorted, since the surgical field is visualized on a screen, with a resulting loss of the visual drive to haptic proprioception; (2) the 2D vision offered by the flat screen reduces depth perception; (3) the field of view is limited by the scope; (4) the "touch" sensation (tactile feedback) is very limited because of the long, low-profile instruments, so that important information such as tissue stiffness, the presence of a nodule or the pulse of an unapparent vessel is lost.

Robotics provides technology which helps to overcome some of these issues. The surgical platform currently available on the market (da Vinci®, Intuitive Surgical) is equipped with a binocular camera that gives a stereoscopic, 10-fold magnified, high-resolution view, a haptic interface that commands the instruments with natural hand movements, and effectors that replicate those movements in a precise, downscaled fashion while eliminating physiologic tremor. The lack of tactile feedback has yet to be addressed, as has the ability to detect anatomical details and to correctly define surgical planes and resection margins.

AR obtained from preoperative images or, ideally, in real time may provide surrogate guidance for dissection in MIS (with or without robotic assistance), highlighting target organs and anatomical variations through a modular virtual transparency that allows the surgeon to look inside "closed" cavities or through more superficial structures, as an X-ray would. However, this requires the surgeon to rely on a consistent, precise and flexible patient-specific anatomical reconstruction and enhancement of real intraoperative images.

The paradigmatic applications of AR and Image Guidance have been brain surgery[5] and maxillo-facial surgery[6]. In these fields, motionless and highly contrasted structures such as bones make the virtual model highly congruent with the real patient, while AR presents additional challenges in digestive surgery due to respiratory movements and to organ manipulation and deformation. However, the fundamental steps for AR are essentially similar.

The process of computer-assisted surgery: from the patient’s specific anatomy to surgical planning and intraoperative Augmented Reality (Figure 1)

The stepwise approach to building AR includes the following: first, generation of a virtual patient-specific 3D view; second, display of the 3D view in the operative setting; and finally, correct superimposition of the 3D virtual view onto real patient images (registration). Different technologies are available[2,7] and have been used in various combinations to achieve each of these steps. Our aim is to give an overview of this promising technology; extensive technical details are therefore beyond the scope of this editorial.

3D VIRTUAL VIEW OF THE PATIENT

The process of AR starts with the generation of a 3D virtual view of the patient obtained from DICOM format images, using two main approaches: Direct Volume Rendering and Surface Rendering (Figure 2).
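Both approaches start from the DICOM series itself. As a minimal illustration, the sketch below stacks a CT series into a single 3D volume using the pydicom library; this is only an assumed, generic implementation (the systems discussed here do not prescribe one), and the folder path is a placeholder.

```python
import numpy as np
import pydicom  # assumed dependency for reading DICOM files
from pathlib import Path

# Minimal sketch of the common starting point for both rendering approaches:
# stacking a DICOM series (e.g. a CT acquisition) into a single 3D volume.

def load_volume(folder):
    slices = [pydicom.dcmread(f) for f in Path(folder).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # order along the z axis
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Convert stored values to Hounsfield units using the rescale tags.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

# volume = load_volume("/path/to/ct_series")   # shape: (slices, rows, columns)
```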

Direct Volume Rendering from raw DICOM data

Direct Volume Rendering (DVR) methods generate images of a 3D volumetric data set without explicitly delineating structures and extracting their surfaces from the medical images[8]. These techniques rely on an optical model to map data values to optical properties such as color and opacity. Each grey level of the initial 3D medical image is ascribed a color and a transparency through a transfer function. Each voxel of the image is seen as a colored cube with a defined transparency, and the image is considered a set of such colored, transparent cubes. During rendering, light rays are simulated passing through each cube of the 3D image, and the optical properties are accumulated along each viewing ray to form an image of the data. The quality of the result and the useful information visible in such a rendering are directly linked to the transfer function.
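As a rough illustration of the transfer function and ray accumulation described above, the minimal sketch below performs front-to-back compositing along one viewing axis. The transfer function thresholds and the synthetic test volume are illustrative assumptions, not the implementation of any of the software packages cited below.

```python
import numpy as np

def transfer_function(hu):
    """Map grey levels (here treated as Hounsfield units) to RGBA:
    bone bright and opaque, soft tissue reddish and translucent, air transparent."""
    rgba = np.zeros(hu.shape + (4,), dtype=np.float32)
    bone = hu > 300
    soft = (hu > -100) & (hu <= 300)
    rgba[bone] = (1.0, 1.0, 0.9, 0.9)   # colour + opacity per voxel class
    rgba[soft] = (0.8, 0.3, 0.3, 0.05)
    return rgba

def render(volume):
    """Accumulate colour and opacity along each viewing ray (here: the z axis)."""
    rgba = transfer_function(volume)
    color = np.zeros(volume.shape[1:] + (3,), dtype=np.float32)
    alpha = np.zeros(volume.shape[1:], dtype=np.float32)
    for z in range(volume.shape[0]):           # march front to back
        a = rgba[z, ..., 3] * (1.0 - alpha)    # remaining transparency along the ray
        color += a[..., None] * rgba[z, ..., :3]
        alpha += a
    return color

if __name__ == "__main__":
    vol = np.random.randint(-1000, 1000, (64, 128, 128)).astype(np.float32)  # fake CT volume
    image = render(vol)                        # (128, 128, 3) rendered view
    print(image.shape, image.min(), image.max())
```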

The majority of current radiologic workstations can provide easy and fast DVR without lengthy, time-consuming pre-processing. DVR is also available on personal computers through open source software, e.g., OsiriX on MacOS[9] and 3D Slicer[10], or through our own open source software, VR-RENDER® and VR-Planning®[2], on all operating systems (Windows®, MacOS and Linux).

The clinical interest of DVR is limited to the appreciation of highly contrasted tumors or vascular anomalies. However, DVR is not suitable for complex tasks such as surgical planning or computer-assisted surgery since organs are computed as a whole and cannot be individually manipulated.

Surface rendering from pre-processed data

Surface Rendering (SR) is a 3D visualization method that consists in rendering geometrical meshes surrounding the organs' surfaces. Pre-processing to delineate the organs, which can be manual, semi-automatic or fully automatic, is required. From this delineation, a colored geometrical mesh is generated automatically, and SR allows it to be visualized with or without transparency. SR is traditionally used in virtual planning software such as VR-Planning®, which allows virtual navigation, virtual tool positioning, virtual organ resection and computation of the associated resected volume (Figure 3).
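To illustrate the step from organ delineation to a geometrical mesh, the sketch below extracts a triangular surface from a binary segmentation mask with a standard marching-cubes routine, assuming scikit-image is available; the spherical "organ" mask is invented for the example and stands in for a real delineation.

```python
import numpy as np
from skimage import measure  # assumed dependency; any marching-cubes implementation works

# Hypothetical binary segmentation mask standing in for a delineated organ.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
mask = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2) < 20 ** 2

# Marching cubes turns the voxel mask into a geometrical mesh (vertices + triangles).
verts, faces, normals, _ = measure.marching_cubes(mask.astype(np.float32), level=0.5)

print(f"{len(verts)} vertices, {len(faces)} triangles")
# The mesh can then be rendered, e.g. with a scene-graph library, assigning a colour
# and an opacity per organ so that structures can be shown or hidden individually.
```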

Working with a patient-specific 3D virtual model is beneficial on several levels, especially for surgeons, who are more accustomed to a "think and act" approach in 3D. A virtual surgical exploration, through immersive and interactive navigation inside the body, may allow the detection of details that can be underestimated by the standard radiological work-up, and may be used to simulate the surgical approach with an insight into critical anatomical relationships[11]. In addition, it may serve as a dynamic educational tool for students and may be implemented in simulators for more effective, procedure-specific surgical training (Figure 4).

Finally, in contrast with DVR, SR can be used on mobile devices in the same way as on a personal computer, making access to patient data easier. This is the case with VisiblePatient® (www.visiblepatient.eu), developed by Kitware and IRCAD and suited to smartphones and tablet PCs running iOS and Android (Figure 5).

DISPLAY OF THE MODEL IN THE OPERATING ROOM SETTING

Once generated, the 3D virtual model of the patient must be displayed in the operative field and superimposed onto real-time images to obtain AR during the surgical procedure and to provide guidance to surgeons, highlighting anatomical relationships through modular transparency. Synthetic images may be displayed using optical see-through devices, projectors and/or video cameras.
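As a rough illustration of the superimposition itself, the following sketch blends a rendered view of the virtual model onto a video frame with a modular transparency factor. The frame, rendering and silhouette mask are placeholder arrays standing in for the live image, the rendered model and its footprint in the image.

```python
import numpy as np

def overlay(frame, rendering, mask, alpha=0.4):
    """Blend the synthetic rendering into the live frame where the model is visible;
    alpha plays the role of the modular transparency chosen by the surgeon."""
    out = frame.astype(np.float32).copy()
    out[mask] = (1.0 - alpha) * out[mask] + alpha * rendering[mask]
    return out.astype(np.uint8)

frame = np.full((1080, 1920, 3), 80, dtype=np.uint8)        # stand-in for the live image
rendering = np.zeros_like(frame)
rendering[:, :, 0] = 255                                     # red "virtual organ"
mask = np.zeros(frame.shape[:2], dtype=bool)
mask[400:700, 800:1200] = True                               # silhouette of the model
augmented = overlay(frame, rendering, mask)
print(augmented.shape, augmented[500, 1000])                 # blended pixel inside the overlay
```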

See-through optical display

The optical see-through display has the theoretical advantage of generating AR in the natural field of view, providing the surgeon with "X-ray vision". Proposed solutions include a semi-transparent mirror through which the surgeon looks at the patient and onto which the AR data are displayed[12], or an integral videography screen equipped with micro-lenses that let the surgeon see through the display with a 3D depth effect[13]. Another modality is see-through eyewear, which is rapidly evolving and currently available for multimedia entertainment or military applications (e.g., the Vuzix® Tactical Eye or the Laster Technologies Smart Vision system). However, AR based on informative glasses requires precise tracking of pupil movements, and the current technology still does not ensure the degree of accuracy needed for MIS. The clinical application of these devices, although promising, has been very limited. Recently, Okamoto et al[14] evaluated a video see-through system in 3 cases of hepatobiliary pathologies (gallbladder carcinoma, hepatocellular carcinoma and benign biliary stricture), in which synthetic surface-rendering images reconstructed from preoperative CT scanning were overlaid onto the images of the real organs. Although the authors felt that the system provided useful navigation, they highlighted accuracy errors and the lack of depth information.

Video projection

A projector is positioned above the patient and the virtual model is projected onto the patient's skin. Sugimoto et al[15] used projector-based and video-based AR navigation by overlaying preoperative 3D models, obtained through DVR with the OsiriX® 3D viewer (Pixmeo, Geneva, Switzerland), onto the patient's skin and onto the screen to guide the placement of operative ports and to assist procedures such as laparoscopic gastric and colorectal surgery. The authors reported improved strategic port placement. Although the visual result is remarkable, producing a virtual transparency of the patient, this method suffers from major shortcomings due to the difference between the surgeon's viewing perspective and the projector's, and to the optical focus of the projected images. As with see-through devices, a system able to track the position of the surgeon's head and of the patient's skin would have to be implemented to improve accuracy. This is difficult, however, since any other user will necessarily have a different position and perspective.

Video camera display

Real-time operative images are captured by endoscopic or external cameras and displayed on-screen, and the 3D virtual model is then overlaid onto the operative images to obtain AR. External static cameras are the cheapest and most effective solution for an external AR view of the patient's internal structures. An alternative solution is the use of head-mounted cameras, which capture two video streams that are displayed in front of the surgeon's eyes through a head-mounted display[16,17]. Although a camera-based head-mounted display has the advantage of projecting AR directly into the surgeon's visual field, it remains uncomfortable for the surgeon and needs to be accurately tracked in the operating room, which dramatically increases costs. In minimally invasive surgery, it seems natural to provide AR information directly on the endoscopic image, as illustrated with adrenal surgery[18] (Figure 6), liver and pancreatic tumor resections[19] (Figure 7), and minimally invasive parathyroidectomy[11].
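To illustrate how a virtual model can be brought into the endoscopic image before blending, the sketch below projects model vertices through a simple pinhole camera model. The intrinsic and extrinsic parameters and the vertices are hypothetical; in practice they come from camera calibration and from the registration step described in the next section, and the cited systems may use different formulations.

```python
import numpy as np

K = np.array([[800.0, 0.0, 960.0],     # focal lengths and principal point (pixels), assumed
              [0.0, 800.0, 540.0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                          # camera rotation (model frame -> camera frame)
t = np.array([0.0, 0.0, 150.0])        # camera translation in mm, assumed

def project(points_mm):
    """Project Nx3 model points (mm) to Nx2 pixel coordinates with a pinhole model."""
    cam = points_mm @ R.T + t          # transform into the camera frame
    uvw = cam @ K.T                    # apply the intrinsic parameters
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

# Hypothetical mesh vertices of a segmented organ, already registered to the camera.
vertices = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 5.0], [0.0, 10.0, 5.0]])
pixels = project(vertices)
print(pixels)   # these 2D points are then drawn semi-transparently over the live video
```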

REGISTRATION OF THE 3D MODEL

Registration is the process of precisely adapting the 3D virtual model obtained from preoperative DICOM images to the patient's real anatomy. Registration accuracy is of paramount importance for any AR system to provide correct additional information to the surgeon. Current registration methods require some degree of human interaction, since automatic registration is particularly cumbersome and remains an avenue of ongoing experimental research.

A straightforward interactive registration method consists in visualizing the preoperative 3D model directly on the operative monitor and manually resizing and orienting it according to visible landmarks (e.g., bony structures such as the iliac crest, or possibly radio-opaque markers present during both preoperative and intraoperative image acquisition)[18]. Alternatively, the positions of the landmarks are first determined on preoperative imaging and then outlined during the procedure with a navigation pointer[20], after which the model is semi-automatically repositioned. Ieiri et al[21] recently applied an optical tracking system for registration between the volume image and skin markers to guide laparoscopic splenectomy in 6 children. The authors reported acceptable registration accuracy in the clinical setting and the ability of the navigation system to provide real-time anatomical information that could not otherwise be visualized.
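As an illustration of the landmark-based principle, the sketch below computes the rigid transform that superimposes landmarks from the preoperative model onto the same landmarks digitised on the patient, using the standard SVD (Kabsch) least-squares solution. The landmark coordinates are invented for the example, and the cited systems may use different algorithms or add scaling and deformation terms.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Return R, t minimising ||R @ model + t - patient|| in the least-squares sense."""
    cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - cm).T @ (patient_pts - cp)     # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cm
    return R, t

# Hypothetical landmarks in the preoperative model (mm).
model = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40.0]])
# The same landmarks as digitised on the patient (rotated and translated).
patient = model @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]]).T + [100, 20, -5]

R, t = rigid_register(model, patient)
print(np.round(R @ model.T + t[:, None] - patient.T, 3))  # residuals should be ~0
```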

The main problems with automatic registration are organ deformation and displacement caused by surgical manipulation or breathing. The ability to update the three-dimensional model of the target organ in real time intraoperatively, or at least to estimate the range of motion with good approximation, is key to fully automatic AR registration. The main approach is to acquire a 3D image of the zone of interest directly during the procedure, which can be updated at any time during surgery. This can be achieved using 3D ultrasound[22], as recently demonstrated by Nam et al for liver surgery applications, or using CT[23] or MRI. Shekhar et al[23] proposed using intraoperative low-dose CT images to update the 3D liver model and to refresh the AR view during surgical manipulation. Additionally, the authors used an optical tracking system to follow the endoscopic camera. Although interesting, this approach has the drawback that low-dose CT provides less detailed images, and the further inconvenience of requiring a CT scanner throughout the procedure. In addition, the update rate of the 3D virtual model must be rapid enough to be of clinical use during a surgical procedure. All of these aspects require further improvement, but this seems to be the right path to follow. Our group is actively working on different avenues towards automatic AR registration. A worthwhile approach is to use a predictive real-time simulation of organ deformation during breathing, using structured light to track skin surface movements without any markers[24].

IRCAD© VR-RENDER® DICOM viewer

Any process of real-time Augmented Reality navigation includes all the previously discussed steps: generation of a 3D virtual model of the patient's anatomy, registration of the model to fit the patient's real anatomy, and fusion of the synthetic images with real patient images. The main difference between our approach and other currently available methods hinges on the VR-RENDER® DICOM multiplatform viewer suite, conceived by our Research and Development department within the framework of the PASSPORT project (PAtient Specific Simulation and Pre-Operative Realistic Training), funded by the European Community. VR-RENDER® integrates direct volume rendering techniques (which do not require organ modeling) and surface rendering techniques (which do). The key difference between VR-RENDER® and other imaging software (e.g., OsiriX) lies in the possibility, offered by modeling, to manipulate each organ and structure separately through semi-automatic segmentation pre-processing. Other software can generate a 3D virtual model (by direct volume rendering), but this model is indivisible and a given organ cannot be worked on individually. These limited possibilities make DVR unsuitable for surgical planning, simulation or effective Augmented Reality: the operator can only modify the transfer function so that the most contrasted structures are visualized, while the surrounding structures are merely "shadowed" but still present in the model. VR-RENDER® allows each structure to be separated and, whenever required, hidden or displayed by checking dedicated boxes (skin, bones, kidney, liver, spleen, any vessel, and so on), and the "intensity" of the displayed structure can be selected using the modular transparency function. Anatomical structures are colored using a "Netter-like" color code with which most physicians are familiar. A complete set of surgical anatomy and simulation tools (VR-Anat, VR-Med, VR-Planning, VR-Med Fusion, © IRCAD) has been developed from an open source framework dedicated to computer-assisted surgery: VR-Anat integrates segmentation and modeling algorithms and can be used to generate 3D anatomical models; VR-Planning offers the opportunity to simulate organ resection using several topological components, hence allowing multi-segmentectomy, and then automatically computes the future remnant volume (e.g., for liver resections); VR-Med Fusion integrates data fusion, allowing several 3D models of the same patient to be combined in order to follow a tumor's evolution, or to fuse organs and tumors extracted from different vascular phases (typically arterial, portal and venous).

In simple terms, the key difference lies in the segmentation, or delineation, of each individual structure: a geometrical mesh is applied to the target organ on the DICOM images in order to define the boundaries of the structure. This step is performed in a semi-automatic fashion by a computer scientist proficient in medical imaging. The realism of the reconstructed organ depends on mesh generation algorithms, which incorporate data from mechanical organ modeling based on ultrasonographic elastography and Magnetic Resonance Elastography. The algorithms have been validated on a database of more than 400 modeled patients covering multiple anatomical areas and pathologies, such as parathyroid tumor detection, adrenal tumors, pancreatic tumors and abdominal wall pathologies (e.g., nodules in the abdominal wall). Based on these results, we opened several anonymized databases, totaling 110 real clinical cases, freely available to the scientific community for education and research purposes. Two of these databases are available for free on the WeBSurg website (www.websurg.com/softwares).

Once the entire set of organs in the DICOM image has been delineated with meshes, the 3D reconstruction of the virtual model is achieved; currently, this step requires almost 30 minutes. The model can then be used for intraoperative real-time navigation. Real-time fusion with intraoperative images requires resizing and adapting the model. This step is performed manually, based on visible anatomical landmarks and using a video mixer; it takes 1 to 5 minutes and can be repeated during the procedure whenever surgical manipulation has perturbed the accuracy of the registration. As described above, automatic registration of the model onto the patient's anatomy and automatic refreshing of the model following intraoperative modifications are areas of active research.

Computer-assisted surgical education: Virtual Reality and Augmented Reality

Virtual and Augmented Reality as teaching tools could answer the rapidly changing needs of surgical education in MIS and may make it possible to standardize surgical education. The main advantage of virtual simulators for laparoscopic skills (such as LapSim® or Simbionix®) is that they can provide an objective assessment of trainee performance[25]. However, virtual reality simulators lack tactile force feedback. AR-based simulators (e.g., Haptica®) may add realistic haptic feedback, as perceived with standard box trainers, to the assessment capabilities of a virtual simulator. Our R&D department is developing a simulator, named ULIS (Unlimited Laparoscopic Simulator), which integrates haptic feedback based on texture and tissue resistance, providing the surgeon with highly realistic images and sensations (Figure 8).

Robotics and AR: fundamentals for surgical procedure automation

The ultimate dream (or nightmare?) of computer-assisted surgery is the possibility of a fully automatic surgical procedure performed via a robotic surgical interface. The first generation of surgical robots was notable for performing image-guided precision tasks, such as brain biopsy[26], on the basis of preoperative planning. What prevents a surgical robot from accomplishing a fully automated MIS procedure is the anatomical knowledge of the surgical planes and the ability to perform complex, multi-step surgical strategies. In the future, one could anticipate that fully automatic AR, coupled with imaging systems providing real-time updates and with the ability to program a self-controlled robotic platform, would allow an automatic and extremely precise complex surgical procedure, with the surgeon acting as a back-up throughout the process. To conclude this overview with a further futuristic application of VR and AR assistance to surgery, special mention should be made of robotic telesurgery. AR has indeed been envisaged as the guidance modality for extreme tele-robotic applications, such as surgical procedures during long space exploration missions[27] or in hostile environments such as battlefields.

CONCLUSIONS

AR guidance in MIS remains a complex challenge, but the achievements made with available technology have paved the way for further evolution. The ability to guide the surgeon during complex procedures makes this technology highly promising, as it should increase safety and overcome MIS-related limitations.

ACKNOWLEDGMENTS

The authors are grateful to Christopher Burel and Guy Temporal for proofreading the manuscript.

REFERENCES

1 Plato. The Myth of the Cave. In: The Republic; c. 360 BC

2 Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011; 20: 189-201

3 Soler L, Marescaux J. Patient-specific surgical simulation. World J Surg 2008; 32: 208-212

4 Tekkis PP, Senagore AJ, Delaney CP, Fazio VW. Evaluation of the learning curve in laparoscopic colorectal surgery: comparison of right-sided and left-sided resections. Ann Surg 2005; 242: 83-91

5 Iseki H, Masutani Y, Iwahara M, Tanikawa T, Muragaki Y, Taira T, Dohi T, Takakura K. Volumegraph (overlaid three-dimensional image-guided navigation). Clinical application of augmented reality in neurosurgery. Stereotact Funct Neurosurg 1997; 68: 18-24

6 Wagner A, Ploder O, Enislidis G, Truppe M, Ewers R. Virtual image guided navigation in tumor surgery--technical innovation. J Craniomaxillofac Surg 1995; 23: 217-223

7 Shuhaiber JH. Augmented reality in surgery. Arch Surg 2004; 139: 170-174

8 Calhoun PS, Kuszyk BS, Heath DG, Carley JC, Fishman EK. Three-dimensional volume rendering of spiral CT data: theory and method. Radiographics 1999; 19: 745-764

9 Volonté F, Pugin F, Bucher P, Sugimoto M, Ratib O, Morel P. Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: not only a matter of fashion. J Hepatobiliary Pancreat Sci 2011; 18: 506-509

10 Pieper S, Halle M, Kikinis R. 3D Slicer. In: Proceedings of the 1st IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2004

11 D'Agostino J, Diana M, Soler L, Vix M, Marescaux J. 3D virtual neck exploration prior to parathyroidectomy. N Engl J Med 2012

12 Fichtinger G, Deguet A, Fischer GS, Balogh E, Masamune K, Taylor RH, Fayad LM, Zinreich SJ. CT image overlay for percutaneous needle insertions. Comput Aided Surg 2005; 10: 241-255

13 Liao H, Inomata T, Sakuma I, Dohi T. 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay. IEEE Trans Biomed Eng 2010; 57: 1476-1486

14 Okamoto T, Onda S, Matsumoto M, Gocho T, Futagawa Y, Fujioka S, Yanaga K, Suzuki N, Hattori A. Utility of augmented reality system in hepatobiliary surgery. J Hepatobiliary Pancreat Sci 2012 Mar

15 Sugimoto M, Yasuda H, Koda K, Suzuki M, Yamazaki M, Tezuka T, Kosugi C, Higuchi R, Watayo Y, Yagawa Y, Uemura S, Tsuchiya H, Azuma T. Image overlay navigation by markerless surface registration in gastrointestinal, hepatobiliary and pancreatic surgery. J Hepatobiliary Pancreat Sci 2010; 17: 629-636

16 Bajura M, Neumann U. Dynamic registration correction in video-based augmented reality systems. IEEE Computer Graphics and Applications 1995; 15: 52-60

17 Sauer F, Khamene A, Vogt S. An augmented reality navigation system with a single camera tracker: system design and needle biopsy phantom trial. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2002; 2489: 116-124

18 Marescaux J, Rubino F, Arenas M, Mutter D, Soler L. Augmented-reality-assisted laparoscopic adrenalectomy. JAMA 2004; 292: 2214-2215

19 Mutter D, Soler L, Marescaux J. Recent advances in liver imaging. Expert Rev Gastroenterol Hepatol 2010; 4: 613-621

20 Marvik R, Lango T, Tangen GA, et al. Laparoscopic navigation pointer for three-dimensional image-guided surgery. Surg Endosc 2004; 18: 1242-1248

21 Ieiri S, Uemura M, Konishi K, Souzaki R, Nagao Y, Tsutsumi N, Akahoshi T, Ohuchida K, Ohdaira T, Tomikawa M, Tanoue K, Hashizume M, Taguchi T. Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr Surg Int 2012; 28: 341-346

22 Nam WH, Kang DG, Lee D, Lee JY, Ra JB. Automatic registration between 3D intra-operative ultrasound and pre-operative CT images of the liver based on robust edge matching. Phys Med Biol 2012; 57: 69-91

23 Shekhar R, Dandekar O, Bhat V, Philip M, Lei P, Godinez C, Sutton E, George I, Kavic S, Mezrich R, Park A. Live augmented reality: a new visualization method for laparoscopic surgery using continuous volumetric computed tomography. Surg Endosc 2010; 24: 1976-1985

24 Hostettler A, Nicolau SA, Remond Y, Marescaux J, Soler L. A real-time predictive simulation of abdominal viscera positions during quiet free breathing. Prog Biophys Mol Biol 2010; 103: 169-184

25 Botden SM, Buzink SN, Schijven MP, Jakimowicz JJ. Augmented versus virtual reality laparoscopic simulation: what is the difference? A comparison of the ProMIS augmented reality laparoscopic simulator versus LapSim virtual reality laparoscopic simulator. World J Surg 2007; 31: 764-772

26 Kwoh YS, Hou J, Jonckheere EA, Hayati S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans Biomed Eng 1988; 35: 153-160

27 Haidegger T, Sandor J, Benyo Z. Surgery in space: the future of robotic telesurgery. Surg Endosc 2011; 25: 681-690

Peer reviewers: Dimitris K Iakovidis, Assistant Professor, Department of Informatics and Computer Technology, Technological Educational Institute of Lamia, GR 35100, Lamia, Greece; Nenad Filipovic, Professor, Bioengineering Department, University of Kragujevac, Sestre Janjica 6, 34000 Kragujevac, Serbia.



This work is licensed under a Creative Commons Attribution 3.0 License.