Envisioning the Future of the Neurosurgical Operating Room with the Concept of the Medical Metaverse

Article information

J Korean Neurosurg Soc. 2025;68(2):137-149
Publication date (electronic) : 2024 November 4
doi : https://doi.org/10.3340/jkns.2024.0160
1Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
2Department of Neurosurgery, SMG-SNU Boramae Medical Center, Seoul, Korea
3Neuro-Oncology Clinic, National Cancer Center, Goyang, Korea
Address for reprints : Chul-Kee Park Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea Tel : +82-2-2072-0347, Fax : +82-2-741-8594, E-mail : nschpark@snu.ac.kr
Received 2024 August 29; Revised 2024 October 30; Accepted 2024 October 31.

Abstract

The medical metaverse can be defined as a virtual spatiotemporal framework wherein higher-dimensional medical information is generated, exchanged, and utilized through communication among medical personnel or patients. This occurs through the integration of cutting-edge technologies such as augmented reality (AR), virtual reality (VR), artificial intelligence (AI), big data, cloud computing, and others. We can envision a future neurosurgical operating room built on this medical metaverse concept, featuring a shared extended reality (AR/VR) view of the surgical field, AI-powered intraoperative neurophysiological monitoring, and real-time intraoperative tissue diagnosis. The future neurosurgical operating room will evolve into a true medical metaverse in which the participants in a surgery communicate across overlapping virtual layers of surgery, monitoring, and diagnosis.

INTRODUCTION

In recent years, the term “metaverse” (a combination of the prefix “meta,” meaning transcendental or virtual, and “universe,” meaning world) has reemerged as a major topic of discussion with the maturation of augmented reality (AR) and virtual reality (VR) technologies. The term “metaverse” was initially used mainly as an umbrella term for various contents that embodied real-life interactions in a virtual space. On this basis, some hospitals have built virtual hospitals that simply mimic the form of existing hospitals on the web, referring to them as “metaverse hospitals” and using them for promotional purposes. However, the metaverse in the medical field needs a more comprehensive and conceptual approach, one that acknowledges that visually embodied virtual spaces and avatars are not essential to what is generally considered the essence of the metaverse. The medical metaverse can be defined as any virtual spatiotemporal framework wherein higher-dimensional medical information is generated, exchanged, and utilized through communication and interaction among medical personnel or patients, using various convergence technologies. Specifically, the medical metaverse refers to an interactive system in which telepresence is embodied in a virtual space-time through the convergence of technologies such as AR, VR, artificial intelligence (AI), big data, cloud computing, robotics, brain-machine interfaces (BMI), the Internet of Things (IoT), and Web 3.0. This facilitates interaction, communication, treatment, and education with others, transcending temporal and spatial barriers.

There have also been various attempts to apply metaverse technologies to the neurosurgical field. In this review, we envision what the future neurosurgical operating room might look like with the application of metaverse technologies. The concepts of the neurosurgical metaverse discussed here share a common feature : surgical participants communicate and cooperate in multifaceted virtual layers of surgery, monitoring, and diagnosis (Fig. 1).

Fig. 1.

Conceptual diagram of the future neurosurgical operating room. Multiple layers of the medical metaverse in which various surgical participants communicate throughout the entire process of surgery, monitoring, and diagnosis. AI : artificial intelligence, AR : augmented reality.

NEUROLOGICAL SURGERY GUIDED BY SHARED EXTENDED REALITY

AR navigation integrates anatomical information into surgical operations by overlaying computer-generated three-dimensional (3D) anatomical information onto the surgeon’s visual field. This technology is especially valuable in the neurosurgical field, where surgeons benefit from a 3D semi-immersive experience through the visualization of complex anatomy using multimodal images such as magnetic resonance (MR) and computed tomography (CT), thereby enhancing surgical precision and efficiency. Additionally, AR navigation in neurosurgery facilitates the education of trainees in neuroanatomy and serves as a cornerstone for telemedicine [9,17,64,77].

The development of AR navigation has been active in the fields of brain and spine surgery. With 3D modeling technology enhanced by advances in AI, auto-segmentation of most brain structures from brain MR/CT images is already used in clinical practice [70]. Technologies are also being actively developed to align rendered AR images with the human body accurately and swiftly for a more realistic representation [80]. AR navigation systems can be categorized by their display type, which broadly falls into two categories : monitor-based, including head-mounted displays (HMDs), and projection-based [11]. Traditionally, neurosurgeons have been more familiar with monitor-based approaches, which merge reconstructed images on various displays such as current navigation systems, microscope eyepieces, and endoscopy monitors. One of the earliest attempts at AR navigation was made in 1986 by Roberts et al. [60], who merged CT images with microscopic views. While providing real-time visualization alongside the surgical field through microscope eyepieces seemed ideal and initially sparked enthusiastic development, technical limitations posed challenges, including the construction of 3D images from real data sources and the maintenance of 3D perception given focus mismatches between eyepieces and objects [7,60]. Efforts to merge and align radiological images directly onto surgical views have continued across various fields of neurosurgery [7,13,46]. Significant improvements have been achieved in the segmentation and 3D reconstruction of radiological images, including fully and semi-automatic methods. However, achieving accurate real-time alignment of 3D reconstructed images to anatomical structures remains a challenge [23]. Developments in separate monitors that overlay preoperative MR/CT images, akin to current neuro-navigation systems, include attempts to simplify displays using smartphones or tablets [23,65] (Supplementary Video 1, our experience). Recent advancements have integrated glasses-free AR navigation with surgical robots [19]. However, these systems do not leave the surgeon's hands unimpeded; they compromise hand-eye coordination and limit real-time monitoring, significantly constraining surgical maneuverability [37].
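To make the alignment problem concrete, the sketch below implements the classic paired-point rigid registration (the Kabsch/SVD solution, a standard technique in image-guided surgery generally, not the algorithm of any specific product) that maps image-space fiducials to tracked patient-space positions; the fiducial coordinates, noise level, and lesion target are synthetic assumptions.

```python
# A minimal sketch of the registration step an AR navigation system performs:
# align preoperative image-space fiducials to patient-space counterparts,
# then report the target registration error (TRE) at a hypothetical target.
import numpy as np

def rigid_register(image_pts: np.ndarray, patient_pts: np.ndarray):
    """Least-squares rigid transform (Kabsch/SVD) mapping image_pts -> patient_pts."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    return R, t

# Hypothetical fiducial pairs (mm): scalp markers in image space vs. tracked space
rng = np.random.default_rng(0)
img = rng.uniform(-80, 80, size=(5, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)      # 90-degree rotation
t_true = np.array([10.0, -5.0, 2.0])
pat = img @ R_true.T + t_true + rng.normal(0, 0.5, (5, 3))        # 0.5 mm tracking noise

R, t = rigid_register(img, pat)
target_img = np.array([12.0, 30.0, 55.0])                         # hypothetical lesion centroid
tre = np.linalg.norm((R @ target_img + t) - (R_true @ target_img + t_true))
print(f"target registration error: {tre:.2f} mm")
```

In practice, registration quality is summarized by exactly this kind of target registration error, measured at clinically relevant points rather than at the fiducials themselves.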

Consequently, recent advancements in HMDs have gained traction due to their ability to facilitate hands-free operation and their applicability to macroscopic views, making them adaptable to a wide range of surgical procedures [24,31,32,37,39,40,69,76,81] (Fig. 2). These HMD systems provide sterile alternatives to controllers, such as gesture, voice, and gaze, as exemplified by HoloLens 2 (Microsoft, Redmond, WA, USA), Magic Leap 2 (Magic Leap, Plantation, FL, USA), and the recently launched Apple Vision Pro (Apple, Cupertino, CA, USA) [25]. A recent report indicated that AR navigation with HMDs does not increase operation time and provides satisfactory results for surgeons in surgical planning, suggesting that its adoption into routine clinical practice is imminent [17]. However, HMD systems face unique challenges, including user fatigue, dizziness during prolonged use, short battery life, and a narrow field of view [25,39,77]. Additional concerns include privacy protection and device synchronization issues [6,25,40,56].

Fig. 2.

Timeline of head-mounted display development and clinical applications as augmented reality navigation systems in neurosurgery. VP shunt : ventriculoperitoneal shunt.

Although HMDs represent a promising direction in AR navigation, researchers have also explored alternative approaches to address existing limitations. Another approach, initially attempted by Iseki et al. [38] in 1997, involves directly projecting reconstructed AR images onto the patient’s body. While a commercially available video projector can be used to implement this method, even without the need for a hologram, technical challenges remain. These include image distortion and alignment issues caused by projecting 2D images onto the curved human body, as well as potential interference between the projected image and existing surgical tools [11,47]. These limitations currently hinder immediate practical application.
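To see why projection onto a curved body is geometry-sensitive, the toy sketch below uses a textbook pinhole model with hypothetical intrinsic parameters (not a calibrated commercial projector) to show how a modest error in the assumed surface depth displaces where the overlay lands.

```python
# A minimal sketch, assuming an idealized pinhole projector: a 3D anatomical
# point is mapped to projector pixels, and a small depth error on a curved
# surface shifts the projected overlay away from its intended location.
import numpy as np

K = np.array([[1500.0, 0.0, 960.0],     # hypothetical projector intrinsics (pixels)
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

def project(point_cam: np.ndarray) -> np.ndarray:
    """Project a 3D point (projector frame, mm) to pixel coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

target = np.array([20.0, -10.0, 600.0])        # point on the scalp, 600 mm away
shifted = target + np.array([0.0, 0.0, 15.0])  # 15 mm depth error on a curved surface
print(project(target), project(shifted))       # the overlay drifts by several pixels
```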

Clinical evidence, though preliminary, has demonstrated the advantages of AR navigation in three ways. First, regarding surgical precision and safety, traditional methods require surgeons to mentally integrate multiple image sources, increasing cognitive load and the potential for spatial orientation errors. AR navigation addresses this through direct 3D visualization, reducing cognitive demand and allowing surgeons to maintain better focus on the surgical field [9]. Recent studies have demonstrated promising accuracy compared with conventional navigation systems. Dho et al. [23] reported an error margin of less than 1 mm in their series, while a meta-analysis by Fick et al. [29] showed that AR navigation achieved a mean target registration error of 2.3 mm (95% confidence interval, 0.7–4.4 mm), comparable to conventional navigation systems. This accuracy also reduces radiation exposure from intraoperative imaging, providing long-term safety benefits for both patients and surgical staff [31]. Second, conventional navigation systems face efficiency challenges because frequent probe-based checks interrupt surgical flow. AR navigation, particularly with HMDs, provides hands-free operation that enables uninterrupted surgical procedures [39]. This has led to measurable improvements : Yoon et al. [76] reported a 15.05% reduction in screw placement time, while Colombo et al. [17] demonstrated an average four-minute decrease in surgical preparation time without compromising surgical efficiency. Third, there is the matter of cost-effectiveness and accessibility of surgical navigation technology. Traditional neuronavigation systems require a significant investment, ranging from $250,000 to $950,000, with estimated per-procedure costs of $14,480 [53]. In contrast, modern AR navigation with HMDs offers a more economical alternative, with devices such as the HoloLens 2 and Apple Vision Pro costing approximately $3,500–4,000 [25]. The cost-effectiveness of AR navigation extends beyond its lower initial investment, as it facilitates minimally invasive procedures that lead to shorter hospital stays and faster patient recovery, potentially reducing overall healthcare costs [81].

Despite these advantages, various limitations remain, though recent innovations are providing solutions. Software improvements include order-independent transparency, multi-layer alpha blending, and filtered alpha compositing, which help address depth perception and alignment issues [22]. Manual fine-tuning functions and technologies such as simultaneous localization and mapping and Light Detection and Ranging sensing are enhancing real-time adjustment capabilities across all AR approaches [19,22,23]. For HMDs in particular, improvements are also emerging in device development : newer devices feature swappable battery packs, improved weight distribution, enhanced eye-tracking, and higher refresh rates with more efficient graphics processing units, reducing user fatigue while maintaining extended operation times [25].

In the field of neurosurgery, the initial applications of AR navigation are anticipated to be in the following procedures : external ventricular drain insertion, frameless stereotactic biopsy, and screw fixation in spine surgery [23,31,67,76,77]. However, before AR navigation systems can be seamlessly integrated into clinical practice, technical hurdles need to be overcome, such as automated registration, tracking and calibration, minimizing alignment errors, and optimizing time synchronization [22,38,46,80].

Despite these challenges, AR navigation systems not only offer economic and clinical advantages over existing neuro-navigation systems but also enable remote communication and intuitive sharing of surgical anatomical findings, facilitating virtual collaborative surgery. With advancing hardware and increasingly convenient applications, AR navigation will be able to establish itself as an essential tool for neurosurgery in the near future.

CENTRALIZED INTRAOPERATIVE NEUROPHYSIOLOGICAL MONITORING (IONM) POWERED BY AI

IONM is employed in the vast majority of neurosurgical operating rooms, where technicians handle the equipment and doctors interpret the results, classically all within the same space. However, IONM faces practical challenges contingent on the availability and proficiency of the personnel responsible for monitoring and interpretation. Efforts are underway to address these issues by developing AI surveillance systems utilizing big data and by establishing centralized systems for remote interpretation.

In 2001, the Health Care Financing Administration of the United States approved real-time remote monitoring, and currently over 80% of cases are monitored through telemedicine [8,14]. Despite initial technical and institutional hurdles, remote IONM has emerged as routine practice in the United States, supporting over 200,000 high-risk surgical procedures annually [8]. Furthermore, thanks to recent remarkable developments in AI technology, attempts are being made to enable more efficient monitoring by using AI to detect abnormal events in real time during IONM. AI could be utilized in a stepwise manner in the neurosurgical field. Building upon the categorization proposed by Wilson et al. [73], we propose classifying these applications into three categories : machine learning (ML)-based prediction models, intelligent decision support models, and postoperative outcome prediction models (Table 1).

Summary of applications of machine learning in the field of intraoperative neurophysiologic monitoring

ML-based prediction models have been employed to interpret complex IONM signals under diverse conditions such as anesthesia, patient-specific physiologic variables, and artifacts. Cui et al. [18] predicted a dynamic baseline for somatosensory evoked potentials (SSEP) by calibrating for non-surgical factors using a least squares-support vector regression model, achieving a mean squared error of 0.15. Wilson et al. [72] likewise predicted the dynamic baseline under the effect of the anesthetic agent sevoflurane, with a root mean square error of 2.17. For electromyography signals, ML was used to classify action potential signals in thyroid surgery, achieving 90% accuracy [79].
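As a hedged illustration of this first category, the sketch below trains a support vector regression on synthetic data to predict a dynamic SSEP baseline from non-surgical covariates; the covariates, units, and model settings are illustrative assumptions in the spirit of, not a reproduction of, the cited studies.

```python
# A minimal sketch, loosely inspired by the dynamic-baseline idea of Cui et al. [18]:
# regress an SSEP baseline amplitude from non-surgical covariates so that drift
# caused by anesthesia or physiology is not mistaken for a surgical event.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 300
# hypothetical covariates: sevoflurane MAC, mean arterial pressure, temperature
X = np.column_stack([rng.uniform(0.5, 1.5, n),
                     rng.uniform(60, 100, n),
                     rng.uniform(35, 37.5, n)])
# synthetic amplitude (uV): suppressed by anesthetic depth, plus noise
y = 2.5 - 1.2 * X[:, 0] + 0.01 * (X[:, 1] - 80) + rng.normal(0, 0.1, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:200], y[:200])
pred = model.predict(X[200:])
print(f"MSE on held-out sweeps: {mean_squared_error(y[200:], pred):.3f}")
```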

The next step in combining AI with IONM is the intelligent decision support model : real-time, accurate, automated identification of neurological insults, with alert signals suggested to human experts by AI algorithms. Fan et al. [28] proposed an intelligent decision system to reduce false warnings and ultimately improve true spinal cord injury detection, using least squares and multi-support vector regression models to create a dynamic baseline and overcome the limitations of nominal baselines in SSEP monitoring during spinal surgery. Qiao et al. [58] developed a deep learning model for visual evoked potential (VEP) classification during surgical resection of sellar region tumors, achieving performance comparable to human experts. Jiang et al. [43] introduced a supervised ML system to help surgeons distinguish ventral from dorsal roots during selective dorsal rhizotomy. These systems help surgeons interrupt surgical procedures in a timely manner to prevent neurologic deterioration while reducing time lost to false-positive signs.
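The decision-support layer then reduces to comparing incoming sweeps against that dynamic baseline. The sketch below encodes the conventional SSEP warning criteria (>50% amplitude loss or >10% latency prolongation) as a simple rule; real systems combine such rules with learned models like the one above, and all values here are illustrative.

```python
# A minimal sketch of the alerting logic such decision-support systems automate:
# compare each incoming SSEP sweep against a (possibly model-predicted) dynamic
# baseline and raise a warning on conventional amplitude/latency criteria.
from dataclasses import dataclass

@dataclass
class Sweep:
    amplitude_uv: float   # peak-to-peak amplitude (uV)
    latency_ms: float     # N20 latency (ms)

def check_sweep(current: Sweep, baseline: Sweep) -> list[str]:
    alerts = []
    if current.amplitude_uv < 0.5 * baseline.amplitude_uv:
        alerts.append("amplitude dropped >50% from dynamic baseline")
    if current.latency_ms > 1.10 * baseline.latency_ms:
        alerts.append("latency prolonged >10% from dynamic baseline")
    return alerts

baseline = Sweep(amplitude_uv=2.0, latency_ms=20.0)   # from the prediction model
print(check_sweep(Sweep(0.8, 22.5), baseline))        # both criteria breached
```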

Postoperative outcome prediction models could help human experts link IONM changes to postoperative neurological changes. Jamaludin et al. [41] predicted postoperative neurologic outcomes in lumbar surgery with a sensitivity of 87.5%, using k-nearest neighbors to interpret intraoperative motor evoked potential (MEP) signal changes. Pescador et al. [55] reported a model using Bayesian networks to predict postoperative outcomes in brain surgery by interpreting signals such as MEP, SSEP, and VEP. Looking ahead, merging AI and remote IONM with cloud data-sharing systems has been proposed but not yet realized. Equipping a remote IONM system with such AI algorithms would further maximize efficiency.
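A minimal sketch of this third category, loosely inspired by the k-nearest-neighbors approach of Jamaludin et al. [41] but not their implementation, might classify postoperative deficit from summary MEP features; the feature definitions and data below are synthetic placeholders.

```python
# A minimal sketch: predict postoperative deficit from intraoperative MEP
# summary features and report sensitivity, the metric emphasized in that study.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 200
# hypothetical features: % MEP amplitude loss at closure, worst intraoperative % loss
X = rng.uniform(0, 100, size=(n, 2))
y = (X[:, 0] > 70).astype(int) ^ (rng.random(n) < 0.1)  # noisy synthetic deficit label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"sensitivity: {recall_score(y_te, clf.predict(X_te)):.2f}")
```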

The AI-powered remote IONM system has the potential to revolutionize surgical procedures. This innovative approach overcomes spatial and temporal constraints, allowing neurosurgeons to perform operations with a reduced monitoring team while mitigating the risks associated with insufficiently trained personnel. Such a system could potentially enable more secure surgeries by facilitating reliable IONM even in hospitals struggling with infrastructure deficiencies or non-routine surgical situations.

REAL-TIME AND REPEATABLE INTRAOPERATIVE TISSUE DIAGNOSIS

Traditional frozen section examination remains the gold standard for intraoperative tissue diagnosis. However, it has several limitations, including its time-consuming nature, typically requiring approximately 20–30 minutes per biopsy even with digital pathology systems [2,12]. Moreover, in most cases a diagnosis cannot be requested and confirmed more than once or twice during surgery. The emerging in vivo real-time diagnosis systems, currently under active research and verification, are poised to revolutionize digital pathology, allowing continuous sharing and communication of surgical tissue images with pathologists without time or frequency constraints and enabling more precise surgery.

In the neurosurgical field, two major methods under development for in vivo real-time tissue diagnosis are Raman-based imaging and confocal laser microscopy [20,51]. The Raman effect, discovered in 1928, exploits the small fraction (roughly 1 in 10 million) of inelastically scattered photons to detect molecular composition in a label-free, non-destructive, and noninvasive manner [59]. Spontaneous Raman spectroscopy (SR), the fundamental form of Raman spectroscopy, was first used in a laboratory setting to distinguish edematous from normal brain tissue in rats in 1990, and later to differentiate brain tumors from normal tissue in humans in 1993 [50,68]. Despite its strength in quantifying tissue molecular information, Raman imaging is limited by a weak Raman signal that is easily obscured by autofluorescence, by low spatial resolution due to long infrared wavelengths, and by the unintuitive datasets SR produces [20,30]. Improved signal processing and data analysis techniques have brought SR into the neurosurgical field for qualifying biopsy specimens : Koljenović et al. [44] distinguished glioblastoma from necrotic tissue with an accuracy of 100%. However, SR still has limitations for in vivo application, such as a large probe and a low signal-to-noise ratio. Efforts toward clinical in vivo application have led to several variant Raman spectroscopy techniques, including coherent anti-Stokes Raman scattering (CARS) microscopy and stimulated Raman scattering (SRS) microscopy [20,30,83]. CARS microscopy, which employs multiphoton techniques to generate strong non-linear anti-Stokes signals unaffected by autofluorescence, was introduced to the biomedical field in 1999 by Zumbusch et al. [83] of the Xie group, offering high sensitivity, high spatial resolution, and 3D sectioning capability. It was subsequently and rapidly applied to image various biological structures, living cells, and tissues [16,57]. SRS microscopy, markedly improved by Freudiger et al. [30], also of the Xie group, uses the stimulated emission phenomenon and has advantages over CARS in that signal intensity scales linearly with chemical concentration and is unaffected by non-resonant background sources. Despite a small number of patients, Jermyn et al. [42] reported intraoperative use of Raman spectroscopy to detect brain cancer in vivo with an accuracy of 92% and introduced advanced probes for intraoperative in vivo use. Additionally, to improve biopsy accuracy, Desroches et al. [21] reported the development and first in-human use of a biopsy needle integrated with Raman spectroscopy. More recently, Raman spectroscopy has been merged with AI to shorten diagnosis time, moving toward real-time in vivo pathology [34-36]. A multicenter clinical trial of an AI-integrated SRS system called FastGlioma showed image acquisition in under 10 seconds and the ability to detect and quantify the degree of tumor infiltration with an average area under the receiver operating characteristic curve of 92.1% [34].
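Common to these Raman studies is a spectral-classification step. The sketch below shows one generic pipeline (min-max normalized synthetic spectra, PCA compression, and a linear SVM); the peak positions merely stand in for Raman bands, and none of this reproduces a published model.

```python
# A minimal sketch of Raman spectral classification: normalize spectra,
# compress them with PCA, and separate "tumor" from "normal" with an SVM.
# Spectra here are synthetic Gaussian peaks over an autofluorescence-like slope.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
wavenumbers = np.linspace(400, 1800, 700)          # cm^-1 axis

def spectrum(peak_cm: float) -> np.ndarray:
    peak = np.exp(-0.5 * ((wavenumbers - peak_cm) / 15.0) ** 2)
    baseline = 0.3 + 1e-4 * wavenumbers            # broad background slope
    s = peak + baseline + rng.normal(0, 0.02, wavenumbers.size)
    return (s - s.min()) / (s.max() - s.min())     # min-max normalization

# classes differ by a dominant band (positions are illustrative)
X = np.array([spectrum(1004 if i % 2 else 1450) for i in range(120)])
y = np.arange(120) % 2                             # 0 = "normal", 1 = "tumor"

clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```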

Confocal microscopy, introduced in the mid-20th century by Marvin Minsky, revolutionized imaging technology by utilizing a confocal aperture (pinhole) to achieve high spatial resolution and deep in-focus plane imaging [49]. In 1967, Egger and Petrăn [26] introduced a technique in the biological field for viewing unstained brain and ganglion cells. Despite these early trials, confocal microscopy had limited applications until the early 2000s owing to the underdevelopment of scanning image digitization and laser sources [5]. The transition from laboratory to clinical settings was hindered by the challenge of miniaturization. After considerable effort, clinically applicable confocal laser endomicroscopy (CLE) was developed, maintaining the fundamental pinhole principle while employing a miniaturized probe. CLE is useful for intraoperative diagnosis because it continuously acquires real-time, high-resolution images without tissue preparation. Initially deployed in gastroenterology, urology, and gynecology, the technology showed promise in improving the accuracy of biopsy procedures [78]. In 2010, Sankar et al. [63] conducted the first in vivo study using CLE in the neurosurgical field, distinguishing tumor from nontumor tissue, including infiltrative tumor margins, in a glioblastoma mouse model. Subsequently, Sanai et al. [61] of the Spetzler group conducted the first human in vivo study in 2011 to assess the feasibility of intraoperative CLE during brain tumor resection. Human in vivo studies are summarized in Table 2.

Summary of human in vivo studies of confocal laser endomicroscopy in neurosurgical field

In CLE systems, the selection of appropriate fluorescent agents is crucial. While current laser-based confocal systems can detect autofluorescence using low-power lasers, they often suffer from relatively low resolution [78]. Therefore, efforts have been made to explore and develop suitable fluorescent agents such as fluorescein, 5-aminolevulinic acid (5-ALA), and indocyanine green (ICG), all well known to neurosurgeons. Fluorescein sodium, approved by the Food and Drug Administration (FDA) for intravenous use, is the first and most popular agent employed in the neurosurgical field for CLE imaging [1,2,10,27,33,48,51,54,61,74,75,78]. It was employed as the fluorescent agent for CLE by the aforementioned Sanai et al. [61] in a human study, with a reported tumor detection accuracy of 92.9%. Another fluorescent agent is 5-ALA, which is administered orally in high-grade glioma surgery [35]. In CLE imaging, Sanai et al. [62] employed 5-ALA to demonstrate the correspondence of tumor margins with standard histopathology in vivo in low-grade glioma. Although ICG, approved by the FDA for intravenous use, is not a widely used agent in CLE systems, Charalampaki et al. [15] reported results showing human cellular cytoarchitecture in vivo at 400- and 1000-fold magnification. Recently, a multicenter human ex vivo study using ICG showed better negative predictive value, positive predictive value, and specificity, as well as shorter preparation time, compared with frozen biopsy [12]. Supplementary Video 2 illustrates an example of in vivo intraoperative tissue image acquisition using CLE with ICG contrast (our experience).

Currently, three commercially available CLE systems exist : Optiscan/Pentax ISC-1000 (a joint venture between Pentax, Tokyo, Japan and Optiscan Pty Ltd., Melbourne, Australia), Cellvizio (Mauna Kea Technologies, Paris, France), and CONVIVO (Carl Zeiss Meditec AG, Oberkochen, Germany) [15,51,62,74,78]. Another new CLE system, cCeLL (VPIX medical, Daejeon, Korea), is under development [12]. However, it appears somewhat premature to transition from conventional pathology based on Hematoxylin & Eosin staining to image-based pathology derived from new imaging technology such as in vivo microscopy. Additionally, it is important to note that various studies have highlighted the operator-dependent nature of the examination and the associated learning curve, both in conducting the examination and in interpreting the results [1,33,51]. In a recent trial, Abramov et al. [2] reported FDA clearance of a clinical-grade in vivo confocal system with a cloud data-sharing platform, paving the way for future advances in telepathology. Attempts to integrate deep learning with confocal imaging to achieve real-time diagnosis are under active development [82].
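As a sketch of what such deep learning integration might look like (an illustrative architecture, not the network of Ziebart et al. [82]), a small convolutional classifier over grayscale CLE image patches could be structured as follows; the patch size and layer widths are assumptions.

```python
# A minimal sketch of a CNN that classifies grayscale CLE image patches as
# tumor vs. non-tumor. Architecture, patch size, and data are illustrative only.
import torch
import torch.nn as nn

class CLEPatchNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                      # global pooling
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = CLEPatchNet()
dummy = torch.randn(8, 1, 64, 64)       # a batch of hypothetical 64x64 patches
print(model(dummy).shape)               # torch.Size([8, 2]) -> per-class logits
```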

If these endeavors are successful and in vivo microscopy becomes standard practice during surgery, the intraoperative tissue diagnosis system is anticipated to serve as a revolutionary catalyst. Although these systems come with a modest initial cost (CLE devices cost approximately $200,000, with per-case operational costs of $275), long-term cost-effectiveness could be achieved through several factors : reduced pathology processing costs, shortened operating times, and improved patient outcomes through enhanced tumor resection rates with minimal tissue damage [12,15,66,74]. The integration of advanced technologies such as AI-assisted Raman spectroscopy and CLE systems with improved fluorescent agents has already demonstrated remarkable potential [36,82]. This could instigate fundamental changes in the approach to neurosurgery. The realization of collaborative surgery between neurosurgeons and neuropathologists is achievable through the virtual communication layer of image transfer. The successful implementation of these systems, combined with continuing technological advances, holds promise for establishing a new standard in precise and efficient neurosurgical procedures.

ETHICAL IMPLICATIONS IN MEDICAL METAVERSE

The implementation of the medical metaverse in real-world healthcare settings raises several critical ethical considerations. First, data privacy, security, and confidentiality are paramount concerns. The multi-layered communications between medical personnel involve highly sensitive information, including not only conventional personal data such as standard demographics and privacy-sensitive information, but also complex neurophysiological data that were neither created nor shared before the emergence of these environments. Transferring data across multiple layers introduces risks of errors that could have catastrophic consequences for patients. While emerging technologies like blockchain offer potential solutions for secure data transfer and verification, implementation challenges remain [4,45]. The informed consent process becomes particularly challenging, as patients must understand and agree to various levels of data sharing across multiple virtual layers and participants [52].
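To illustrate the kind of integrity checking that blockchain-style approaches provide, the toy sketch below chains data-transfer records with SHA-256 hashes so that tampering with any earlier record invalidates everything after it; this is a teaching example, not a production distributed ledger.

```python
# A minimal sketch of hash-chained audit logging for cross-layer data transfers:
# each event commits to the previous one, so retroactive edits break verification.
import hashlib
import json

def append_event(chain: list[dict], payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": prev, "payload": block["payload"]}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

log: list[dict] = []
append_event(log, {"event": "IONM stream shared", "to": "remote reader"})
append_event(log, {"event": "CLE image sent", "to": "neuropathologist"})
print(verify(log))                       # True
log[0]["payload"]["to"] = "attacker"     # tamper with an earlier record
print(verify(log))                       # False
```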

Second, the integration of AI-powered technologies introduces additional ethical complexities. While AI-powered decision support models show promise for IONM and neuropathological diagnosis, they raise significant questions about responsibility and accountability. The “black box” nature of many AI models makes it difficult to trace the decision-making process when errors occur [4]. This challenge may be amplified in the medical metaverse, where multiple specialists interact across different digital layers. Another issue arising from AI concerns bias and representation. As highlighted by Wang et al. [71], AI systems trained primarily on data from specific demographic groups, such as white males, can produce biased or less accurate results for others, particularly women and people of color. These biases could lead to misdiagnoses or unequal treatment within the medical metaverse.

Finally, equitable accessibility remains a significant concern. The sophisticated technology required for the medical metaverse involves substantial costs, potentially limiting access to specialized hospitals and creating disparities in healthcare delivery [3]. This economic barrier could exacerbate existing healthcare inequalities and limit the technology’s broader benefits.

CONCLUSION

Technological advancements capable of actualizing the concept of a medical metaverse could profoundly impact the future of medicine. Neurosurgical operating rooms will evolve into true medical metaverse environments, where various surgical participants communicate in overlapping virtual layers throughout the entire process of neurosurgery.

Notes

Conflicts of interest

Chul-Kee Park has been a member of the editorial board of JKNS since November 2020. He was not involved in the review process of this review article. No potential conflict of interest relevant to this article was reported.

Informed consent

This type of study does not require informed consent.

Author contributions

Conceptualization : CKP; Data curation : SMN, YHB, YSD; Formal analysis : SMN, YHB, YSD; Funding acquisition : CKP; Methodology : SMN, YHB, YSD; Project administration : CKP; Visualization : SMN; Writing - original draft : SMN; Writing - review & editing : SMN, YHB, YSD, CKP

Data sharing

None

Preprint

None

Acknowledgements

The authors wish to thank So Young Yim for support with the illustration included in this article. We also acknowledge MEDICALIP Co. Ltd. and VPIX Medical Co. Ltd. for allowing us to use the video clips used in this study.

This research was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; the Ministry of Food and Drug Safety) (grant number : RS-2022-00197971) and by the ‘Supporting Project to evaluation Domestic Medical Devices in Hospitals’ funded by the Korea government (the Ministry of Health & Welfare; Korea Health Industry Development Institute), and was supported in part by grants from the National Cancer Center, Korea (NCC-2411840-1).

We thank So Young Lim for her original artwork used in this manuscript.

Supplementary materials

The online-only data supplement is available with this article at https://doi.org/10.3340/jkns.2024.0160.

References

1. Abramov I, Park MT, Belykh E, Dru AB, Xu Y, Gooldy TC, et al. Intraoperative confocal laser endomicroscopy: prospective in vivo feasibility study of a clinical-grade system for brain tumors. J Neurosurg 138:587–597. 2023;
2. Abramov I, Park MT, Gooldy TC, Xu Y, Lawton MT, Little AS, et al. Real-time intraoperative surgical telepathology using confocal laser endomicroscopy. Neurosurg Focus 52:E9. 2022;
3. Ahuja AS, Polascik BW, Doddapaneni D, Byrnes ES, Sridhar J. The digital metaverse: applications in artificial intelligence, medical education, and integrative health. Integr Med Res 12:100917. 2023;
4. Amann J, Blasimme A, Vayena E, Frey D, Madai VI, ; Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 20:310. 2020;
5. Amos WB, White JG. How the confocal laser scanning microscope entered biological research. Biol Cell 95:335–342. 2003;
6. Andrews C, Southworth MK, Silva JNA, Silva JR. Extended reality in medical practice. Curr Treat Options Cardiovasc Med 21:18. 2019;
7. Aschke M, Wirtz CR, Raczkowsky J, Worn H, Kunze S. Augmented reality in operating microscopes for neurosurgical interventions. In : First International IEEE EMBS Conference on Neural Engineering, 2003.. IEEE; 2003. p. 652–655.
8. Balzer JR, Caviness J, Krieger D. The evolution of real-time remote intraoperative neurophysiological monitoring. Computer 56:28–38. 2023;
9. Baum ZMC, Lasso A, Ryan S, Ungi T, Rae E, Zevin B, et al. Augmented reality training platform for neurosurgical burr hole localization. J Med Robot Res 04:1942001. 2019;
10. Belykh E, Zhao X, Ngo B, Farhadi DS, Kindelin A, Ahmad S, et al. Visualization of brain microvasculature and blood flow in vivo: feasibility study using confocal laser endomicroscopy. Microcirculation 28:e12678. 2021;
11. Besharati Tabrizi L, Mahvash M. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg 123:206–211. 2015;
12. Byun YH, Won JK, Hong DH, Kang H, Kim JH, Yu MO, et al. A prospective multicenter assessor blinded pilot study using confocal laser endomicroscopy for intraoperative brain tumor diagnosis. Sci Rep 14:6784. 2024;
13. Cabrilo I, Schaller K, Bijlenga P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg 83:596–602. 2015;
14. Centers for Medicare & Medicaid Services. Physician Supervision of Diagnostic Tests. Available at : https://www.hhs.gov/guidance/document/physician-supervision-diagnostic-tests.
15. Charalampaki P, Nakamura M, Athanasopoulos D, Heimann A. Confocal-assisted multispectral fluorescent microscopy for brain tumor surgery. Front Oncol 9:583. 2019;
16. Cheng JX, Jia YK, Zheng G, Xie XS. Laser-scanning coherent anti-Stokes Raman scattering microscopy and applications to cell biology. Biophys J 83:502–509. 2002;
17. Colombo E, Regli L, Esposito G, Germans MR, Fierstra J, Serra C, et al. Mixed reality for cranial neurosurgical planning: a single-center applicability study with the first 107 subsequent holograms. Oper Neurosurg (Hagerstown) 26:551–558. 2023;
18. Cui H, Xie X, Xu S, Hu Y. A Dynamic Prediction Model for Intraoperative Somatosensory Evoked Potential Monitoring. In : 2015 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA). Shenzhen, China. IEEE; 2015. p. 31–35.
19. Cui Y, Zhou Y, Zhang H, Yuan Y, Wang J, Zhang Z. Application of glasses-free augmented reality localization in neurosurgery. World Neurosurg 180:e296–e301. 2023;
20. DePaoli D, Lemoine É, Ember K, Parent M, Prud’homme M, Cantin L, et al. Rise of Raman spectroscopy in neurosurgery: a review. J Biomed Opt 25:050901. 2020;
21. Desroches J, Lemoine É, Pinto M, Marple E, Urmey K, Diaz R, et al. Development and first in-human use of a Raman spectroscopy guidance system integrated with a brain biopsy needle. J Biophotonics 12:e201800396. 2019;
22. Dho YS, Lee BC, Moon HC, Kim KM, Kang H, Lee EJ, et al. Validation of real-time inside-out tracking and depth realization technologies for augmented reality-based neuronavigation. Int J Comput Assist Radiol Surg 19:15–25. 2024;
23. Dho YS, Park SJ, Choi H, Kim Y, Moon HC, Kim KM, et al. Development of an inside-out augmented reality technique for neurosurgical navigation. Neurosurg Focus 51:E21. 2021;
24. Diaz R, Yoon J, Chen R, Quinones-Hinojosa A, Wharen R, Komotar R. Real-time video-streaming to surgical loupe mounted head-up display for navigated meningioma resection. Turk Neurosurg 28:682–688. 2018;
25. Egger J, Gsaxner C, Luijten G, Chen J, Chen X, Bian J, et al. Is the apple vision pro the ultimate display? A first perspective and survey on entering the wonderland of precision medicine. JMIR Serious Games 12:e52785. 2024;
26. Egger MD, Petrăn M. New reflected-light microscope for viewing unstained brain and ganglion cells. Science 157:305–307. 1967;
27. Eschbacher J, Martirosyan NL, Nakaji P, Sanai N, Preul MC, Smith KA, et al. In vivo intraoperative confocal microscopy for real-time histopathological imaging of brain tumors. J Neurosurg 116:854–860. 2012;
28. Fan B, Li HX, Hu Y. An intelligent decision system for intraoperative somatosensory evoked potential monitoring. IEEE Trans Neural Syst Rehabil Eng 24:300–307. 2016;
29. Fick T, van Doormaal JAM, Hoving EW, Willems PWA, van Doormaal TPC. Current accuracy of augmented reality neuronavigation systems: systematic review and meta-analysis. World Neurosurg 146:179–188. 2021;
30. Freudiger CW, Min W, Saar BG, Lu S, Holtom GR, He C, et al. Label-free biomedical imaging with high sensitivity by stimulated raman scattering microscopy. Science 322:1857–1861. 2008;
31. Gibby J, Cvetko S, Javan R, Parr R, Gibby W. Use of augmented reality for image-guided spine procedures. Eur Spine J 29:1823–1832. 2020;
32. Heinrich F, Schwenderling L, Becker M, Skalej M, Hansen C. HoloInjection: augmented reality support for CT-guided spinal needle injections. Healthc Technol Lett 6:165–171. 2019;
33. Höhne J, Schebesch KM, Zoubaa S, Proescholdt M, Riemenschneider MJ, Schmidt NO. Intraoperative imaging of brain tumors with fluorescein: confocal laser endomicroscopy in neurosurgery. Clinical and user experience. Neurosurg Focus 50:E19. 2021;
34. Hollon T, Jiang C, Chowdury A, Nasir-Moin M, Kondepudi A, Aabedi A, et al. Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging. Nat Med 29:828–832. 2023;
35. Hollon T, Kondepudi A, Pekmezci M, Hou X, Scotford K, Jiang C, et al. Visual foundation models for fast, label-free detection of diffuse glioma infiltration. Available at : https://doi.org/10.21203/rs.3.rs-4033133/v1.
36. Hollon TC, Pandian B, Adapa AR, Urias E, Save AV, Khalsa SSS, et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat Med 26:52–58. 2020;
37. Incekara F, Smits M, Dirven C, Vincent A. Clinical feasibility of a wearable mixed-reality device in neurosurgery. World Neurosurg 118:e422–e427. 2018;
38. Iseki H, Masutani Y, Iwahara M, Tanikawa T, Muragaki Y, Taira T, et al. Volumegraph (overlaid three-dimensional image-guided navigation). Clinical application of augmented reality in neurosurgery. Stereotact Funct Neurosurg 68(1-4 Pt 1):18–24. 1997;
39. Ivan ME, Eichberg DG, Di L, Shah AH, Luther EM, Lu VM, et al. Augmented reality head-mounted display-based incision planning in cranial neurosurgery: a prospective pilot study. Neurosurg Focus 51:E3. 2021;
40. Jain S, Gao Y, Yeo TT, Ngiam KY. Use of mixed reality in neuro-oncology: a single centre experience. Life (Basel) 13:398. 2023;
41. Jamaludin MR, Lai KW, Chuah JH, Zaki MA, Hasikin K, Abd Razak NA, et al. Machine learning application of transcranial motor-evoked potential to predict positive functional outcomes of patients. Comput Intell Neurosci 2022:2801663. 2022;
42. Jermyn M, Mok K, Mercier J, Desroches J, Pichette J, Saint-Arnaud K, et al. Intraoperative brain cancer detection with Raman spectroscopy in humans. Sci Transl Med 7:274ra19. 2015;
43. Jiang W, Zhan Q, Wang J, Wei M, Li S, Mei R, et al. Quantitative identification of ventral/dorsal nerves through intraoperative neurophysiological monitoring by supervised machine learning. Front Pediatr 11:1118924. 2023;
44. Koljenović S, Choo-Smith LP, Bakker Schut TC, Kros JM, van den Berge HJ, Puppels GJ. Discriminating vital tumor from necrotic tissue in human glioblastoma tissue samples by Raman spectroscopy. Lab Invest 82:1265–1277. 2002;
45. Kuo TT, Kim HE, Ohno-Machado L. Blockchain distributed ledger technologies for biomedical and health care applications. J Am Med Inform Assoc 24:1211–1220. 2017;
46. Lai M, Skyrman S, Shan C, Babic D, Homan R, Edström E, et al. Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS One 15:e0227312. 2020;
47. Mahvash M, Besharati Tabrizi L. A novel augmented reality system of image projection for image-guided neurosurgery. Acta Neurochir (Wien) 155:943–947. 2013;
48. Martirosyan NL, Cavalcanti DD, Eschbacher JM, Delaney PM, Scheck AC, Abdelwahab MG, et al. Use of in vivo near-infrared laser confocal endomicroscopy with indocyanine green to detect the boundary of infiltrative tumor. J Neurosurg 115:1131–1138. 2011;
49. Minsky M. Memoir on inventing the confocal scanning microscope. Scanning 10:128–138. 1988;
50. Mizuno A, Kitajima H, Kawauchi K, Muraishi S, Ozaki Y. Near-infrared Fourier transform Raman spectroscopic study of human brain tissues and tumours. J Raman Spectrosc 25:25–29. 1994;
51. Mooney MA, Zehri AH, Georges JF, Nakaji P. Laser scanning confocal endomicroscopy in the neurosurgical operating room: a review and discussion of future applications. Neurosurg Focus 36:E9. 2014;
52. Nebeker C, Torous J, Bartlett Ellis RJ. Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med 17:137. 2019;
53. Paleologos TS, Wadley JP, Kitchen ND, Thomas DG. Clinical utility and cost-effectiveness of interactive image-guided craniotomy: clinical comparison between conventional and image-guided meningioma surgery. Neurosurgery 47:40–47. discussion 47-48. 2000;
54. Pavlov V, Meyronet D, Meyer-Bisch V, Armoiry X, Pikul B, Dumot C, et al. Intraoperative probe-based confocal laser endomicroscopy in surgery and stereotactic biopsy of low-grade and high-grade gliomas: a feasibility study in humans. Neurosurgery 79:604–612. 2016;
55. Pescador AM, Lavrador JP, Lejarde A, Bleil C, Vergani F, Baamonde AD, et al. Bayesian networks for risk assessment and postoperative deficit prediction in intraoperative neurophysiology for brain surgery. J Clin Monit Comput 38:1043–1055. 2024;
56. Porras JL, Khalid S, Root BK, Khan IS, Singer RJ. Point-of-view recording devices for intraoperative neurosurgical video capture. Front Surg 3:57. 2016;
57. Potma EO, de Boeij WP, van Haastert PJ, Wiersma DA. Real-time visualization of intracellular hydrodynamics in single living cells. Proc Natl Acad Sci U S A 98:1577–1582. 2001;
58. Qiao N, Song M, Ye Z, He W, Ma Z, Wang Y, et al. Deep learning for automatically visual evoked potential classification during surgical decompression of sellar region tumors. Transl Vis Sci Technol 8:21. 2019;
59. Raman CV, Krishnan KS. A new type of secondary radiation. Nature 121:501–502. 1928;
60. Roberts DW, Strohbehn JW, Hatch JF, Murray W, Kettenberger H. A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope. J Neurosurg 65:545–549. 1986;
61. Sanai N, Eschbacher J, Hattendorf G, Coons SW, Preul MC, Smith KA, et al. Intraoperative confocal microscopy for brain tumors: a feasibility analysis in humans. Neurosurgery 68(2 Suppl Operative):282–290. discussion 290. 2011;
62. Sanai N, Snyder LA, Honea NJ, Coons SW, Eschbacher JM, Smith KA, et al. Intraoperative confocal microscopy in the visualization of 5-aminolevulinic acid fluorescence in low-grade gliomas. J Neurosurg 115:740–748. 2011;
63. Sankar T, Delaney PM, Ryan RW, Eschbacher J, Abdelwahab M, Nakaji P, et al. Miniaturized handheld confocal microscopy for neurosurgery: results in an experimental glioblastoma model. Neurosurgery 66:410–417. discussion 417-418. 2010;
64. Shenai MB, Tubbs RS, Guthrie BL, Cohen-Gadol AA. Virtual interactive presence for real-time, long-distance surgical collaboration during complex microsurgical procedures. J Neurosurg 121:277–284. 2014;
65. Shu XJ, Wang Y, Xin H, Zhang ZZ, Xue Z, Wang FY, et al. Real-time augmented reality application in presurgical planning and lesion scalp localization by a smartphone. Acta Neurochir (Wien) 164:1069–1078. 2022;
66. Sievert M, Stelzle F, Aubreville M, Mueller SK, Eckstein M, Oetter N, et al. Intraoperative free margins assessment of oropharyngeal squamous cell carcinoma with confocal laser endomicroscopy: a pilot study. Eur Arch Otorhinolaryngol 278:4433–4439. 2021;
67. Skyrman S, Lai M, Edström E, Burström G, Förander P, Homan R, et al. Augmented reality navigation for cranial biopsy and external ventricular drain insertion. Neurosurg Focus 51:E7. 2021;
68. Tashibu K. Analysis of water content in rat brain using Raman spectroscopy. No to shinkei 42:999–1004. 1990;
69. Van Gestel F, Frantz T, Buyck F, Geens W, Neuville Q, Bruneau M, et al. Neuro-oncological augmented reality planning for intracranial tumor resection. Front Neurol 14:1104571. 2023;
70. Wang JY, Qu V, Hui C, Sandhu N, Mendoza MG, Panjwani N, et al. Stratified assessment of an FDA-cleared deep learning algorithm for automated detection and contouring of metastatic brain tumors in stereotactic radiosurgery. Radiat Oncol 18:61. 2023;
71. Wang Y, Su Z, Zhang N, Xing R, Liu D, Luan TH, et al. A survey on metaverse: fundamentals, security, and privacy. IEEE Commun Surv Tutor 25:319–352. 2023;
72. Wilson JP Jr, Kumbhare D, Ronkon C, Guthikonda B, Hoang S. Application of machine learning strategies to model the effects of sevoflurane on somatosensory-evoked potentials during spine surgery. Diagnostics (Basel) 13:3389. 2023;
73. Wilson JP Jr, Kumbhare D, Kandregula S, Oderhowho A, Guthikonda B, Hoang S. Proposed applications of machine learning to intraoperative neuromonitoring during spine surgeries. Neurosci Inform 3:100143. 2023;
74. Xu Y, Abramov I, Belykh E, Mignucci-Jiménez G, Park MT, Eschbacher JM, et al. Characterization of ex vivo and in vivo intraoperative neurosurgical confocal laser endomicroscopy imaging. Front Oncol 12:979748. 2022;
75. Xu Y, Mathis AM, Pollo B, Schlegel J, Maragkou T, Seidel K, et al. Intraoperative in vivo confocal laser endomicroscopy imaging at glioma margins: can we detect tumor infiltration? J Neurosurg 40:357–366. 2023;
76. Yoon JW, Chen RE, Han PK, Si P, Freeman WD, Pirris SM. Technical feasibility and safety of an intraoperative head-up display device during spine instrumentation. Int J Med Robot 13:e1770. 2017;
77. Yoon JW, Chen RE, ReFaey K, Diaz RJ, Reimer R, Komotar RJ, et al. Technical feasibility and safety of image-guided parieto-occipital ventricular catheter placement with the assistance of a wearable head-up display. Int J Med Robot 13:e1836. 2017;
78. Zehri AH, Ramey W, Georges JF, Mooney MA, Martirosyan NL, Preul MC, et al. Neurosurgical confocal endomicroscopy: a review of contrast agents, confocal systems, and future imaging modalities. Surg Neurol Int 5:60. 2014;
79. Zha X, Wehbe L, Sclabassi RJ, Mace Z, Liang YV, Yu A, et al. A deep learning model for automated classification of intraoperative continuous emg. IEEE Trans Med Robot Bionics 3:44–52. 2021;
80. Zhang J, Yang Z, Jiang S, Zhou Z. A spatial registration method based on 2D-3D registration for an augmented reality spinal surgery navigation system. Int J Med Robot 20:e2612. 2024;
81. Zhang ZY, Duan WC, Chen RK, Zhang FJ, Yu B, Zhan YB, et al. Preliminary application of mixed reality in neurosurgery: development and evaluation of a new intraoperative procedure. J Clin Neurosci 67:234–238. 2019;
82. Ziebart A, Stadniczuk D, Roos V, Ratliff M, von Deimling A, Hänggi D, et al. Deep neural network for differentiation of brain tumor tissue displayed by confocal laser endomicroscopy. Front Oncol 11:668273. 2021;
83. Zumbusch A, Holtom GR, Xie XS. Three-dimensional vibrational imaging by coherent anti-stokes Raman scattering. Phys Rev Lett 82:4142–4145. 1999;


Table 1.

Summary of applications of machine learning in the field of intraoperative neurophysiologic monitoring

Classification Study Type of monitoring No. of surgery/samples Type of cases ML model Best performance Summary of result
ML based prediction model Cui et al. [18] (2015) SSEP 9/158 Spine surgery (scoliosis correction surgery) LS-SVM MSE 0.15 Predicted dynamic baseline of SSEP on non-surgical factors
Training NA/125+ MAPE 20.96
Testing NA/the rest
Wilson et al. [72] (2023) SSEP 10/NA Spine surgery SVM RMSE 2.17 Predict alternation of SSEP according to sevoflurane concentration
Training 8/NA Regression tree model
Testing 2/NA
Intelligent decision support model Fan et al. [28] (2016) SSEP Successful* 10/158 Spine surgery LS-SVR MSE of M-SVR, respectively Reduce false-positive warnings and accurately detect spinal cord injury
Training 9/NA M-SVR
Testing 1/NA Successful 0.047
False-positive† 4/72 False-positive 0.315
Training 0/0 Trauma 0.207
Testing 4/72
Trauma‡ 1/14
Training 0/0
Testing 1/14
Qiao et al. [58] (2019) VEP 76/2843 Neuro-oncology (sellar region tumor) Hybrid CNN & RNN Hybrid model, respectively Classification of VEP change (no change, increase, decrease)
Training NA/60%
Validation NA/20% Accuracy 87.4%
Testing NA/20% Simpler model
Jiang et al. [43] (2023) EMG 101/NA Congenital surgery (selective dorsal rhizotomy) DT Accuracy 83.1% Distinguish ventral/dorsal root
Training NA LR kNN
Testing NA NB Accuracy 95.9%, specificity 96.7%
SVM
kNN
NN
Postoperative outcome prediction model Jamaludin et al. [41] (2022) MEP 55 Spine surgery (lumbar) Hybrid kNN & bagged trees-based ML Fine kNN 8 : 2 ratio Predict postoperative
Training : testing ratio 7 : 3 and 8 : 2 Sensitivity 87.5%, specificity 33.3% neurologic outcome
Pescador et al. [55] (2024) MEP, SSEP, VEP 267/NA Neuro-oncology 198 BN NA Relation between intraoperative signal change and neurologic outcome
Training NA Neurovascular 69
Testing NA
*

Successful case : without interrupt, and signal change.

False positive case : surgery interrupted by an expert without spinal cord injury.

Trauma case : surgery interrupted by an expert, with spinal cord injury.

ML : machine learning, SSEP : somatosensory evoked potential, NA : not available, LS-SVM : least-squares support vector machine, MSE : mean squared error, MAPE : mean absolute percentage error, SVM : support vector machine, RMSE : root mean squared error, LS-SVR : least-squares support vector regression, M-SVR : multi support vector regression, VEP : visual evoked potential, CNN : convolutional neural network, RNN : recurrent neural network, kNN : k-nearest neighbors, EMG : electromyography, DT : decision tree, LR : logistic regression, NB : naïve Bayes, NN : neural network, MEP : motor evoked potential, BN : Bayesian network

Table 2.

Summary of human in vivo studies of confocal laser endomicroscopy in neurosurgical field

Study Commercial model Number of patients Diseases Fluorescent agent Summary of result
Sanai et al. [61] (2011) Optiscan 35 LGG 13 FNa Safety and feasibility (no statistical analysis)
HGG 8
Meningioma 8
Radiation necrosis 3
Sanai et al. [62] (2011) Optiscan 10 LGG 10 5-ALA Safety and feasibility in vivo & ex vivo (no statistical analysis)
Eschbacher et al. [27] (2012) Optiscan 50 Meningioma 24 FNa Comparing to H&E staining
HGG 12 Acc 92.9%
LGG 8
Schwannoma 4
Other tumors 2
Martirosyan et al. [48] (2011) Optiscan 74 Meningioma 30 FNa Mean duration time 16 minutes
Other tumors 14 Sn/Sp
HGG 13 HGGs 91/94
No tumor 7 Meningioma 97/93
Schwannoma 4
Metastasis 1
Pavlov et al. [54] (2016) Cellvizio 9 HGG 6 5-ALA 3, FNa 6 Safety and feasibility of fluorescence
LGG 2 Visualization of pathologic tissues (no statistical analysis)
Lymphoma 1
Charalampaki et al. [15] (2019) Cellvizio 13 and 22 rat Gliomas 5 ICG Safety and feasibility (no statistical analysis)
Meningioma 3
Metastasis 3
Schwannoma 2
Belykh et al. [10] (2021) CONVIVO 20 LGG 7 FNa Visualization of microvascular structure ex vivo & in vivo (no statistical analysis)
HGG 13
Höhne et al. [33] (2021) CONVIVO 12 Metastasis 5 FNa Safety and feasibility of CONVIVO system (no statistical analysis)
HGG 4
LGG 2
Gliosis 1
Abramov et al. [2] (2022) CONVIVO 11 (24 optical biopsies) Gliomas 6 FNa Feasibility of telepathology
Reactive gliosis 1 Acc of video/still image 96%/63%
Metastasis 1
Other tumors 3
Xu et al. [74] (2022) CONVIVO Ex vivo (43 patients, 118 optical biopsies, 14638 images) Ex vivo/in vivo FNa Comparing ex vivo/in vivo studies
Glioma 29/13 Brightness 60.7/112.1
Meningioma 7/5 Contrast 26.8/44.7
In vivo (30 patients, 87 optical biopsies, 6975 images) Metastasis 3/3 Sn 72/90
Treatment effect 0/4 Sp 90/94
Other tumors 3/6 PPV 97/97
AVM 1/0 NPV 38/81
Xu et al. [75] (2023) CONVIVO 28 (56 ROIs) HGG 26 FNa Proposing scoring system using CLE comparing to permanent biopsy
LGG 2 Concordance 61.6%
Sn/Sp/PPV/NPV 79/37/65/53
Abramov et al. [1] (2023) CONVIVO 30 (10713 images, 31 tumors) Glioma 13 FNa Acc/Sn/Sp
Other tumor 6 Comparing to frozen section 94/94/100
Meningioma 5 Comparing to permanent biopsy 92/90/94
Metastasis 3
Reactive gliosis 4

LGG : low-grade glioma, HGG : high-grade glioma, FNa : fluorescein sodium, 5-ALA : 5-aminolevulinic acid, H&E : Hematoxylin and Eosin, Acc : accuracy, Sn : sensitivity, Sp : specificity, ICG : indocyanine green, AVM : arteriovenous malformation, PPV : positive predictive value, NPV : negative predictive value, CLE : confocal laser endomicroscopy