Envisioning the Future of the Neurosurgical Operating Room with the Concept of the Medical Metaverse
Abstract
The medical metaverse can be defined as a virtual spatiotemporal framework wherein higher-dimensional medical information is generated, exchanged, and utilized through communication among medical personnel or patients. This occurs through the integration of cutting-edge technologies such as augmented reality (AR), virtual reality (VR), artificial intelligence (AI), big data, cloud computing, and others. We can envision a future neurosurgical operating room that utilizes such medical metaverse concepts as shared extended reality (AR/VR) visualization of the surgical field, AI-powered intraoperative neurophysiological monitoring, and real-time intraoperative tissue diagnosis. The future neurosurgical operating room will evolve into a true medical metaverse in which surgical participants communicate in overlapping virtual layers of surgery, monitoring, and diagnosis.
INTRODUCTION
In recent years, the term “metaverse” (a combination of the prefix “meta,” meaning transcendental/virtual, and “universe,” meaning world) has reemerged as a major topic of discussion with the maturation of augmented reality (AR) and virtual reality (VR) technologies. The term ‘metaverse’ was initially used mainly as an umbrella term for various contents that embodied real-life interactions in a virtual space. Based on this, some hospitals have built virtual hospitals that simply mimic the form of existing hospitals on the web, referring to them as “metaverse hospitals” and using them for promotional purposes. However, the metaverse in the medical field needs a more comprehensive and conceptual approach, one that acknowledges that visually embodied virtual spaces and avatars are not essential elements of what is generally considered the essence of the metaverse. The medical metaverse can be defined as any virtual spatiotemporal framework wherein higher-dimensional medical information is generated, exchanged, and utilized through communication and interaction among medical personnel or patients, using various convergence technologies. Specifically, the medical metaverse refers to an interactive system in which telepresence is embodied in a virtual space-time using the convergence of technologies such as AR, VR, artificial intelligence (AI), big data, cloud computing, robotics, brain-machine interfaces (BMI), the Internet of Things (IoT), and Web 3.0. This facilitates interaction, communication, treatment, and education with others, transcending temporal and spatial barriers.
There have also been various attempts to apply metaverse technologies to the neurosurgical field. In this review, we envision what the future neurosurgical operating room might look like with the application of metaverse technologies. The concepts of the neurosurgical metaverse discussed here share a common feature : surgical participants communicate and cooperate in multifaceted virtual layers of surgery, monitoring, and diagnosis (Fig. 1).
NEUROLOGICAL SURGERY GUIDED BY SHARED EXTENDED REALITY
AR navigation integrates anatomical information into surgical operations by overlaying computer-generated three-dimensional (3D) anatomical information onto the surgeon’s visual field. This technology is especially valuable in the neurosurgical field, where surgeons benefit from a 3D semi-immersive experience through the visualization of complex anatomy using multimodal images such as magnetic resonance (MR) and computed tomography (CT), thereby enhancing surgical precision and efficiency. Additionally, AR navigation in neurosurgery facilitates the education of trainees in neuroanatomy and serves as a cornerstone for telemedicine [9,17,64,77].
The development of AR navigation has been active in the fields of brain and spine surgery. With 3D modeling technology enhanced by AI advancements, the auto-segmentation of most brain structures from brain MR/CT images is currently being used in clinical practice [70]. Technologies are being actively developed to accurately and swiftly align rendered AR images with the human body for a more realistic representation [80]. AR navigation systems can be categorized by their display type, which broadly falls into two categories : monitor-based, including head-mounted displays (HMDs), and projection-based [11]. Traditionally, neurosurgeons have been more familiar with monitor-based approaches, which merge reconstructed images on various monitors such as current navigation systems, microscope eyepieces, and endoscopy monitors. One of the earliest attempts at AR navigation was made in 1986 by Roberts et al. [60], who merged CT images with microscopic views. While providing real-time visualization alongside the surgical field through microscope eyepieces seemed ideal and initially sparked enthusiastic development, challenges arose due to technical limitations. These included the construction of 3D images from real data sources and maintaining 3D perception given the focus issues between eyepieces and objects [7,60]. Efforts to merge and align radiological images directly onto surgical views have been ongoing across various fields of neurosurgery [7,13,46]. Significant improvements have been achieved in the segmentation and 3D reconstruction of radiological images, including fully and semi-automatic methods. However, achieving accurate real-time alignment of 3D reconstructed images to anatomical structures remains a challenge [23]. Developments in separate monitors overlaying preoperative MR/CT images, akin to current neuro-navigation systems, include attempts to simplify displays by using smartphones or tablets [23,65] (Supplementary Video 1, our experience). Recent advancements have integrated glasses-free AR navigation with surgical robots [19]. However, these systems fail to keep the surgeon’s hands unimpeded, compromising hand-eye coordination and limiting real-time monitoring capabilities, thus significantly constraining surgical maneuverability [37].
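To make the registration step concrete, the sketch below shows classical paired-point rigid registration (Kabsch/SVD), used conceptually when aligning a reconstructed 3D model with patient anatomy, followed by a target registration error computation of the kind reported in the accuracy studies discussed later in this section. It is an illustrative minimal example with hypothetical fiducial coordinates, not the algorithm of any specific navigation product cited in this review.

```python
# Minimal sketch : paired-point rigid registration (Kabsch/SVD) and target
# registration error (TRE). Illustrative only; fiducial data are hypothetical.
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Return rotation R and translation t mapping model_pts onto patient_pts."""
    cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - cm).T @ (patient_pts - cp)                   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ cm
    return R, t

def target_registration_error(R, t, model_targets, patient_targets):
    """Mean Euclidean distance (mm) at target points not used for fitting."""
    mapped = model_targets @ R.T + t
    return np.linalg.norm(mapped - patient_targets, axis=1).mean()

# Hypothetical fiducials (mm) with small localization noise
rng = np.random.default_rng(0)
model_fid = rng.uniform(0.0, 100.0, (6, 3))
t_true = np.array([5.0, -3.0, 10.0])
patient_fid = model_fid + t_true + rng.normal(0.0, 0.5, (6, 3))
R, t = rigid_register(model_fid, patient_fid)
print("TRE (mm):", target_registration_error(R, t, model_fid, patient_fid))
```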
Consequently, recent advancements in HMDs have gained traction due to their ability to facilitate hands-free operation and their applicability to macroscopic views, making them adaptable to a wide range of surgical procedures [24,31,32,37,39,40,69,76,81] (Fig. 2). These HMD systems provide sterile alternatives to controllers, such as gesture, voice, and gaze, as exemplified by HoloLens 2 (Microsoft, Redmond, WA, USA), Magic Leap 2 (Magic Leap, Plantation, FL, USA) and the recently launched Apple Vision Pro (Apple, Cupertino, CA, USA) [25]. A recent report indicated that AR navigation with HMDs does not increase operation time and provides satisfactory results for surgeons in surgical planning, suggesting change is imminent in routine clinical practice [17]. However, HMD systems face unique challenges including user fatigue, dizziness during prolonged use, short battery life, and narrow field of view [25,39,77]. Additional concerns include privacy protection and device synchronization issues [6,25,40,56].

Fig. 2. Timeline of head-mounted display development and clinical applications as augmented reality navigation systems in neurosurgery. VP shunt, ventriculoperitoneal shunt.
Although HMDs represent a promising direction in AR navigation, researchers have also explored alternative approaches to address existing limitations. Another approach, initially attempted by Iseki et al. [38] in 1997, involves directly projecting reconstructed AR images onto the patient’s body. While a commercially available video projector can be used to implement this method, even without the need for a hologram, technical challenges remain. These include image distortion and alignment issues caused by projecting 2D images onto the curved human body, as well as potential interference between the projected image and existing surgical tools [11,47]. These limitations currently hinder immediate practical application.
Clinical evidence, though preliminary, has demonstrated several advantages of AR navigation in three areas. First, regarding surgical precision and safety, traditional methods require surgeons to mentally integrate multiple image sources, leading to increased cognitive load and potential spatial orientation errors. AR navigation addresses this through direct 3D visualization, reducing cognitive demand and allowing surgeons to maintain better focus on the surgical field [9]. Recent studies have demonstrated promising accuracy levels compared to conventional navigation systems. Dho et al. [23] reported an impressive error margin of less than 1 mm in their series, while Fick et al. [29] conducted a meta-analysis showing that AR navigation achieved a mean target registration error of 2.3 mm (95% confidence interval, 0.7–4.4 mm), comparable to conventional navigation systems. Additionally, this accuracy leads to reduced radiation exposure from intraoperative imaging, providing long-term safety benefits for both patients and surgical staff [31]. Second, conventional navigation systems face efficiency challenges due to frequent probe-based checking requirements that interrupt surgical flow. AR navigation, particularly with HMDs, provides hands-free operation that enables uninterrupted surgical procedures [39]. This solution has led to measurable improvements : Yoon et al. [76] reported a 15.05% reduction in screw placement time, while Colombo et al. [17] demonstrated an average four-minute decrease in surgical preparation time without compromising surgical efficiency. Third, there is the issue of cost-effectiveness and accessibility of surgical navigation technology. Traditional neuronavigation systems require a significant investment, ranging from $250,000 to $950,000, with estimated per-procedure costs of $14,480 [53]. In contrast, modern AR navigation utilizing HMDs offers a more economical alternative, with devices such as the HoloLens 2 and Apple Vision Pro costing approximately $3,500–4,000 [25]. The cost-effectiveness of AR navigation extends beyond its lower initial investment, as it facilitates minimally invasive procedures that lead to shorter hospital stays and faster patient recovery, potentially reducing overall healthcare costs [81].
Despite these advantages, various limitations still exist, though recent innovations are providing solutions. Software improvements include order-independent transparency, multi-layer alpha blending, and filtered alpha compositing, which help address depth perception and alignment issues [22]. Manual fine-tuning functions and cutting-edge technologies such as simultaneous localization and mapping and Light Detection and Ranging (LiDAR) sensing are enhancing real-time adjustment capabilities across all AR approaches [19,22,23]. For HMDs in particular, improvements are also emerging in device development, which now features swappable battery packs, improved weight distribution, enhanced eye-tracking, and higher refresh rates with more efficient graphics processing units, reducing user fatigue while maintaining extended operation times [25].
In the field of neurosurgery, the initial applications of AR navigation are anticipated to be in the following procedures : extraventricular drainage insertion, frameless stereotactic biopsy, and screw fixation in spine surgery [23,31,67,76,77]. However, before AR navigation systems can be seamlessly integrated into clinical practice, technical hurdles need to be overcome, such as automated registration, tracking and calibration, minimizing alignment errors, and optimizing time synchronization [22,38,46,80].
Despite these challenges, AR navigation systems not only offer economic and clinical advantages over existing neuro-navigation systems but also enable remote communication and intuitive sharing of surgical anatomical findings, facilitating virtual collaborative surgery. With advancing hardware development and increasingly convenient applications, AR navigation will be able to establish itself as an essential tool for neurosurgery in the near future.
CENTRALIZED INTRAOPERATIVE NEUROPHYSIOLOGICAL MONITORING (IONM) POWERED BY AI
IONM is employed in the vast majority of neurosurgical operating rooms, where technicians handle the equipment and doctors interpret the results, classically all within the same space. However, IONM faces practical challenges contingent upon the availability and proficiency of the personnel responsible for monitoring and interpretation. Efforts are underway to address these issues by developing an AI surveillance system utilizing big data and establishing a centralized system for remote interpretation.
In 2001, the Health Care Financing Administration of the United States approved real-time remote monitoring, and currently, over 80% of cases are monitored through telemedicine [8,14]. Despite initial technical and institutional hurdles, remote IONM has emerged as a routine practice in the United States, supporting over 200,000 high-risk surgical procedures annually [8]. Furthermore, thanks to recent remarkable developments in AI technology, attempts are being made to enable more efficient monitoring by using AI to detect abnormal events in real time during IONM. AI could be utilized in a stepwise manner in the neurosurgical field. Building upon the categorization proposed by Wilson et al. [73], we propose classifying these applications into three categories : machine learning (ML) based prediction models, intelligent decision support models, and postoperative outcome prediction models in the neurosurgical field (Table 1).

Table 1. Summary of applications of machine learning in the field of intraoperative neurophysiologic monitoring
ML-based prediction models have been employed to interpret complex IONM signals under diverse conditions such as anesthesia, patient-specific physiologic variables, and artifacts. Cui et al. [18] predicted a dynamic baseline for somatosensory evoked potentials (SSEP) by calibrating non-surgical factors using a least squares-support vector regression model and showed a mean squared error of 0.15. Wilson et al. [72] also predicted the dynamic baseline under the effect of the anesthetic agent sevoflurane, with a root mean square error of 2.17. For electromyography signals, ML was used to classify action potential signals in thyroid surgery, achieving 90% accuracy [79].
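As a conceptual illustration of this category, the sketch below fits a support vector regression of evoked potential amplitude on non-surgical covariates to form a dynamic baseline. The cited studies used least squares-support vector regression; scikit-learn's epsilon-SVR is used here only as a readily available stand-in, and the covariates, data, and numbers are entirely hypothetical.

```python
# Hedged sketch : dynamic SSEP baseline prediction from non-surgical covariates
# with support vector regression. Data and variable names are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(0.5, 2.0, n),     # anesthetic depth (e.g., MAC), hypothetical
    rng.uniform(60.0, 110.0, n),  # mean arterial pressure (mmHg)
    rng.uniform(35.0, 37.5, n),   # core temperature (degrees C)
])
# Toy relationship : amplitude falls with deeper anesthesia
amplitude = 3.0 - 0.8 * X[:, 0] + 0.01 * (X[:, 1] - 85.0) + rng.normal(0, 0.1, n)

baseline_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
baseline_model.fit(X[:200], amplitude[:200])
predicted_baseline = baseline_model.predict(X[200:])
print("MSE:", mean_squared_error(amplitude[200:], predicted_baseline))
```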
The next step in combining AI with IONM is the intelligent decision support model, which provides real-time, accurate, automated identification of neurological insults and alerts human experts using AI algorithms. Fan et al. [28] proposed an intelligent decision system to reduce false warnings and ultimately improve true spinal cord injury detection by using least squares and multi-support vector regression models to create a dynamic baseline, overcoming the limitations of nominal baselines in SSEP monitoring during spinal surgery. Qiao et al. [58] developed a deep learning model to monitor visual evoked potential (VEP) classification during surgical resection of sellar region tumors, achieving performance comparable to human intelligence. Jiang et al. [43] introduced a supervised ML system to help surgeons distinguish ventral from dorsal roots during selective dorsal rhizotomy. These systems help surgeons interrupt surgical procedures in a timely manner to prevent neurologic deterioration while reducing time lost to false-positive signs.
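A minimal sketch of how such a decision layer can sit on top of a dynamic baseline is shown below : a current trace is flagged when amplitude falls or latency lengthens beyond preset thresholds. The 50% amplitude and 10% latency criteria used here are commonly quoted generic IONM warning criteria, not the thresholds of any system cited above.

```python
# Hedged sketch : rule-based alert layered on a predicted dynamic baseline.
# Thresholds (50% amplitude drop, 10% latency increase) are generic examples.
def ionm_alert(amp, lat, baseline_amp, baseline_lat,
               amp_drop=0.5, lat_increase=0.1):
    """Return a list of human-readable warnings; an empty list means no alert."""
    warnings = []
    if amp < (1.0 - amp_drop) * baseline_amp:
        warnings.append(f"amplitude drop {100 * (1 - amp / baseline_amp):.0f}%")
    if lat > (1.0 + lat_increase) * baseline_lat:
        warnings.append(f"latency increase {100 * (lat / baseline_lat - 1):.0f}%")
    return warnings

# Example : baseline 2.0 uV / 20 ms, current trace 0.8 uV / 23 ms
print(ionm_alert(amp=0.8, lat=23.0, baseline_amp=2.0, baseline_lat=20.0))
# -> ['amplitude drop 60%', 'latency increase 15%']
```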
Postoperative outcome prediction models could help human experts link IONM changes to postoperative neurological changes. Jamaludin et al. [41] predicted postoperative neurologic outcomes in lumbar surgery with a sensitivity of 87.5% using k-nearest neighbors to interpret intraoperative motor evoked potential (MEP) signal changes. Pescador et al. [55] reported a model using Bayesian networks to predict postoperative outcomes in brain surgery by interpreting various signals such as MEP, SSEP, and VEP. For future development, merging AI and remote IONM with a cloud data-sharing system has been proposed but not yet realized. Equipping a remote IONM system with these kinds of AI algorithms will further maximize efficiency.
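The sketch below illustrates the outcome prediction category with a k-nearest neighbors classifier mapping intraoperative MEP changes to a binary postoperative deficit label, in the spirit of the cited study. The features, labels, and data are hypothetical; this is not the published model.

```python
# Hedged sketch : k-nearest neighbors linking intraoperative MEP changes to a
# hypothetical postoperative deficit label. Data are synthetic toy examples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)
n = 400
amp_change = rng.uniform(-1.0, 0.2, n)   # relative MEP amplitude change (-1.0 = loss)
lat_change = rng.uniform(-0.05, 0.3, n)  # relative latency change
X = np.column_stack([amp_change, lat_change])
y = ((amp_change < -0.7) | (lat_change > 0.2)).astype(int)  # toy deficit label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("sensitivity:", recall_score(y_te, clf.predict(X_te)))
```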
The AI-powered remote IONM system has the potential to revolutionize surgical procedures. This innovative approach overcomes spatial and temporal constraints, allowing neurosurgeons to perform operations with a reduced monitoring team while mitigating the risks associated with insufficiently trained personnel. Such a system could potentially enable more secure surgeries by facilitating reliable IONM even in hospitals struggling with infrastructure deficiencies or non-routine surgical situations.
REAL-TIME AND REPEATABLE INTRAOPERATIVE TISSUE DIAGNOSIS
Traditional frozen section examination remains the gold standard for intraoperative tissue diagnosis. However, it has several limitations, including a time-consuming procedure that typically requires approximately 20–30 minutes per biopsy even with the use of digital pathology systems [2,12]. Moreover, in most cases it is impractical to repeatedly request and confirm a diagnosis more than once or twice during surgery. The emerging in vivo real-time diagnosis systems, currently under active research and verification, are poised to revolutionize digital pathology, allowing continuous sharing and communication of surgical tissue images with pathologists without time or frequency constraints and enabling more precise surgery.
In the neurosurgical field, two major methods under development for in vivo real-time tissue diagnosis are Raman-based imaging and confocal laser microscopy [20,51]. The Raman effect, discovered in 1928, utilizes a small percentage (1 in 10 million) of inelastically scattered photons to detect molecular composition in a label-free, non-destructive, and noninvasive manner [59]. Spontaneous Raman spectroscopy (SR), a fundamental form of Raman spectroscopy, was initially used in a laboratory setting to distinguish between edematous and normal brain tissue in rats in 1990, and later to differentiate brain tumors from normal tissue in humans in 1993 [50,68]. Despite its advantages in quantifying tissue molecular information, Raman imaging is limited by a low Raman signal that suffers interference from autofluorescence, low spatial resolution due to long infrared wavelengths, and unintuitive datasets from SR [20,30]. Improved signal processing and data analysis techniques have brought SR to the neurosurgical field for qualifying biopsy specimens. Koljenović et al. [44] distinguished glioblastoma from necrotic tissue with an accuracy of 100%. However, SR still has limitations for in vivo application, such as a large probe and a low signal-to-noise ratio. Efforts to move toward clinical in vivo applications have led to the development of several different Raman spectroscopy techniques, including coherent anti-Stokes Raman scattering (CARS) microscopy and stimulated Raman scattering (SRS) [20,30,83]. CARS microscopy, which employs multiphoton spectroscopy techniques to generate strong non-linear anti-Stokes signals that are stronger than coherent Stokes Raman scattering and unaffected by autofluorescence, was introduced to the biomedical field by Zumbusch et al. [83] from the Xie group in 1999, offering high sensitivity, high spatial resolution, and 3D sectioning capability. Subsequently, it was rapidly applied to image various biological structures and living cells and tissues [16,57]. SRS microscopy, remarkably improved by Freudiger et al. [30], also from the Xie group, uses the stimulated emission phenomenon and has some advantages over CARS in that it provides a linear relationship between signal intensity and chemical concentration and is not affected by non-resonant background sources. Despite a small number of patients, Jermyn et al. [42] reported intraoperative use of Raman spectroscopy for detecting brain cancer in vivo with an accuracy of 92% and introduced advanced probes for intraoperative in vivo use. Additionally, to improve biopsy accuracy, Desroches et al. [21] reported the development and implementation of a biopsy needle combined with Raman spectroscopy. Toward future application, Raman spectroscopy has been merged with AI technology to shorten diagnosis time, leading to real-time in vivo pathology [34-36]. Recently, a multicenter clinical trial using an SRS system integrated with AI technology, called FastGlioma, showed rapid image acquisition in under 10 seconds and the ability to detect and quantify the degree of tumor infiltration with an average area under the receiver operating characteristic curve of 92.1% [34].
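To illustrate how spectral data of this kind are typically turned into a diagnostic readout and summarized with an area under the receiver operating characteristic curve, the sketch below classifies synthetic Raman-like spectra with principal component analysis and logistic regression. This is a generic, hypothetical pipeline for illustration only; it is not the FastGlioma system or any published model.

```python
# Hedged sketch : classifying synthetic Raman-like spectra (tumor vs. normal)
# and reporting AUROC. Spectra, peaks, and labels are entirely hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
wavenumbers = np.linspace(600, 1800, 300)            # cm^-1, hypothetical axis

def spectrum(lipid_weight):
    """Toy spectrum : lipid-like band near 1450 cm^-1, protein-like near 1660 cm^-1."""
    lipid = lipid_weight * np.exp(-((wavenumbers - 1450) ** 2) / 900.0)
    protein = (1.2 - lipid_weight) * np.exp(-((wavenumbers - 1660) ** 2) / 900.0)
    return lipid + protein + rng.normal(0.0, 0.02, wavenumbers.size)

X = np.array([spectrum(0.9) for _ in range(150)] +   # "normal" : lipid-rich
             [spectrum(0.4) for _ in range(150)])    # "tumor"  : lipid-poor
y = np.array([0] * 150 + [1] * 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```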
Confocal microscopy, introduced in the mid-20th century by Marvin Minsky, revolutionized imaging technology by utilizing a confocal aperture (pinhole) to create high spatial resolution and deep in-focus plane imaging [49]. In 1967, Egger and Petrăn [26] introduced a technique in the biological field for viewing unstained brain and ganglion cells. Despite these early trials, confocal microscopy had limited applications until the early 2000s due to the underdevelopment of scanning image digitization and laser sources [5]. The transition from laboratory to clinical settings was hindered by the challenge of miniaturization. After considerable effort, clinically applicable confocal laser endomicroscopy (CLE) was developed, maintaining the fundamental principle of the pinhole while employing a miniaturized probe. This is useful for intraoperative diagnosis, continuously providing real-time, high-resolution images without tissue preparation. Initially deployed in gastroenterology, urology, and gynecology, this technology showed promise in improving the accuracy of biopsy procedures [78]. In 2010, Sankar et al. [63] conducted the first in vivo study using CLE in the neurosurgical field to distinguish between tumor and nontumor tissue, including infiltrative tumor margins, in a glioblastoma mouse model. Subsequently, Sanai et al. [61] from the Spetzler group conducted the first human in vivo study in 2011 to assess the feasibility of intraoperative CLE usage for brain tumor resection. Human in vivo studies are summarized in Table 2.
In CLE models, the selection of appropriate fluorescent agents is crucial. While current laser-based confocal systems can detect autofluorescence using low-power lasers, they often suffer from relatively low resolution [78]. Therefore, efforts have been made to explore and develop suitable fluorescent agents such as fluorescein, 5-aminolevulinic acid (5-ALA), and indocyanine green (ICG), which are well known among neurosurgeons. Fluorescein sodium, approved by the Food and Drug Administration (FDA) for intravenous use, is the first and most popular agent employed in the neurosurgical field for CLE imaging [1,2,10,27,33,48,51,54,61,74,75,78]. It was employed by the aforementioned Sanai et al. [61] in a human study as a fluorescent agent for CLE, with a reported tumor detection accuracy of 92.9%. Another fluorescent agent is 5-ALA, which is administered orally in high-grade glioma surgery [35]. In the field of CLE imaging, Sanai et al. [62] employed 5-ALA to demonstrate the correspondence of tumor margins with standard histopathology in vivo in low-grade glioma. Although ICG, approved by the FDA for intravenous usage, is not a popular agent in the CLE system, Charalampaki et al. [15] reported results showing human cellular cytoarchitecture in vivo at 400- and 1000-fold magnification. Recently, a multicenter human ex vivo study utilizing ICG showed better negative predictive value, positive predictive value, specificity, and preparation time compared with frozen biopsy [12]. Supplementary Video 2 illustrates an example of in vivo intraoperative tissue image acquisition using CLE with ICG contrast (our experience).
Currently, three commercially available CLE systems exist : Optiscan/Pentax ISC-1000 (a joint venture between Pentax, Tokyo, Japan and Optiscan Pty Ltd., Melbourne, Australia), Cellvizio (Mauna Kea Technologies, Paris, France), and CONVIVO (Carl Zeiss Meditec AG, Oberkochen, Germany) [15,51,62,74,78]. Another new CLE system, cCeLL (VPIX medical, Daejeon, Korea), is under development [12]. However, it appears somewhat premature to transition from conventional pathology based on Hematoxylin & Eosin staining to image-based pathology derived from new imaging technology like in vivo microscopy. Additionally, it is important to note that various studies have highlighted the operator-dependent nature of the examination method and the associated learning curve, both in conducting the examination and in interpreting the results [1,33,51]. In a recent trial in 2023, Abramov et al. [2] reported the FDA clearance of a clinical-grade in vivo confocal system with a cloud data sharing platform, paving the way for future advancements in telepathology. Attempts to integrate deep learning with confocal imaging to achieve real-time diagnosis are under active development [82], as sketched below.
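As a rough illustration of what such deep learning integration involves, the PyTorch sketch below defines a small convolutional classifier for confocal endomicroscopy image patches (tumor vs. non-tumor). The architecture, patch size, and labels are hypothetical and are not taken from any cited study or commercial device.

```python
# Hedged sketch : a small CNN for classifying CLE image patches. The network,
# input size, and labels are hypothetical examples, not a published model.
import torch
import torch.nn as nn

class CLEPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)         # logits : non-tumor vs. tumor

    def forward(self, x):                    # x : (batch, 1, H, W) grayscale patches
        return self.head(self.features(x).flatten(1))

model = CLEPatchClassifier()
dummy_batch = torch.randn(4, 1, 128, 128)    # four hypothetical 128x128 patches
print(model(dummy_batch).shape)              # torch.Size([4, 2])
```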
If these endeavors are successful and in vivo microscopy becomes a standard practice during surgery, it is anticipated that the intraoperative tissue diagnosis system will serve as a revolutionary catalyst. Although these systems come with an initial cost (CLE devices cost approximately $200,000, with per-case operational costs of $275), long-term cost-effectiveness could be achieved through several factors : reduced pathology processing costs, shortened operating times, and improved patient outcomes through enhanced tumor resection rates with minimal tissue damage [12,15,66,74]. The integration of advanced technologies such as AI-assisted Raman spectroscopy and CLE systems with improved fluorescent agents has already demonstrated remarkable potential [36,82]. This could instigate fundamental changes in the approach to neurosurgery. The realization of collaborative surgery between neurosurgeons and neuropathologists is achievable through the virtual communication layer of image transfer. The successful implementation of these systems, combined with continuing technological advances, holds promise for establishing a new standard in precise and efficient neurosurgical procedures.
ETHICAL IMPLICATIONS IN MEDICAL METAVERSE
The implementation of the medical metaverse in real-world healthcare settings raises several critical ethical considerations. First, data privacy, security, and confidentiality emerge as paramount concerns. The multi-layered communications between medical personnel involve highly sensitive information, including not only conventional personal data such as standard demographics and privacy-sensitive information but also complex neurophysiological data that were not created or shared before the emergence of these environments. Transferring data across multiple layers introduces risks of errors that could have catastrophic consequences for patients. While emerging technologies like blockchain offer potential solutions for secure data transfer and verification, implementation challenges remain [4,45]. The informed consent process becomes particularly challenging, as patients must understand and agree to various levels of data sharing across multiple virtual layers and participants [52].
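To make the blockchain idea mentioned above more tangible, the sketch below builds a minimal tamper-evident hash chain over hypothetical transfer records between virtual layers. It is a conceptual illustration only; a real deployment would additionally require distributed consensus, key management, and regulatory compliance, none of which are shown here.

```python
# Hedged sketch : a tamper-evident hash chain for logging data transfers.
# Event names and recipients are hypothetical; this is not a production design.
import hashlib, json, time

def add_block(chain, payload):
    """Append a record whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any altered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("timestamp", "payload", "prev_hash")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_block(chain, {"event": "IONM_stream_shared", "recipient": "remote_reader_01"})
add_block(chain, {"event": "CLE_image_transferred", "recipient": "neuropathology"})
print(verify(chain))   # True; modifying any stored record makes this False
```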
Second, the integration of AI-powered technologies introduces additional ethical complexities. While AI-powered decision support models show promise for IONM and neuropathological diagnosis, they raise significant questions about responsibility and accountability. The “black box” nature of many AI models makes it difficult to trace the decision-making process when errors occur [4]. This challenge may be amplified in the medical metaverse, where multiple specialists interact across different digital layers. Another issue arising from AI concerns bias and representation. As highlighted by Wang et al. [71], AI systems primarily trained on data from specific demographic groups, such as white males, can produce biased or less accurate results for others, particularly women and people of color. These biases could lead to misdiagnoses or unequal treatment within the medical metaverse.
Finally, equitable accessibility remains a significant concern. The sophisticated technology required for the medical metaverse involves substantial costs, potentially limiting access to specialized hospitals and creating disparities in healthcare delivery [3]. This economic barrier could exacerbate existing healthcare inequalities and limit the technology’s broader benefits.
CONCLUSION
Technological advancements capable of actualizing the concept of a medical metaverse could profoundly impact the future of medicine. Neurosurgical operating rooms will evolve into true medical metaverse environments, where various surgical participants communicate in overlapping virtual layers throughout the entire process of neurosurgery.
Notes
Conflicts of interest
Chul-Kee Park has been an editorial board member of JKNS since November 2020. He was not involved in the review process of this review article. No potential conflict of interest relevant to this article was reported.
Informed consent
This type of study does not require informed consent.
Author contributions
Conceptualization : CKP; Data curation : SMN, YHB, YSD; Formal analysis : SMN, YHB, YSD; Funding acquisition : CKP; Methodology : SMN, YHB, YSD; Project administration : CKP; Visualization : SMN; Writing - original draft : SMN; Writing - review & editing : SMN, YHB, YSD, CKP
Data sharing
None
Preprint
None
Acknowledgements
The authors wish to thank So Young Yim for support with the illustration included in this article. We also acknowledge MEDICALIP Co. Ltd. and VPIX Medical Co. Ltd. for allowing us to use the video clips used in this study.
This research was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (grant number : RS-2022-00197971); ‘Supporting Project to evaluation Domestic Medical Devices in Hospitals’ funded by the Korea government (the Ministry of Health & Welfare, Korea Health Industry Development Institute), supported in part by grants from the National Cancer Center, Korea (NCC-2411840-1).
We thank So Young Lim for her original artwork used in this manuscript.
Supplementary materials
The online-only data supplement is available with this article at https://doi.org/10.3340/jkns.2024.0160.