
Introducing a brain-computer interface to facilitate intraoperative medical imaging control – a feasibility study



Abstract

Safe and accurate execution of surgeries to date mainly relies on preoperative plans generated from preoperative imaging. Frequent intraoperative interaction with such patient images is needed during the intervention, which is currently a cumbersome process given that these images are generally displayed on peripheral two-dimensional (2D) monitors and controlled through interface devices located outside the sterile field. This study proposes a new medical image control concept based on a Brain-Computer Interface (BCI) that allows for hands-free and direct image manipulation without relying on gesture recognition methods or voice commands.


A software environment was designed for displaying three-dimensional (3D) patient images on external monitors, with the functionality of hands-free image manipulation based on the user's brain signals detected by the BCI device (i.e., visually evoked signals). In a user study, ten orthopedic surgeons completed a series of standardized image manipulation tasks, navigating to and locating predefined 3D points in a Computed Tomography (CT) image using the developed interface. Accuracy was assessed as the mean error between the predefined locations (ground truth) and the locations navigated to by the surgeons. All surgeons rated the performance and the potential intraoperative usability in a standardized survey using a five-point Likert scale (1 = strongly disagree to 5 = strongly agree).


When using the developed interface, the mean image control error was 15.51 mm (SD: 9.57). User acceptance was rated with a Likert score of 4.07 (SD: 0.96), while the overall impression of the interface was rated as 3.77 (SD: 1.02). We observed a significant correlation between the users' overall impression and the calibration score they achieved.


The developed BCI, which allowed for purely brain-guided medical image control, yielded promising results and showed potential for future intraoperative applications. The major limitation to overcome is the interaction delay.



Background

Surgical planning, navigation and execution are heavily dependent on medical imaging modalities, including radiography, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) [1]. Such images are generally stored in the Picture Archiving and Communication System (PACS) and presented to the operating team via 2D monitors. The modern Operating Room (OR) represents a challenging environment for interaction with imaging modalities (Fig. 1), and as stated in [2, 3], inadequate data presentation can be considered a major workflow bottleneck inside the OR. This can be attributed to multiple factors, such as the missing spatial context when viewing medical images on 2D monitors [4] and the use of non-sterile input devices for image control, such as keyboards, mice and touch screens. These conventional input devices can even be a reservoir for pathogens [5]. Scrubbed surgeons are generally not able to touch such input devices and are often forced to ask another member of the operating team to act as a proxy and interact with the medical images [6]. This often results in delay and frustration, as precise 3D manipulation of medical images while relying solely on verbal commands is a cumbersome undertaking [7].

Fig. 1

A typical operating room at the Balgrist University Hospital. Peripheral monitors can be seen outside the operating area

Given the ever-increasing presence of advanced medical technologies inside the operating room, there is currently a high demand for intuitive and touchless human-computer interfaces that allow for seamless interaction with such devices while maintaining the integrity of the sterile field [8].

Related Work

To overcome the abovementioned limitations, different touchless interaction methods have been proposed that allow for direct image manipulation inside an operating room using gesture or speech recognition technologies. One of the earliest examples of vision-based gesture recognition was presented in [9], where the authors developed a non-contact mouse for intraoperative use by detecting the surgeon's gestures with a stereo camera setup. This was followed by several other publications that utilized image-based gesture recognition for medical image manipulation [10]. More recently, conceptually similar approaches have been introduced that provide the possibility of remote, touchless interaction with medical imagery based on gesture recognition using depth (i.e., RGB-D) sensors (e.g., [11,12,13,14,15,16]). Performing such gestures requires movement of one or both hands, rendering these technologies of limited use for interventions where both of the surgeon's hands are occupied. Furthermore, such methods generally rely on outside-in tracking of the surgeon's gestures by placing a sensor next to each imaging modality of interest. This can result in an even more cluttered operating theater, as multiple sensors are needed to interact with each imaging modality. Additionally, the required user gestures can be perceived as non-intuitive, given that the user (i.e., the surgeon) must learn them for a smooth interaction experience with the technology [17]. While still relying on the user's physical gestures, the authors in [18] proposed an inside-out gesture recognition method for medical image manipulation based on a wearable RGB-D sensor. This alleviated the need for multiple gesture recognition sensors by relying on a single head-mounted depth sensor. However, the system required user-specific and display-specific calibration steps and the placement of external recognizable patterns (i.e., QR codes) on each display monitor.
Further studies introduced a commercially available hand-tracking product (Leap Motion; San Francisco, United States) [19, 20] for touchless medical image control. As an overarching limitation of the abovementioned methods, gesture recognition based on external sensors can suffer from line-of-sight issues, especially in the crowded environment of the OR.

A parallel line of technology was developed in [21], where inertial measurement sensors were used to identify the user's gestures. Although these techniques have a smaller physical footprint in the OR and do not suffer from line-of-sight issues, they generally require a training phase based on pre-acquired data and suffer from the same limitations in gesture intuitiveness.

Speech recognition methods for medical image manipulation were presented in [13]; however, there are substantial concerns about the efficiency of such algorithms in the noisy environment of an operating theater. In fact, noise pollution inside the OR has been reported to exceed the safe noise thresholds defined by the World Health Organization (WHO) [22]. Additionally, in our own experience, relying on voice commands in the operating room can be challenging even when input microphones are not covered.


The Brain-Computer Interface (BCI) has been an active field of research in the past decades, with the promise of providing non-muscular means of communication between users and machines [23]. Recent advances in signal processing and artificial intelligence have resulted in the adoption of BCI systems in a variety of applications [24,25,26,27,28]. As a particular use case of BCI in healthcare, researchers have extensively investigated the feasibility of BCI interfaces for rehabilitation medicine [29, 30]. However, the form-factor of the developed hardware and the specialized design of the associated interfaces have made it difficult to translate such technologies to intraoperative applications. With the recent introduction of consumer-grade BCI devices, we believe that the emerging BCI sensor technology is a suitable choice for the specific use case of intraoperative medical image manipulation, given that it does not rely on recognizing the surgeon's demands through external means of communication (e.g., hand movements, voice commands or foot pedals), but rather detects the surgeon's intent directly by measuring their brain activity. Using the direct communication channel provided by BCI technology, the abovementioned shortcomings of the state-of-the-art techniques for touchless image manipulation can be addressed. In this study, we present what we believe to be the first adoption of human-brain interface technology for intraoperative medical image manipulation. We developed a software environment that provides touchless and hands-free medical image control through real-time communication with a consumer-grade BCI device worn by the surgeon. The usability of our technique was assessed systematically by orthopedic surgeons at our institution, and metrics such as response time, usability, comfort, and accuracy were evaluated.


Methods

Choice of Sensor

Visually Evoked Potentials (VEPs) are brain activity modulations that take place in the visual cortex after exposure to a visual stimulus [31], and they can be robustly detected [32]. Building on this technology, a consumer-grade product was recently released that is capable of monitoring brain activity using a small form-factor wearable sensor (NextMind; Paris, France, [33]). This device utilizes small dry electrodes in contact with the user's skull to monitor electrical activity in the visual cortex based on the Steady-State Visually Evoked Potential (SSVEP) concept. The sensor is non-invasive and lightweight, making it comfortable to wear under a surgical cap (Fig. 2). To use this device as a computer interface, special buttons with unique flickering visual patterns have to be implemented in the user interface; these send corresponding software signals once the user wearing the sensor looks at them with an appropriate level of attention. The device is shipped with a Software Development Kit (SDK) that exposes these signals for the development of custom applications. Given that this device meets the clinical and application-specific requirements of our target application to a great extent, we used its hardware and SDK platform to develop the medical image manipulation application.
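The NextMind decoding pipeline itself is proprietary, but the underlying SSVEP frequency-tagging idea can be illustrated with a short sketch: each on-screen button flickers at a distinct frequency, and the button the user attends to is the one whose frequency dominates the power spectrum of the occipital EEG. The code below (Python, synthetic data; all names and numbers are illustrative, not from the actual device) demonstrates this principle:

```python
import numpy as np

def classify_ssvep(eeg, fs, flicker_freqs):
    """Return the index of the flicker frequency that dominates the
    EEG power spectrum (the basic SSVEP frequency-tagging idea)."""
    windowed = eeg * np.hanning(eeg.size)          # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    powers = []
    for f in flicker_freqs:
        # sum power in a narrow band around each candidate flicker frequency
        band = (freqs >= f - 0.25) & (freqs <= f + 0.25)
        powers.append(spectrum[band].sum())
    return int(np.argmax(powers))

# Synthetic occipital trace: a 12 Hz response buried in noise
fs = 250                                # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1.0 / fs)           # 4 s of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(eeg, fs, [8.0, 10.0, 12.0]))  # → 2 (the 12 Hz "button")
```

Real SSVEP decoders typically use more robust methods (e.g., canonical correlation analysis over multiple electrodes), but the band-power comparison above captures why distinct flicker patterns map cleanly to distinct interface buttons.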

Fig. 2

a) The BCI sensor worn in a surgical setting by a surgeon. b) The position of the sensor on the surgeon's skull. Note that the sensor is worn over the surgical cap for visualization purposes; for the intended use case, the sensor must be worn under the cap

Software Application Design

An application for intraoperative control of radiological images was designed using the abovementioned BCI platform, incorporating the functionality of state-of-the-art PACS viewers. Surgeons are familiar with common PACS viewers, which visualize medical images in separate windows along the different slice directions (coronal, sagittal and axial) and allow scrolling through the slices with standard computer interfaces (e.g., mouse and keyboard). We therefore considered these viewers as the baseline for the design of our software and developed a BCI-controllable medical image viewer that allows touchless and gesture-free image manipulation while offering similar functionality to a PACS viewer. After an interview with a lead spine surgeon, we established the basic outline of the software application and the corresponding interface to best suit the communicated clinical needs. The software was developed by the authors, independently of the manufacturer of the BCI hardware. Details regarding the programming environment and the utilized libraries are included in the Programming Environment section. The software was later used in the conducted user study (User Study section). The primary specifications of the application were defined through direct consultation with our clinical collaborators and were iteratively refined based on their feedback during several demo sessions. In each demo session, our clinical collaborators used the developed interface to navigate within 3D medical images and to land on their desired anatomical landmarks. Each session concluded with a one-on-one interview, during which the collaborating clinicians identified essential modifications for use of the application under surgical conditions.
Although the interface can be used with any 2D or 3D imaging modality, the following specifications are explained for the use case of displaying patient CT scans during spine surgery.

Similar to standard PACS viewers, a global view (i.e., main menu; Fig. 3-a) contained three windows displaying slices along each anatomical direction (coronal, sagittal and axial) as well as a 3D display showing the volume of the CT scan. The interface spanned two monitors (primary and secondary), and by looking at the corresponding buttons in the global view, the user could switch the primary view to the desired axis. Once the primary display was set up, the secondary display showed the two other slice directions (e.g., primary: axial; secondary: coronal and sagittal). Within a given slice view, the user could navigate to the immediately adjacent slices by looking at the "single arrow" buttons. To indicate the most recent viewing position in the CT scan, crosshairs showed the slice position of one slice view in the two others. Furthermore, each slice view and its corresponding crosshair were assigned a unique color, shown as a colored halo around the buttons of that slice view (axial: yellow, coronal: orange, sagittal: blue). Inside the axial slice view, the user could navigate to dominant anatomical landmarks predefined in the patient CT scan. This was designed to facilitate navigation to the most important anatomical areas (as defined by our consulting surgeons). For the presented use case of spine surgery, these landmarks were defined as the intervertebral disc spaces in the lumbar region (Fig. 3-b). Hover buttons were introduced on the coronal and sagittal slice views that initiated a "free-move" along a given direction, which the user could stop by looking at the "stop hover" button (Fig. 3-c,d). Successive activations of the hover button increased the "free-move" speed.
Based on feedback from our clinical collaborators, the controls in the axial slice view needed to be more involved; we therefore exchanged the hover functionality in this view for navigation along the cranial-caudal axis by a fixed number of slices (Fig. 3-b).
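The hover "free-move" behavior described above can be sketched as a small state machine, assuming a per-tick update loop (the class and field names below are ours for illustration, not from the actual Unity implementation):

```python
from dataclasses import dataclass

@dataclass
class HoverScroll:
    """Minimal sketch of the 'free-move' control: each activation of the
    hover button raises the scroll speed; 'stop hover' freezes the slice."""
    slice_index: int = 0
    speed: int = 0          # slices advanced per update tick

    def activate_hover(self):
        self.speed += 1     # successive activations accelerate the free-move

    def stop_hover(self):
        self.speed = 0      # looking at the "stop hover" button

    def tick(self, num_slices):
        # advance, clamping to the valid slice range
        self.slice_index = max(0, min(num_slices - 1,
                                      self.slice_index + self.speed))
        return self.slice_index

view = HoverScroll()
view.activate_hover()          # speed 1
for _ in range(3):
    view.tick(num_slices=200)  # slices 1, 2, 3
view.activate_hover()          # speed 2
view.tick(num_slices=200)      # slice 5
print(view.slice_index)        # → 5
```

The accumulating `speed` field mirrors the design choice that repeated gaze activations speed up traversal of long slice stacks, while a single "stop hover" gaze halts it immediately.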

Fig. 3

Image control interface. a) The global view displaying the current position of the axial, coronal and sagittal views. b) The axial view, including the predefined landmark controls as well as the next-10-slices control. c) The sagittal view, including the single-slice scroll and hover scroll functionality. d) The coronal view, including the single-slice scroll and hover scroll functionality

User Study

The primary objective of this prospective user study was to evaluate the feasibility of the developed image control interface in a simulated surgery setting. For this, we recruited ten orthopedic resident surgeons at Balgrist University Hospital from August 2021 to December 2021: eight male and two female surgeons with a mean age of 33 years (range 28 to 36), who had completed a mean of 51 months (range 23 to 80) of their orthopedic residency program. Each participant completed a series of image control tasks using the developed interface, and the software application recorded each individual's performance.

As per the manufacturer's recommendation, each participant underwent an initial calibration of the BCI device. This one-time calibration was performed for each participant at the beginning of their session, and a calibration score (range 1: poor to 5: excellent) was calculated for each participant. Each participant was given three attempts to reach a minimum calibration score of 3 and two additional attempts to reach a minimum score of 2. After the calibration phase, the participants were allowed to familiarize themselves with the hardware and software interface and practice image control for 10 min.
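The two-stage calibration acceptance policy can be written out as a small helper (a hypothetical function for clarity; in the study this policy was applied manually by the experimenters):

```python
def calibration_accepted(scores):
    """Sketch of the study's calibration policy: up to three attempts to
    reach a score of at least 3, then up to two more attempts (five total)
    to reach a score of at least 2. `scores` lists attempts in order."""
    for attempt, score in enumerate(scores, start=1):
        if attempt <= 3 and score >= 3:
            return True           # passed within the first three attempts
        if 3 < attempt <= 5 and score >= 2:
            return True           # passed with the relaxed threshold
        if attempt >= 5:
            break                 # no attempts remain
    return False

print(calibration_accepted([2, 4]))           # → True  (score 4 on attempt 2)
print(calibration_accepted([1, 1, 1, 2]))     # → True  (score 2 on attempt 4)
print(calibration_accepted([1, 1, 1, 1, 1]))  # → False (all attempts exhausted)
```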

As their primary task, the participants were asked to navigate to a predefined anatomical location in the patient CT scan while following a specific predefined trajectory to the best of their ability. This trajectory consisted of individual segments and had two levels of difficulty. During the first task, the trajectory was constrained to segments that were strictly orthogonal to one of the slice directions (i.e., requiring the participant to move only in the axial, coronal or sagittal direction; Fig. 4-a). This constraint was lifted for the second and more difficult task, where the trajectories were designed such that a combination of movements along the three primary axes was needed to follow each segment (Fig. 4-b).

Fig. 4

a) An example of a simple trajectory (task 1); b) an example of a difficult trajectory (task 2)

Before starting the tasks, the participants were presented with a 3D representation of the desired trajectory overlaid on the patient's 3D model and displayed on a touchscreen tablet (Fig. 5). For better comprehension, the participants could inspect the desired trajectory on the tablet from different view angles by rotating the scene, panning, and zooming in and out. After this inspection phase, the participants were blinded to the desired trajectory and asked to follow it using only the developed interface.

Fig. 5

Trajectory visualization on a touchscreen tablet

While each trajectory segment was being followed, the participant's trajectory and timing were recorded in the background for retrospective processing of their performance. Once satisfied with the end-point of a trajectory segment, the participant could choose to move on to the next segment. In the post-processing phase, for each trajectory segment, we calculated the Euclidean distance between the participant's confirmed end-point and the corresponding predefined end-point (i.e., the image control error).
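Using the voxel spacing of the study's CT dataset (0.742 mm in-plane, 1.5 mm slice thickness; see the Programming Environment section), the per-segment image control error can be sketched as follows. The end-point coordinates in the example are hypothetical:

```python
import numpy as np

# Voxel spacing of the CT volume used in the study, in mm along (i, j, k);
# the axis ordering here is an assumption for illustration.
SPACING = np.array([0.742, 0.742, 1.5])

def image_control_error(confirmed_ijk, target_ijk):
    """Euclidean distance in mm between the participant's confirmed
    end-point and the predefined end-point, both given as voxel indices."""
    delta_mm = (np.asarray(confirmed_ijk, dtype=float)
                - np.asarray(target_ijk, dtype=float)) * SPACING
    return float(np.linalg.norm(delta_mm))

# Hypothetical example: off by 10 in-plane voxels and 4 slices
print(round(image_control_error((110, 250, 34), (100, 250, 30)), 2))  # → 9.54
```

Scaling by the voxel spacing before taking the norm matters because the slice thickness (1.5 mm) is roughly twice the in-plane resolution; computing distances directly in voxel units would understate cranial-caudal errors.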

After completion of the image control tasks, we asked the participants to evaluate the performance and the feasibility of the developed interface with respect to the following criteria: acceptance, input/output, user study, software application and overall personal impression (Additional file 1: User study questionnaire). This questionnaire consisted of 19 questions, and the responses were recorded on a 5-point Likert scale. Responses were scored as strongly agree (5), agree (4), neutral (3), disagree (2) and strongly disagree (1).

Data related to descriptive statistics are presented as mean, Standard Deviation (SD) and range. To determine whether there was a significant difference in the participants' image control errors between the two tasks, we first checked the normality of the data with the Kolmogorov–Smirnov and Shapiro–Wilk tests and then ran a Wilcoxon signed-rank test (significance was set at p < 0.05). Correlations between the rating scores and the calibration score were analyzed using Spearman's rank correlation (rs) (significance was set at p < 0.05).
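The statistical pipeline above can be sketched in a few lines with SciPy. The data below are synthetic stand-ins (the real per-participant errors and ratings are in Tables 1 and 2); the Spearman example uses made-up calibration scores and ratings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant mean errors (mm) for the two tasks
task1 = rng.gamma(shape=3.0, scale=5.6, size=10)
task2 = rng.gamma(shape=2.2, scale=6.1, size=10)

# Normality check (here via Shapiro-Wilk) motivates the non-parametric test
print(stats.shapiro(task1).pvalue)

# Paired, non-parametric comparison of errors between the two tasks
print(stats.wilcoxon(task1, task2).pvalue)

# Spearman's rank correlation between calibration score and a rating
calib = np.array([3, 4, 3, 5, 4, 3, 4, 3, 5, 3])
rating = np.array([3.5, 4.2, 3.1, 4.8, 4.0, 3.3, 4.1, 3.2, 4.9, 3.4])
rho, p = stats.spearmanr(calib, rating)
print(round(rho, 2), p < 0.05)  # → 0.92 True
```

Spearman's test is the appropriate choice here because the calibration score is ordinal and tied ranks are common with only ten participants; `scipy.stats.spearmanr` handles ties via average ranks.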

Programming Environment

The software was developed using the Unity engine editor (2019.4.20f1). The NextMind SDK was used, which provided a high-level Application Programming Interface (API) for creating the input and callback events. Data post-processing was done using Python 3.8, and the Matplotlib library was used for data visualization. Statistical analyses were performed using Microsoft Excel 2019. The image control tasks were performed on an anonymized, publicly available human CT scan (the sample dataset available in the 3D Slicer software) with an in-plane resolution of 0.742 mm × 0.742 mm and a slice thickness of 1.5 mm.


Results

The mean calibration score of the BCI device was 3.7 (SD: 0.8, range 3.0–5.0). In Table 1, the individual participants' calibration scores and image control errors (mm) are reported for both image control tasks. On average, the navigation error for task one (the easier task) across all participants was 16.9 mm (SD: 9.7), while the error for task two (the harder task) was 13.4 mm (SD: 9.0).

Table 1 Quantitative results of the participants' performance in the image control tasks

By comparing all the end-point errors for task one (10 participants × 5 end-points) and task two (10 participants × 4 end-points) with a Wilcoxon signed-rank test (the data were determined not to follow a normal distribution by both the Kolmogorov–Smirnov and the Shapiro–Wilk tests), we concluded that there was no statistically significant difference in the participants' performance between the two tasks (p = 0.07). The errors for navigating to specific end-points and the corresponding times that the participants took to reach each end-point are illustrated in Fig. 6.

Fig. 6

Image control error for navigating to each end-point (8 end points × 10 participants) versus the time required for completion of image control

In the qualitative evaluation, the average acceptability of the device was rated as 4.07 (SD: 0.96), the input and output as 3.65 (SD: 1.26), the user study as 4.47 (SD: 0.67), the application as 4.13 (SD: 1.22) and the overall impression of the interface as 3.77 (SD: 1.02). Details of the qualitative assessments are reported in Table 2 and Fig. 7.

Table 2 Participants' rating of the brain-computer interface
Fig. 7

Participants' rating of the developed image control interface. Medians are displayed as red vertical lines. The individual box plots depict the IQR, and the whiskers show the minimum and maximum

Through Spearman's rank correlation analysis, we identified statistically significant correlations between the participants' calibration score and their acceptance rating (rs = 0.87; p = 0.01) and overall impression rating (rs = 0.66; p = 0.04), while we found no statistically significant correlation between the participants' calibration score and their input/output rating (rs = 0.61; p = 0.06), user study rating (rs = 0.02; p = 0.96) or application rating (rs = 0.59; p = 0.07).


Discussion

In this publication, we demonstrated the usability of a commercially available BCI device for medical image control in intraoperative applications. For this, we designed a software interface that allows hands-free interaction with 3D medical imagery by following user commands sensed by a head-mounted BCI device, without relying on specific physical gestures. We simulated an intraoperative image control task and environment and performed a user study to evaluate the feasibility of this interface in a simulated surgical setting. This early feasibility study showed that the existing limitations of currently available image control interfaces (e.g., sterility issues, line-of-sight problems and poor intuitiveness) can be mitigated through the use of the proposed interface.

As seen in Table 1, the participants could conduct the assigned image control tasks with a relatively low error rate. On average, across both tasks, the participants achieved an image control error of 15.5 mm (SD: 9.6). It should be noted that this error does not stem solely from the developed interface but may also include, as a confounding factor, the discrepancy between the participants' memorized trajectory and their executed trajectory. Comparing this accuracy to the prior art is difficult: to our knowledge, most existing research on advanced image manipulation interfaces reports only qualitative metrics or time of task completion (e.g., [9,10,11]) and lacks quantitative analyses of spatial image control accuracy. However, compared to the study that implemented the closest counterpart to our 3D spatial accuracy metric [16], and despite substantial differences in the implementation of the metric and tasks, our image control accuracy was better than the 3D target accuracy reported there (on average, 32.0 mm to 90.3 mm), even though our interface did not rely on any gesture recognition algorithms. We observed that a participant's error was generally reduced if they spent more time navigating to a specific target (Fig. 6). Furthermore, we did not observe a statistically significant difference between the two levels of task difficulty, which potentially demonstrates the insensitivity of the interface to image control complexity (although a larger sample size is needed to draw conclusions in this regard).

Based on the conducted user study, we showed that the participants perceived the developed interface as feasible in criteria such as acceptance and interface application design, with respective average Likert scores of 4.1 and 4.1. We observed significant correlations between the participants' calibration scores and their acceptance and overall impression ratings, which suggests that the user experience can be improved if adequate user-specific device calibration is accomplished.

While the user study provided valuable insight into the usability of the developed interface, our study cohort was rather small; further investigation with a larger cohort is therefore planned for future follow-up studies. Furthermore, the image control tasks were performed in a simulated surgical setting that did not include some of the challenges that may arise inside a typical operating room (e.g., changing lighting conditions, visual and auditory disruptions, etc.). To this end, our goal is to test the interface in real operating rooms in the future.

For seamless integration of this interface into a surgical setting, several enhancements and modifications are required. The participants strongly agreed on the slow response time of the device (average Likert score of 2.6), which can be noted as the most substantial limitation of the BCI device. Furthermore, although the utilized BCI sensor has a small form-factor, some of the participants expressed that they could envision ergonomic issues if the device were worn by the surgeon during long operations. Given that the utilized BCI is one of the earliest commercially available prototypes on the market, we hope that future generations of the product will have a quicker response time and a smaller, lighter form-factor, allowing for a more seamless image control experience.


Conclusions

We believe that the intraoperative application of BCI for image manipulation is a viable option, given that it can streamline the surgeons' commands when interacting with image display units. The developed interface can potentially reduce surgical time by providing the surgeons with a direct communication channel to medical images. Similar BCI-based concepts can be investigated for other intraoperative tasks and more complex user interfaces, such as the physical control of surgical robots.

Availability of Data and Material

The code for the developed software application can be downloaded from the link below. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.





Abbreviations

2D: Two-dimensional
3D: Three-dimensional
BCI: Brain-Computer Interface
CT: Computed Tomography
SD: Standard Deviation
MRI: Magnetic Resonance Imaging
PACS: Picture Archiving and Communication System
OR: Operating Room
WHO: World Health Organization
VEP: Visually Evoked Potential
SSVEP: Steady-State Visually Evoked Potentials
SDK: Software Development Kit
API: Application Programming Interface


References

  1. Korb W, Bohn S, Burgert O, Dietz A, Jacobs S, Falk V, et al. Surgical PACS for the Digital Operating Room. Systems Engineering and Specification of User Requirements. Stud Health Technol Inform. 2006;119:267–72.


  2. Lemke HU, Berliner L. PACS for surgery and interventional radiology: Features of a Therapy Imaging and Model Management System (TIMMS). Eur J Radiol. 2011;78(2):239–42.


  3. Cleary K, Kinsella A, Mun SK. OR 2020 Workshop Report: Operating Room of the Future. Int Congr Ser. 2005;1281:832–8.


  4. Watts I, Boulanger P, Kawchuk G. ProjectDR: augmented reality system for displaying medical images directly onto a patient. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17). New York: Association for Computing Machinery; 2017. Article 70, 1–2.

  5. Hartmann B, Benson M, Junger A, Quinzio L, Röhrig R, Fengler B, et al. Computer Keyboard and Mouse as a Reservoir of Pathogens in an Intensive Care Unit. J Clin Monit Comput. 2003;18(1):7–12.


  6. Johnson R, O’Hara K, Sellen A, Cousins C, Criminisi A. Exploring the potential for touchless interaction in image-guided interventional radiology. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). New York: Association for Computing Machinery; 2011. p. 3323–32.

  7. O’Hara K, Gonzalez G, Sellen A, Penney G, Varnavas A, Mentis H, et al. Touchless Interaction in Surgery. Commun ACM. 2014;57(1):70–7.


  8. Wachs JP, Kölsch M, Stern H, Edan Y. Vision-Based Hand-Gesture Applications. Commun ACM. 2011;54(2):60–71.


  9. Grätzel C, Fong T, Grange S, Baur C. A Non-Contact Mouse for Surgeon-Computer Interaction. Technol Health Care Off J Eur Soc Eng Med. 2004;12(3):245–57.


  10. Wachs JP, Stern HI, Edan Y, Gillam M, Handler J, Feied C, et al. A Gesture-based Tool for Sterile Browsing of Radiology Images. J Am Med Inform Assoc JAMIA. 2008;15(3):321–3.


  11. Lopes DS, Parreira PD De F, Paulo SF, Nunes V, Rego PA, Neves MC, et al. On the Utility of 3D Hand Cursors to Explore Medical Volume Datasets with a Touchless Interface. J Biomed Inform. 2017;72:140–9.


  12. Jacob MG, Wachs JP. Context-Based Hand Gesture Recognition for the Operating Room. Pattern Recogn Lett. 2014;36:196–203.


  13. Ebert LC, Hatch G, Ampanozi G, Thali MJ, Ross S. You Can’t Touch This: Touch-free Navigation Through Radiological Images. Surg Innov. 2012;19(3):301–7.


  14. Strickland M, Tremaine J, Brigley G, Law C. Using a Depth-Sensing Infrared Camera System to Access and Manipulate Medical Imaging from Within the Sterile Operating Field. Can J Surg J Can Chir. 2013;56(3):E1–6.


  15. Tan JH, Chao C, Zawaideh M, Roberts AC, Kinney TB. Informatics in Radiology: developing a touchless user interface for intraoperative image control during interventional radiology procedures. Radiographics. 2013 Mar-Apr;33(2):E61–70.

  16. Paulo SF, Relvas F, Nicolau H, Rekik Y, Machado V, Botelho J, et al. Touchless Interaction with Medical Images Based on 3D Hand Cursors Supported by Single-Foot Input: A Case Study in Dentistry. J Biomed Inform. 2019;100:103316.


  17. Norman DA. Natural User Interfaces are Not Natural. Interactions. 2010;17(3):6–10.


  18. Ma M, Fallavollita P, Habert S, Weidert S, Navab N. Device- and System-Independent Personal Touchless User Interface for Operating Rooms. Int J Comput Assist Radiol Surg. 2016;11(6):853–61.

  19. Saalfeld P, Kasper D, Preim B, Hansen C. Touchless Measurement of Medical Image Data for Interventional Support. 2017-Tagungsband; 2017.


  20. Rosa GM, Elizondo ML. Use of a Gesture User Interface as a Touchless Image Navigation System in Dental Surgery: Case Series Report. Imaging Sci Dent. 2014;44(2):155–60.


  21. Schwarz LA, Bigdelou A, Navab N. Learning Gestures for Customizable Human-Computer Interaction in the Operating Room. In: Fichtinger G, Martel A, Peters T, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2011. Berlin, Heidelberg: Springer Berlin Heidelberg; 2011. p. 129–36. (Lecture Notes in Computer Science; vol. 6891).


  22. Giv MD, Sani KG, Alizadeh M, Valinejadi A, Majdabadi HA. Evaluation of noise pollution level in the operating rooms of hospitals: A study in Iran. Interv Med Appl Sci. 2017;9(2):61–66.

  23. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain–Computer Interfaces for Communication and Control. Clin Neurophysiol. 2002;113(6):767–91.


  24. Aznan NKN, Bonner S, Connolly JD, Moubayed NA, Breckon TP. On the Classification of SSVEP-Based Dry-EEG Signals via Convolutional Neural Networks. In: 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC); 2018. p. 3726–31.

  25. Autthasan P, Du X, Arnin J, Lamyai S, Perera M, Itthipuripat S, et al. A Single-Channel Consumer-Grade EEG Device for Brain-Computer Interface: Enhancing Detection of SSVEP and Its Amplitude Modulation. IEEE Sensors J. 2020;20(6):3366–78.


  26. Xing X, Wang Y, Pei W, Guo X, Liu Z, Wang F, et al. A High-Speed SSVEP-Based BCI Using Dry EEG Electrodes. Sci Rep. 2018;8(1):14708.


  27. Rashid M, Sulaiman N, PP Abdul Majeed A, Musa RM, Ab. Nasir AF, Bari BS. Current Status, Challenges, and Possible Solutions of EEG-Based Brain-Computer Interface: A Comprehensive Review. Front Neurorobot. 2020;14:25.

  28. Nicolas-Alonso LF, Gomez-Gil J. Brain Computer Interfaces, a Review. Sensors. 2012;12(2):1211–79.


  29. Bockbrader MA, Francisco G, Lee R, Olson J, Solinsky R, Boninger ML. Brain Computer Interfaces in Rehabilitation Medicine. PM&R. 2018;10(9S2):S233–43.

  30. Sebastián-Romagosa M, Cho W, Ortner R, Murovec N, Von Oertzen T, Kamada K, et al. Brain Computer Interface Treatment for Motor Rehabilitation of Upper Extremity of Stroke Patients-A Feasibility Study. Front Neurosci. 2020;14:591435.

  31. Galloway NR. Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine. Br J Ophthalmol. 1990;74(4):255.


  32. Wang Y, Wang R, Gao X, Hong B, Gao S. A Practical VEP-Based Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng. 2006;14(2):234–9.


  33. Kouider S, Zerafa R, Steinmetz N, Barascud N. Brain-Computer Interface. WO2021140247A1. 2021.



Acknowledgements

This study was performed in collaboration with the Residency Program of Balgrist University Hospital, Forchstrasse 340, 8008 Zurich, Switzerland. Collaboration group: Nicola Cavalcanti, Oliver Wetzel, Sylvano Mania, Frederic Cornaz, Farah Selman, Method Kabelitz, Christoph Zindel, Sabrina Weber, Samuel Haupt.


Funding

This project is part of SURGENT under the umbrella of University Medicine Zurich/Hochschulmedizin Zürich. The SURGENT program provided partial salary support for the first author. The funding bodies were not involved in the design of the study, data collection, analysis, interpretation of the results, or the writing of the manuscript.

Author information

Authors and Affiliations




Contributions

Conceptualization: HE, PF, MF; methodology: HE, PT; software: PT, HE; user study: HE, PT, SH, DS; resources: MF, HE, PF, DS, SH; data curation: HE, PT, SH; writing – original draft preparation: HE, PT, SH; writing – review and editing: HE, PF, PT, SH; supervision: PF, HE, MF; project administration: HE, SH; funding acquisition: PF, MF. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Hooman Esfandiari.

Ethics declarations

Ethical Approval and Consent to Participate

According to our national institutional guidelines, ethical approval from the respective ethics committee (Cantonal Ethics Committee of Zurich) was not required for this study, as it included medical professionals who consented to participate; these experiments did not fall under the umbrella of the Human Research Act (HRA). According to the same guidelines, oral consent was obtained from the participants of the user study. The human CT scan used in this study was acquired from an anonymized public dataset available as a sample dataset in the 3D Slicer software.

Consent for Publication

Not applicable.

Competing Interests

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit the Creative Commons website. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Esfandiari, H., Troxler, P., Hodel, S. et al. Introducing a brain-computer interface to facilitate intraoperative medical imaging control – a feasibility study. BMC Musculoskelet Disord 23, 701 (2022).



Keywords

  • Brain computer interface
  • Medical image
  • Surgery
  • Image control
  • Display
  • Image manipulation