Use cases under this modality take place, for example, in hybrid operating rooms. Hybrid operating rooms (OR) are generally equipped with advanced imaging systems such as fixed C-arms (X-ray generator and intensifiers), CT (Computed Tomography) scanners and MRI (Magnetic Resonance Imaging) scanners. The underlying idea is that advanced imaging enables minimally invasive surgery, which is less traumatic for the patient as it minimizes incisions and allows surgical procedures to be performed through one or several small cuts. This is useful, for example, in cardiovascular surgery, or in neurosurgery to place deep brain stimulation electrodes.
Due to its many benefits for patients, image-guided surgery is now mainstream in many specialties, from cardiology to gastroenterology and ophthalmology. This is the underlying force behind a very dynamic market predicted to reach $4,163 million by 2025, with a sustained growth of 11.2% from 2018 to 2025.
However, as of now, the lack of real interfaces between technologies and devices inside operating rooms is putting progress at risk. Devices and software must be able to work together to create a truly digitally integrated system in the operating room. Multiple vendors propose proprietary integrated-OR solutions, but these are often limited to their particular standpoint, depending on the category of equipment they usually provide: OR tables and lighting, anaesthesia and monitoring equipment, endoscopes or microscopes, medical imaging (X-ray, ultrasound), video monitors and streaming. No category dominates the others with the capacity to impose a particular solution that could be adopted by all. This roadblock to full digitalization is addressed by standards such as DICOM Supplement 202 (Real-Time Video, RTV), which leverages the SMPTE ST 2110 family of standards to enable the deployment of equipment in a distributed way. The intention is to connect various video or multi-frame sources to various destinations through a standard IP switch, instead of using a proprietary video switch, as shown in the figure below.
Carriage of audio-visual signals in their digital form has historically been achieved using coaxial cables that interconnect equipment through Serial Digital Interface (SDI) ports. SDI provides a reliable transport method for carrying a multiplex of video, audio and metadata with strict timing relationships. However, as new image formats such as Ultra High Definition (UHD) are introduced, the corresponding SDI bit-rates increase well beyond 10 Gb/s, and the cost of the equipment needed at different points in a video system to embed, de-embed, process, condition and distribute the SDI signals becomes a major concern. The emergence of professional video-over-IP solutions, enabling high quality and very low latency, now calls for a re-engineering of ORs. This is usually a long and costly process, but it can be accelerated by the adoption of wireless communications, whose flexibility also reduces installation costs.
Witnessing the increasing interest of health industry actors in wireless technologies, market analyses project that the global wireless health market will grow from $39 billion in 2015 to $110 billion by 2020. More specifically, the increasing prevalence of wireless technology in hospitals has led to the vision of the connected hospital: a fully integrated hospital where caregivers use wireless medical equipment to provide the best quality of care to patients and automatically feed Electronic Health Record (EHR) systems. As a natural evolution, wireless technologies that can cope with hospitals' difficult RF environment and provide the needed security guarantees are expected to be a promising opportunity, enabling surgeons to benefit from advanced imaging/control systems directly in operating rooms while keeping the flexibility of wireless connectivity. In practice, one can also expect the following benefits from going wireless in the OR:
Equipment sharing between operating rooms in the same hospital, which makes procedure planning easier and allows hospitals to deploy an efficient resource optimization strategy,
On-demand addition of complementary imaging equipment in case of an incident during a surgical procedure, which ultimately leads to better patient care,
Suppression of the many cables connecting a multitude of medical devices, which constitute as many obstacles; this makes the job of the surgical team easier and reduces the infection risk.
Moreover, the hybrid-OR trend makes operating rooms increasingly congested and complex, with a multitude (up to 100) of medical devices and monitors from different vendors. In addition to surgical tables, surgical lighting and room lighting positioned throughout the OR, multiple surgical displays, communication system monitors, camera systems, image-capturing devices and medical printers are all quickly becoming part of a modern OR. Installing a hybrid OR therefore represents a significant cost, coming not only from the advanced imaging systems themselves, but also from the complex cabling infrastructure and the multiple translation systems needed to make all those proprietary devices communicate with each other. Enabling wireless connectivity in the OR simplifies the underlying infrastructure, helps streamline the whole setup and reduces the associated installation costs.
Note that the clock accuracy requirements defined here apply to all use cases defined in this modality unless specifically stated otherwise.
Since medical images are processed in real time by applications that deliver results and information intended to ease or even guide the surgical gesture, tight latency constraints apply here and often mandate that those applications be hosted by hospital IT facilities at a short network distance from the operating room.
In the case of a medical procedure that also involves human beings, the round-trip delay constraint is generally calculated using the following formulae:
Round trip delay = Imaging System Latency + Human Reaction Time
Imaging System Latency = Image generation + end-to-end latency + Application Processing + Image Display
This principle is depicted on the figure below:
T1 = Time for image generation,
T2 = T4 = Time Delay through 5G Network, defined as the end-to-end latency
T3 = Application processing time,
T5 = Time for image display,
And Imaging System Latency = T1 + T2 + T3 + T4 + T5
The Imaging System Latency impairs the achievable precision at a given gesture speed. It is defined based on the observation that surgeons often feel comfortable with a latency that gives 0.5 cm precision at 30 cm/s hand speed (a better precision implying slower hand movements). This translates into an Imaging System Latency, from image generation to display on a monitor, of around 16 ms for procedures on a static organ where the only moving object is the surgeon's hand. As one can see, this figure is not derived through a purely rational process but instead depends on the surgeon's perception of whether the equipment introduces delays he or she can cope with. If the organ or body part targeted by an operation is not static (for instance a beating heart), then the Imaging System Latency shall be reduced further to achieve robust enough gesture precision.
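The 16 ms figure can be checked directly from the precision and hand-speed targets: it is the time during which the hand travels the tolerated error distance. A minimal sketch (the function name is illustrative):

```python
# Imaging System Latency derived from gesture-precision targets: the latency
# window during which the hand travels the tolerated error distance.
def imaging_system_latency_ms(precision_cm: float, hand_speed_cm_s: float) -> float:
    """Time (ms) for the hand to travel precision_cm at hand_speed_cm_s."""
    return precision_cm / hand_speed_cm_s * 1000.0

# 0.5 cm precision at 30 cm/s hand speed -> ~16.7 ms,
# rounded to the ~16 ms figure used in this clause.
latency = imaging_system_latency_ms(0.5, 30.0)
```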
The Imaging System Latency needs to be broken down further in order to derive the sub-contributions of the equipment on the data path:
The latency introduced by image generation and display generally comes from synchronisation issues, that is to say, the availability of data relative to the next clock edge. As a first approach, one can consider that this latency is of the order of the time interval between two successive images and is equally distributed between generation and display. At 120 fps, the latency contribution of generation plus display would be about 8 ms.
As a first approximation, since applications may involve quite heavy processing, especially when Augmented Reality is involved, it seems a safe bet to set the end-to-end latency much lower than the application latency, and a 25%/75% distribution is considered. Under the same assumption as before (120 fps), this leaves a budget of 2 ms for the transport of packets through the 5G system and 6 ms for application processing.
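The budget split above can be sketched as follows (the function name and the 25%/75% split are the assumptions stated in this clause; the figures are rounded to 8/2/6 ms in the text):

```python
# Split of the Imaging System Latency budget: generation + display together
# take roughly one frame interval, and the remainder is divided 25 % for
# 5G transport and 75 % for application processing (assumed split).
def latency_budget_ms(total_ms: float, fps: float, transport_share: float = 0.25):
    frame_interval = 1000.0 / fps        # one image period, ~8.3 ms at 120 fps
    gen_plus_display = frame_interval    # approximation used in the text
    remainder = total_ms - gen_plus_display
    return {
        "generation+display": gen_plus_display,
        "5G transport": remainder * transport_share,
        "application": remainder * (1.0 - transport_share),
    }

budget = latency_budget_ms(total_ms=16.0, fps=120.0)
# -> roughly 8.3 ms generation+display, ~1.9 ms transport, ~5.8 ms application
```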
The rationale described above will be used in the use cases defined as part of this modality.
Finally, human beings' median reaction time to visual events is around 200 ms and adds to the imaging system latency estimated above. The round-trip delay may therefore be rather high, but this is compensated by surgeons slowing down their movements as necessary.
Teleoperation Systems
The whole teleoperated system, including the human operator and the environment, constitutes a closed-loop system whose performance is a matter of transparency and stability. Transparency relates to the 'degree of invisibility' of the robotic system: if it is perfectly transparent, the operator senses as if he or she were directly operating on the patient. In the context of tele-surgery, high transparency leads to marginally stable systems and high stability leads to poor transparency, so the performance of the system is a compromise between stability and transparency and is thus limited by stability. Several master-slave control schemes have been developed to deal with these challenges in a teleoperation system, as explained hereafter:
Position-Position Control: This is the simplest scheme; the only information exchanged between the control console and the robot is the position of the surgeon's hands and of the instruments, and forces are estimated based on position errors.
Force-Position Control: This scheme is more intuitive, as the real forces resulting from the contact between the instruments and the environment are measured by force sensors and sent back to the control console after filtering.
4-Channel Control: This scheme uses both forces and positions on both the surgeon and robot sides, which improves stability and performance, but at the price of added complexity and cost. This scheme is assumed in this document.
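As an illustration of the simplest scheme above, a Position-Position controller reflects a force to the operator that is estimated from the position error alone. The sketch below is purely illustrative; the stiffness gain and values are assumptions, not taken from any specific surgical robot:

```python
# Position-Position control sketch: each side exchanges positions only, and the
# force fed back to the operator is estimated from the tracking error through a
# virtual stiffness gain (illustrative value, not from any real system).
K_STIFFNESS = 50.0  # N/m, assumed virtual stiffness

def estimated_feedback_force(master_pos_m: float, slave_pos_m: float) -> float:
    """Force (N) reflected to the console, proportional to the position error."""
    return K_STIFFNESS * (master_pos_m - slave_pos_m)

# If the instrument lags the hand by 2 mm (e.g. blocked by tissue contact),
# the operator feels a resistive force proportional to that error: ~0.1 N here.
force = estimated_feedback_force(0.102, 0.100)
```

This also illustrates the scheme's limitation noted above: the "force" is only inferred from position errors, so contact forces are felt indirectly, which is what the Force-Position and 4-Channel schemes improve upon.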
A typical robotic system setup is depicted in the figure below:
In the direction from the console to the robot:
T1 = Time for commands generation,
T2 = End-to-end latency from the console to the medical application located at network edge,
T3 = Application processing time. In this case, a 3D pre-operative model of the patient's body may be at work, preventing instruments from entering certain critical pre-defined zones.
T4 = End-to-end latency from the medical application located at the network edge to the robot,
T5 = Time to render control commands into real instruments movements,
In the direction from the robot to the console:
T6 = Time for instrument control feedback (effort, velocity, position) and/or image generation,
T7 = End-to-end latency from the robot to the medical application located at network edge,
T8 = Application processing time. It may correspond to image processing delays or to haptic feedback generation based on instrument location, velocity and effort measurements issued by the surgical instruments and on the 3D pre-operative patient body model.
T9 = End-to-end latency from the medical application located at the network edge to the console,
T10 = Time to render haptic and visual feedback through the surgeon console.
The overall teleoperation system latency is therefore defined as T1 + T2 + T3 + T4 + T5 + T6 + T7 + T8 + T9 + T10.
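The overall loop latency defined above is simply the sum of the ten contributions. The sketch below uses illustrative placeholder values (in ms) for T1 through T10, not measurements from any real system:

```python
# Overall teleoperation system latency: the sum of contributions T1..T10 along
# the console->robot and robot->console paths. All values are illustrative
# placeholders, chosen only to show the accounting.
stage_latencies_ms = {
    "T1 command generation": 3.0,
    "T2 console->edge transport": 2.0,
    "T3 edge application processing": 6.0,
    "T4 edge->robot transport": 2.0,
    "T5 instrument actuation": 7.0,
    "T6 feedback/image generation": 3.0,
    "T7 robot->edge transport": 2.0,
    "T8 edge application processing": 6.0,
    "T9 edge->console transport": 2.0,
    "T10 haptic/visual rendering": 7.0,
}
total_ms = sum(stage_latencies_ms.values())  # 40.0 ms with these placeholders
```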
Studies conducted on state-of-the-art robotic surgery systems allow the following findings to be derived:
The maximum tolerable teleoperation system latency, up to which surgeons can still improve their performance by repeating the same simple task over and over, has been found to be around 300 ms. However, the effective latency is distinctly noticeable during operative procedures and can only be compensated by slowing movements down and by move-pause-move-pause operations.
Longer latencies extend the operating time, especially for complex surgical procedures such as laparoscopic kidney transplantation, which is, technically speaking, deemed a very demanding operation.
Depending on the skills of individual surgeons, on the complexity of the tele-operated procedure, on the importance of completing the surgery in a limited time, and on whether a short or no learning curve is mandated (to make the technology accessible to less experienced surgeons), much more stringent requirements on the teleoperation system latency may be appropriate.
Breaking down the different delays across all the sub-systems constituting the robotic system is a very complex issue and depends heavily on the technologies implemented in those sub-systems. However, progress in actuators and sensors seems to point to (T1 + T5) = (T6 + T10) being below 10 ms, and the same 25%/75% rule as in the previous clause can be applied for the breakdown of the remaining time budget between transport time and application processing time.
In this document, latencies are evaluated according to the accepted error in the perception of the surgical instruments' position introduced at a given hand speed. Considering that robotic systems can scale the surgeon's hand speed down by a 3:1 ratio, an overall outer-control-loop teleoperation system latency of 50 ms can be derived using the principles and error targets explained above. This leaves roughly a 2 ms end-to-end latency constraint on each of the four radio links involved in the connectivity of the robotic sub-systems.
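The per-link constraint follows from the budget arithmetic in this clause: 50 ms total, up to 10 ms each for (T1 + T5) and (T6 + T10), the remainder split 25%/75% between transport and application processing, and the transport share spread over the four radio links (T2, T4, T7, T9). A sketch:

```python
# Teleoperation latency budget, following the breakdown in this clause.
TOTAL_MS = 50.0                     # overall outer-control-loop target
SENSOR_ACTUATOR_MS = 2 * 10.0       # (T1+T5) + (T6+T10), upper bound
remaining = TOTAL_MS - SENSOR_ACTUATOR_MS   # 30 ms left
transport_total = remaining * 0.25          # shared by the four radio links
application_total = remaining * 0.75        # T3 + T8
per_link_ms = transport_total / 4.0         # 1.875 ms -> the ~2 ms figure above
```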
Also, note that surgeons may be able to adapt to the overall teleoperation system latency through training under a constant delay. However, it is challenging to conduct telesurgery with variable latency.