
Open Access 2024 | Original Paper | Book Chapter

Decentralized Pub/Sub Architecture for Real-Time Remote Patient Monitoring: A Feasibility Study

Authors: Kazi Nymul Haque, Johirul Islam, Ijaz Ahmad, Erkki Harjula

Published in: Digital Health and Wireless Solutions

Publisher: Springer Nature Switzerland


Abstract

The confluence of the Internet of Things (IoT) with the healthcare sector, called the Internet of Medical Things (IoMT), has ushered in a transformative approach to real-time patient monitoring. Traditional methods, which typically involve the direct transmission of medical sensor data to the cloud, falter under the constraints of medical IoT devices. In response, Multi-access Edge Computing (MEC), as defined by the European Telecommunications Standards Institute (ETSI), brings forth an innovative solution by relocating computing resources closer to the origin of data. However, MEC alone does not fully address the exigencies of constrained medical IoT devices in the realm of real-time monitoring. Our architecture advances the computing continuum by seamlessly integrating local edge computing for direct data capture, MEC for nuanced data processing, and cloud computing for the comprehensive synthesis and presentation of data. This synergy is further enhanced by the introduction of a robust message queue mechanism, assuring data resilience and uninterrupted data streaming during network disruptions. With a steadfast commitment to security, our system employs stringent measures to ensure the integrity and confidentiality of sensitive patient data during transmission. This architecture represents a significant leap in healthcare technology, emphasizing the criticality of patient safety, data security, and meticulous data management. The implications of this study are profound, indicating a trajectory for future exploration into the integration of sophisticated data types and AI-driven models to further refine patient monitoring and healthcare outcomes.

1 Introduction

The burgeoning integration of the Internet of Things (IoT) and the Internet of Medical Things (IoMT) with edge and cloud computing technologies is catalyzing a paradigm shift in healthcare delivery. This synergy is pivotal in fostering real-time, efficient healthcare services, transcending the conventional limitations posed by the traditional IoT-cloud models. These models often struggle with challenges such as latency and bandwidth constraints, which are exacerbated by the dependency on distant cloud servers for data processing and analysis. Such constraints are particularly pronounced in healthcare applications, where real-time data processing and analysis are crucial for patient care [1].
Edge Computing (EC) emerges as a cornerstone in this architecture, mitigating latency by facilitating data processing closer to the data source. This proximity not only accelerates data processing but also bolsters data security-a critical consideration in healthcare applications. Furthermore, messaging queues play an indispensable role in orchestrating high-throughput data synchronization from a plethora of medical devices, ensuring seamless and reliable communication within the healthcare IoT ecosystem. This feature is instrumental in maintaining uninterrupted patient care, even amidst network disruptions [2].
Our proposed three-tier architecture, depicted in Fig. 1, interfaces directly with medical sensors at the local edge, employs Multi-access Edge Computing (MEC) for comprehensive data processing, and utilizes cloud services to furnish healthcare professionals with holistic insights. This architecture, characterized by its modularity and scalability, leverages the prowess of microservices and containerization technologies such as Docker and Kubernetes, offering a superior alternative to traditional systems [3].
The advent of Artificial Intelligence (AI) and Machine Learning (ML) technologies heralds a new era in healthcare IoT, equipping it with sophisticated diagnostic and predictive analytics tools. These advancements facilitate improved patient outcomes and streamline healthcare operations by enabling informed, data-driven decision-making. However, the deployment of traditional AI/ML models at the edge is often hampered by computational complexities. This limitation underscores the significance of tiny machine learning models, which are tailor-made for on-device data processing, especially in critical care scenarios, ensuring the promptness of data analysis [4].
Security remains a cornerstone of our system, which adopts robust encryption and secure communication protocols to safeguard patient data, thereby ensuring compliance with healthcare regulations [7]. Our previous endeavors in IoMT have significantly contributed to enhancing user experience and technological innovation, spanning sensor integration [10], data analytics [6], and the development of energy-efficient IoT devices [5]. These innovations have laid a solid foundation for telemedicine and remote patient monitoring, offering actionable insights derived from healthcare data through exploratory data analysis and process mining techniques. These insights unveil hidden patterns and causal relationships within healthcare processes [9], marking a significant leap from conventional systems. Our architecture not only exemplifies robustness, scalability, and efficiency in real-time patient monitoring but also opens avenues for integrating advanced data types and AI-driven analytics to revolutionize patient care, paving the way for more personalized and proactive healthcare approaches.

2 Real-Time Patient Monitoring Concept

2.1 Use Case Scenario

Our architecture demonstrates a novel approach to real-time patient monitoring within a three-tier architecture, comprising the Local Edge, Multi-access Edge, and Cloud tiers. In this article, we exemplify this through a healthcare monitoring system centered on ECG data analysis. The system employs a wearable sensor, attached to a patient, for collecting vital ECG data. The primary goal is to showcase a model that is not only capable of ECG classification but also adaptable to various healthcare tasks using any compact machine learning model (Fig. 2).

2.2 Service and Swarm Architecture

Our architecture is meticulously designed across three distinct tiers depicted in Fig. 3, each contributing significantly to the system’s overall functionality. The Docker Swarm architecture of this system is defined within a specialized YAML file, which serves as the operational framework for deploying and orchestrating the containers across the Local Edge, MEC, and Cloud tiers. The YAML file is configured with precise instructions on where each container should be allocated, ensuring an organized and efficient deployment [11].
Local Edge (Raspberry Pi 4): At the Local Edge tier, which utilizes a Raspberry Pi device, the setup begins with the Acquisition Service that gathers critical ECG data from an IoMT sensor. Next, the Publishing Service sends this health data along the pipeline, lining it up for further processing. The Message Handling Service, crucial at this stage, manages this flow of information, keeping it organized and ready for the subsequent stages. The Local Edge tier is configured via a YAML file to deploy Docker containers for these essential services. The Data Acquisition Service operates within the ADB container, capturing critical ECG data. The Data Publishing Service, running in the Publisher Container, manages the dissemination of the collected data. Lastly, the Messaging Queue Service within the RabbitMQ container orchestrates the queuing and secure transmission of messages.
Multi-access Edge (Virtual Machine): Moving to the middle tier, housed on a Virtual Machine, we have the Retrieval Service that picks up the health data for in-depth processing. A Standardization Service then prepares the data in a consistent format, which is crucial for accurate analysis. The Heartbeat Analysis Service, equipped with advanced algorithms, interprets the ECG data to classify and understand the received heart patterns. Following this, the Reporting Service disseminates these findings, paving the way for their final review and visualization. In the MEC tier, the YAML file specifies the containers for advanced data handling. The Data Retrieval Service within the Consumer Container retrieves the queued data. The Data Standardization Service in the Normalizer Container ensures data uniformity, while the ECG Analysis Service within the ML Model Container performs the computational analysis. The Data Publishing Service in another Publisher Container then publishes the analyzed data for further processing.
Cloud (Virtual Machine): Another Virtual Machine operates at the Cloud tier. Here, the Aggregation Service compiles all the ECG data and analytical results, creating a comprehensive dataset. The Visualization Service [9] then presents this information in an easy-to-understand format, which is essential for healthcare providers to monitor the patient's well-being effectively. The Cloud tier is arranged in the YAML file to facilitate the final stages of data processing and visualization. The Data Aggregation Service, placed in the Consumer Container, compiles data from multiple sources. The Data Visualization Service, situated in the Grafana Container, converts this data into visual formats for effective monitoring. Figures 3 and 4 respectively show the service and swarm architecture of our 3-tier model.
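The tier placement described above can be expressed in a Docker Swarm stack file with node placement constraints. The fragment below is an illustrative sketch only: the service names, image tags, and node labels are hypothetical, and the paper's actual YAML file is not reproduced here.

```yaml
version: "3.8"
services:
  rabbitmq:                        # Messaging Queue Service (Local Edge)
    image: rabbitmq:3-management
    deploy:
      placement:
        constraints: [node.labels.tier == local-edge]
  publisher:                       # Data Publishing Service (Local Edge)
    image: publisher-image:latest  # hypothetical image tag
    deploy:
      placement:
        constraints: [node.labels.tier == local-edge]
  normalizer:                      # Data Standardization Service (MEC)
    image: normalizer-image:latest
    deploy:
      placement:
        constraints: [node.labels.tier == mec]
  grafana:                         # Data Visualization Service (Cloud)
    image: grafana/grafana:latest
    deploy:
      placement:
        constraints: [node.labels.tier == cloud]
```

With labels assigned via `docker node update --label-add tier=local-edge <node>`, the Swarm scheduler pins each container to its intended tier, which is what yields the organized deployment the YAML file is responsible for.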

2.3 Message Queue Mechanism

In our system, the RabbitMQ container plays a crucial role in managing data during network disruptions [8] by temporarily queuing ECG data. This setup prevents data loss if the connection between the Local Edge and MEC tier is interrupted. Upon network restoration, RabbitMQ resumes normal operations, swiftly transmitting the stored data to the MEC for continued patient monitoring. This effective queueing and resumption process ensures complete data delivery and provides a robust buffer against network issues, significantly enhancing system reliability and efficiency.
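The store-and-forward behavior described above can be illustrated with a minimal, broker-free sketch: messages accumulate in order while the link is down and are flushed on reconnection. This Python stand-in only mimics the semantics that the RabbitMQ container provides; it is not the actual broker configuration.

```python
from collections import deque

class DisruptionTolerantPublisher:
    """Queues messages while the link is down and flushes them in order on
    reconnection -- a stand-in for RabbitMQ's store-and-forward behavior."""

    def __init__(self, send):
        self.send = send          # callable delivering one message to the MEC tier
        self.connected = True
        self.buffer = deque()     # messages held during a network disruption

    def publish(self, message):
        if self.connected:
            self.send(message)
        else:
            self.buffer.append(message)   # no data loss while disconnected

    def on_disconnect(self):
        self.connected = False

    def on_reconnect(self):
        self.connected = True
        while self.buffer:                # drain in FIFO order
            self.send(self.buffer.popleft())
```

Exactly as in the evaluation scenario, everything published during the outage reaches the consumer in its original order once the connection is re-established.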

2.4 RabbitMQ Security

The need for security in data transmission is paramount, especially when dealing with sensitive healthcare data. To this end, the architecture employs Transport Layer Security (TLS) based certificates to establish a secure communication channel [12]. The RabbitMQ container at the Local Edge is fortified with a TLS server certificate. This setup ensures that all data leaving the edge device is encrypted, ensuring the confidentiality, node authentication, and integrity of the ECG data [13]. Both the MEC and Cloud layers are configured with TLS client certificates. These certificates validate the authenticity of the RabbitMQ server and establish a secure connection, ensuring that data received and transmitted by these tiers is protected from unauthorized access and tampering. Figure 5 shows the data packet streaming with AMQP protocol between RabbitMQ server and client with an encryption through TLS certificate.
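On the client side (the MEC and Cloud consumers), a TLS context of the kind described above can be built with Python's standard ssl module. The sketch below is illustrative: the certificate paths are placeholders, and the broker's actual TLS configuration may differ.

```python
import ssl

def make_client_tls_context(ca_cert=None, client_cert=None, client_key=None):
    # PROTOCOL_TLS_CLIENT enables hostname checking and server certificate
    # verification (CERT_REQUIRED) by default, authenticating the RabbitMQ node.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2        # refuse legacy TLS versions
    if ca_cert:
        ctx.load_verify_locations(ca_cert)              # CA that signed the broker cert
    if client_cert and client_key:
        ctx.load_cert_chain(client_cert, client_key)    # client cert for mutual TLS
    return ctx

# The resulting context would be handed to the AMQP client library, e.g. via
# pika.SSLOptions(make_client_tls_context("ca.pem", "client.pem", "client.key")),
# so that AMQP traffic (amqps, port 5671 by default) is encrypted in transit.
```

This mirrors the paper's setup: the server certificate at the Local Edge proves the broker's identity, while the client certificates at the MEC and Cloud tiers complete mutual authentication.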

3 Proof-of-Concept: Patient ECG Monitoring

Continuous electrocardiogram (ECG) monitoring is imperative for the early detection and monitoring of cardiac conditions. It enables healthcare professionals to track heart rhythm in real-time, facilitating prompt intervention in acute cardiac events and providing valuable data for long-term cardiovascular health management.

3.1 Learning Model Deployment for ECG

The Local Machine Transformer Learning model demonstrates a comprehensive process for classifying ECG data [14]. Initially, ECG data is loaded and preprocessed, including data augmentation to enhance dataset robustness. The data is then batched, facilitating efficient training of the Transformer architecture, adept at handling sequential data. Each training epoch involves a forward pass over the dataset, generating predictions and calculating loss to inform the backpropagation process, thereby optimizing the model’s predictive accuracy. Concurrently, model validation tests its performance on unseen data, ensuring its applicability in real-world scenarios (Fig. 6).
Upon meeting training and validation criteria, the model’s learned parameters are recorded in a model_state_dict.pth file, marking the training phase’s completion. This structured approach not only converts raw medical data into actionable insights but also underscores the potential of such models in precise and timely cardiovascular diagnoses. This model’s training was conducted utilizing ECG data sourced from Kaggle. While the accuracy achieved as depicted in Fig. 7 did not reach perfection, primarily due to the limited volume of data, the results are indicative of the model’s potential. With a more substantial and varied dataset, it is anticipated that the model’s accuracy would improve significantly. This prospect underscores the feasibility of integrating such a model into our three-tier architecture, where it could be fine-tuned and then deployed to broadcast its insights to the necessary channels within the system.
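The batching step described above can be sketched in framework-agnostic terms: the continuous ECG stream is cut into fixed-length heartbeat windows, which are then grouped into batches before being fed to the Transformer. The window length of 187 samples matches the common Kaggle ECG heartbeat format, but both it and the batch size here are illustrative rather than the experiment's actual values.

```python
def make_batches(signal, window=187, batch_size=32):
    """Slice a 1-D ECG sample stream into fixed-length windows, then group
    the windows into batches for model training. Trailing samples that do
    not fill a whole window are dropped."""
    windows = [signal[i:i + window]
               for i in range(0, len(signal) - window + 1, window)]
    return [windows[i:i + batch_size]
            for i in range(0, len(windows), batch_size)]
```

Each batch would then be converted to tensors for the forward pass, loss computation, and backpropagation steps described above.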

3.2 Technical Specifications

Hardware Specifications: Our proof-of-concept leverages a three-tiered Docker Swarm architecture, composed of Local Edge, MEC, and Cloud environments, to demonstrate a real-time patient monitoring system using ECG data collected via a Movesense sensor. The Local Edge is materialized by a Raspberry Pi 4 (RPi 4) Model B to efficiently handle initial data collection and queuing operations. It features a Broadcom BCM2711 SoC (quad-core Cortex-A72, ARM v8, 64-bit) running at 1.5 GHz, 4 GB of LPDDR4-3200 SDRAM, a 32 GB MicroSD card for the operating system and data storage, Gigabit Ethernet connectivity, two USB 3.0 ports, and two USB 2.0 ports.
Table 1. Hardware and software specification

| Services | Hardware | Build layer and scripts |
|---|---|---|
| ADB Container | Raspberry Pi 4 (Local Edge) | Alpine, Custom Scripts |
| Publisher Container | Raspberry Pi 4 (Local Edge) | Python 3.8 slim, Custom Scripts |
| RabbitMQ Container | Raspberry Pi 4 (Local Edge) | Official RabbitMQ Image |
| Consumer Container | Ubuntu x64 VM (VM1 at MEC) | Python 3.8 slim, Custom Scripts |
| Normalizer Container | Ubuntu x64 VM (VM1 at MEC) | Python 3.8 slim, Custom Scripts |
| ML Model Container | Ubuntu x64 VM (VM1 at MEC) | Python 3.8 slim, Trained .pth |
| Publisher Container | Ubuntu x64 VM (VM1 at MEC) | Python 3.8 slim, Custom Scripts |
| Consumer Container | Ubuntu x64 VM (VM2 at Cloud) | Python 3.8 slim, Custom Scripts |
| MySQL Container | Ubuntu x64 VM (VM2 at Cloud) | Official MySQL Image |
| Grafana Container | Ubuntu x64 VM (VM2 at Cloud) | Official Grafana Image |
The MEC and Cloud infrastructures are simulated using Virtual Machines (VMs), each with robust configurations to ensure seamless data processing and visualization. The MEC and Cloud nodes are deployed on virtual machines running 64-bit Ubuntu. Each VM has 2 GB of RAM, 2 CPU cores, and 25 GB of storage, and is interfaced through a bridged network adapter. The overall hardware configuration of the experimental setup is summarized in Table 1.
Software Specifications: In our setup, the local edge (Raspberry Pi 4) hosts the three related services depicted in Table 1. The ADB Container is built on top of the Alpine Linux base image to collect the ECG data from the Movesense sensor. The RabbitMQ publisher container then publishes the collected ECG data, while the RabbitMQ Container serves as a broker managing the various Pub/Sub queues. The MEC node hosts four containers: a consumer container (RabbitMQ subscriber, acquiring data from the RPi 4), a normalizer container (normalizing the data for the ML model), the ML model container (processing the normalized ECG data), and a publisher container (RabbitMQ publisher) that publishes the data for Cloud processing. The cloud tier hosts the RabbitMQ consumer container and the Grafana Container to visualize the data. The raw and processed data are stored in a MySQL database in the cloud.
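The normalizer container's role can be illustrated with a simple min-max scaling function of the kind commonly applied before ECG classification. The paper does not specify the exact normalization used, so this is an assumed variant for illustration only.

```python
def min_max_normalize(samples):
    """Scale a sequence of raw ECG samples into [0, 1] so the ML model sees
    a consistent input range regardless of sensor gain or baseline offset."""
    lo, hi = min(samples), max(samples)
    if hi == lo:                   # constant signal: avoid division by zero
        return [0.0 for _ in samples]
    return [(s - lo) / (hi - lo) for s in samples]
```

In the pipeline, the consumer container would apply such a function to each queued ECG window before forwarding it to the ML model container.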
Network Specifications: The local edge tier utilizes a SOHO Wi-Fi network for connectivity. Since MEC is deployed inside VirtualBox as a VM, bridged network mode is used to be able to access the external network. For the cloud tier, we used a server at CSC (cloud service provider), configured with a static IP to allow only specific ingress and egress traffic from the local Edge and MEC tiers, enhancing security by rejecting packets from unauthorized IP addresses. This setup ensured a seamless and secure data flow across the tiers. Figure 8 shows the overall network topology of our architecture.

3.3 Nanoservice Deployment

During this experiment, we considered multiple approaches to configure the setup. These include bare-metal installations, where the necessary libraries and packages are installed directly onto the hardware without any virtualization. Alternatively, the necessary libraries and packages can be configured in a virtualized environment using either an official Docker image or a multi-stage Docker build. A detailed comparison of the image sizes is provided in Table 2.
Table 2. Estimated Docker Image Sizes

| Container | Original Size | Bare Metal | Multi-Stage Build |
|---|---|---|---|
| adb_container | 13.5 MB | 153 MB | 23 MB |
| publisher-image | 111 MB | 251 MB | 108 MB |
| consumer-image (MEC) | 134 MB | 328 MB | 123 MB |
| consumer-image (Cloud) | 136 MB | 323 MB | 118 MB |
| normalizer-image | 263 MB | 422 MB | 213 MB |
| ML-model-image | 5.09 GB | 5.6 GB | 2.98 GB |
We experimented using the original Docker build in a virtualized environment, since training the ML model takes a long time in a multi-stage Docker environment [20]. In the proposed architecture, all the containerized services are orchestrated through Docker Swarm, which ensures balanced service distribution across all tiers (local edge, MEC, cloud) for scalability and robustness, as depicted in Fig. 9. Docker Swarm's configuration and its service deployment are discussed in Sect. 2.2.
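The multi-stage builds compared in Table 2 follow the usual pattern of installing dependencies in a heavyweight builder stage and copying only the results into a slim runtime stage. The file below is a hypothetical sketch for one of the Python services, not the project's actual Dockerfile.

```dockerfile
# Stage 1: build dependencies with compilers and headers available
FROM python:3.8 AS builder
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: slim runtime image; only the installed packages are copied over,
# which is what shrinks the final image relative to a single-stage build
FROM python:3.8-slim
COPY --from=builder /install /usr/local
COPY consumer.py .
CMD ["python", "consumer.py"]
```

Discarding the builder stage explains the size reductions in Table 2: compilers and build-time artifacts never reach the shipped image.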

3.4 Data Storage and Visualization

The consumer container (at the cloud layer) stores the data into the MySQL database as soon as it receives the raw data from the local edge and the ML post-processed data from the MEC. For healthcare professionals to understand and react, the data has to be visualized effectively and in a timely manner. The proposed architecture leverages the robustness of MySQL and Grafana containers within the cloud infrastructure. This combination is pivotal for effective data handling and visualization. Specifically, MySQL serves as persistent storage for both the raw ECG data and the processed results. This approach effectively bridges the gap between raw data collection and actionable healthcare insights. Grafana's visualization proficiency is clearly demonstrated in Fig. 10 and Fig. 11, where it effectively renders ECG signals and their classifications in an accessible manner. These visualizations offer real-time insights, enabling healthcare professionals to swiftly identify and act upon cardiac irregularities, which is vital for timely clinical intervention.

4 Evaluation Results

In this section, we evaluate our PoC with regard to key performance indicators (KPIs) including resource utilization, consumed energy, communication latency, resilience and robustness along with other delay components. It is important to note that in our experiment, both the MEC and cloud tiers are intentionally configured with constrained resources to simulate a real-world scenario where system resources may be limited. Despite these constraints, our architecture is designed to manage these resources efficiently. This uniform configuration also allows us to explore the system’s scalability and adaptability, ensuring that it can handle varying loads and still perform optimally. For instance, our configuration reflects a more conservative resource allocation, while a realistic setup would typically endow the cloud layer with abundant resources to handle intensive computational tasks, which can be considered in extending this experiment for the future. This approach enables us to test the system’s robustness under constrained conditions, mirroring potential real-life scenarios where resource optimization is essential. Here is a detailed breakdown of each evaluation metric (KPIs).

4.1 Resource Utilization

CPU and Memory Consumption: In the experiment, we measure CPU and memory consumption to monitor resource utilization. The Linux-native htop command provides real-time insights into CPU and memory usage across the tiers [17]. The local edge (Raspberry Pi), equipped with lower resource capacity, shows a higher percentage of CPU and memory utilization. The resource utilization presented in Fig. 12 shows lower utilization for the MEC and Cloud, but this is mainly due to their greater resource capacity.
Network Utilization: The bar chart in Fig. 13 shows network utilization across the tiers by evaluating the impact of network disruptions on data transmission. During disruptions, the Local Edge maintains its network utilization, implying resilient local processing. When normal conditions are restored, there’s a marked increase in network traffic from the Local Edge to the MEC and subsequently to the Cloud, indicating a rapid catch-up of data transfer to process the accumulated information. This demonstrates the system’s capacity to handle interruptions and quickly restore efficient data flow, which is essential for consistent real-time monitoring services. Section 4.5 of the article provides a detailed discussion of this aspect.
Storage Optimization: Figure 14 shows a comparative view of storage usage across the three-tier architecture. It shows the Raspberry Pi (Local Edge) with moderate storage usage, which is significantly less than the VM1 (MEC) and VM2 (Cloud) tiers. The latter two demonstrate higher storage usage, with the cloud tier requiring the most storage space. This reflects the varying storage demands and capacities of each tier within our patient monitoring system.

4.2 Computational Energy Consumption

Energy consumption was measured across the tiers using specialized tools. For the local tier (Raspberry Pi), energy usage is correlated with its operating temperature, which was monitored using the vcgencmd command-line utility, indicating an average consumption of around 7–8 W at full load [18]. For the MEC and cloud tiers, the energy use was directly gauged using the turbostat tool, which provides precise power metrics for Intel-based CPU cores [19]. We also calculated cumulative energy consumption by combining tier utilization to determine our architecture’s overall energy footprint.
The bar graph depicted in Fig. 15 illustrates the cumulative energy consumption across the tiers of our proposed patient monitoring architecture, with each tier’s energy usage visually stacked. For the local edge tier (Raspberry Pi), the graph shows a lower energy consumption level, which increases as we move to the MEC and then to the cloud tier. This visualization highlights the added energy demands when data processing extends from the edge of the network into more computationally intensive cloud services.

4.3 Communication Latency

Figure 16 illustrates the cumulative latency impacts across the three computing environments in our system architecture. The communication latency is measured from the data origin at the Movesense sensor to the cloud tier. We employed the 'traceroute' utility to map the communication routes and identify sources of delay. The latency from the sensor to the Raspberry Pi, representing the local edge computing layer, is depicted in the first column. We then computed the latency for the Multi-access Edge Computing (MEC) environment by adding the latency from the Raspberry Pi to the MEC tier. Finally, the cloud latency includes the latency from the MEC to the cloud tier. This tiered approach to latency measurement allows us to comprehensively compare transmission times [15] from the sensor to the Local Edge, MEC, and Cloud, highlighting the importance of optimization at each stage to minimize total system latency. Such optimization is critical for real-time patient monitoring systems that rely on efficient data processing.

4.4 Other Delay Components

Figure 17 presents a comparative analysis of setup, runtime, and computational delays across three different environments: Local Edge, MEC, and Cloud. The setup delay is measured by the time it takes for system components to initialize and become ready for operation, as gauged by the profiling tool 'GProf'. Runtime delay encompasses the time span from the creation of data to its full processing, which is monitored via precise timestamps. The computational delay reflects the duration of processing computational tasks, captured through logging timestamps before and after task execution.
Table 3. Evaluation KPIs for Three-Tier Architecture

| KPI | RPi (Local Edge) | VM1 (MEC) | VM2 (Cloud) |
|---|---|---|---|
| Latency (ms) | 10–30 | 5–15 | 15–25 |
| Setup Delay (s) | 30–60 | 20–40 | 20–40 |
| Runtime Delay (ms) | 100–300 | 50–200 | 50–200 |
| Computational Delay (ms) | 200–500 | 100–400 | 100–400 |
| CPU Usage (%) | 40–70 | 30–60 | 30–60 |
| Memory Usage (%) | 60–90 | 70–90 | 70–90 |
| Disk Usage (GB) | 0.5–2 | 1–3 | 1–3 |
| Network Throughput (Mbps) | 10–50 | 50–100 | 50–100 |
| Energy Consumption (W) | 3–5 | 10–15 | 10–15 |
Contrary to the initial description, Table 3 indicates that while the Local Edge exhibits the highest average setup delay, the Cloud environment demonstrates the highest variability in setup delays. Runtime and computational delays [16], however, are generally higher in the Cloud environment, confirming that initiation and execution at the Local Edge are quicker, but computational tasks are more time-consuming in the Cloud. This comparison underscores the trade-offs between the immediacy of local processing and the intensive computational capabilities of the Cloud. In our test setup, the Local Edge exhibited higher setup delays due to resource constraints, while the Cloud, allocated lower resources for this specific test, showed medium setup times. However, in real-world applications, the Cloud is typically provisioned with ample resources, which would significantly reduce its setup delay compared to the Local Edge and MEC environments.

4.5 Resilience and Robustness

We evaluate the resilience and robustness of the proposed architecture by changing the network conditions. We simulated network disruptions between the local edge and MEC tier to evaluate the queue mechanism’s resilience within the ECG monitoring scenario. The goal was to assess the capability of RabbitMQ container to manage data during such interruptions effectively. During tests on network disruption resilience, the RabbitMQ queue’s performance was rigorously tested by simulating a disconnection of the local edge from the rest of the network infrastructure. Such situation can occur in real-life when, e.g., an ambulance transporting monitored patient goes temporarily outside the network coverage. With the MEC’s consumer container deactivated, vital signs data was queued at the local edge, testing the queue’s capacity. Upon connection re-establishment, the smooth processing of queued data confirmed the system’s robustness in maintaining data continuity during outages, demonstrating the queue’s effective management and the system’s resilience.
In testing RabbitMQ’s robustness, we confirmed the system’s resilience against network disruptions - an essential feature for healthcare monitoring. The queue capably handled large loads and maintained data integrity, proving its reliability. Our architecture’s adaptability was highlighted as it efficiently managed data flow during interruptions. The plot depicted in Fig. 18 illustrates the queue’s behavior over time, showing data accumulation during disconnection and rapid recovery upon reconnection, reflecting the system’s preparedness for real-world challenges.

5 Discussion

5.1 Key Findings

Resilience and Robustness: Our study validates the efficiency of a Raspberry Pi, MEC, and Cloud VMs-based system for edge computing, with RabbitMQ proving resilient in managing data during network disruptions, ensuring uninterrupted patient monitoring.
Reduced Latency: Performing data analytics at a tier closer to the patient (MEC) reduces latency and improves the performance of the data analytics algorithm, allowing healthcare staff to be informed of the patient's situation in a timely manner.
Resource Utilization: The three-tier architecture enables efficient utilization of resources across the swarm of cluster nodes (i.e., local edge, MEC, and cloud nodes) in a distributed manner.
Enhanced Security: Distributing the data acquisition to the local edge and applying ML techniques at the MEC helps enhance the security of the architecture since the data is not exposed to a wider audience. TLS security measures safeguarded data transmission, a critical aspect of patient privacy. MySQL and Grafana integration in the Cloud tier facilitated real-time ECG signal analysis, improving clinical decision-making.
Realtime Remote Monitoring: Our three-tier architecture outperforms traditional cloud-IoT systems in scalability and real-time processing, with Docker Swarm containerization offering a significant advancement over inflexible centralized designs. Enhanced data visualization and alerting mechanisms could notably improve patient care, especially in telemedicine.
The findings encourage further advancements in container-based architectures and data analytics integration in healthcare settings. Future work may explore the inclusion of more complex data and AI-driven models for a more personalized approach to patient healthcare.

5.2 Limitations and Future Work

Our architecture presents a promising avenue for enhancing healthcare monitoring. However, addressing its limitations is crucial for continuing the progress. The current system efficiently manages data throughput, yet scaling to accommodate larger datasets and user loads is essential.
We have implemented SSL certificates for security, but have not yet evaluated the performance impact of the TLS deployment, highlighting a gap for future study. Security improvements are planned, with a transition to Kubernetes clusters to leverage robust security features [21] and best practices. Future efforts will refine system configurations, ensuring enhanced security and management of deployments in complex healthcare IoT environments.
Achieving energy efficiency, especially for edge devices, remains a challenge, with future work aimed at optimizing consumption for sustainability. Upcoming expansions include image-based data analytics, utilizing deep learning models like CNNs, U-Nets [22], and GANs [23] for advanced medical image analysis.

6 Conclusion

This work showcases a successful integration of IoT and IoMT with edge and cloud computing, forming a robust three-tier architecture for enhanced real-time patient monitoring. The proposed system merges local edge computing, MEC, and cloud computing to optimize healthcare data processing and insights extraction. At the local edge, rapid data acquisition and processing are achieved with no observable latency, while the MEC layer employs sophisticated models for deeper health data analysis. In the cloud, an advanced visualization service presents both raw and processed data, aiding medical decision-making and enhancing monitoring efficiency. The system utilizes Docker containers to create isolated environments for running applications, ensuring modular and scalable patient data processing. This setup streamlines updates and maintenance, crucial for real-time patient monitoring in hospital settings. This research marks a significant advancement in healthcare IoT, demonstrating the transformative potential of technology in patient care. Future research will aim at system optimization, integrating more complex data, employing adaptive security, and deploying AI models for further improvements in patient monitoring and healthcare technology overall.

Acknowledgements

This research is supported by the Business Finland projects Tomohead (grant 8095/31/2022), Eware-6G (grant 8819/31/2022), and the Research Council of Finland 6G Flagship program (grant number 346208).
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
1. Islam, J., Harjula, E., Kumar, T., Karhula, P., Ylianttila, M.: Docker enabled virtualized nanoservices for local IoT edge networks. In: 2019 IEEE Conference on Standards for Communications and Networking (CSCN), pp. 1–7. IEEE (2019)
3. Singh, V., Peddoju, S.K.: Container-based microservice architecture for cloud applications. In: 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 847–852. IEEE (2017)
10. Leppänen, T., et al.: Mobile agents for integration of Internet of Things and wireless sensor networks. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 14–21. IEEE (2013)
11. Marathe, N., Gandhi, A., Shah, J.M.: Docker Swarm and Kubernetes in cloud computing environment. In: 2019 International Conference on Trends in Electronics and Informatics (ICOEI), pp. 179–184 (2019)
16. Ramaswamy, R., Weng, N., Wolf, T.: Characterizing network processing delay. In: 2004 IEEE Global Telecommunications Conference (GLOBECOM), pp. 1629–1634. IEEE (2004)
18. Bekaroo, G., Santokhee, A.: Power consumption of the Raspberry Pi: a comparative analysis. In: 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (EmergiTech), pp. 361–366. IEEE (2016)
21. Shamim, M.S.I., Bhuiyan, F.A., Rahman, A.: XI commandments of Kubernetes security: a systematization of knowledge related to Kubernetes security practices. In: 2020 IEEE Secure Development (SecDev), pp. 58–64. IEEE (2020)
22. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W., Frangi, A. (eds.) Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Metadata
Title: Decentralized Pub/Sub Architecture for Real-Time Remote Patient Monitoring: A Feasibility Study
Authors: Kazi Nymul Haque, Johirul Islam, Ijaz Ahmad, Erkki Harjula
Copyright year: 2024
DOI: https://doi.org/10.1007/978-3-031-59080-1_4
