Melchizedek Ibarrientos Alipio, MSc., Ph.D.

All publications

Current testing and performance evaluation methodologies of LoRa and LoRaWAN in IoT applications: Classification, issues, and future directives

  • DOI: 10.1016/j.iot.2023.101053
  • Link: https://doi.org/10.1016/j.iot.2023.101053
  • Department: System Testing IntelLigent Lab
  • Annotation:
    Long Range (LoRa) and Long Range Wide Area Network (LoRaWAN) are emerging technologies essential for connecting and managing a wide range of devices in various Internet of Things (IoT) systems. Testing and evaluation methods play a crucial role in assessing and optimizing the performance of these technologies before their deployment in real-world IoT applications. Previous review studies focused mainly on comparing current Low-Power Wide Area Network (LPWAN) technologies, evaluating the performance of LoRa and LoRaWAN platforms for a general or specific application, or examining a single testing methodology. However, the literature does not include any article dedicated to comprehensively reviewing the testing scenarios and performance evaluation methodologies used in LoRa-based or LoRaWAN-based networks deployed for IoT systems. Hence, this paper reviews state-of-the-art studies on LoRa and LoRaWAN test and evaluation methods across various IoT applications. These studies are critically reviewed and classified according to their test parameters, test architectures, and performance evaluation methodologies. Additionally, a summary and unified view of test and evaluation methodologies for assessing the performance characteristics of LoRa and LoRaWAN in IoT-driven applications is presented. Lastly, the issues and challenges behind these test cases and evaluation methods are identified, and possible future directions of this research domain are discussed.

A Classification of Cross-Layer Optimization Approaches in LoRaWAN for Internet of Things

  • DOI: 10.1109/ICUFN57995.2023.10199434
  • Link: https://doi.org/10.1109/ICUFN57995.2023.10199434
  • Department: System Testing IntelLigent Lab
  • Annotation:
    The Internet of Things (IoT) uses Low-Power Wide Area Networks (LPWAN) for applications that require long-range, energy-efficient, and low-cost end devices. LoRaWAN is one of the most popular LPWAN technologies because its key features and openness make it highly suitable for IoT. Despite these strengths, the technology still faces challenges such as optimizing the scheduling protocol, low data rates, and duty cycle restrictions. One possible way to address these challenges is cross-layer optimization, a technique that relaxes the strict separation of the traditional OSI protocol stack and gives its protocol layers more freedom to interact. However, there is currently no summary of cross-layer methods implemented in LoRaWAN. This paper presents a classification of state-of-the-art cross-layer approaches used to optimize LoRaWAN technology in IoT. The cross-layer techniques are classified by whether they merge adjacent layers, allow direct communication between layers, or introduce completely new abstractions. In addition, the paper identifies the issues and challenges featured in these state-of-the-art cross-layer approaches. Finally, it serves as an overview of the performance of cross-layer optimization in LoRaWAN technology for IoT applications.

Deep Reinforcement Learning Perspectives on Improving Reliable Transmissions in IoT Networks: Problem Formulation, Parameter Choices, Challenges, and Future Directions

  • DOI: 10.1016/j.iot.2023.100846
  • Link: https://doi.org/10.1016/j.iot.2023.100846
  • Department: System Testing IntelLigent Lab
  • Annotation:
    The majority of communication protocols used for caching and congestion control in IoT networks are rule-based, which means they depend on explicitly defined static models. To address this limitation, newer techniques adapt to changes in the network environment by incorporating learning-based approaches built on Machine Learning (ML) and Deep Learning (DL). Recent surveys and review papers have covered the use of ML and DL in either caching or congestion control techniques for various types of networks. However, no article in the literature is dedicated to surveying the design of caching and congestion control mechanisms in IoT networks from the perspective of a Deep Reinforcement Learning (DRL) problem. Hence, this work surveys state-of-the-art DRL-based caching and congestion control techniques in IoT networks from 2019 to 2023. It also presents general frameworks for DRL-based caching and congestion control, derived from the surveyed works, as a baseline for designing future protocols in IoT networks. Moreover, the paper classifies the parameter choices of the surveyed DRL-based techniques and identifies the issues and challenges behind them. Finally, possible future directions of this research domain are discussed.
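
To make the idea of "formulating congestion control as a reinforcement learning problem" concrete, the sketch below shows one possible state/action/reward mapping with a tabular Q-learning update. The state features, action set, and reward weights are illustrative assumptions, not a formulation taken from the paper or any surveyed work; the same loop structure generalizes to deep RL by replacing the table with a neural network.

```python
# Minimal sketch of casting congestion control as a reinforcement learning
# problem (state -> action -> reward). The state features, action set, and
# reward shaping below are illustrative assumptions, not a surveyed design.
import random
from collections import defaultdict

ACTIONS = [-1, 0, +1]          # decrease, hold, or increase the congestion window

def reward(throughput: float, delay: float, loss: float) -> float:
    # Reward high throughput, penalize latency and packet loss (weights assumed).
    return throughput - 0.5 * delay - 2.0 * loss

q_table = defaultdict(lambda: [0.0] * len(ACTIONS))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:                      # explore occasionally
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q_table[state][a])

def q_update(state, action, r, next_state):
    # Standard one-step Q-learning update toward the bootstrapped target.
    best_next = max(q_table[next_state])
    q_table[state][action] += alpha * (r + gamma * best_next - q_table[state][action])

# Usage: at each decision epoch the agent observes a discretized network state,
# e.g. (queue_level, rtt_level), picks a window adjustment, and learns from the
# measured reward. The observation below is a hypothetical placeholder.
state = (2, 1)
a = choose_action(state)
q_update(state, a, reward(throughput=5.0, delay=1.2, loss=0.0), next_state=(2, 2))
```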

Intelligent Network Maintenance Modeling for Fixed Broadband Networks in Sustainable Smart Homes

  • DOI: 10.1109/JIOT.2023.3277590
  • Link: https://doi.org/10.1109/JIOT.2023.3277590
  • Department: System Testing IntelLigent Lab
  • Annotation:
    With the emergence of sustainable smart homes, each smart device requires more bandwidth, putting pressure on existing home networks. Fiber-to-the-home (FTTH) technology is an effective way to ensure high-bandwidth home networks: it delivers high-speed Internet from a central point directly to the home through fiber optic cables. This fixed broadband network can transmit information at virtually unlimited speed and capacity, enabling homes to be smarter. Hence, a well-monitored and well-maintained FTTH broadband network is necessary to obtain a high level of service availability and sustainability in smart homes. This study develops a predictive model that proactively monitors and maintains FTTH networks using machine learning (ML) techniques. The model classifies the proposed technician resolution based on a historical FTTH field data set. The results show that the K-nearest neighbors (KNN)-based model obtained the highest accuracy of 89%, followed by the feedforward artificial neural network (FF-ANN)-based model with 86%. In addition, the anomalies identified in the data set as affecting service degradation and performance include FTTH access issues, optical network unit issues, and faults in customer premises equipment.
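
The paper itself does not publish code; as a rough illustration of the classification setup described above, the following sketch trains a K-nearest neighbors classifier on a hypothetical FTTH fault-ticket data set. The file name, feature columns (e.g. `optical_rx_power`), and the `resolution` label are assumptions for illustration, not the authors' actual data schema.

```python
# Illustrative sketch only: a KNN classifier predicting the proposed technician
# resolution from FTTH field data. Column names and the CSV file are
# hypothetical placeholders, not the data set used in the paper.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("ftth_field_tickets.csv")           # hypothetical historical data set
features = ["optical_rx_power", "onu_uptime_hours",   # assumed numeric features
            "crc_error_count", "signal_to_noise"]
X, y = df[features], df["resolution"]                 # label: proposed technician resolution

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale the features, then classify by majority vote among the k nearest tickets.
model = Pipeline([
    ("scale", StandardScaler()),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```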

More Accurate Cost Estimation for Internet of Things Projects by Adaptation of Use Case Points Methodology

  • DOI: 10.1109/JIOT.2023.3281614
  • Link: https://doi.org/10.1109/JIOT.2023.3281614
  • Department: System Testing IntelLigent Lab
  • Annotation:
    This article adapts the use case points (UCP) method to estimate the size and development effort (DE) required for Internet of Things (IoT) systems. Despite the extensive use of UCP in software engineering, it has yet to be adapted for IoT systems, where such estimation is essential for project management and resource planning. Our proposed adaptation, UCP for IoT, is based on a four-layer IoT architecture and tailors the standard software UCP to the specifics of IoT systems. It was validated using a case study of three IoT systems, demonstrating its applicability and effectiveness in estimating the DE required for IoT projects. However, the results also highlight the need for further improvements, particularly given the absence of historical data sets for IoT projects. Our future work will focus on gathering such data sets and further refining the proposed model.
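
For readers unfamiliar with UCP, the sketch below reproduces the classic software UCP computation (Karner's formulas) that the article takes as its starting point. The IoT-specific adjustments built around the four-layer architecture are the authors' contribution and are not reproduced here; the input numbers in the usage line are hypothetical, not taken from the article.

```python
# Illustrative sketch of the classic use case points (UCP) estimation that the
# article adapts for IoT. The IoT-specific layer weights are not shown here.

def ucp_estimate(uaw: float, uucw: float,
                 technical_factor: float, environmental_factor: float,
                 hours_per_ucp: float = 20.0) -> tuple[float, float]:
    """Return (UCP, effort in person-hours) using Karner's standard formulas."""
    uucp = uaw + uucw                        # unadjusted UCP = actor + use case weights
    tcf = 0.6 + 0.01 * technical_factor      # technical complexity factor
    ecf = 1.4 - 0.03 * environmental_factor  # environmental complexity factor
    ucp = uucp * tcf * ecf
    return ucp, ucp * hours_per_ucp          # productivity assumed at 20 h per UCP

# Hypothetical numbers for a small IoT project (not from the article):
ucp, effort = ucp_estimate(uaw=6, uucw=45, technical_factor=38, environmental_factor=20)
print(f"UCP = {ucp:.1f}, estimated effort = {effort:.0f} person-hours")
```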
