Mostafa Kishanifarahani, Ph.D.

All publications

ELICA: Efficient and Load Balanced I/O Cache Architecture for Hyperconverged Infrastructures

  • DOI: 10.1109/TPDS.2025.3592275
  • Link: https://doi.org/10.1109/TPDS.2025.3592275
  • Department: Department of Telecommunications Engineering
  • Annotation:
    Hyperconverged Infrastructures (HCIs) combine processing and storage elements to meet the requirements of data-intensive applications in performance, scalability, and quality of service. As an emerging paradigm, HCI needs to be coupled with a variety of traditional performance-improvement approaches, such as I/O caching in virtualized platforms. Contemporary I/O caching schemes are optimized for traditional single-node storage architectures and suffer from two major shortcomings in multi-node architectures: a) imbalanced cache space requirements and b) imbalanced I/O traffic and load. This makes existing schemes inefficient in distributing cache resources over an array of separate physical nodes. In this paper, we propose an Efficient and Load Balanced I/O Cache Architecture (ELICA), which manages solid-state drive (SSD) cache resources across HCI nodes to enhance I/O performance. ELICA dynamically reconfigures and distributes the SSD cache resources throughout the array of HCI nodes and balances the network traffic and I/O cache load by dynamically reallocating cache resources. To maximize performance, we further formulate an Integer Linear Programming (ILP) optimization problem to efficiently distribute cache resources and balance the network traffic and I/O cache relocations. Our experimental results on a real platform show that ELICA improves quality of service in terms of average and worst-case latency in HCIs by 3.1x and 23%, respectively, compared to the state-of-the-art.
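  • Illustrative sketch:
    The cache-distribution step can be pictured as a small ILP. The sketch below is not ELICA's actual formulation: it merely places each VM's SSD cache on one HCI node so that the busiest node's I/O load is minimized while per-node SSD capacity is respected. The node names, demands, loads, and capacities are invented, and the PuLP library is used only as a convenient ILP front end.

    # Minimal ILP sketch (hypothetical data, not the paper's model): place each VM's
    # SSD cache on one HCI node so the most-loaded node carries as little I/O as possible.
    import pulp

    nodes = ["n0", "n1", "n2"]                              # HCI nodes
    vms = ["vm0", "vm1", "vm2", "vm3"]                      # cached workloads
    demand = {"vm0": 40, "vm1": 25, "vm2": 60, "vm3": 30}   # cache space needed (GB)
    load = {"vm0": 3, "vm1": 1, "vm2": 5, "vm3": 2}         # relative I/O load
    capacity = {n: 80 for n in nodes}                       # SSD cache per node (GB)

    prob = pulp.LpProblem("cache_placement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (vms, nodes), cat="Binary")  # x[vm][node] = 1 if placed there
    z = pulp.LpVariable("max_load", lowBound=0)                 # I/O load of the busiest node

    prob += z                                                   # objective: minimize the maximum load
    for v in vms:
        prob += pulp.lpSum(x[v][n] for n in nodes) == 1         # each VM cached on exactly one node
    for n in nodes:
        prob += pulp.lpSum(demand[v] * x[v][n] for v in vms) <= capacity[n]  # SSD capacity
        prob += pulp.lpSum(load[v] * x[v][n] for v in vms) <= z  # z upper-bounds every node's load

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for v in vms:
        print(v, "->", next(n for n in nodes if x[v][n].value() > 0.5))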

Joint Management of Communication, Computing, and Storage Resources for Low Latency Vehicular Edge Computing

  • DOI: 10.1109/TITS.2025.3601353
  • Link: https://doi.org/10.1109/TITS.2025.3601353
  • Department: Department of Telecommunications Engineering
  • Annotation:
    Low-latency Vehicular Edge Computing (VEC) applications require efficient VEC resource allocation that considers all components contributing to application latency, i.e., computation, communication, and storage. While the optimization of communication and computation resources is broadly addressed in the literature, storage, a significant source of latency in the computing stack, is often ignored in existing works. Thus, in this paper, we optimize the communication and computation resources together with the storage resources to minimize the latency of VEC applications. The problem of jointly minimizing communication, computation, and storage latency under practical constraints is NP-hard. Hence, we employ dual decomposition and Lagrangian relaxation to achieve a computationally viable solution for the joint allocation of communication, computing, and storage resources to VEC applications. To this end, we define a dual problem for the assignment of VEC applications to base stations. This problem corresponds to the perfect matching problem in a weighted bipartite graph and can be solved optimally by algorithms with polynomial computational complexity. Then, as the solution to the dual problem may violate some constraints of the main resource allocation problem, we find a feasible solution to the main resource allocation problem using Lagrangian relaxation. We show that the joint optimization of all three aspects, i.e., communication, computation, and storage, reduces the overall offloading latency by up to 60% compared to state-of-the-art works.
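  • Illustrative sketch:
    The dual step described above reduces to a weighted bipartite matching (assignment) problem, which is solvable in polynomial time, e.g., by the Hungarian method. A minimal sketch follows, using SciPy's linear_sum_assignment and a made-up latency matrix rather than the paper's actual cost model.

    # Illustrative only: assign VEC applications to base stations by solving the
    # weighted bipartite matching (assignment) problem in polynomial time.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # latency[i][j]: estimated end-to-end latency (ms) of serving application i
    # from base station j -- hypothetical numbers.
    latency = np.array([
        [12.0,  7.5, 20.0],
        [ 9.0, 15.0,  6.0],
        [11.0,  8.0, 14.0],
    ])

    apps, stations = linear_sum_assignment(latency)   # Hungarian-style optimal matching
    for a, s in zip(apps, stations):
        print(f"application {a} -> base station {s} ({latency[a, s]} ms)")
    print("total latency:", latency[apps, stations].sum(), "ms")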

Joint Optimization of Communication and Storage Latencies for Vehicular Edge Computing

  • DOI: 10.1109/TITS.2023.3336704
  • Link: https://doi.org/10.1109/TITS.2023.3336704
  • Department: Department of Telecommunications Engineering
  • Annotation:
    The latency that vehicles experience when accessing data stored on edge computing servers encompasses both the communication between a vehicle and a server and the latency of the data storage system. To enable low-latency vehicular services, efficient resource management should consider communication as well as storage I/O cache resource allocation, along with the data access patterns and priorities of individual vehicular services. Therefore, we focus on the joint optimization of communication and storage I/O cache resource allocation for access to the data of vehicular services hosted by edge computing servers. The proposed framework determines the data placement for the services and allocates communication and storage I/O cache resources to each service. The objective is to minimize the overall latency experienced by vehicular services when accessing data. Edge computing platforms share storage and communication resources among various vehicular services, each having distinct priorities and data access rates or patterns. Hence, to reflect the different priorities of services in resource allocation, our objective metric takes into account service priority, data access frequency, and latency. We propose a feasible solution using dual relaxation that considers both communication and storage latencies. The proposed solution reduces the average latency of vehicular services by up to 1.8x compared to the state-of-the-art resource allocation method for vehicular edge computing. An even more notable improvement is observed for high-priority vehicular services, where the proposal leads to 2.5x lower latency compared to the state-of-the-art storage I/O cache architecture for virtualized cloud services.
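  • Illustrative sketch:
    As a rough picture of the kind of objective described above, the sketch below computes a priority- and access-frequency-weighted latency for a set of services under two different cache splits. The service names, weights, and latency numbers are invented, and the latency model is deliberately simplified; it is not the paper's formulation.

    # Minimal sketch of a priority- and access-frequency-weighted latency objective.
    def weighted_latency(services, cache_hit_ratio):
        """Sum of priority * access_frequency * (communication + storage latency)."""
        total = 0.0
        for s in services:
            hit = cache_hit_ratio[s["name"]]
            storage = hit * s["cache_lat_ms"] + (1.0 - hit) * s["backend_lat_ms"]
            total += s["priority"] * s["access_freq"] * (s["comm_lat_ms"] + storage)
        return total

    services = [
        {"name": "collision_avoidance", "priority": 3, "access_freq": 500,
         "comm_lat_ms": 2.0, "cache_lat_ms": 0.1, "backend_lat_ms": 5.0},
        {"name": "hd_map_update", "priority": 1, "access_freq": 50,
         "comm_lat_ms": 4.0, "cache_lat_ms": 0.1, "backend_lat_ms": 5.0},
    ]

    # Giving the high-priority, frequently accessed service the larger cache share
    # yields a lower objective value than the reverse split.
    print(weighted_latency(services, {"collision_avoidance": 0.9, "hd_map_update": 0.2}))
    print(weighted_latency(services, {"collision_avoidance": 0.2, "hd_map_update": 0.9}))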

Reducing Computation, Communication, and Storage Latency in Vehicular Edge Computing

  • DOI: 10.1109/VTC2024-Spring62846.2024.10683495
  • Link: https://doi.org/10.1109/VTC2024-Spring62846.2024.10683495
  • Department: Department of Telecommunications Engineering
  • Annotation:
    This paper addresses the challenge of optimizing communication, computation, and storage I/O caching in Vehicular Edge Computing (VEC) platforms for autonomous vehicles. The exponentially growing volume of data generated by autonomous vehicles demands low-latency connectivity with nearby edge servers. However, existing VEC platforms struggle to meet these performance requirements, especially for real-time applications such as collision avoidance. This work proposes a novel algorithm for the joint allocation of computing resources, storage I/O cache, and communication resources, considering the diverse priorities and demands of key vehicular services. Our approach integrates application-specific optimizations, prioritization, and joint latency reduction across communication, computation, and storage. Accounting for the distinct priorities and data access characteristics of various vehicular services, our proposed solution, which employs dual decomposition and Lagrangian relaxation, reduces service latency by up to 64% compared to the current state-of-the-art resource allocation in vehicular edge computing.
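  • Illustrative sketch:
    To give a feel for dual decomposition with a Lagrangian-relaxed coupling constraint (not the paper's exact algorithm), the toy sketch below splits one edge node's capacity among services: for a fixed multiplier, each service solves its own closed-form subproblem, and the multiplier is then tuned (here simply by bisection) until the shared capacity constraint is met. The service weights, demands, and capacity are hypothetical.

    # Toy dual-decomposition sketch: split capacity C among services to minimize the
    # priority-weighted latency sum_i w[i] * D[i] / c[i], with multiplier lam on sum_i c[i] <= C.
    import math

    w = [3.0, 1.0, 2.0]        # service priorities (hypothetical)
    D = [500.0, 200.0, 350.0]  # per-service demand (e.g., cycles or bytes)
    C = 100.0                  # shared capacity at the edge node

    def subproblem(lam):
        # For a fixed multiplier, each service independently minimizes w*D/c + lam*c,
        # which has the closed-form solution c = sqrt(w*D/lam).
        return [math.sqrt(w[i] * D[i] / lam) for i in range(len(w))]

    # Adjust the multiplier until the relaxed capacity constraint is tight
    # (bisection is used here purely for simplicity).
    lo, hi = 1e-9, 1e9
    for _ in range(100):
        lam = (lo + hi) / 2
        c = subproblem(lam)
        lo, hi = (lam, hi) if sum(c) > C else (lo, lam)

    print("allocations:", [round(ci, 1) for ci in c], "| used", round(sum(c), 1), "of", C)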

Reducing Storage and Communication Latencies in Vehicular Edge Cloud

  • DOI: 10.1109/EuCNC/6GSummit54941.2022.9815597
  • Link: https://doi.org/10.1109/EuCNC/6GSummit54941.2022.9815597
  • Department: Department of Telecommunications Engineering
  • Annotation:
    Low-latency data access is crucial in edge clouds serving autonomous vehicles. Storage I/O caching is a promising solution to deliver the desired storage performance at a reasonable cost in vehicular edge platforms. Current storage I/O caching methods, however, are not specialized for the workload characteristics and demands of autonomous vehicles and/or do not consider the communication latency between the vehicle and the base station hosting the edge cloud node. In this work, we propose a storage mechanism for vehicular edge cloud platforms that takes communication, I/O cache, and storage latencies into account. We evaluate our proposed framework using realistic storage traces of vehicular services. Our framework reduces the overall average latency and the average latency of high-priority services by up to 1.56x and 2.43x, respectively, compared to state-of-the-art works.

PADSA: Priority-Aware Block Data Storage Architecture for Edge Cloud Serving Autonomous Vehicles

  • DOI: 10.1109/VNC52810.2021.9644617
  • Link: https://doi.org/10.1109/VNC52810.2021.9644617
  • Department: Department of Telecommunications Engineering
  • Annotation:
    An efficient Input/Output (I/O) caching mechanism for data storage can deliver the desired performance at a reasonable cost to edge nodes serving autonomous vehicles. Current storage caching solutions address common autonomous-vehicle applications that are less demanding in terms of latency (e.g., map or software upgrades). However, a serious revision of these solutions is necessary for autonomous vehicles, which rely on safety- and time-critical communication for services such as collision avoidance and therefore require very low latency. In this paper, we propose a three-level storage caching architecture for virtualized edge cloud platforms serving autonomous vehicles. This architecture prioritizes safety-critical services and allocates the two top cache levels, Dynamic Random Access Memory (DRAM) and Non-Volatile Memory (NVM), to the top-priority services. We further evaluate the optimal cache space allocated to each service to minimize the average latency. The experimental results show that the proposed architecture reduces the average latency of safety-critical applications by up to 70% compared to the state-of-the-art.
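  • Illustrative sketch:
    A highly simplified sketch of a three-level, priority-aware lookup in the spirit of the architecture above: DRAM and NVM form the two top levels and only safety-critical services are admitted into them, while everything else is served from the backing store. The class, latencies, and admission rule are illustrative placeholders, not the paper's exact policy.

    # Simplified three-level, priority-aware read path (hypothetical latencies and policy).
    LEVEL_LATENCY_US = {"DRAM": 0.1, "NVM": 1.0, "BACKING": 100.0}

    class ThreeLevelCache:
        def __init__(self, dram_blocks, nvm_blocks):
            self.capacity = {"DRAM": dram_blocks, "NVM": nvm_blocks}
            self.level = {"DRAM": {}, "NVM": {}}    # block id -> data

        def read(self, block, priority, backing):
            for name in ("DRAM", "NVM"):            # probe the two top levels first
                if block in self.level[name]:
                    return self.level[name][block], LEVEL_LATENCY_US[name]
            data = backing[block]                   # miss: fetch from the backing store
            if priority == "safety-critical":       # only top-priority services are admitted
                for name in ("DRAM", "NVM"):
                    if len(self.level[name]) < self.capacity[name]:
                        self.level[name][block] = data
                        break
            return data, LEVEL_LATENCY_US["BACKING"]

    backing = {i: f"block-{i}" for i in range(8)}
    cache = ThreeLevelCache(dram_blocks=2, nvm_blocks=4)
    print(cache.read(0, "safety-critical", backing))   # miss, then admitted to DRAM
    print(cache.read(0, "safety-critical", backing))   # DRAM hit, ~0.1 us
    print(cache.read(1, "best-effort", backing))       # miss, not cached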
