
Mostafa Kishanifarahani, Ph.D.

All publications

Joint Optimization of Communication and Storage Latencies for Vehicular Edge Computing

  • DOI: 10.1109/TITS.2023.3336704
  • Link: https://doi.org/10.1109/TITS.2023.3336704
  • Department: Katedra telekomunikační techniky
  • Abstract:
    The latency of accessing data stored on edge computing servers for vehicles encompasses both the communication between a vehicle and a server and the latency of the data storage system. To enable low-latency vehicular services, efficient resource management should consider communication as well as storage I/O cache resource allocation, along with the data access pattern and the priority of individual vehicular services. Therefore, we focus on joint optimization of communication and storage I/O cache resource allocation for access to data of vehicular services hosted by edge computing servers. The proposed framework determines the data placement for the services and allocates communication and storage I/O cache resources to each service. The objective is to minimize the overall data-access latency experienced by vehicular services. Edge computing platforms share storage and communication resources among various vehicular services, each with distinct priorities and data access rates or patterns. Hence, to reflect the different priorities of services in resource allocation, our objective metric takes into account the service priority, data access frequency, and latency. We propose a feasible solution using dual relaxation that considers both communication and storage latencies. The proposed solution reduces the average latency of vehicular services by up to 1.8x compared to the state-of-the-art resource allocation method for vehicular edge computing. An even more notable improvement is observed for high-priority vehicular services, where the proposal achieves 2.5x lower latency compared to the state-of-the-art storage I/O cache architecture for virtualized cloud services.
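
    The priority-weighted objective described in the abstract can be pictured with a minimal sketch. This is an illustration only, not the paper's model: the `Service` fields, the hit/miss latency mixture, and all numbers are assumptions.

```python
# Illustrative sketch of a priority- and rate-weighted data-access latency
# objective. Field names and the hit/miss model are assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Service:
    priority: float      # higher value = more important service
    access_rate: float   # data accesses per second
    comm_latency: float  # ms, vehicle <-> edge server communication
    hit_latency: float   # ms, storage access served from the I/O cache
    miss_latency: float  # ms, storage access missing the I/O cache
    hit_ratio: float     # fraction of accesses served from the I/O cache

def storage_latency(s: Service) -> float:
    """Expected storage latency as a cache hit/miss mixture."""
    return s.hit_ratio * s.hit_latency + (1.0 - s.hit_ratio) * s.miss_latency

def weighted_objective(services) -> float:
    """Priority- and rate-weighted total data-access latency over all services."""
    return sum(s.priority * s.access_rate * (s.comm_latency + storage_latency(s))
               for s in services)
```

    A resource allocator would then tune cache sizes and bandwidth shares (which change `hit_ratio` and `comm_latency`) to minimize this weighted sum.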

Reducing Computation, Communication, and Storage Latency in Vehicular Edge Computing

  • DOI: 10.1109/VTC2024-Spring62846.2024.10683495
  • Link: https://doi.org/10.1109/VTC2024-Spring62846.2024.10683495
  • Department: Katedra telekomunikační techniky
  • Abstract:
    This paper addresses the challenge of jointly optimizing communication, computation, and storage I/O caching in Vehicular Edge Computing (VEC) platforms for autonomous vehicles. The exponentially growing volume of data generated by autonomous vehicles demands low-latency connectivity to nearby edge servers. However, existing VEC platforms struggle to meet the performance requirements, especially for real-time applications such as collision avoidance. This work proposes a novel algorithm for the joint allocation of computing resources, storage I/O cache, and communication resources, considering the diverse priorities and demands of key vehicular services. Our approach integrates application-specific optimizations, prioritization, and joint latency reduction across communication, computation, and storage. Accounting for the distinct priorities and data access characteristics of various vehicular services, our proposed feasible solution, which employs dual decomposition and Lagrangian relaxation, reduces service latency by up to 64% compared to the current state-of-the-art resource allocation in vehicular edge computing.
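
    The dual decomposition and Lagrangian relaxation step can be illustrated on a toy problem: minimize a sum of inverse-allocation latencies w_i/x_i under a shared capacity constraint, updating the multiplier with a projected subgradient. This is an assumed simplification for illustration, not the paper's exact formulation.

```python
# Toy dual/Lagrangian-relaxation allocation: minimize sum(w_i / x_i)
# subject to sum(x_i) <= capacity. For a multiplier lam, the per-service
# minimizer of w_i/x_i + lam*x_i is x_i = sqrt(w_i / lam); lam is then
# updated by a projected subgradient step on the capacity constraint.
import math

def allocate_dual(weights, capacity, iters=2000, step=0.1):
    lam = 1.0
    x = [0.0] * len(weights)
    for _ in range(iters):
        x = [math.sqrt(w / lam) for w in weights]          # inner minimization
        lam = max(1e-9, lam + step * (sum(x) - capacity))  # subgradient update
    return x
```

    For this toy objective the closed-form optimum is x_i = capacity · sqrt(w_i) / Σ_j sqrt(w_j), which the iteration approaches; the real problem adds per-resource constraints and priorities but follows the same decomposition pattern.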

Reducing Storage and Communication Latencies in Vehicular Edge Cloud

  • DOI: 10.1109/EuCNC/6GSummit54941.2022.9815597
  • Link: https://doi.org/10.1109/EuCNC/6GSummit54941.2022.9815597
  • Department: Katedra telekomunikační techniky
  • Abstract:
    Low-latency data access is crucial in edge clouds serving autonomous vehicles. Storage I/O caching is a promising way to deliver the desired storage performance at a reasonable cost in vehicular edge platforms. Current storage I/O caching methods, however, are not specialized for the workload characteristics and demands of autonomous vehicles and/or do not consider the communication latency between the vehicle and the base station hosting the edge cloud node. In this work, we propose a storage mechanism for vehicular edge cloud platforms that takes communication, I/O cache, and storage latencies into account. We evaluate the proposed framework using realistic storage traces of vehicular services. Our framework reduces the average latency overall and the average latency of high-priority services by up to 1.56x and 2.43x, respectively, compared to state-of-the-art works.
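
    One way to picture a latency-aware cache-placement decision is a greedy knapsack that ranks services by priority-weighted latency saving per cached byte. The ranking rule, field names, and numbers below are illustrative assumptions, not the paper's mechanism.

```python
# Greedy I/O cache placement sketch: rank services by priority-weighted
# latency saving per cached byte, then fill the cache in that order.
def greedy_cache_placement(services, cache_size):
    """services: dicts with 'name', 'size' (bytes to cache), 'priority',
    'rate' (accesses/s), and 'saving' (ms saved per access when cached)."""
    ranked = sorted(
        services,
        key=lambda s: s['priority'] * s['rate'] * s['saving'] / s['size'],
        reverse=True,
    )
    placed, free = [], cache_size
    for s in ranked:
        if s['size'] <= free:   # cache the service's working set if it fits
            placed.append(s['name'])
            free -= s['size']
    return placed
```

    The density ranking naturally favors high-priority, frequently accessed services whose data would otherwise incur the full storage and communication latency.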

PADSA: Priority-Aware Block Data Storage Architecture for Edge Cloud Serving Autonomous Vehicles

  • DOI: 10.1109/VNC52810.2021.9644617
  • Link: https://doi.org/10.1109/VNC52810.2021.9644617
  • Department: Katedra telekomunikační techniky
  • Abstract:
    An efficient Input/Output (I/O) caching mechanism for data storage can deliver the desired performance at a reasonable cost on edge nodes serving autonomous vehicles. Current storage caching solutions address common applications for autonomous vehicles that are less demanding in terms of latency (e.g., map or software updates). However, these solutions require a serious revision for autonomous vehicles that rely on safety- and time-critical services, such as collision avoidance, which demand very low latency. In this paper, we propose a three-level storage caching architecture for virtualized edge cloud platforms serving autonomous vehicles. The architecture prioritizes safety-critical services and allocates the two top-level caches, Dynamic Random Access Memory (DRAM) and Non-Volatile Memory (NVM), to the top-priority services. We further determine the optimal cache space allocated to each service to minimize the average latency. Experimental results show that the proposed architecture reduces the average latency of safety-critical applications by up to 70% compared to the state-of-the-art.
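
    The three-level hierarchy can be sketched as a DRAM → NVM → disk lookup. In the architecture above, only top-priority services occupy the two top tiers; the latency figures and helper names below are illustrative assumptions, not measured values from the paper.

```python
# Sketch of a three-tier lookup: DRAM, then NVM, then the backing disk.
DRAM_LAT, NVM_LAT, DISK_LAT = 0.1, 1.0, 10.0  # ms, assumed per-tier latencies

def block_latency(block, dram, nvm):
    """Latency of a single block access in the three-tier hierarchy."""
    if block in dram:
        return DRAM_LAT
    if block in nvm:
        return NVM_LAT
    return DISK_LAT   # miss in both cache tiers falls through to disk

def avg_latency(trace, dram, nvm):
    """Mean latency over a trace of block accesses."""
    return sum(block_latency(b, dram, nvm) for b in trace) / len(trace)
```

    Reserving the DRAM and NVM sets for safety-critical services keeps their accesses in the two fast tiers, which is how the priority-aware allocation lowers their average latency.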

Responsible for this page: Ing. Mgr. Radovan Suk