Integrated sensing and communication has drawn great research attention in recent years. Specifically, 5G mmWave has demonstrated its capabilities not only in high-speed communications but also in perceiving the physical environment. Apart from providing localization services for user equipment (UE), 5G mmWave can also estimate the positions of target objects that do not carry any equipment (i.e., device-free). Existing works on device-free wireless localization often employ a single monostatic radar or a few transceivers in fixed positions. In this work, we examine a cooperative sensing case, where multiple UEs cooperate with the infrastructure of transmit/receive points (TRPs) to jointly locate device-free objects. This new setting introduces new challenges for existing localization algorithms, as the number and the locations of the UEs and sensing targets are all dynamic. Our work proposes a novel procedure that uses visualization methods to jointly represent the information in the mmWave channel impulse responses and the locations of UEs and TRPs. We then introduce an end-to-end deep learning transformer architecture, inspired by popular models in the computer vision domain, to estimate the target objects' locations from the visualizations. On a dataset generated using 3D ray-tracing simulations, our system can locate multiple device-free objects with an average error of 0.47 meters within a 20 meter-by-40 meter experiment area.
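To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch (not the authors' released code) of a vision-transformer-style regressor that maps a rendered image of the channel impulse responses and UE/TRP positions to the 2D coordinates of multiple device-free targets. The image size, patch size, model depth, and maximum number of targets are illustrative assumptions; the paper's actual architecture and training details are not specified in this record.

```python
# Hypothetical sketch: a ViT-style regressor from a rendered CIR/geometry
# visualization to (x, y) coordinates of up to `max_targets` objects.
import torch
import torch.nn as nn

class ViTLocalizer(nn.Module):
    def __init__(self, img_size=128, patch_size=16, in_ch=3,
                 dim=256, depth=6, heads=8, max_targets=4):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # Patch embedding: split the visualization into non-overlapping patches.
        self.patch_embed = nn.Conv2d(in_ch, dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               dim_feedforward=4 * dim,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        # Regression head: (x, y) per target; absent targets would be masked
        # in the loss (masking is omitted here for brevity).
        self.head = nn.Linear(dim, max_targets * 2)

    def forward(self, x):                     # x: (B, C, H, W) rendered image
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        feats = self.encoder(tokens)
        return self.head(feats[:, 0]).view(x.size(0), -1, 2)     # (B, K, 2)

# Usage example on random data:
model = ViTLocalizer()
img = torch.randn(8, 3, 128, 128)
coords = model(img)   # (8, 4, 2): estimated (x, y) for up to 4 targets
```

The single class token feeding a fixed-size regression head is only one way to handle a variable number of targets; a detection-style head with per-query outputs would be another reasonable choice under the same transformer backbone.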
Visual Transformers for Cooperative Device-free Object Localization Using mmWave Signals
07.10.2024
Conference paper
Electronic resource
English