Neural Network-Based OFDM Receiver for Resource Constrained IoT Devices

Published in IEEE IoT Magazine

The majority of current and emerging Internet of Things (IoT) applications, including the latest WiFi standards, use orthogonal frequency division multiplexing (OFDM)-based waveforms to establish communication links. Channel estimation, demapping, and decoding are key functions of such systems and are traditionally baked into the hardware to maintain high performance and low latency.
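
To make these fixed stages concrete, the toy sketch below (illustrative only, not drawn from the paper) shows what a conventional demapper does: each equalized symbol is matched to the nearest point of a hard-coded constellation and converted back to bits. The QPSK constellation, Gray labeling, and function names are assumptions made for the example.

# Toy illustration of a conventional, hard-wired demapper (QPSK, Gray-labeled).
# The constellation, bit labeling, and function name are illustrative choices,
# not details taken from the paper.

import numpy as np

# Gray-labeled QPSK constellation: bit pair -> unit-energy complex point.
CONSTELLATION = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}
POINTS = np.array(list(CONSTELLATION.values()))
LABELS = list(CONSTELLATION.keys())

def hard_demap(equalized_symbols: np.ndarray) -> np.ndarray:
    """Nearest-point (minimum Euclidean distance) demapping to bit pairs."""
    dists = np.abs(equalized_symbols[:, None] - POINTS[None, :])  # (N, 4) distances
    nearest = dists.argmin(axis=1)
    return np.array([LABELS[i] for i in nearest])                 # (N, 2) bits

# Two noisy received symbols decode to the bits of their nearest points.
rx = np.array([0.6 + 0.8j, -0.7 - 0.6j])
print(hard_demap(rx))  # bits (0, 0) for the first symbol, (1, 1) for the second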

This paper explains how high performance and low latency can also be achieved with far greater flexibility via a modular design in which Machine Learning (ML), and specifically neural networks, replaces the channel estimation, demapping, and decoding operations of an OFDM receiver. The work also shows how compression methods can shrink these networks to fit onto Field Programmable Gate Arrays (FPGAs), further reducing latency while delivering higher performance than their hardware-based counterparts.
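
As a rough sketch of that modular idea, and not the authors' implementation, the example below wires a receiver out of plug-in stages so that a legacy least-squares channel estimator coexists with a small neural-network demapper; either block can be swapped without touching the other. The network architecture, tensor shapes, and one-tap equalizer are assumptions made only for illustration.

# Illustrative sketch of the modular receiver idea (not the authors' code):
# each stage -- channel estimation, demapping, decoding -- is a plug-in block,
# so a legacy DSP routine and a neural network can be swapped interchangeably.
# The tiny demapper architecture, shapes, and one-tap equalizer are assumptions.

import numpy as np
import torch
import torch.nn as nn

class NNDemapper(nn.Module):
    """Hypothetical NN demapper: maps an equalized symbol (as an I/Q pair)
    to per-bit probabilities, here for 4 bits per symbol (e.g., 16-QAM)."""
    def __init__(self, bits_per_symbol: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, bits_per_symbol), nn.Sigmoid(),
        )

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        return self.net(iq)  # shape: (num_symbols, bits_per_symbol)

def legacy_ls_channel_estimate(rx_pilots, tx_pilots):
    """Classic least-squares channel estimate on pilot subcarriers."""
    return rx_pilots / tx_pilots

class ModularReceiver:
    """Each stage is a callable, so legacy and ML blocks can coexist.
    The decoding stage is omitted here for brevity."""
    def __init__(self, channel_estimator, demapper):
        self.channel_estimator = channel_estimator
        self.demapper = demapper

    def receive(self, rx_pilots, tx_pilots, rx_data):
        h_hat = self.channel_estimator(rx_pilots, tx_pilots)  # channel estimate
        equalized = rx_data / h_hat.mean()                    # crude one-tap equalizer
        iq = torch.tensor(np.stack([equalized.real, equalized.imag], axis=-1),
                          dtype=torch.float32)
        with torch.no_grad():
            bit_probs = self.demapper(iq)
        return (bit_probs > 0.5).int()                        # hard bit decisions

# Swapping the demapper block does not touch the rest of the chain.
# (The untrained network outputs arbitrary bits; training is out of scope here.)
receiver = ModularReceiver(legacy_ls_channel_estimate, NNDemapper())
tx_pilots = np.array([1 + 1j, 1 - 1j])
rx_pilots = 0.9 * tx_pilots                 # toy channel: flat gain of 0.9
rx_data = 0.9 * np.array([0.3 + 0.3j, -0.3 - 0.9j])
print(receiver.receive(rx_pilots, tx_pilots, rx_data))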

Paper Abstract

“Orthogonal Frequency Division Multiplexing (OFDM)-based waveforms are used for communication links in many current and emerging Internet of Things (IoT) applications, including the latest WiFi standards. For such OFDM-based transceivers, many core physical layer functions related to channel estimation, demapping, and decoding are implemented for specific choices of channel types and modulation schemes, among others.

To decouple hard-wired choices from the receiver chain and thereby enhance the flexibility of IoT deployment in many novel scenarios without changing the underlying hardware, we explore a novel, modular Machine Learning (ML)-based receiver chain design. Here, ML blocks replace the individual processing blocks of an OFDM receiver, and we specifically describe this swapping for the legacy channel estimation, symbol demapping, and decoding blocks with Neural Networks (NNs). A unique aspect of this modular design is providing flexible allocation of processing functions to the legacy or ML blocks, allowing them to interchangeably coexist.

Furthermore, we study the implementation cost-benefits of the proposed NNs in resource-constrained IoT devices through pruning and quantization, as well as emulation of these compressed NNs within Field Programmable Gate Arrays (FPGAs). Our evaluations demonstrate that the proposed modular NN-based receiver improves the bit error rate of the traditional non-ML receiver by an average of 61 percent and 10 percent for the simulated and over-the-air datasets, respectively. We further show complexity-performance tradeoffs by presenting computational complexity comparisons between the traditional algorithms and the proposed compressed NNs.”

Source: IEEE Xplore
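
The abstract highlights pruning and quantization as the compression steps that make the NN blocks small enough for FPGA emulation. The sketch below is a generic illustration of that kind of workflow, assuming PyTorch's built-in magnitude pruning and dynamic int8 quantization on a stand-in network; the layer sizes, 80 percent sparsity target, and data types are assumptions and do not reflect the paper's actual pipeline or FPGA toolchain.

# Generic compression sketch (assumed workflow, not the paper's pipeline):
# magnitude-based pruning followed by dynamic int8 quantization of a small
# stand-in network, mirroring the pruning + quantization steps named above.

import io
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small fully connected network standing in for one receiver block.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),
)

# 1) Unstructured magnitude pruning: zero the smallest 80% of weights per layer.
#    Note: this only zeroes values; real memory/latency gains need structured
#    pruning or sparsity-aware hardware such as an FPGA datapath.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")   # bake the zeros into the weight tensor

# 2) Dynamic quantization: store Linear weights as int8 instead of float32.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Rough size comparison of the serialized parameters.
def serialized_bytes(m: nn.Module) -> int:
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print("float32 model:", serialized_bytes(model), "bytes")
print("int8 model:   ", serialized_bytes(quantized), "bytes")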


Associated WIoT Researchers

  • Nasim Soltani

    Ph.D. Student

  • Hai Cheng

    Ph.D. Student

  • Mauro Belgiovine

    Ph.D. Student

  • Yanyu Li

    Ph.D. Student

  • Salvatore D'Oro

    Research Assistant Professor of Electrical and Computer Engineering

  • Tommaso Melodia

    William Lincoln Smith Professor of Electrical and Computer Engineering
    WIoT Institute Director

  • Yanzhi Wang

    Assistant Professor of Electrical and Computer Engineering

  • Kaushik Chowdhury

    Professor of Electrical and Computer Engineering
    Institute Associate Director
