
Enhancing Modulation Classification Techniques with Deep Learning
Explore how advances in deep learning have improved modulation classification techniques, focusing on CNN, ResNet, and CLDNN models. Using the RADIOML 2016.10A dataset, this study examines the sniffer use case, with applications in both military and civil domains.
Presentation Transcript
Improvements to Modulation Classification Techniques using Deep Learning
Team Members: Ankush Jolly, Payam Khorramshahi, and Tabish Saeed
Group Number: 76
Course: ECE 228 (Spring 2020)
Thursday, June 4, 2020
Content
- Background
- Literature Survey
- Why Deep Learning?
- Dataset Used
- Models Used
- Results
- Pending Tasks
- References
Background: Modulation Classification
- Typical communication scenario: the modulation scheme is known between TX and RX.
- Sniffer use case: classification of the modulation scheme without any a priori knowledge.
- Applications: military and civil.
- Constellation diagrams [1]
Literature Survey [2-4]
- Earliest works started in the 1980s.
- Traditional approaches: likelihood-based, feature-based, and artificial-neural-network-based.
- These worked on specific modulation schemes and SNR levels.
Why Deep Learning?
- Advances in model architectures, computing software, and hardware.
- Recent surge in interest from the research community.
- Models with a huge number of learnable weights can outperform traditional methods.
Dataset Used: RADIOML 2016.10A [5]
- 11 modulation types: 8 digital and 3 analog.
- Digital: BPSK, QPSK, 8PSK, QAM16, QAM64, BFSK, CPFSK, and PAM4.
- Analog: WB-FM, AM-SSB, and AM-DSB.
- Data: 128-sample time-domain IQ frames generated with GNU Radio (a loading sketch follows this list).
- 20 different SNR values: -20 dB to 18 dB in steps of 2 dB.
- Keras/TensorFlow used for training the DL models.
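The RADIOML 2016.10A release is distributed as a Python 2 pickle of a dictionary keyed by (modulation, SNR) pairs, each holding IQ frames of shape (1000, 2, 128). A minimal loading sketch, assuming the public file name RML2016.10a_dict.pkl and that layout:

```python
# Sketch: loading the RADIOML 2016.10A pickle. File name and layout are
# assumed from the public DeepSig release: a dict keyed by (modulation, SNR)
# whose values are arrays of shape (1000, 2, 128) -- I/Q rows, 128 samples.
import pickle
import numpy as np

with open("RML2016.10a_dict.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")   # Python 2 pickle

mods = sorted({mod for mod, snr in data.keys()})
snrs = sorted({snr for mod, snr in data.keys()})

X, y, snr_of = [], [], []
for (mod, snr), frames in data.items():
    X.append(frames)                         # (1000, 2, 128) IQ frames
    y += [mods.index(mod)] * len(frames)     # integer class label
    snr_of += [snr] * len(frames)            # keep SNR for per-SNR evaluation

X = np.vstack(X)                             # (220000, 2, 128)
y = np.array(y)
snr_of = np.array(snr_of)
```

Keeping the per-frame SNR alongside the labels makes the per-SNR evaluation used later in the slides straightforward.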
DL Models Implemented
- Convolutional Neural Network (CNN)
- Residual Network (ResNet)
- Convolutional, Long Short-Term Memory, Deep Neural Network (CLDNN)
Robust_CNN() [6]
- Difference from [6]: the number of classes present in the dataset is 11.
- Dropout: 0.3
- Learning rate: 0.0001 (a model sketch follows below)
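A minimal Keras sketch in the spirit of Robust_CNN(); the layer widths and kernel sizes here are illustrative assumptions, not the exact architecture of [6]. Only the settings stated on the slide are taken as given: 11 output classes, dropout 0.3, and Adam with a learning rate of 0.0001.

```python
# Sketch of a Robust_CNN-style classifier; layer widths/kernels are
# illustrative. Only the slide's settings (11 classes, dropout 0.3,
# Adam with lr 1e-4) come from the source.
from tensorflow.keras import layers, models, optimizers

def build_cnn(input_shape=(2, 128, 1), num_classes=11, dropout=0.3):
    model = models.Sequential([
        layers.Conv2D(64, (1, 3), padding="same", activation="relu",
                      input_shape=input_shape),
        layers.Dropout(dropout),
        layers.Conv2D(64, (2, 3), padding="same", activation="relu"),
        layers.Dropout(dropout),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```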
Results
- Loss and accuracy were tracked for both the training and validation sets at SNR = 2 dB.
- The model was run for 100 epochs; training duration ~3 hrs.
- The best model was saved (see the training sketch below).
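A sketch of this training setup, assuming the build_cnn() helper above and X_train/y_train/X_val/y_val splits drawn from the SNR = 2 dB subset (the variable names are placeholders):

```python
# Sketch of the training run: 100 epochs, loss/accuracy tracked on both the
# training and validation sets, best weights kept via ModelCheckpoint.
# X_train, y_train, X_val, y_val are assumed splits of the SNR = 2 dB subset.
from tensorflow.keras.callbacks import ModelCheckpoint

model = build_cnn()
checkpoint = ModelCheckpoint("best_cnn.h5", monitor="val_accuracy",
                             save_best_only=True, verbose=1)
history = model.fit(X_train[..., None], y_train,       # add channel axis
                    validation_data=(X_val[..., None], y_val),
                    epochs=100, batch_size=128,
                    callbacks=[checkpoint])
```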
Results
- Confusion matrices were created to compare model performance for every individual class at SNR = -20, -6, and 2 dB (a computation sketch follows below).
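A sketch of how such per-SNR confusion matrices can be computed with scikit-learn, assuming the test split keeps each frame's SNR in a parallel array snr_test (names are placeholders):

```python
# Sketch: per-SNR confusion matrices with scikit-learn. X_test, y_test and
# the parallel array snr_test (SNR of each test frame) are assumed.
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_at_snr(model, X_test, y_test, snr_test, snr, num_classes=11):
    mask = snr_test == snr
    preds = np.argmax(model.predict(X_test[mask][..., None]), axis=1)
    return confusion_matrix(y_test[mask], preds,
                            labels=list(range(num_classes)))

for snr in (-20, -6, 2):
    print(f"SNR = {snr} dB")
    print(confusion_at_snr(model, X_test, y_test, snr_test, snr))
```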
Modifications to Robust_CNN()
- Batch normalization layers were added after each convolution layer, leading to better and smoother performance (see the block sketch below).
- Training duration ~4.5 hrs.
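A sketch of a convolution block with the added batch normalization; placing it before the activation is an assumption here, not necessarily the project's exact choice.

```python
# Sketch of a convolution block with batch normalization added after the
# convolution (BN-before-activation placement is an assumption).
from tensorflow.keras import layers

def conv_bn_block(x, filters, kernel_size=(1, 3), dropout=0.3):
    x = layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)       # added after each convolution
    x = layers.Activation("relu")(x)
    return layers.Dropout(dropout)(x)
```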
ResNet Model
- Dropout rate of 0.6, batch size of 128.
- Adam optimizer with a learning rate of 0.0001.
- Epochs: 150.
- Model training time ~2 hours for 20 SNR values.
- ResNet architecture follows [7] (a residual-block sketch follows below).
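A minimal residual-block sketch following the skip-connection idea of [7]; the filter counts and kernel sizes are illustrative, not the project's exact architecture.

```python
# Sketch of a residual block in Keras: two convolutions with a skip
# connection, plus a 1x1 convolution on the shortcut when channels differ.
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=(1, 3)):
    shortcut = x
    y = layers.Conv2D(filters, kernel_size, padding="same",
                      activation="relu")(x)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    if shortcut.shape[-1] != filters:        # match channel count if needed
        shortcut = layers.Conv2D(filters, (1, 1), padding="same")(shortcut)
    y = layers.Add()([shortcut, y])          # skip connection
    return layers.Activation("relu")(y)
```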
ResNet Model Results
ResNet Confusion Matrix Results
CLDNN Model
- 1 LSTM memory unit.
- Dropout rate of 0.6, batch size of 128.
- Adam optimizer with a learning rate of 0.0001.
- Epochs: 300.
- Model training time ~3-4 hours for 20 SNR values.
- Modifications: added a max pooling layer, changed the dropout rate to 0.3 (see the sketch below).
- CLDNN architecture follows [7].
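A minimal CLDNN sketch after [7]: a convolutional front end, the added max pooling, a single LSTM over the resulting sequence, and dense layers. The layer widths, LSTM size (50 here), and reshape strategy are assumptions; only the hyper-parameters stated on the slide are taken as given.

```python
# Sketch of a CLDNN: convolutional front end, added max pooling,
# a single LSTM layer over the sequence axis, then dense layers.
from tensorflow.keras import layers, models, optimizers

def build_cldnn(input_shape=(2, 128, 1), num_classes=11, dropout=0.3):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, (1, 3), padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)    # added max pooling layer
    x = layers.Dropout(dropout)(x)
    # Merge the I/Q and time axes into one sequence; conv channels become features.
    x = layers.Reshape((x.shape[1] * x.shape[2], x.shape[3]))(x)
    x = layers.LSTM(50)(x)                          # single LSTM layer
    x = layers.Dropout(dropout)(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```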
CLDNN Model Results
CLDNN Confusion Matrix Results
Pending Tasks
- Run CLDNN and ResNet on a relatively bigger dataset (RadioML 2016.10B).
- If time permits, implement one more model: Densely Connected Networks (DenseNet).
- Apply different dropout rates to the Robust_CNN model to check the regularization effect.
- Modify the current models to improve accuracy based on the TAs' suggestions.
References
[1] https://www.nuwaves.com/constellation-diagrams/
[2] A. K. Nandi and E. E. Azzouz, "Modulation recognition using artificial neural networks," Signal Processing, 56(2):165-175, January 1997.
[3] J. A. Sills, "Maximum-likelihood modulation classification for PSK/QAM," MILCOM 1999. IEEE Military Communications Conference Proceedings (Cat. No. 99CH36341), 1:217-220, 1999.
[4] S. S. Soliman and S.-Z. Hsue, "Signal classification using statistical moments," IEEE Transactions on Communications, 40(5):908-916, May 1992.
[5] https://www.deepsig.ai/datasets
[6] K. Tekbıyık et al., "Robust and Fast Automatic Modulation Classification with CNN under Multipath Fading Channels," arXiv:1911.04970, 2019.
[7] X. Liu, D. Yang, and A. E. Gamal, "Deep neural network architectures for modulation classification," 2017 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, 2017, pp. 915-919.
Thank You!! :)
Let's have a code walkthrough.
GitHub Link: https://github.com/pkhorram/Optimizing-Modulation-Classification-with-Deep-Learning