Installation and Training

An introductory deep learning model creation framework for radio frequency (RF) signals on the Artificial Intelligence Radio Transceiver (AIR-T).

AirPack contains everything you need, including TensorFlow source code, to walk through the crucial steps of training a simple convolutional neural network (CNN) to detect and classify RF signals. It provides a complete framework for training a neural network on RF signal data. Because AirPack ships with a custom Docker environment, the hassle of installing all of the complicated drivers, files, and toolboxes is eliminated. Once installed, you will have your first trained model in less than an hour. Running the AirPack source code produces an RF signal classifier neural network that is deployable on the AIR-T software-defined radio. AirPack users can shave months off the learning and engineering development cycle, reducing labor cost and speeding time to market.


This software package is provided by Deepwave Digital, Inc.



Requirements

  • Computer running Linux
  • An NVIDIA GPU card with CUDA Compute Capability 3.5+
  • At least 5.6 GB of free disk space
    • Docker image - 4.91 GB
    • AirPack and training data - 0.69 GB
  • Docker - Follow the procedure in the Additional Procedures section below for installation
  • NVIDIA Docker Support - Follow the procedure in the Additional Procedures section below for installation
  • Internet connection for the initial installation
    • Contact us if deployment is required without an internet connection. Deepwave Digital can provide the necessary files on an installation disk.
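The free-space requirement can be checked before building. Below is a minimal Python sketch; the 5.6 GB figure comes from the component sizes listed above, and the path to check is a placeholder for wherever you plan to store the Docker image and data:

```python
import shutil

# Sum of the sizes listed above: Docker image (4.91 GB) + AirPack data (0.69 GB)
REQUIRED_GB = 5.6

def has_free_space(path=".", required_gb=REQUIRED_GB):
    """Return True if the filesystem holding `path` has enough free space."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb

print("Enough free space:", has_free_space("."))
```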

AirPack Contents

  • AirPack/DockerFile - Docker file used to create the NVIDIA and TensorFlow training environment

  • AirPack/airpack/ - Python class that uses the optimized TensorFlow input data pipeline to read from the binary data files. This allows you to easily achieve peak throughput performance with an input pipeline that delivers data for the next training step before the current step has finished.

  • AirPack/airpack/ - A simple convolutional neural network (CNN) written in Python and TensorFlow that accurately classifies RF signals and is deployable on the AIR-T. Because all parameters of the model are exposed as variables, the model may be easily modified.

  • AirPack/data - location to place training and test data set

    • Training data should be placed in an AirPack/data/train subfolder
    • Test data should be placed in an AirPack/data/test subfolder
  • AirPack Data Set - Training and inference data set containing:

    • 11 generic radar signal types for classification
      • Signal data is synthetically generated
    • SNR varied from -5 dB to 20 dB in 1 dB increments for every signal
    • Randomized phase, timing, and frequency
    • 78,000 unique signal files for training including receiver background noise
    • 7,800 unique signal files for testing including receiver background noise
  • AirPack/test/ - Python script that:

    • Initializes the fileio.DataReader class for streamlined handling of the training data
    • Defines the CNN model, loss, optimization, and accuracy functions
    • Trains the model and periodically tests and prints the accuracy
    • Creates a UFF file representation of the trained neural network for deployment on the AIR-T
  • AirPack/test/ - Python script that runs the trained neural network classifier against the test data set and produces an output plot. This script is to be executed on the training computer, not the AIR-T.

  • AirPack/deploy - Instructions along with all the tools needed to deploy the UFF file on the AIR-T.
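To make the data set description above concrete, here is a NumPy sketch of how a single record with randomized phase and frequency at a controlled SNR might be generated. This is not Deepwave's generator; the record length, normalized frequency range, and interleaved I/Q float32 layout are assumptions for illustration only:

```python
import numpy as np

def synth_signal(n_samples=4096, snr_db=10.0, rng=None):
    """Generate a complex tone with random phase and frequency plus noise at snr_db.

    Returns interleaved I/Q float32 samples (an assumed on-disk layout).
    """
    rng = np.random.default_rng() if rng is None else rng
    freq = rng.uniform(-0.4, 0.4)        # normalized frequency, cycles/sample
    phase = rng.uniform(0, 2 * np.pi)    # random starting phase
    n = np.arange(n_samples)
    tone = np.exp(1j * (2 * np.pi * freq * n + phase))  # unit-power signal
    noise_power = 10 ** (-snr_db / 10)   # signal power is 1, so SNR sets noise
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_samples)
                                        + 1j * rng.standard_normal(n_samples))
    x = tone + noise
    iq = np.empty(2 * n_samples, dtype=np.float32)  # interleave I and Q
    iq[0::2], iq[1::2] = x.real, x.imag
    return iq
```

Sweeping snr_db from -5 to 20 in 1 dB steps for each signal type, as described above, would reproduce the structure of the data set, though not its actual waveforms.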

AirPack Installation Procedure

  • Make sure all of the Requirements above are satisfied
  • Install the AirPack Docker container via the following commands:

    $ cd <path_to_AirPack>
    $ docker build -t airpack .

  • Verify AirPack docker installation

    • Start the docker container:

      $ docker run -it --gpus all airpack

    • Verify that the GPU for training is accessible:

      $ lspci | grep NVIDIA

      and make sure you see a GPU present. For example:

$ lspci | grep NVIDIA
03:00.0 VGA compatible controller: NVIDIA Corporation GP100GL [Quadro GP100]
03:00.1 Audio device: NVIDIA Corporation Device 0fb1 (rev a1)

Executing the Package Demonstration

Start AirPack Docker Container

Note: the AirPack directory is not contained within the Docker image. It must be mounted when the container is started via the -v option. This also makes the code and the training output accessible from the host machine. See below for details.

  • To start the airpack docker container:

    $ docker run -it -v <path_to_AirPack>:/home/deepwave/AirPack --gpus all airpack

  • After executing this command, you are in a Linux environment within the Docker container. If you are unfamiliar with Docker, a container behaves much like a lightweight virtual machine.

Train the Model on the Data

  • Run the training script

    $ cd AirPack/test
    $ python3

  • The script will periodically display a terminal output similar to the following:

$ python3
(0 of 6094): Training Loss = 2.494922, Testing Accuracy = 0.109375
(100 of 6094): Training Loss = 1.590902, Testing Accuracy = 0.445312
(200 of 6094): Training Loss = 0.962753, Testing Accuracy = 0.664062
(300 of 6094): Training Loss = 0.617013, Testing Accuracy = 0.812500
(400 of 6094): Training Loss = 0.499497, Testing Accuracy = 0.773438
(500 of 6094): Training Loss = 0.317061, Testing Accuracy = 0.890625
(600 of 6094): Training Loss = 0.381197, Testing Accuracy = 0.867188
(700 of 6094): Training Loss = 0.347956, Testing Accuracy = 0.843750
(800 of 6094): Training Loss = 0.464664, Testing Accuracy = 0.796875
(900 of 6094): Training Loss = 0.384519, Testing Accuracy = 0.820312
(5800 of 6094): Training Loss = 0.025528, Testing Accuracy = 0.968750
(5900 of 6094): Training Loss = 0.068839, Testing Accuracy = 0.960938
(6000 of 6094): Training Loss = 0.017364, Testing Accuracy = 0.975000
(6100 of 6094): Training Loss = 0.046915, Testing Accuracy = 0.976562
  • Once the script has completed the training iterations, it will produce multiple files in the AirPack/data/output directory including the following:
    • checkpoint - Checkpoint file that defines the location of the saved model files
    • saved_model.meta - saved model file that contains the graph and protocol buffer
    • saved_model.uff - File that will be used for deployment on the AIR-T using TensorRT
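The control flow of the training run above (one optimizer step per iteration, with a periodic test pass and printout) can be sketched as follows. This is a schematic only; fake update and accuracy stand-ins replace the real TensorFlow session calls so that the loop structure itself is runnable:

```python
import numpy as np

def train_loop(num_steps=600, eval_every=100, rng=None):
    """Schematic of the training loop: step, then periodically test and print.

    The real script runs TensorFlow ops; here the loss update and accuracy
    are synthetic stand-ins that merely mimic the printed output format.
    """
    rng = np.random.default_rng() if rng is None else rng
    history = []
    loss = 2.5
    for step in range(num_steps + 1):
        loss *= 0.995 + 0.005 * rng.random()   # stand-in for one optimizer step
        if step % eval_every == 0:
            accuracy = 1.0 - loss / 2.5        # stand-in for a test-batch pass
            print(f"({step} of {num_steps}): Training Loss = {loss:.6f}, "
                  f"Testing Accuracy = {accuracy:.6f}")
            history.append((step, loss, accuracy))
    return history
```

In the real script, the run ends by writing the checkpoint and UFF files listed above.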

Perform Inference with Trained Model

  • You may use the script to evaluate the performance of the model and plot the result.
  • Run the inference script

    $ cd AirPack/test
    $ python3

  • Running this script produces an image file, AirPack/data/output/test_output.png, showing the inference performance for each signal type.
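The per-signal-type performance in that plot boils down to a per-class accuracy tally over the test set. Below is a minimal NumPy sketch of that computation; the function name and example class count are illustrative, not taken from the AirPack source:

```python
import numpy as np

def per_class_accuracy(labels, predictions, num_classes):
    """Fraction of correct predictions for each class label 0..num_classes-1."""
    labels = np.asarray(labels)
    predictions = np.asarray(predictions)
    acc = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c                     # test examples of this class
        acc[c] = np.mean(predictions[mask] == c) if mask.any() else np.nan
    return acc
```

Given the true labels and the classifier's argmax predictions for the test set, plotting this array per class (e.g. with matplotlib) yields a figure like test_output.png.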

Next Steps

Deployment on the AIR-T

The AirPack data set is designed to help engineers get started with classifying RF signals using deep learning. Because the data set is synthetically generated, it may not provide the performance necessary for deployed operation. Customers are advised to modify the code with improved channel models or to leverage their own application-specific data.

Deepwave provides all of the tools necessary to deploy the trained model using either our GR-Wavelearner module for GNU Radio or our built-in SoapyAIRT drivers.

Code Modifications

Now that you have a fully functional end-to-end framework for training and deploying a deep learning signal classification algorithm, the source code is yours to tailor, modify, and improve to fit your custom application.

When adding new layers to your neural network, make sure to confirm that they are supported by the TensorRT deployment framework. You may find a list of supported layers in the TensorFlow section of the TensorRT List of Supported Ops.

Additional Procedures

Install Docker:

This section is only needed if you do not already have Docker working on your system.

The official instructions for installing Docker are available from Docker's documentation.

  • Uninstall old versions:
$ sudo apt-get remove docker docker-engine containerd runc
  • Install Requirements
$ sudo apt install apt-transport-https ca-certificates curl \
                   gnupg-agent software-properties-common
  • Add Docker’s official GPG key:
$ curl -fsSL | sudo apt-key add -
  • Add Docker Repository and update:
$ sudo add-apt-repository "deb [arch=amd64] \
     $(lsb_release -cs) stable"
$ sudo apt update
  • Install Docker packages:
$ sudo apt-get install docker-ce docker-ce-cli
  • (Optional) Allow non-sudo calls to docker:
$ sudo usermod -aG docker $USER
  • Verify the Docker installation. The following command
$ docker run hello-world

should return:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

Install NVIDIA Docker Support

This is only needed if you do not already have Docker working with an NVIDIA GPU on your system.

The NVIDIA Container Toolkit is required to run GPU-accelerated Docker containers. The GitHub repository contains the latest installation information, which is repeated in the procedure below.

  • Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L | sudo apt-key add -
$ curl -s -L$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
  • Install the packages:
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
  • Restart Docker
$ sudo systemctl restart docker

Copyright (C) 2019 Deepwave Digital, Inc - All Rights Reserved. You may use, distribute, and modify this code under the terms of the DEEPWAVE DIGITAL SOFTWARE SOURCE CODE TERMS OF USE, which is provided with the code. If a copy of the license was not received, please write to:

[email protected]


Deepwave Digital, Inc

1420 Walnut St, Suite 817

Philadelphia, PA 19102

Last update: October 8, 2020