Installation and Training¶
An introductory deep learning model creation framework for radio frequency (RF) signals on the Artificial Intelligence Radio Transceiver (AIR-T).
AirPack contains everything you need, including TensorFlow source code, to walk you through the crucial steps of training a simple convolutional neural network (CNN) to detect and classify RF signals. AirPack provides a complete framework for training a neural network on RF signal data. By shipping a custom Docker environment with AirPack, the hassle of installing all of the complicated drivers, files, and toolboxes is eliminated. Once installed, you will have your first trained model in less than an hour. Running the AirPack source code produces an RF signal classifier neural network that is deployable on the AIR-T software defined radio. Users of AirPack will shave months off of the learning and engineering development cycle, leading to reduced labor cost and faster time to market.
This software package is provided by Deepwave Digital, Inc. www.deepwavedigital.com.
- Please fill out a Customer Support Request
- Note: You must have a Deepwave Digital Customer Account
- Computer running Linux
- An NVIDIA GPU card with CUDA Compute Capability 3.5+
- See the List of CUDA-enabled GPU cards
- At least 5.6 GBytes of free space
- Docker image - 4.91 GB
- AirPack and training data - 0.69 GB
- Docker - Follow the procedure in the Additional Procedures section below for installation
- NVIDIA Docker Support - Follow the procedure in the Additional Procedures section below for installation
- Internet connection for the initial installation
- Contact us if deployment is required without an internet connection. Deepwave Digital can provide the necessary files on an installation disk.
AirPack/DockerFile - Docker file used to create the NVIDIA and TensorFlow training environment
AirPack/airpack/fileio.py - Python class that utilizes tensorflow.data, the optimized TensorFlow input data pipeline, for reading the binary data files. This allows you to easily achieve peak throughput performance with a data input pipeline that delivers data for the next training step before the current training step has finished.
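The binary-file reading step can be sketched without TensorFlow. The interleaved little-endian int16 I/Q layout below is an assumption for illustration only; AirPack's fileio.py defines the actual on-disk format and wraps the reading in a tensorflow.data pipeline with prefetching:

```python
import os
import struct
import tempfile

def read_iq_file(path):
    """Read a binary file of interleaved int16 I/Q samples and return two
    tuples: in-phase and quadrature components. The little-endian int16
    format is a common convention assumed here for illustration; consult
    AirPack's fileio.py for the actual layout."""
    with open(path, "rb") as f:
        raw = f.read()
    n = len(raw) // 2                       # number of int16 values
    samples = struct.unpack("<%dh" % n, raw[: n * 2])
    i_samples = samples[0::2]               # even indices: in-phase
    q_samples = samples[1::2]               # odd indices: quadrature
    return i_samples, q_samples

# Write a tiny synthetic file and read it back.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(struct.pack("<6h", 1, -1, 2, -2, 3, -3))
    path = tmp.name
i, q = read_iq_file(path)
os.remove(path)
print(i, q)  # (1, 2, 3) (-1, -2, -3)
```

In the real pipeline, tensorflow.data performs this decoding in parallel and prefetches the next batch while the GPU trains on the current one.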
AirPack/airpack/model.py - A simple convolutional neural network (CNN) written in Python and TensorFlow that accurately classifies RF signals and is deployable on the AIR-T. Because all of the model's parameters are brought out as variables, the model may be easily modified.
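The shape of such a model can be illustrated with a schematic NumPy forward pass. This is not AirPack's model: the layer sizes and kernel counts below are hypothetical placeholders, and only the class count (11 signal types) comes from this document. It shows the conv → ReLU → pool → dense flow and the convention of exposing hyperparameters as variables:

```python
import numpy as np

# Hypothetical hyperparameters, exposed as variables in the spirit of
# AirPack's model.py (the actual values there will differ).
NUM_CLASSES = 11      # signal types in the AirPack data set
KERNEL_SIZE = 8
NUM_FILTERS = 4

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution: x is (n,), kernels is (filters, k)."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T                      # (n - k + 1, filters)

def forward(signal, kernels, weights):
    h = np.maximum(conv1d(signal, kernels), 0.0)    # conv + ReLU
    h = h.mean(axis=0)                              # global average pool
    logits = h @ weights                            # dense output layer
    return logits

signal = rng.standard_normal(128)
kernels = rng.standard_normal((NUM_FILTERS, KERNEL_SIZE))
weights = rng.standard_normal((NUM_FILTERS, NUM_CLASSES))
logits = forward(signal, kernels, weights)
print(logits.shape)  # (11,)
```

Keeping every size in a named variable, as model.py does, lets you change the architecture without touching the layer wiring.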
AirPack/data - location to place training and test data set
- Training data should be placed in an AirPack/data/train subfolder
- Test data should be placed in an AirPack/data/test subfolder
AirPack Data Set - Training and inference data set containing:
- 11 generic radar signal types for classification
- Signal data is synthetically generated
- SNR varied between -5 and 20 dB in 1 dB increments for every signal
- Randomized phase, timing, and frequency
- 78,000 unique signal files for training including receiver background noise
- 7,800 unique signal files for testing including receiver background noise
AirPack/test/run_training.py - Python script that:
- Initializes the fileio.DataReader class for streamlined handling of the training data
- Defines the CNN model, loss, optimization, accuracy functions
- Trains the model and tests/prints the accuracy periodically
- Creates UFF file representation of trained neural network for deployment on the AIR-T
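The loop structure described above can be illustrated with a TensorFlow-free toy. Everything here is a placeholder (a quadratic stands in for the model's loss, and the step counts are arbitrary), not AirPack's code; it only shows the train/periodically-evaluate pattern behind run_training.py's printouts:

```python
# Toy stand-in: minimize a quadratic "loss" so the loop structure is
# runnable without TensorFlow or training data.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w, lr, steps = 0.0, 0.05, 500
history = []
for step in range(steps):
    w -= lr * grad(w)                # optimizer update on each batch
    if step % 100 == 0:              # periodic evaluation, as in
        history.append(loss(w))      # run_training.py's terminal output
        print("(%d of %d): Training Loss = %.6f" % (step, steps, loss(w)))
```

In the real script the update is performed by a TensorFlow optimizer and the periodic evaluation also reports testing accuracy.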
AirPack/test/run_inference.py - Python script that runs the trained neural network classifier against the test data set and produces an output plot as shown below. This is to be executed on the training computer, not the AIR-T.
AirPack/deploy - Instructions along with all the tools needed to deploy the UFF file on the AIR-T.
AirPack Installation Procedure¶
- Make sure all of the Requirements above are satisfied
Install the AirPack Docker container via the following commands:
$ cd <path_to_AirPack>
$ docker build -t airpack .
Verify AirPack docker installation
Start the docker container:
$ docker run -it --gpus all airpack
Verify that the GPU for training is accessible:
$ lspci | grep NVIDIA
and make sure you see a GPU present. For example:
$ lspci | grep NVIDIA
03:00.0 VGA compatible controller: NVIDIA Corporation GP100GL [Quadro GP100]
03:00.1 Audio device: NVIDIA Corporation Device 0fb1 (rev a1)
Executing the Package Demonstration¶
Start AirPack Docker Container
Note: the AirPack directory is not contained within the docker image. It must be mounted when the container is started via the -v option. This also allows the code and the output of training to be accessible by the host machine. See below for details.
To start the container, run:
$ docker run -it -v <path_to_AirPack>:/home/deepwave/AirPack --gpus all airpack
After executing this command you are in a Linux environment within the Docker container. If you are unfamiliar with Docker, it is very similar to a virtual machine.
Train the Model on the Data
Run the training script
$ cd AirPack/test
$ python3 run_training.py
The script will periodically display a terminal output similar to the following:
$ python3 run_training.py
...
(0 of 6094): Training Loss = 2.494922, Testing Accuracy = 0.109375
(100 of 6094): Training Loss = 1.590902, Testing Accuracy = 0.445312
(200 of 6094): Training Loss = 0.962753, Testing Accuracy = 0.664062
(300 of 6094): Training Loss = 0.617013, Testing Accuracy = 0.812500
(400 of 6094): Training Loss = 0.499497, Testing Accuracy = 0.773438
(500 of 6094): Training Loss = 0.317061, Testing Accuracy = 0.890625
(600 of 6094): Training Loss = 0.381197, Testing Accuracy = 0.867188
(700 of 6094): Training Loss = 0.347956, Testing Accuracy = 0.843750
(800 of 6094): Training Loss = 0.464664, Testing Accuracy = 0.796875
(900 of 6094): Training Loss = 0.384519, Testing Accuracy = 0.820312
...
...
(5800 of 6094): Training Loss = 0.025528, Testing Accuracy = 0.968750
(5900 of 6094): Training Loss = 0.068839, Testing Accuracy = 0.960938
(6000 of 6094): Training Loss = 0.017364, Testing Accuracy = 0.975000
(6100 of 6094): Training Loss = 0.046915, Testing Accuracy = 0.976562
- Once the script has completed the training iterations, it will produce multiple files in the AirPack/data/output directory, including the following:
- checkpoint - Checkpoint file that defines the location of the saved model files
- saved_model.meta - saved model file that contains the graph and protocol buffer
- saved_model.uff - File that will be used for deployment on the AIR-T using TensorRT
Perform Inference with Trained Model
- You may use the run_inference.py script to evaluate the performance of the model and plot the result.
Run the inference script
$ cd AirPack/test
$ python3 run_inference.py
Running this script will produce an image file at AirPack/data/output/test_output.png demonstrating the inference performance for each signal type.
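A per-signal-type breakdown like the one plotted can be computed with logic along these lines. The class names below are made up for illustration; the actual classes and plot come from run_inference.py:

```python
from collections import defaultdict

def per_class_accuracy(labels, predictions):
    """Fraction correct for each signal class, given parallel sequences
    of true labels and predicted labels."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p in zip(labels, predictions):
        total[y] += 1
        if y == p:
            correct[y] += 1
    return {c: correct[c] / total[c] for c in total}

# Hypothetical labels/predictions for three made-up signal classes.
labels      = ["pulsed", "pulsed", "chirp", "chirp", "cw", "cw"]
predictions = ["pulsed", "chirp",  "chirp", "chirp", "cw", "cw"]
print(per_class_accuracy(labels, predictions))
# {'pulsed': 0.5, 'chirp': 1.0, 'cw': 1.0}
```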
Deployment on the AIR-T
The AirPack data set is designed to help engineers get started with classifying RF signals using deep learning. Because the data set is synthetically generated, it may not have the performance necessary for deployed operation. Customers are advised to modify the code with improved channel models or to leverage their own application-specific data.
Deepwave provides all of the tools necessary for deploying the trained model using either our GR-Wavelearner module for GNU Radio, or using our built-in SoapyAIRT drivers.
Now that you have a fully functional end-to-end framework for training and deploying a deep learning signal classification algorithm, the source code is yours to tailor, modify, and improve to fit your custom application.
When adding new layers to your neural network, make sure to confirm that they are supported by the TensorRT deployment framework. You may find a list of supported layers in the TensorFlow section of the TensorRT List of Supported Ops.
This section is only needed if you do not already have Docker working on your system.
- Uninstall old versions:
$ sudo apt-get remove docker docker-engine docker.io containerd runc
- Install Requirements
$ sudo apt install apt-transport-https ca-certificates curl \
    gnupg-agent software-properties-common
- Add Docker’s official GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Add Docker Repository and update:
$ sudo add-apt-repository "deb [arch=amd64] \
    https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt update
- Install Docker packages:
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
- (Optional) Allow non-sudo calls to docker:
$ sudo usermod -aG docker $USER
- Verify the Docker installation by running the following command:
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
Install NVIDIA Docker Support
This is only needed if you do not already have Docker working with an NVIDIA GPU on your system.
The NVIDIA Container Toolkit is required to run GPU-accelerated Docker containers. The GitHub repo contains the latest installation information, which is repeated in the procedure below.
- Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
- Install the packages:
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
- Restart Docker
$ sudo systemctl restart docker
Deepwave Digital, Inc
1430 Walnut St, Suite 313
Philadelphia, PA 19102