Deploy on AIR-T

Author

This software is written by Deepwave Digital, Inc. (www.deepwavedigital.com).

For inquiries, fill out a Customer Support Request. Note: you must have a Deepwave Digital Customer Account.

Application Notes

  • While the trained neural network model that results from AirPack may be deployed on the AIR-T, the use of synthetic training data will limit its functionality in real-world operation. AirPack is a software starting point intended to reduce development time, not a final solution. Contact us for more details on making the model robust to real-world effects.

  • A PLAN file is an optimized neural network for inference execution. These files are platform specific and must therefore be created on the architecture on which they will be executed, e.g., the AIR-T.

  • AirStack v0.1 is based on NVIDIA JetPack version 3.3, which does not have Python support for creating a PLAN file using TensorRT. Therefore, we provide a custom module, TRT Plan From UFF, written in C++ to help you convert your UFF file to a PLAN file on the AIR-T. This software tool is not needed for AirStack 0.2.0+ and may be deprecated in future releases.

 

Create the PLAN Inference File

The method of creating the .plan file will depend on the version of AirStack that your AIR-T is running.

AirStack v0.2.0+

  • Copy the AirPack output model, saved_model.uff, from the training computer to your home directory on the AIR-T.

  • Create the saved_model.plan file by executing the uff2plan.py script, which is included with both AirPack and GR-Wavelearner. This creates an optimized neural network for inference on the AIR-T, saved_model.plan, that will be executed in GR-Wavelearner. A rough sketch of what such a conversion involves is shown below.
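
The uff2plan.py script handles this conversion for you, but as a reference, below is a minimal sketch of what a UFF-to-PLAN conversion looks like with the TensorRT Python API available in AirStack 0.2.0+. The input node name, dimensions, and batch size are taken from the make_trt_plan example later in this document; the output node name (output_node) is a placeholder that you would replace with the output layer name of your own model, and the workspace size is an arbitrary choice.

#!/usr/bin/env python
# Minimal sketch of a UFF-to-PLAN conversion using the TensorRT Python API.
# Node names and dimensions are illustrative; in particular, 'output_node'
# is a placeholder and must match the output layer name of your model.

import tensorrt as trt

UFF_FILE = 'saved_model.uff'
PLAN_FILE = 'saved_model.plan'
INPUT_NODE = 'input/IteratorGetNext'  # input node name used elsewhere in this guide
INPUT_SHAPE = (1, 1, 4096)            # (channels, height, width)
OUTPUT_NODE = 'output_node'           # placeholder output node name
MAX_BATCH_SIZE = 256

logger = trt.Logger(trt.Logger.INFO)

with trt.Builder(logger) as builder, builder.create_network() as network, \
        trt.UffParser() as parser:
    # Register the model's input and output tensors, then parse the UFF file
    parser.register_input(INPUT_NODE, INPUT_SHAPE)
    parser.register_output(OUTPUT_NODE)
    parser.parse(UFF_FILE, network)

    # Build the optimized inference engine
    builder.max_batch_size = MAX_BATCH_SIZE
    builder.max_workspace_size = 1 << 30  # 1 GB of scratch memory
    engine = builder.build_cuda_engine(network)

    # Serialize the engine to a PLAN file for use with GR-Wavelearner
    with open(PLAN_FILE, 'wb') as f:
        f.write(engine.serialize())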

AirStack v0.1

  • Build the TRT Plan from UFF software tool:
    • $ cd <path-to-trt_plan_from_uff>
    • $ mkdir bin; mkdir obj; make
    • Install the tool via: $ sudo cp bin/make_trt_plan /usr/local/bin
    • You can see the required input arguments using the -h flag:
deepwave@air-t:~$ make_trt_plan -h
TensorRT PLAN Creation Command Line Options:
  -h [ --help ]                        Print Help Message
  -u [ --uff ] arg                     Path to UFF Input File
  -p [ --plan ] arg                    Path to PLAN Output File
  -i [ --input_node ] arg              Name of Input Node
  -C [ --input_channels ] arg          Input Channel Dimension
  -H [ --input_height ] arg            Input Height Dimension
  -W [ --input_width ] arg             Input Width Dimension
  -N [ --max_batch_size ] arg (=1)     Max Batch Size for Engine
  -w [ --max_workspace_size ] arg (=0) Max Engine Scratch Memory (in Bytes)
  -t [ --data_type ] arg (=float32)    Data Type Used by Engine
  • Copy the AirPack output model, saved_model.uff, from the training computer to your home directory on the AIR-T.

  • Create the saved_model.plan file by executing the following command:

$ cd ~/
$ make_trt_plan -u "saved_model.uff" \
                -p "saved_model.plan" \
                -i "input/IteratorGetNext" \
                -C "1" \
                -H "1" \
                -W "4096" \
                -N "256"

This will create an optimized neural network for inference on the AIR-T called saved_model.plan that will be executed in GR-Wavelearner.

 

Execute Neural Network with GR-Wavelearner

  • Update GR-Wavelearner using this Tutorial
  • Open GNU Radio Companion
  • Choose File -> Open
  • Select the example GRC Classifier Test file:

    • AirStack 0.2.0+ - /usr/local/src/gr-wavelearner/examples/classifier_test.grc
    • AirStack 0.1 - /usr/local/src/deepwave/gr-wavelearner/examples/classifier_test.grc
  • Make sure that the saved_model.plan file is in your home directory. If not, point the Inference block to the file.

  • Set the location for the output file. By default, the inference output will be written to a file in the home directory. Because the AIR-T only has 32 GB of disk space, best practice is to point the File Sink block to an external drive.
  • Make sure that the variables in the flowgraph are set according to your saved_model.plan file; the sketch below shows one way to check the dimensions stored in the PLAN file.
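
If you are not sure which dimensions were baked into your PLAN file, the short sketch below (assuming AirStack 0.2.0+, where the TensorRT Python bindings are available) deserializes the engine and prints its maximum batch size and binding shapes so that the flowgraph variables can be set to match.

import tensorrt as trt

PLAN_FILE = 'saved_model.plan'
logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine and report the dimensions it was built with
with open(PLAN_FILE, 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    print('Max batch size: {}'.format(engine.max_batch_size))
    for binding in engine:
        direction = 'input' if engine.binding_is_input(binding) else 'output'
        print('{} binding {}: shape {}'.format(
            direction, binding, engine.get_binding_shape(binding)))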

  • Execute the flowgraph by pressing the green play button. You should see output similar to the following; once the last line is printed, the inference output is being recorded to the file:

Generating: '/usr/local/src/deepwave/gr-wavelearner/examples/classifier_test.py'

Executing: /usr/bin/python -u /usr/local/src/deepwave/gr-wavelearner/examples/classifier_test.py

linux; GNU C++ version 5.3.1 20151219; Boost_105800; UHD_003.009.002-0-unknown

TensorRT INFO: Glob Size is 1075760 bytes.
TensorRT INFO: Added linear block of size 33095680
TensorRT INFO: Added linear block of size 4194304
TensorRT INFO: Deserialize required 1875533 microseconds.
  • Stop the flowgraph by pressing the red stop button.
  • Examine the output file to see what inference was performed. You may use the following Python code to display the contents of the file; a short follow-up sketch for reducing each row to a predicted class appears after the example output below.
#!/usr/bin/env python

import numpy as np

# Inference output file written by the File Sink block in the flowgraph
input_file = 'output.bin'
# Number of output classes produced by the neural network
n_classes = 12

# Read the raw float32 inference results and reshape to one row per inference
data_arr = np.fromfile(input_file, np.float32)
data_mat = data_arr.reshape(-1, n_classes)

# Print each row of class outputs with two decimal places
for row in data_mat:
    print_str = '['
    for val in row:
        print_str = print_str + ' {:0.2f}'.format(val)
    print_str = print_str + ']'
    print(print_str)

print('Data file is {:d} x {:d}'.format(data_mat.shape[0], data_mat.shape[1]))

This should produce output similar to the following:

   ...
   [ 0.94 0.00 0.00 0.00 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.94 0.00 0.00 0.00 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.94 0.00 0.00 0.00 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.94 0.00 0.00 0.00 0.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.97 0.00 0.00 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.98 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.98 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   [ 0.98 0.00 0.00 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 0.00]
   Data file is 768 x 12
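
As a follow-up to the display script above, the sketch below reduces each row of the output file to a single predicted class index using numpy's argmax and summarizes how often each class was predicted. It assumes the same file name and class count as the script above, and that each row is the network's output vector over the 12 classes, as in the example output shown.

#!/usr/bin/env python

import numpy as np

input_file = 'output.bin'
n_classes = 12

# Load the recorded inference results, one row per inference
data_mat = np.fromfile(input_file, np.float32).reshape(-1, n_classes)

# The predicted class for each inference is the index of the largest output
predicted = np.argmax(data_mat, axis=1)

# Summarize how often each class was predicted
for cls, count in zip(*np.unique(predicted, return_counts=True)):
    print('Class {:2d}: {:d} inferences'.format(cls, count))
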
  • You have now completed the full development cycle! You should be able to obtain your own training data, configure the AirPack source code, and build a robust model for your application.

 

Credits and License

The TRT Plan from UFF software tool is designed and written by Deepwave Digital, Inc. and is licensed under the GNU General Public License. Copyright notices appear at the top of the source files.

GR-Wavelearner is designed and written by Deepwave Digital, Inc. and is licensed under the GNU General Public License. Copyright notices appear at the top of the source files.

AirPack is licensed under the Deepwave Digital, Inc. Software Source Code Terms of Use. If a copy of the license was not received, please write to:

support@deepwavedigital.com

or

Deepwave Digital, Inc.

1430 Walnut St, Suite 313

Philadelphia, PA 19102


Last update: April 15, 2020