Make Your Nvidia Jetson Nano Deep Learning Ready!

Nvidia is one of the companies democratizing Machine Learning (ML) at the edge. This really excites me as an IoT engineer who wants to add ML capabilities to my applications, e.g. monitoring an industrial plant to detect people in forbidden areas.

Nvidia has released the Jetson and Xavier developer kits, which are great for people who want to build really cool computer vision projects: self-driving cars, robots, smart cameras and whatnot. The possibilities are endless. A huge shout-out to the pyimagesearch blog, which has provided detailed setup instructions over the years to get embedded boards like the Raspberry Pi and Jetson Nano ready for machine learning and computer vision projects. In my opinion, such tutorials really make developers' lives easier, as they have a guide to follow to get their boards ready. This blog is also inspired by pyimagesearch, with some improvisations of my own.

Recently Nvidia launched a 2 GB variant of the Jetson Nano, which has enticed developers to buy the kit and get their hands dirty building cool projects. However, if the development environment is not set up correctly, the journey can be really frustrating and discouraging.

In this article, I focus on how to get the Jetson Nano developer kit ready for computer vision and deep learning AI projects.

This tutorial is a long one and will definitely take time to follow along. But I will say it's worth the effort and will give you a smooth start to your next AI project.

We will be installing OpenCV 4.5, TensorFlow 2.3, Keras and TensorRT models. As said, the whole process is very time-consuming, so patience is key. However, the results are worth the effort. So, let's begin.

At the time of writing this article, the JetPack version is 4.4.1, but the steps should not differ much for future releases; just a bit of tweaking of the dependent package versions might be needed.

Materials Needed

  • Jetson Nano DevKit (I am using the pre-2020 4GB variant)
  • Barrel jack 5V 4A (20W) power supply
  • High speed micro SD card (64GB recommended)
  • Monitor, keyboard and mouse
  • Internet Connection via Ethernet cable

PS: The Jetson Nano does not have onboard Wi-Fi, but it does support M.2 Wi-Fi cards and USB Wi-Fi adapters. Just make sure the kernel has the driver for the card you use, or you will have to compile the driver yourself.

Initial Setup

If you are totally new to Jetson hardware, I strongly recommend visiting the official Jetson Nano getting started page to learn about the hardware and how to flash the Nvidia Jetson Nano base image onto the SD card.

You could use Etcher to flash the base image onto the SD card to boot the Jetson Nano.

Flash the image using Etcher

Before you can connect a 5V 4A power supply to the barrel jack on the Jetson Nano, you will need to put a jumper on J48. Refer to the hardware specification to find where J48 is located.

For an A02 carrier board (pre-2020) J48 is the solo header next to the camera port. See the yellow jumper in the picture associated with this article.

For a B01 carrier board (2020+) J48 is a solo header behind the barrel jack and the HDMI port.

Disclaimer: Please proceed with caution — The author takes no responsibility for any damages caused by the proper or improper use of information in this article.

Once that is done, connect the monitor, keyboard and mouse to the Jetson Nano board, then connect the power adapter to the barrel jack to power on the board. Using the barrel jack is important, since it lets us run the Nano in its full 10W power mode.

Follow the onscreen instructions on first boot and install the operating system.

Base packages upgrade

Since we will be using the Jetson Nano as a development environment, we will first remove unwanted packages to save some space, starting with the LibreOffice suite, which we don't need. After that we will update the base packages. Once the update is complete, issue a reboot command to apply the changes to the bootloader; you should get a notification of a bootloader update.

sudo apt-get purge libreoffice* && sudo apt-get clean
sudo apt-get autoremove
sudo apt-get update -y && sudo apt-get upgrade -y
sudo reboot

Setup remote access

Now we are ready to install the packages needed for our edge-AI development environment. Note the IP address of your device using the ifconfig command in case you want to set up everything remotely via ssh. We will install the ssh server along with other packages.

sudo apt-get install openssh-server

Setup lightdm

After the installation is complete, open the terminal and change the display manager to lightdm using the command below. With lightdm as the default display manager, you will save a whopping 1 GB of RAM, that is 25% of the total RAM.

sudo dpkg-reconfigure gdm3

Select Ok to select lightdm as the default manager and reboot.
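To check how much memory the lighter display manager actually frees up, you can compare `free -h` before and after the switch, or read /proc/meminfo directly. Here is a minimal Python sketch of the latter (Linux-only, not part of the original setup steps; it returns an empty dict on other systems):

```python
import os

def meminfo_kb():
    """Parse /proc/meminfo into {field: value-in-kB}; empty dict off Linux."""
    info = {}
    if not os.path.exists("/proc/meminfo"):
        return info
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            parts = rest.split()
            if parts and parts[0].isdigit():
                info[key] = int(parts[0])
    return info

mem = meminfo_kb()
if mem:
    print("MemTotal:     %.0f MB" % (mem["MemTotal"] / 1024))
    print("MemAvailable: %.0f MB" % (mem["MemAvailable"] / 1024))
```

Run it once with gdm3 active and once with lightdm to see the difference in MemAvailable.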

Disable GUI boot

For our installation, we will disable the GUI boot. This saves even more RAM, and our builds run faster than with the GUI enabled. If you check, you will see the system uses only around 300 to 400 MB of the total 4 GB of RAM, which is amazing.

Low RAM Usage with NoGUI boot

Make sure you have ssh enabled (it is on the card image, but double-check).

Follow the below steps to disable the GUI boot.

  • Open the /boot/extlinux/extlinux.conf using your favorite editor.
sudo vi /boot/extlinux/extlinux.conf
  • At the end of the APPEND line, after the rootwait, add 3.
Boot Config File
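For reference, the modified APPEND line should look roughly like the following. The exact kernel arguments vary between JetPack releases, so treat this as an illustration of where the trailing 3 goes, not a line to copy verbatim:

```
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      INITRD /boot/initrd
      APPEND ${cbootargs} quiet root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4 console=tty0 3
```

The 3 at the end tells the kernel to boot to runlevel 3 (multi-user, no graphical target).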

That's it. The Nano will now boot in non-GUI mode. You can now use ssh to log in to the Jetson Nano and complete the rest of the instructions.

You can revert this change in the boot config file once you have completed all the instructions, to restore the GUI boot if you like.

Configuring and Installing Jetson Nano utilities

Jetson_stats

Jetson_stats is a system monitoring utility, written in Python, that runs in the terminal and lets you see and control the status of your NVIDIA Jetson in real time: CPU, RAM and GPU usage, frequencies, IP address, library versions, etc. The interface is clickable and you can change parameters on the fly. You can install the jetson_stats utility using the commands below.

sudo apt-get install python3-pip
sudo -H pip3 install -U jetson-stats
sudo systemctl restart jetson_stats.service
sudo reboot

After reboot, invoke jtop to launch the interactive application.

sudo jtop

Jetson_clocks

For this tutorial, we will lock the CPU cores to their maximum speed to get the most out of all the CPUs. The jetson_clocks script achieves this without any complexity; to learn more, have a look at this link in the Nvidia forums. Use the commands below to lock the CPUs to their maximum speed of 1.5 GHz.

sudo nvpmodel -m 0
sudo jetson_clocks

Installing packages

We will start by installing the various libraries we need in order to compile OpenCV, TensorFlow and friends. We will also create a Python virtual environment, which is considered a best practice when you work on multiple projects on the same device, so that no compatibility issues arise. In this tutorial, we create only one virtual environment with all the relevant packages and libraries installed. It's up to the readers to extend this further, e.g. create another environment with TensorFlow 1.x.

The command below installs all the required libraries and packages for us to build our targets. It will take some time, so take a break… :)

sudo apt-get install curl apt-utils git cmake libatlas-base-dev gfortran libhdf5-serial-dev hdf5-tools python3-dev locate libfreetype6-dev python3-setuptools protobuf-compiler libprotobuf-dev openssl libssl-dev libcurl4-openssl-dev cython3 libxml2-dev libxslt1-dev build-essential pkg-config libtbb2 libtbb-dev  libavcodec-dev libavformat-dev libswscale-dev libxvidcore-dev libavresample-dev libtiff-dev libjpeg-dev libpng-dev  python-tk libgtk-3-dev libcanberra-gtk-module libcanberra-gtk3-module libv4l-dev libdc1394-22-dev virtualenv

Next, update the drivers

sudo apt-get install libgl1-mesa-dri mesa-va-drivers mesa-vdpau-drivers

The cmake that is installed by default is too old to compile the latest OpenCV, so we need to update it. We will download the cmake source, compile it, and use that to build OpenCV. Use the commands below to build cmake from source. This will take some time, so make yourself a tea or coffee.

cd
wget http://www.cmake.org/files/v3.13/cmake-3.13.0.tar.gz
tar xfz cmake-3.13.0.tar.gz
cd cmake-3.13.0/
./bootstrap --system-curl
make -j4

Upon completion of the compilation, add the following line at the end of the ~/.bashrc

export PATH=/home/`whoami`/cmake-3.13.0/bin/:$PATH

Source the bashrc script to load the updated PATH environment variable.

source ~/.bashrc
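Prepending the directory works because PATH entries are searched left to right, so the freshly built cmake shadows the older system one. A quick Python illustration of that lookup order, using a throwaway executable in a temporary directory (the name run-me is made up for the demo):

```python
import os
import shutil
import stat
import tempfile

# Create a temporary directory holding a fake executable.
tmpdir = tempfile.mkdtemp()
fake = os.path.join(tmpdir, "run-me")
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho hello\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IEXEC)

# Prepend the directory to PATH, just like the ~/.bashrc line does.
os.environ["PATH"] = tmpdir + os.pathsep + os.environ.get("PATH", "")

# which() now resolves our copy first, because its directory comes first.
print(shutil.which("run-me"))
```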

Python virtual environment and packages

As said above, virtual environments are the way to go when working with Python projects, especially with machine learning frameworks. Let's create a virtual environment and activate it. You can read more about Python virtual environments here.

cd
virtualenv -p python3 cv
source cv/bin/activate

It is strongly recommended to keep your isolated virtual environment separate from the system; all packages should be installed into the virtual environment. Once it is activated, you should see something like this.

Python Virtual Environment
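Activation simply points python and pip at the environment's own interpreter and site-packages. If the shell prompt prefix is not conclusive, here is a small sketch to check from inside Python whether you are in a virtual environment (covers both classic virtualenv and the built-in venv on Python 3.3+):

```python
import sys

def in_virtualenv():
    """True if this interpreter runs inside a virtualenv/venv."""
    # Classic virtualenv sets sys.real_prefix; the stdlib venv
    # makes sys.prefix differ from sys.base_prefix.
    return (
        hasattr(sys, "real_prefix")
        or getattr(sys, "base_prefix", sys.prefix) != sys.prefix
    )

print("inside a virtual environment:", in_virtualenv())
```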

After that, we need to install python packages into this virtual environment.

pip install numpy cython 

Protobuf, Protoc and Scipy

We need protobuf, as it is a key component for TensorFlow performance. For that reason we will compile it from source and install it on the Jetson Nano.

source cv/bin/activate
wget https://gist.githubusercontent.com/abhatikar/df138122f2a1d44ce5f3343e2db95e67/raw/912ea9aa5ced0a99bad59df336d892cc9096b146/install_protobuf-3.14.0.sh
chmod +x install_protobuf-3.14.0.sh
./install_protobuf-3.14.0.sh
cd ~
cp -r ~/src/protobuf-3.14.0/python/ .
cd python
python setup.py install --cpp_implementation
Protobuf Installation

We will build and install scipy from source on the Jetson Nano. Make sure you are within the virtual environment. This will take around 30 minutes.

cd
wget https://github.com/scipy/scipy/releases/download/v1.4.1/scipy-1.4.1.tar.gz
tar xfz scipy-1.4.1.tar.gz
cd scipy-1.4.1/
python setup.py install
Scipy package installed

The other packages we need can be installed with the command below.

pip install matplotlib scikit-learn pillow imutils scikit-image flask jupyter lxml progressbar2 dataclasses
Python packages installation

TensorFlow and Keras Installation

Now that we have finished installing all the libraries and packages, we can begin installing TensorFlow. We will install TensorFlow 2.3.1 from Nvidia's repository, optimized for JetPack 4.4. We will install Keras after the TensorFlow installation is complete.

Make sure you are in the python virtual environment before you run the below commands.

source cv/bin/activate
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==2.3.1+nv20.10
pip install tf_slim
#Install Keras
pip install keras
Tensorflow and Keras Installation

Dataset and Models for Object Detection

Now that we have TensorFlow and Keras installed, let's install TensorFlow object detection models along with TensorRT-optimized models for the Jetson Nano.

Let's first create a file to set up the correct paths. Create a file named setup_tf_models.sh

vi setup_tf_models.sh

Put the following content in the file:

#!/bin/sh
export PYTHONPATH=$PYTHONPATH:/home/`whoami`/models/research:/home/`whoami`/models/research/slim:/home/`whoami`/models
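The export works because Python prepends each PYTHONPATH entry to sys.path at interpreter startup, which is what makes the cloned models/research packages importable from anywhere. A minimal demonstration of the mechanism with a throwaway module (demo_mod is a made-up name for the demo):

```python
import os
import subprocess
import sys
import tempfile

# Write a tiny module into a temporary directory.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

# Launch a fresh interpreter with the directory on PYTHONPATH,
# just like sourcing setup_tf_models.sh does for the models repo.
env = dict(os.environ, PYTHONPATH=tmpdir)
out = subprocess.check_output(
    [sys.executable, "-c", "import demo_mod; print(demo_mod.VALUE)"],
    env=env,
)
print(out.decode().strip())  # prints 42
```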

Make sure you are in the python virtual environment before you run the below commands.

source cv/bin/activate
source ~/setup_tf_models.sh
#coco api
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py install
#tensorflow models
git clone https://github.com/tensorflow/models
cd ~/models/research/
protoc object_detection/protos/*.proto --python_out=.

Clone the TensorRT models repository for Nvidia Jetson.

#tensor-rt models
git clone --recursive https://github.com/NVIDIA-Jetson/tf_trt_models.git

We need to make a small change to the install.sh script to point it at the correct protoc binary, which we built for the Jetson Nano in a previous step. Comment out the PROTOC line and replace it with PROTOC=protoc as shown in the snapshot below.

cd ~/tf_trt_models/
vi ./install.sh

Now, run the installation script to install the TensorRT models.

./install.sh

This completes the installation of TensorFlow, Keras and the AI models on our Jetson Nano. In the next section, we will go over the configuration and installation of the OpenCV library.

OpenCV Installation

We can now begin the installation of OpenCV 4.5. We will build it from source with certain configurations that are needed for our deep learning projects. Let us begin.

Make sure you are in the python virtual environment before you run the below commands.

#Get to your HOME directory
cd
#Activate your python virtual environment
source cv/bin/activate
#Download Opencv packages
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.5.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.5.0.zip
#Extract the archives
unzip opencv_contrib.zip
unzip opencv.zip
mv opencv-4.5.0/ opencv
mv opencv_contrib-4.5.0/ opencv_contrib
rm opencv.zip opencv_contrib.zip
cd opencv
mkdir build
cd build
#cmake from the build directory
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_CUDA=ON -D CUDA_ARCH_PTX="" -D CUDA_ARCH_BIN="5.3,6.2,7.2" -D WITH_CUBLAS=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python3=ON -D BUILD_opencv_python2=OFF -D BUILD_opencv_java=OFF -D WITH_GSTREAMER=ON -D WITH_GTK=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D OPENCV_EXTRA_MODULES_PATH=/home/`whoami`/opencv_contrib/modules ..

Upon completion, you should see something like this. Make sure no errors appear in the cmake output.

--     Disabled:                    python2 world
-- Disabled by dependency: -
-- Unavailable: cnn_3dobj cvv java js julia matlab ovis sfm ts viz
-- Applications: apps
-- Documentation: NO
-- Non-free algorithms: YES
--
-- GUI:
-- GTK+: YES (ver 3.22.30)
-- GThread : YES (ver 2.56.4)
-- GtkGlExt: NO
-- VTK support: NO
--
-- Media I/O:
-- ZLib: /usr/lib/aarch64-linux-gnu/libz.so (ver 1.2.11)
-- JPEG: /usr/lib/aarch64-linux-gnu/libjpeg.so (ver 80)
-- WEBP: build (ver encoder: 0x020f)
-- PNG: /usr/lib/aarch64-linux-gnu/libpng.so (ver 1.6.34)
-- TIFF: /usr/lib/aarch64-linux-gnu/libtiff.so (ver 42 / 4.0.9)
-- JPEG 2000: build (ver 2.3.1)
-- OpenEXR: build (ver 2.3.0)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- DC1394: YES (2.2.5)
-- FFMPEG: YES
-- avcodec: YES (57.107.100)
-- avformat: YES (57.83.100)
-- avutil: YES (55.78.100)
-- swscale: YES (4.8.100)
-- avresample: YES (3.7.0)
-- GStreamer: YES (1.14.5)
-- v4l/v4l2: YES (linux/videodev2.h)
--
-- Parallel framework: pthreads
--
-- Trace: YES (with Intel ITT)
--
-- Other third-party libraries:
-- Lapack: NO
-- Eigen: YES (ver 3.3.4)
-- Custom HAL: YES (carotene (ver 0.0.1))
-- Protobuf: build (3.5.1)
--
-- NVIDIA CUDA: YES (ver 10.2, CUFFT CUBLAS)
-- NVIDIA GPU arch: 53 62 72
-- NVIDIA PTX archs:
--
-- cuDNN: YES (ver 8.0.0)
--
-- OpenCL: YES (no extra features)
-- Include path: /home/abhatikar/opencv/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python 3:
-- Interpreter: /home/abhatikar/cv/bin/python3 (ver 3.6.9)
-- Libraries: /usr/lib/aarch64-linux-gnu/libpython3.6m.so (ver 3.6.9)
-- numpy: /home/abhatikar/cv/lib/python3.6/site-packages/numpy/core/include (ver 1.18.5)
-- install path: lib/python3.6/site-packages/cv2/python-3.6
--
-- Python (for build): /usr/bin/python2.7
--
-- Java:
-- ant: NO
-- JNI: NO
-- Java wrappers: NO
-- Java tests: NO
--
-- Install to: /usr/local
-- -----------------------------------------------------------------
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/abhatikar/opencv/build

Start the compilation using the command below. With -j4 we use all 4 cores for the job, which makes compilation faster.

#Compile OpenCV, this will take some time
make -j4
OpenCV Compilation completed

Install the OpenCV library system wide using this command.

sudo make install
OpenCV installed

Now that we have installed the OpenCV library system-wide, we need to link it into our Python virtual environment. That way, all our projects using the virtual environment can import it with ease.

So, we will just create a soft link into our python virtual environment. If you create a second virtual environment, you just need to create a symbolic link into that environment. How cool is that :)
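The trick relies on ordinary filesystem symlinks: the import machinery follows the link to the real shared object, so one system-wide build can serve any number of environments. The mechanism in miniature, using plain files in a temporary directory:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "library.so")  # stand-in for the real cv2 .so
link = os.path.join(tmpdir, "cv2.so")        # what the venv would see

with open(target, "w") as f:
    f.write("binary payload")

os.symlink(target, link)  # same idea as `ln -s ... cv2.so`

print(os.path.islink(link))                   # True
print(os.readlink(link) == target)            # True
# Reading through the link reaches the original file.
print(open(link).read() == "binary payload")  # True
```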

To do this, navigate to the virtual environment's lib folder for Python packages.

cd ~/cv/lib/python3.6/site-packages

Get the path of the system installed OpenCV library.

ls /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2*
OpenCV Library path

Create a symlink of OpenCV library (cv2.so) in the virtual environment to point to the system library. Use ls -l command to check if the link is created correctly.

ln -s /usr/local/lib/python3.6/site-packages/cv2/python-3.6/cv2.cpython-36m-aarch64-linux-gnu.so cv2.so
ls -l cv2.so
Symlink creation

That's it. We have completed the installation of OpenCV in our Python virtual environment. You can verify the installed OpenCV version using the command below.

python -c 'import cv2; print(cv2.__version__)'
Opencv Version

You could also view the version in the jtop utility under the info tab as shown below.

Libraries Versions in Jtop

Restore GUI Boot

Finally, if you want to restore the GUI boot, just revert the change you made in /boot/extlinux/extlinux.conf by removing the 3, then save and reboot.

Summary

In this tutorial, we have configured our Nvidia Jetson Nano for doing deep learning and AI projects using Python. We installed the operating system, dependent libraries, and Jetson utilities.

We then installed TensorFlow, Keras, the TensorFlow object detection models and the TensorRT-optimized models for the Jetson Nano.

Lastly, we configured, built and installed OpenCV library.

All these libraries and frameworks are necessary for a developer to get started with AI at the edge. Most computer vision projects involving deep learning make use of TensorFlow-based models, and having the right optimized setup is absolutely necessary.

This is my attempt to give back and share my experience and knowledge to people who struggle setting up the right environment to get started with AI at the edge.

If you have any questions, or if you like this article and this was helpful for your project, please leave me a comment.

Thanks Lila Mullany for taking time to proof read my article.

