Stephen Smith's Blog

Musings on Machine Learning…

TensorFlow on the Raspberry Pi and Beyond



Introduction

You’ve been able to use TensorFlow on a Raspberry Pi for a while, but until now you had to build it yourself. With TensorFlow 1.9, Google added native support, so you can just use pip3 to install precompiled binaries and be up and running in no time. Installation may now be easy, but general TensorFlow usage on the Raspberry Pi is slow. In this article I’ll talk about some challenges of running TensorFlow on the Raspberry Pi and look at some useful cases that do work. I’ll also compare some operations against my Intel i3-based laptop and the rather beefy servers available through Google’s subsidiary Kaggle.

Installing TensorFlow on a Pi

I saw the press release about how easy it is to install TensorFlow on a Raspberry Pi, so I read the TensorFlow install page for the Pi, checked the prerequisites, and followed the instructions. All I got were strange, unhelpful error messages about there being no package for my version of Python. The claim on the TensorFlow web page is that Python 3.4 or greater is required, and I was running 3.4.2, so all should have been good. I installed all the prerequisites and dependencies from the TensorFlow script and those all worked, including TensorBoard. But no luck with TensorFlow itself.

After a bit of research, it turned out that the newest version of Raspbian is Stretch, but I was running Jessie. I had assumed that since my operating system was regularly updating, it would have installed any newer version of Raspbian. That turned out not to be true. The Raspberry Pi people were worried about breaking things, so they didn’t provide an automatic upgrade path; their recommendation is to just install a new image on a new SD card. I could have done that, but I found instructions on the web for upgrading from Jessie to Stretch. I followed the instructions available here, and it all worked fine.

This was really annoying, since I wasted quite a bit of time on it. I don’t understand why Raspbian didn’t at least ask whether I wanted to upgrade to Stretch, explaining the risks and trade-offs. At any rate, now I know not to trust “sudo apt-get dist-upgrade”; it doesn’t necessarily do what it claims.

After I upgraded to Stretch, a “sudo pip3 install tensorflow” worked quickly and I was up and running.
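As a quick sanity check (a minimal sketch; the exact version reported depends on what pip3 installed), you can confirm the install from Python:

# Verify that TensorFlow imports and report which version was installed
import tensorflow as tf

print(tf.__version__)   # should print something like 1.9.0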

Giving TensorFlow a Run

To try out TensorFlow on my Raspberry Pi, I just copied the first TensorFlow tutorial into IDLE (the Python IDE) and gave it a run.

import tensorflow as tf

# Load the MNIST handwritten-digit dataset: 60,000 training and 10,000 test images
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
# Scale pixel values from the 0-255 range down to 0-1
x_train, x_test = x_train / 255.0, x_test / 255.0

# A simple fully connected network: flatten each 28x28 image, one hidden layer,
# dropout for regularization, then a softmax over the 10 digit classes
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 passes over the training data, then measure accuracy on the test set
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

This tutorial example trains on the MNIST dataset, which is a set of handwritten digits, and then evaluates the test set to see how accurate the model is. This little sample typically achieves about 98% accuracy in identifying the digits. The dataset has 60,000 images for training and 10,000 for testing.
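If you want to see those numbers for yourself, a quick sketch like the following prints the array shapes (it just reloads the dataset with the same call as above):

# Inspect the shapes of the MNIST arrays returned by load_data()
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)   # (60000, 28, 28) - 60,000 training images, 28x28 pixels each
print(x_test.shape)    # (10000, 28, 28) - 10,000 test images
print(y_train[:5])     # the first few labels, each a digit from 0 to 9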

I set this running on the Raspberry Pi and it was still running hours later when I went to bed. My laptop ran the same tutorial in just a few minutes. The first time you run the program it downloads the training and test data, which was very slow on the Pi. After that the data is cached locally.

Benchmarking

To compare performance, I’ll look at a few different factors. The tutorial program really has three parts:

  1. Downloading the training and test data into memory (from the local cache)
  2. Training the model
  3. Evaluating the test data

Then I’ll compare the Raspberry Pi to my laptop and the Kaggle virtual environment, both with and without GPU acceleration.
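A minimal sketch of how each phase can be timed, using Python’s time module (the model is the same as in the tutorial above; your numbers will of course vary):

import time
import tensorflow as tf

# Phase 1: load the data (from the local cache after the first run)
start = time.time()
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
print("Load time:", time.time() - start)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Phase 2: train the model
start = time.time()
model.fit(x_train, y_train, epochs=5)
print("Fit time:", time.time() - start)

# Phase 3: evaluate the test data
start = time.time()
model.evaluate(x_test, y_test)
print("Eval time:", time.time() - start)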

 

                    Load Time (s)   Fit Time (s)   Eval Time (s)
Raspberry Pi             3.6            630             4.7
i3 Laptop                0.6             95             0.5
Kaggle w/o GPU           1.7             68             0.6
Kaggle with GPU          1.1             44             0.6

 

Keep in mind that my Raspberry Pi is only a 3 and not the newer, slightly faster 3 B+. The GPU in the Kaggle environment is an NVIDIA Tesla K80, and the server is fairly beefy with 16GB of RAM. The Kaggle environment is virtual and shared, so performance varies depending on how much other users are doing.

Results

As you can see, the Raspberry Pi is very slow at fitting a model. The MNIST data is fairly compact as these things go and represents a relatively small dataset. If you want to fit a model and only have a Raspberry Pi, I would recommend doing it in a Kaggle environment from a web browser. After all, it’s free.

I think the big problem is that the Raspberry Pi only has 1GB of RAM and ends up swapping to the SD card, which doesn’t perform well. My laptop has 4GB of RAM and a good SSD. I suspect memory and storage matter more here than the difference between the Intel i3 and the ARM Cortex processor.

So why would you want TensorFlow on the Raspberry Pi then? The use case is running pre-trained models for specific applications. For instance, perhaps you want to make a smart door camera. The camera could be hooked up to a Raspberry Pi, and a TensorFlow image recognition model could be run to determine whether someone approaching the door should be admitted and, if so, send a signal from a GPIO pin to unlock the door.
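A minimal sketch of what that could look like, assuming a model trained elsewhere and saved as door_model.h5, a stand-in capture_frame() helper in place of a real camera library, and the lock relay wired to GPIO pin 18 (all of these names and numbers are hypothetical):

import numpy as np
import tensorflow as tf
import RPi.GPIO as GPIO

DOOR_PIN = 18   # hypothetical GPIO pin wired to the door lock relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_PIN, GPIO.OUT)

# Load a model trained on a more powerful machine; door_model.h5 is a placeholder name
model = tf.keras.models.load_model('door_model.h5')

def capture_frame():
    # Stand-in for a real camera capture; returns a random 28x28 "image"
    return np.random.rand(28, 28)

def should_admit(frame):
    # Run the model on a single frame; treating class 1 as "known person" is a made-up convention
    probs = model.predict(np.expand_dims(frame, axis=0))[0]
    return probs[1] > 0.9

if should_admit(capture_frame()):
    GPIO.output(DOOR_PIN, GPIO.HIGH)   # drive the pin high to unlock the door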

From the table above you might think that evaluation is still too slow on a Raspberry Pi. However, x_test, which we are evaluating, contains 10,000 test images, so performing 10,000 image evaluations in under 5 seconds, roughly half a millisecond per image, is actually pretty decent.

A good procedure is to train the model on a more powerful computer or in the cloud, then run it on the Pi to create some sort of smart device utilizing the Pi’s great I/O capabilities.
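A minimal sketch of that workflow, using the MNIST model from the tutorial (mnist_model.h5 is just a placeholder file name):

# On the training machine (laptop, cloud, or Kaggle): train, then save everything to one file
model.fit(x_train, y_train, epochs=5)
model.save('mnist_model.h5')

# Later, on the Raspberry Pi: load the saved model and only run inference
pi_model = tf.keras.models.load_model('mnist_model.h5')
pi_model.evaluate(x_test, y_test)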

Summary

The Raspberry Pi, with its great DIY interfacing abilities combined with its ability to run trained machine learning models, provides a great platform for developing smart devices. I look forward to seeing all sorts of new smart projects appearing on the various Raspberry Pi project boards.


Written by smist08

August 17, 2018 at 12:09 am

Posted in Uncategorized
