A Quick Example of Using TFLite with Golang and a GAN Model

Why and Background

ML with TensorFlow is almost like a programming superpower... no actually it is a programming superpower. There are plenty of tutorials, books, courses, and videos on building models, transfer learning, etc. All the awesome Colab Notebooks make ML pretty accessible, but there seem to be fewer resources around deploying models. If you stick to the well-worn path, it's well documented and easy. I think TensorFlow Serving is pretty awesome. There is always a bit of figuring out how to set up your data correctly once you have left behind the comforts of Python and the associated tooling. But what if you want to run your model in odd places that aren't necessarily suited for a Docker image w/ TensorFlow Serving or on your mobile device w/ TensorFlow Lite? That's what this example attempts to tackle - using TensorFlow Lite and Golang to deploy simple ML models.

The code and example are completely standalone, but I'll give plenty of background on how I built the pieces. Also, if you really want to use Golang and TFLite, I'd recommend you check out go-tflite. It's an actual Go package that has more bells and whistles than what I'm doing here.

Setup

Source Code

GAN Model

First we need a GAN model. If you aren't familiar with GANs, the short tutorial on the TensorFlow site is a decent intro. There are plenty of prebuilt GAN models out in the wild. There's a simple one on TF Hub that's pretty good to explore, with an accompanying tutorial as well.

In this example we're going to use a simple model that I built, trained on the Met Faces dataset. The model isn't great and was more of an exploration - so I won't show how it was built to avoid confusing you and embarrassing myself. It has the same inputs/outputs as the ProGAN model from TF Hub listed in the above example. So feel free to swap in that model for the one in the code repo - it should just work once you've converted it to TFLite format. If you are converting, it's pretty straightforward with the TensorFlow 2.X releases. I recommend not using the conversion command line tool but instead doing it with the Python libs inside tensorflow.lite.

TF Lite

Since the repo includes all the parts needed - header files and prebuilt libs for Linux, macOS, and Raspberry Pi - there's no need to install anything, but I'll walk you through a couple of pieces. TensorFlow has lots of ways to install and interact with it. The easiest are probably a Google Colab or a Docker install. Next easiest is pip install if you're on a major platform. If you're on a Raspberry Pi it takes a bit of tweaking, and it's easiest to find a community-prebuilt whl. The general suggestion is to create a virtual env before taking the pip install tensorflow route. There is no shortage of posts, links, and videos to lead you through the process.

But what about TFLite - how do we get the libraries and header files? If you're on mobile it's as simple as including the pod or using a newer version of Android Studio. Using TFLite outside of those mobile contexts is simple but not super well documented. You'll probably need to build TFLite for your platform of choice. I've already done this and included the libs and header files in the repo, so the following is just a couple of quick pointers on how I did it.

  1. Pull down the TensorFlow src from GitHub
  2. Make sure you have a working C compiler, a JDK, and CMake installed.

    You could try building using TensorFlow's preferred build system of Bazel, but that really just made everything more difficult when I tried it. CMake was very straightforward by comparison.

  3. Create a build directory outside of the src tree, run cmake $srctree/tensorflow/tensorflow/lite/c/ and then make.

    Depending on your machine speed, you might have a few minutes to make yourself a cup of tea. Also note when building on Raspberry Pi, you may need to patch the CMake build code to make sure it links against libatomic - hopefully this is fixed soon.

Golang

If you aren't familiar with Golang, then I'm not sure why you've read this far. Anyway, it's fair to say that even with all of its shortcomings, I like it. We're going to use cgo to link into the TFLite libs we just built to serve the GAN model. The code follows the standard steps of using TFLite:

  1. Load model
  2. Create the interpreter options
  3. Create the interpreter with the model and the options
  4. Allocate the tensors
  5. Copy data into the input tensors
  6. Invoke the interpreter
  7. Copy data out of the output tensors
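Assuming the bundled c_api.h header and libtensorflowlite_c library (the include and library paths and the latent-vector size here are illustrative, not necessarily the repo's exact layout), the seven steps map onto the TFLite C API roughly like this in cgo:

```go
package main

/*
#cgo CFLAGS: -I${SRCDIR}/include
#cgo LDFLAGS: -L${SRCDIR}/platform -ltensorflowlite_c
#include "tensorflow/lite/c/c_api.h"
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	// 1. Load model
	path := C.CString("models/model.tflite")
	defer C.free(unsafe.Pointer(path))
	model := C.TfLiteModelCreateFromFile(path)
	defer C.TfLiteModelDelete(model)

	// 2. Create the interpreter options
	opts := C.TfLiteInterpreterOptionsCreate()
	defer C.TfLiteInterpreterOptionsDelete(opts)

	// 3. Create the interpreter with the model and the options
	interp := C.TfLiteInterpreterCreate(model, opts)
	defer C.TfLiteInterpreterDelete(interp)

	// 4. Allocate the tensors
	C.TfLiteInterpreterAllocateTensors(interp)

	// 5. Copy data into the input tensor (512 is a stand-in latent size)
	latent := make([]float32, 512)
	in := C.TfLiteInterpreterGetInputTensor(interp, 0)
	C.TfLiteTensorCopyFromBuffer(in, unsafe.Pointer(&latent[0]),
		C.size_t(len(latent)*4))

	// 6. Invoke the interpreter
	C.TfLiteInterpreterInvoke(interp)

	// 7. Copy data out of the output tensor (128x128 RGB floats)
	out := C.TfLiteInterpreterGetOutputTensor(interp, 0)
	pixels := make([]float32, 128*128*3)
	C.TfLiteTensorCopyToBuffer(out, unsafe.Pointer(&pixels[0]),
		C.size_t(len(pixels)*4))

	fmt.Println("generated", len(pixels), "floats")
}
```

This won't compile without the headers and libs from the repo, but it shows the shape of the real thing: every step is one or two C calls, and Go's job is mostly memory bookkeeping.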

Knowing the steps ahead of time makes all of this pretty easy to follow. The Golang code is pretty standard and boring - which is the way I like it. A few cgo-specific lines at the top set up the CFLAGS and LDFLAGS with platform modifiers, and there is a breakdown of copying to and from tensors. ARM and amd64 differ at the C level on unsigned long vs unsigned int, so when building you need to specify which types.go file you want to use.

Build

To keep things simple, no external Go packages are used and I'm using Go modules. Sorry, no fancy build scripts. Just go build example.go types.go should do it for Linux/macOS (for you ARM M1 Macs, you're on your own for now). If you're compiling on Raspberry Pi then go build example.go types_arm.go is all you need.

Run

To serve a new GAN face: LD_LIBRARY_PATH=platform ./example -m models/model.tflite. This sets up an HTTP listener on port 8080 that will deliver a 128x128 image. Hit reload and see variations.

Done

Hope this was readable and helpful - let me know if you have questions: derek@codecubed.com

Sat Mar 27 14:49:52 EDT 2021