Firebase ML Kit 6: Using Custom TensorFlow Lite Models

If you’re already an experienced ML developer, chances are you already have your own model that can perform tasks such as text recognition and face detection.

So why might you want to host your model using Firebase ML Kit? Well, here are the advantages of doing so:

  • Reduce your app’s binary size
  • Choose whether to host your model on-device or on the cloud… or both
  • Automatic handling of multiple model sources for graceful fallback
  • Automatic downloading of new versions of your model

By no means am I an ML expert, so I don’t know just how much of an advantage each of these is (I’d love to know, though). They do seem pretty neat.

If this is your first time hearing about Firebase ML Kit, you can check out my introduction on it right here.

Info on Model Storage and Security

To make your model available to ML Kit, you can store it remotely on the Firebase Console, bundle it with your app, or do both. By doing both, you ensure that your model is always up to date (from the copy stored on the Console), while your ML features keep working from the bundled model when the network connection isn’t great.

Regardless of where you store it, your model will be stored in the standard serialized protobuf format in local storage. In theory, anyone will be able to copy your model, but in practice, most models are so application-specific and obfuscated by optimisations that the risk is similar to that of competitors disassembling and reusing your code.

(I couldn’t word this very differently from the official docs, so LINK)

For Android API 20 and lower, the model is downloaded to a directory named com.google.firebase.ml.custom.models in app-private internal storage. For API 21 and up, the model is downloaded to a directory that is excluded from automatic backup.

Implementation

Prerequisites

Make sure your app is already connected to Firebase. If you’re not sure how to do that, here’s a really quick way.

Dependencies and Setup

Add this dependency to your app-level build.gradle file.
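At the time this series was written, the custom-model API lived in the firebase-ml-model-interpreter library. A sketch of the dependency block — the version number here is illustrative, so check the Firebase release notes for the current one:

```groovy
dependencies {
    // Legacy ML Kit custom model interpreter (version shown is illustrative)
    implementation 'com.google.firebase:firebase-ml-model-interpreter:16.2.4'
}
```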

Add this permission to your AndroidManifest.xml
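Since downloading a cloud-hosted model needs network access, this is the standard internet permission:

```xml
<uses-permission android:name="android.permission.INTERNET" />
```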

If you’re targeting Android API 18 and lower, add this to your manifest as well.
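As far as I can tell, this is about external-storage access for the model download on old devices; treat the exact entry below as an assumption and double-check the official docs:

```xml
<!-- Assumed requirement for API <= 18; maxSdkVersion keeps it off newer devices -->
<uses-permission
    android:name="android.permission.WRITE_EXTERNAL_STORAGE"
    android:maxSdkVersion="18" />
```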

Make the Model Available

On the Cloud

Go to the Firebase Console > ML Kit > Custom, and add your model there.

On-Device (Asset)

Copy the model file to your app’s assets/ folder, then add this code to your app-level build.gradle
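The point of the Gradle change is to stop the build from compressing the .tflite asset, so ML Kit can memory-map it. A sketch of the relevant block:

```groovy
android {
    // Don't compress TensorFlow Lite model files bundled in assets/
    aaptOptions {
        noCompress "tflite"
    }
}
```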

On-Device (Downloaded into Local Storage)

Well, just download it at an appropriate point in your app. You’ll just be referencing its location later when you load it.

Load the Model

If you hosted your model on the cloud, build a FirebaseCloudModelSource, passing in the name you gave your model in the console when you uploaded it.

You can also set conditions for when the model should be downloaded initially and whenever a new update is available.
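A Kotlin sketch of both steps, using the legacy ML Kit API — "my_cloud_model" is a placeholder for whatever name you used in the console:

```kotlin
// Download conditions: here, only fetch the model over Wi-Fi
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()

// "my_cloud_model" must match the name given in the Firebase Console
val cloudSource = FirebaseCloudModelSource.Builder("my_cloud_model")
    .enableModelUpdates(true)                  // pick up new versions automatically
    .setInitialDownloadConditions(conditions)  // when to download the first time
    .setUpdatesDownloadConditions(conditions)  // when to download updates
    .build()
FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)
```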

If you hosted your model on-device, build a FirebaseLocalModelSource, passing in the filename of the model and whether it’s been stored as an asset or downloaded into local storage.
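A sketch of the local source — "my_local_model" and "my_model.tflite" are placeholder names:

```kotlin
val localSource = FirebaseLocalModelSource.Builder("my_local_model")
    .setAssetFilePath("my_model.tflite")   // model bundled in assets/
    // .setFilePath(downloadedFile.path)   // use this instead for a downloaded model
    .build()
FirebaseModelManager.getInstance().registerLocalModelSource(localSource)
```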

Then build a FirebaseModelOptions, passing in your cloud and local model names (whichever ones you made available), and a FirebaseModelInterpreter, which will use the cloud model, or, if that’s not available, the local model.
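Continuing the sketch with the same placeholder names as before:

```kotlin
val options = FirebaseModelOptions.Builder()
    .setCloudModelName("my_cloud_model")  // omit if you only registered a local model
    .setLocalModelName("my_local_model")  // omit if you only registered a cloud model
    .build()

// Prefers the cloud model, falling back to the local one
val interpreter = FirebaseModelInterpreter.getInstance(options)
```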

Specify Model Input and Output

The model’s input and output use one or more multidimensional arrays, which contain byte, int, long, or float values. Using a FirebaseModelInputOutputOptions, you define the number and dimensions of the arrays your model uses.

For example (from the official docs), an image classification model might take as input a 1x640x480x3 array of bytes, representing a single 640×480 truecolor (24-bit) image, and produce as output a list of 1000 float values, each representing the probability that the image is a member of one of the 1000 categories the model predicts.
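That image classification example would be declared like so:

```kotlin
// One 640x480 24-bit image in, 1000 class probabilities out
val ioOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.BYTE, intArrayOf(1, 640, 480, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))
    .build()
```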

Perform Inference on Input Data

Prepare your model inputs, create a FirebaseModelInputs with your inputs, then call run on your interpreter.
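Sticking with the 1x640x480x3 image example, a sketch — in a real app you’d fill the input array from your image’s pixels rather than leaving it zeroed:

```kotlin
// Placeholder input shaped to match the example input format above
val input = Array(1) { Array(640) { Array(480) { ByteArray(3) } } }

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter.run(inputs, ioOptions)
    .addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        val probabilities = output[0]  // 1000 floats, per the output format
    }
    .addOnFailureListener { e ->
        // model download or inference failed
    }
```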

In your success listener, call getOutput(), specifying the format of the output as well. What you do from here depends on your model and its intended use. For example, if you’re performing classification, you could map the output indexes to the labels they represent.
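As a sketch of that last step, here’s a tiny helper (hypothetical, not part of the ML Kit API) that pairs the highest-probability index with its label:

```kotlin
// Hypothetical helper: given the model's probabilities and a parallel list of
// labels, return the best label and its score
fun topLabel(probabilities: FloatArray, labels: List<String>): Pair<String, Float> {
    val bestIndex = probabilities.indices.maxByOrNull { probabilities[it] }!!
    return labels[bestIndex] to probabilities[bestIndex]
}

// e.g. topLabel(floatArrayOf(0.1f, 0.7f, 0.2f), listOf("cat", "dog", "fish"))
```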

Conclusion

For someone who knows as little as I do about making ML models, this looks pretty neat: a streamlined way of getting the model into the app, though I have no basis for comparison. It does, however, make me want to learn more about ML.

This is the final entry in the ML Kit mini-course. If you haven’t already done so, why not check it out?