Firebase ML Kit 1: Text Recognition

Here marks the beginning of another mini-series, Firebase ML Kit! Machine learning for beginners and experts alike, though admittedly the only chapter of this series experts will find much use in is the very last one.

For this first entry, we’ll learn how to use Text Recognition within our app. Though if you’re interested in ML Kit as a whole and this is your first time reading about it, check out the full introduction here.

Text Recognition is as simple as it gets. You pass in an image, Firebase processes it and returns the text it detects.

Note that since Firebase ML Kit’s cloud-based APIs aren’t production-ready yet, we’ll only be covering Text Recognition handled on the device. (If you didn’t know: yes, you can also choose to run text recognition in the cloud… well, in good time you’ll be able to.)

Add the Dependency and Metadata

As with any other Firebase service, we’ll start by adding the dependency, which is the same one used for all the ML Kit features.
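Something like the following in your app-level build.gradle should do it (the version number here is from around the time of writing and may well have moved on):

```groovy
dependencies {
    // One dependency covers all of the ML Kit vision features
    implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
}
```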

Although this is optional, it’s highly recommended to add this to your AndroidManifest.xml as well. Doing so has the machine learning model downloaded along with your app from the Play Store. Otherwise, the model is only downloaded when you make your first ML request, and until that download completes you won’t get any results from ML operations.
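Inside your manifest’s application tag:

```xml
<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr" />
```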

Create the FirebaseVisionImage

This object will prepare the image for ML Kit processing. You can make a FirebaseVisionImage from a bitmap, media.Image, ByteBuffer, byte array, or a file on the device.

From Bitmap

This is the simplest way to do it, and it works as long as your image is upright.
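A one-line sketch, assuming `bitmap` is a Bitmap you’ve already loaded from somewhere:

```kotlin
// The bitmap must already be upright; no rotation metadata is applied here
val image = FirebaseVisionImage.fromBitmap(bitmap)
```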

From media.Image

Such as when taking a photo with your device’s camera. You’ll need the angle by which the image must be rotated to become upright, given the device’s orientation when the photo was taken, calculated against the device’s default camera orientation (90 degrees on most devices, though it can differ).

It’s a long method for making all those calculations, but it’s pretty copy-pastable. Once you have it, pass the mediaImage and the rotation in to generate your FirebaseVisionImage.
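A sketch along the lines of the official docs; `cameraId` is the id of the camera the photo came from, and the SparseIntArray maps the display rotation to a starting compensation value:

```kotlin
private val ORIENTATIONS = SparseIntArray().apply {
    append(Surface.ROTATION_0, 90)
    append(Surface.ROTATION_90, 0)
    append(Surface.ROTATION_180, 270)
    append(Surface.ROTATION_270, 180)
}

@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
    // Start from the device's current display rotation
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Factor in the camera sensor's own orientation
    val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Map the degrees onto the constants ML Kit expects
    return when (rotationCompensation) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> FirebaseVisionImageMetadata.ROTATION_0
    }
}
```

With the rotation in hand, creating the image is one more line:

```kotlin
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
```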

From ByteBuffer

You’ll need the rotation method from above (see media.Image) as well, and on top of that you’ll have to build the FirebaseVisionImage with your image’s metadata.
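A sketch, assuming `buffer` holds NV21 frame data and `rotation` comes from the rotation method described under media.Image; the width, height, and format here are placeholders for your camera’s actual configuration:

```kotlin
val metadata = FirebaseVisionImageMetadata.Builder()
    .setWidth(480)   // your frame's actual width
    .setHeight(360)  // your frame's actual height
    .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
    .setRotation(rotation)
    .build()

val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
```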

From File

It’s a one-liner, but you’ll be wrapping it in a try-catch block, since reading the file can fail.
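A sketch, assuming `uri` points to an image file on the device:

```kotlin
val image: FirebaseVisionImage = try {
    FirebaseVisionImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
    return
}
```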

Instantiate a FirebaseVisionTextDetector

The actual text recognition method belongs to this object.
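Getting hold of one is a single line:

```kotlin
val detector = FirebaseVision.getInstance().visionTextDetector
```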

Call detectInImage

Use the detector: call detectInImage, pass in your image, and add success and failure listeners. The success listener hands you the recognised text as a FirebaseVisionText object.
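Roughly like so, where `processText` is a hypothetical method of your own for handling the result:

```kotlin
detector.detectInImage(image)
    .addOnSuccessListener { firebaseVisionText ->
        // Recognition succeeded; the result holds the detected text
        processText(firebaseVisionText)
    }
    .addOnFailureListener { e ->
        // Recognition failed, e.g. the model hasn't finished downloading yet
        Log.e(TAG, "Text recognition failed", e)
    }
```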

Extract the text from blocks of recognised text

In your onSuccess method, you’ll have access to a FirebaseVisionText object. This contains blocks of text, which contain lines of text, which in turn contain elements (words). Iterate through them and choose how you want to extract your text.
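A sketch of that iteration, here just logging each word:

```kotlin
fun processText(result: FirebaseVisionText) {
    for (block in result.blocks) {          // paragraphs of text
        for (line in block.lines) {         // lines within a block
            for (element in line.elements) { // words within a line
                Log.d(TAG, element.text)
            }
        }
    }
}
```

If you only need the text of a whole block or line, each level also exposes its own `text` property, so you can stop iterating at whichever granularity suits you.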

Conclusion

I love how easy this is to use. It gets weird when you have to do all that rotation work with media.Image and ByteBuffer, but even that’s just a copy-paste job.

This is only the first part of the ML Kit series so expect the tutorials on the other Firebase ML features to come in the following weeks!