Welcome to the second part of this blog series “How to use Google’s ML Kit to enhance Pepper with AI”! In case you missed the first part, I recommend you start reading here for an introduction to what we’re building and how to get started.
In this article, we are going to look at something cool we can build by combining Google’s ML Kit with the humanoid social robot Pepper. Imagine being able to ask the robot in natural language to go and pick something up for you. For some reason, an all-time favorite and the most-asked question about Pepper is “Can it bring me a coffee?” That is a genuinely challenging endeavor spanning several areas of AI and robotics for which Pepper is not quite ready yet. However, we can take a first step in that direction by embedding image classification to recognize the objects around the robot or, better yet, object detection to identify the position of those objects in the room and point at them. Have a look at this video illustrating what that looks like:
ML Kit’s Object Detection API
With this demo, our idea is to leverage object detection to recognize objects in an image and their position so that Pepper can localize them in the room. Pepper should be able to answer the question of which objects it can see and even point at them when asked to.
To simply categorize everyday objects, the base model of the Image Labeling API returns pretty good results. It is a general-purpose classifier that can identify objects, places, activities, animal species, and products across more than 400 categories, and inference takes approximately 200 ms when run on Pepper. However, since we also want to know the position of the objects, Image Labeling is not enough; the Object Detection and Tracking API is the right choice.
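For comparison, getting labels from the base Image Labeling model only takes a few lines. Here is a minimal sketch, assuming the bitmap comes from Pepper’s camera via the QiSDK take-picture action:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions
import timber.log.Timber

fun labelImage(picture: Bitmap) {
    // Base general-purpose classifier, no custom model required
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(picture, 0)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each ImageLabel carries a text, a confidence score and an index
            labels.forEach { Timber.i("Label: ${it.text} (${it.confidence})") }
        }
        .addOnFailureListener { Timber.e("Labeling failed: $it") }
}
```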
Custom models
The API offers two modes optimized for two core use cases: tracking the most prominent object across frames and detecting multiple objects in a static image. It can optionally classify the detected objects, but the base coarse classifier trained by Google and used by default is not enough for our use case, as it only distinguishes five broad categories: Place, Fashion good, Home good, Plant, and Food. As with Image Labeling, you can use the API either with the base models or with more targeted custom TensorFlow Lite models. These can be bundled with the app or downloaded from the cloud using Firebase. The API is compatible with a selection of pre-trained models published on TensorFlow Hub, or with a custom model trained with TensorFlow, AutoML Vision Edge, or TensorFlow Lite Model Maker, provided it meets certain requirements.
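As a rough sketch of the Firebase route (the hosted model name is a placeholder, and the app additionally needs the ML Kit linkfirebase dependency), downloading a hosted model instead of bundling it could look like this:

```kotlin
import com.google.mlkit.common.model.CustomRemoteModel
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.linkfirebase.FirebaseModelSource

// "object_labeler" is a placeholder for the name the model is hosted under in Firebase
val remoteModel = CustomRemoteModel.Builder(
    FirebaseModelSource.Builder("object_labeler").build()
).build()

// Only download over Wi-Fi so we don't stall the interaction on a slow connection
val downloadConditions = DownloadConditions.Builder()
    .requireWifi()
    .build()

RemoteModelManager.getInstance()
    .download(remoteModel, downloadConditions)
    .addOnSuccessListener {
        // The remote model can now be passed to CustomObjectDetectorOptions.Builder(remoteModel)
    }
```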
Google released a family of image classification models called EfficientNet in May 2019, which achieved state-of-the-art accuracy with an order of magnitude fewer computations and parameters. It was later followed by EfficientNet-Lite, which runs on TensorFlow Lite and is designed for performance on mobile CPUs, GPUs, and the EdgeTPU. It brings the power of EfficientNet to edge devices and comes in five variants, from the low-latency, small-model option (EfficientNet-Lite0) to the high-accuracy option (EfficientNet-Lite4). The largest variant, integer-only quantized EfficientNet-Lite4, achieves 80.4% ImageNet top-1 accuracy. However, running this model on Pepper’s processor results in an inference time of around 15 seconds, which is far too long for any kind of interactive application. We therefore have to trade off accuracy and settle for the smaller, less accurate variants. Even the smallest Lite0 variant still has a higher latency (over one second) than earlier mobile models such as MobileNet V2, which makes the interaction less fluid. So although EfficientNet-Lite could also be a good candidate for the job, we use an object labeler based on MobileNet V2, optimized for TFLite and trained by Google with quantization-aware training, as our custom model for the Object Detector. It yields pretty good results in about 0.8 seconds and can be found on TensorFlow Hub as the “Google Mobile Object Labeler”.
Implementation
Here you can find the full code of the application we are building throughout this series.
When this demo is selected, either by voice or by touch, the activity replaces the menu with this fragment. Its layout includes a PreviewView showing the camera images currently being processed, on top of which the predicted information will be drawn, a home button to go back, and a button to repeat the rules. For our demo purposes, whenever this fragment is shown, the analyzer runs continuously in the background and updates the results on the screen as bounding boxes and text labels, even if no question has been asked. Once the view is created, Pepper briefly explains how it works.
With regard to the architecture, we have a fragment that uses data binding to access the views, a ViewModel to store the data, and an Analyzer helper class for the recognizer.
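As a rough sketch of that split (getMLKitRecognitionObjects and takeImage appear in the snippets below; the other names are assumptions), the ViewModel essentially exposes the latest camera frame and the latest detection results as LiveData:

```kotlin
import android.graphics.Bitmap
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import com.aldebaran.qi.sdk.QiContext
import com.google.mlkit.vision.objects.DetectedObject

class ObjectDetectionViewModel : ViewModel() {

    // Latest frame taken with Pepper's camera
    private val image = MutableLiveData<Bitmap>()

    // Latest list of detected objects produced by the analyzer
    private val mlKitRecognitionObjects = MutableLiveData<List<DetectedObject>>()

    fun getImage(): LiveData<Bitmap> = image

    fun getMLKitRecognitionObjects(): LiveData<List<DetectedObject>> = mlKitRecognitionObjects

    fun setMLKitRecognitionObjects(objects: List<DetectedObject>) =
        mlKitRecognitionObjects.postValue(objects)

    fun takeImage(qiContext: QiContext?) {
        // Take a picture with the QiSDK TakePicture action and post the resulting
        // bitmap to `image` (implementation omitted in this sketch)
    }
}
```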
How to build the model
In the onCreate method of our fragment, after inflating and initializing the views, we start by building the LocalModel we are going to use with the analyzer. So that it can be found, our custom TFLite model needs to be located in the assets folder of the project.
```kotlin
val localModel = LocalModel.Builder()
    .setAssetFilePath("lite-model_object_detection_mobile_object_labeler_v1_1")
    .build()
```
How to analyze the images with an object detector
Let’s start with our recognizer, which is quite simple. It takes the image, the model, and a lambda that we use as a callback, receiving the list of detected objects as its parameter. That way, we are informed asynchronously when the results are ready.
We create a CustomObjectDetector with our LocalModel and, in the options, enable the recognition of multiple objects and their classification. We select SINGLE_IMAGE_MODE, which analyzes each image independently.
Preparing the input for our detector is also straightforward: the camera image we get from the QiSDK take-picture action is a bitmap, and converting a bitmap into an InputImage takes a single line of code.
On completion, we return the detected objects, each with its labels sorted by confidence, to be processed in the fragment.
```kotlin
class ImageAnalyzer {

    fun analyzeImageWithMLKitObjectDetector(
        picture: Bitmap,
        localModel: LocalModel,
        completion: (List<DetectedObject>?) -> Unit
    ) {
        val image = InputImage.fromBitmap(picture, 0)

        val options = CustomObjectDetectorOptions.Builder(localModel)
            .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
            .enableMultipleObjects()
            .enableClassification()
            .build()
        val objectDetector = ObjectDetection.getClient(options)

        // Extract the recognition results
        objectDetector.process(image)
            .addOnSuccessListener { detectedObjects ->
                Timber.i("ImageAnalyzer found ${detectedObjects.size} objects")
                for (o in detectedObjects) {
                    Timber.i("ImageAnalyzer Object: ${detectedObjects.indexOf(o)}")
                    for (l in o.labels) {
                        Timber.i("ImageAnalyzer ${l.text}")
                    }
                }
                completion(
                    detectedObjects.onEach { detectedObject ->
                        detectedObject.labels.sortByDescending { it.confidence }
                    }.take(MAX_RESULT_DISPLAY).toList()
                )
            }
            .addOnFailureListener { e ->
                Timber.e("Error processing the image: $e")
                completion(null)
            }
    }
}
```
How to process and show the results
The process works asynchronously as follows: in the fragment, every time a new image is available, we set it as the source of the preview and start the analyzer, passing the local model and the image as arguments. We then observe the results of the analyzer in order to process them and present them to the user via the tablet and voice.
For that purpose, we set up the image observer and the analyzer observer and take the first image. The results from the analyzer come as a list of objects of type DetectedObject (part of the ML Kit vision package), each enclosing a bounding box, a tracking ID, and the labels for that object. Each label, in turn, has fields for its text, confidence, and index.
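The image observer itself is not shown in the snippet below, so here is a minimal sketch of how it could feed the analyzer (the analyzer instance and the setter name are assumptions; takeImage and getMLKitRecognitionObjects match the code that follows):

```kotlin
// Hand every new camera frame to the analyzer; the preview update is omitted here
viewModel.getImage().observe(viewLifecycleOwner) { bitmap ->
    imageAnalyzer.analyzeImageWithMLKitObjectDetector(bitmap, localModel) { detectedObjects ->
        detectedObjects?.let { viewModel.setMLKitRecognitionObjects(it) }
    }
}

// Kick the loop off with a first picture; every processed result then requests the next one
viewModel.takeImage(mainViewModel.qiContext)
```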
We save the labels to a chat variable so that they are available in the voice interaction. We also use the labels to update the results on the screen; since the models return their labels in English, these might need to be translated depending on the language of the robot. To that end, we use ML Kit’s Translation API where necessary.
The next thing to do is to calculate roughly in which area of the image each object is situated. We do that by simply dividing the image into six areas and determining in which of them the center of the object (which we know from its bounding box) lies. We then combine the labels, the areas, and the bounding boxes and show them on the screen whenever the confidence is above the recommended threshold of 0.35, drawing them over the preview in our resultsView with a helper class.
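The helper that maps each detection to one of the six areas is not shown in the code below, so here is a minimal sketch of what calculateObjectArea could look like (the field names on Recognition and the ordering of the Area enum are assumptions):

```kotlin
private fun calculateObjectArea(items: List<Recognition>): List<Recognition> {
    val frameWidth = binding.seeingResultsView.width.toFloat()
    val frameHeight = binding.seeingResultsView.height.toFloat()

    items.forEach { recognition ->
        // Center of the detected object, taken from its bounding box
        val centerX = recognition.boundingBox.centerX()
        val centerY = recognition.boundingBox.centerY()

        // 3 columns x 2 rows -> grid index 0..5, assuming Area declares ONE..SIX first
        val column = (centerX / (frameWidth / 3)).toInt().coerceIn(0, 2)
        val row = (centerY / (frameHeight / 2)).toInt().coerceIn(0, 1)
        recognition.area = Area.values()[row * 3 + column]
    }
    return items
}
```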
```kotlin
viewModel.getMLKitRecognitionObjects().observe(viewLifecycleOwner) { objects ->
    // Prepare the chat variable including all results, translating if necessary
    var labelsText = ""
    objects.forEach {
        if (it.labels.size > 0 && it.labels[0].confidence > MIN_CONFIDENCE) {
            if (labelsText.isNotBlank()) labelsText += ", "
            labelsText += " ${it.labels[0].text}"
        }
    }
    when (mainViewModel.language) {
        Language.ENGLISH -> updateRecognizedTextChatVariable(labelsText)
        else -> mainViewModel.translate(
            Language.ENGLISH,
            mainViewModel.language,
            labelsText
        ).addOnSuccessListener {
            updateRecognizedTextChatVariable(it)
        }
    }

    // Create list of objects to be shown over the preview
    try {
        requireActivity().runOnUiThread {
            if (binding.seeingResultsView.height > 0 && binding.seeingResultsView.width > 0) {
                items = mutableListOf()
                objects.forEach {
                    var firstLabel = ""
                    if (it.labels.size <= 0 || it.labels[0].confidence < MIN_CONFIDENCE) return@forEach
                    // Translate labels if the robot language is other than english and update the text
                    when (mainViewModel.language) {
                        Language.ENGLISH -> firstLabel = it.labels[0].text
                        else -> mainViewModel.translate(
                            Language.ENGLISH,
                            mainViewModel.language,
                            it.labels[0].text
                        ).addOnSuccessListener {
                            firstLabel = it
                        }
                    }
                    // Add the label, the confidence and the bounding box
                    items.add(
                        Recognition(
                            firstLabel,
                            it.labels[0].confidence.roundConfidence(),
                            it.boundingBox.toRectF()
                        )
                    )
                }
                // Calculate in which area the objects are and show the results
                showResults(calculateObjectArea(items))
            }
        }
    } catch (e: Exception) {
        Timber.w("Could not show recognition results due to $e")
    }

    // Take a new image
    this.viewModel.takeImage(mainViewModel.qiContext)
}
```
The results, including bounding boxes, labels, and confidence, drawn over the preview look something like this:
If you notice that the bounding boxes do not match the objects 100% and wonder whether that is an issue with the object detector: it is not. It is caused by Pepper’s constant, lively movement. Do not forget that it is a social robot imitating natural human motion, so sometimes it moves a little too fast before the image is updated with new content, which causes these small offsets.
Voice interaction: “Pepper, what do you see?“
Now to the voice interaction part: when asked, we want Pepper to be able to answer the question of what is around it or which objects it can see. To make this robust, we include many different ways of phrasing the question. Whenever one of them is heard, Pepper checks the chat variable holding the latest results and, if it contains something, says its content. If nothing was detected, it adapts its answer accordingly.
```
concept:(askedwhatdoyousee) ["what do you see" "what can you see" "what is this" "what's [that this]" "what's ["in the room" "around you"]" "do you see [something anything]" "can you see [something anything]" "["what" "what [sort kind] of"] object is [this that]" "do you know what [this that] is" "do you recognize this" "tell me what [this that] is"]

u:(~askedwhatdoyousee) %recognizedInImageBookmark I see \pau=500\ ["^exist(recognizedInImage) $recognizedInImage" "nothing"]
```
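On the Kotlin side, updateRecognizedTextChatVariable writes the latest labels into the recognizedInImage variable used in this topic. A sketch, assuming the QiChatbot instance is held by the main ViewModel, could be:

```kotlin
private fun updateRecognizedTextChatVariable(text: String) {
    // QiChatVariable is part of the QiSDK; the variable name matches the topic above
    mainViewModel.qiChatbot
        ?.variable("recognizedInImage")
        ?.async()
        ?.setValue(text)
}
```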
Voice interaction: “Pepper, where is it?“
In this demo, we also want Pepper to point to an object when asked about its location. As the SDK does not currently provide a method to point in a specific direction, we simplify the task and approximate it by defining areas 1 to 6. We determine the area by dividing the image into six parts with a 3×2 grid and calculating to which of them the center of each object belongs. For each of those areas, we programmed a short animation using the Animation Editor tool included in the Pepper SDK plugin. The editor lets you define a series of movements of the robot’s movable parts and their positions over a period of time. In our case, we want Pepper to point with either the right or the left arm towards the wanted area.
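Running such an animation from the app boils down to the QiSDK AnimationBuilder/AnimateBuilder pair. The project wraps this in a doAnimationAsync helper; here is a sketch of what it could look like (the signature mirrors the call further down, and the Context parameter is unused in this sketch):

```kotlin
import android.content.Context
import com.aldebaran.qi.sdk.QiContext
import com.aldebaran.qi.sdk.builder.AnimateBuilder
import com.aldebaran.qi.sdk.builder.AnimationBuilder

fun doAnimationAsync(context: Context, qiContext: QiContext?, animationRes: Int) {
    qiContext ?: return
    AnimationBuilder.with(qiContext)
        .withResources(animationRes)          // .qianim file created with the Animation Editor
        .buildAsync()
        .andThenCompose { animation ->
            AnimateBuilder.with(qiContext)
                .withAnimation(animation)
                .buildAsync()
        }
        .andThenCompose { animate ->
            animate.async().run()             // play the pointing gesture without blocking
        }
}
```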
Although the recognition runs continuously, the pointing needs to be triggered by the user asking where an object can be found:
```
concept:(whereis) ["where is the _*" "where do you see the _*" "do you know where the _* ["is" "is at" "is located"]" "where do i find the _*" "where is there a _*"]

u:(~whereis) $objectToLocate=$1 %askedWhereItIsBookmark $1 is there

u:(^empty) %notFoundBookmark Oops, something happened, I can't show you where that is because i can not run the animation
```
Once again, we use a bookmark to connect with the logic and reach the method in the fragment from its listener in the Activity. Using the variable, we search the current results for the object with the mentioned name and get its area.
```kotlin
fun locateObject() {
    var name = mainViewModel.getQiChatVariable(getString(R.string.objectToLocate))

    // Translate if necessary
    if (mainViewModel.language != Language.ENGLISH) {
        mainViewModel.translate(
            Language.ENGLISH,
            mainViewModel.language,
            name
        ).addOnSuccessListener {
            name = it
        }
    }

    // Get the area of the asked object finding the object in the list
    val area = items.find { it.label == name.lowercase() }?.area ?: Area.NONE
    Timber.d("Asked to locate object: $name which is in area: $area")
    doAnimationForTheArea(area)
}
```
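On the Activity side, the wiring that triggers locateObject when the bookmark is reached could look roughly like this (listener placement and names are assumptions; addOnBookmarkReachedListener itself is part of the QiSDK QiChatbot):

```kotlin
// Listen for bookmarks reached in the QiChat topic and forward them to the fragment
qiChatbot.addOnBookmarkReachedListener { bookmark ->
    if (bookmark.name == "askedWhereItIsBookmark") {
        runOnUiThread { objectDetectionFragment.locateObject() }
    }
}
```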
The only thing that’s left is playing the animation to point in the direction:
```kotlin
private fun doAnimationForTheArea(area: Area) {
    val animation = when (area) {
        Area.ONE -> R.raw.raise_left_hand_b006
        Area.TWO -> R.raw.raise_right_hand_b007
        Area.THREE -> R.raw.raise_right_hand_b006
        Area.FOUR -> R.raw.raise_left_hand_a003
        Area.FIVE -> R.raw.raise_both_hands_b001
        Area.SIX -> R.raw.raise_right_hand_a001
        Area.NONE -> null
    }
    animation?.let {
        Timber.d("Doing animation for area: $area")
        mainViewModel.pepperActions.doAnimationAsync(
            requireContext(),
            mainViewModel.qiContext,
            animation
        )
    } ?: run {
        mainViewModel.goToQiChatBookmark(getString(R.string.notFoundBookmark))
    }
}
```
Conclusion and next steps
That is it! That is how we can make a robot point at an object identified by object detection. Building on this, the pointing could be made much more sophisticated and precise. Another possible use of object detection with the robot is tracking an object: one could make Pepper follow a presented object with its head, or even with the whole body by walking towards it, similar to how it follows humans. That would bring us one step closer to Pepper fetching a coffee, the wish of many 🙂
I hope you enjoyed this demo! Check out the other articles of this series, where we’re going to see more use cases and how to implement them in our ML-Kit-powered Android app for the Pepper robot!
- Introduction
- Demo with ML Kit’s Object Detection API (this article)