Using Microsoft AI Builder for Object Detection

March 4, 2020


Hi, I’m George Casey, here at the RSM Technology Experience Center in Denver, talking to you today about using Microsoft’s AI Builder for object detection. Object detection uses AI to identify objects in a photo and return that data to the user. I’m going to use the detect function with the camera on my phone to take a quick picture of these cans and let the model detect what’s there. This is a common scenario for inventory: I have existing stock, and I want to use a picture to help me perform a physical inventory. In this case, the model has correctly identified the cans as one can of Sprite, one can of Coke, and one can of Dr Pepper. It even gives me the ability to edit the actual inventory values, at least the values coming back from the system, and save them.
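To make the demo concrete, here is a minimal sketch of how detection results could be rolled up into inventory counts. The shape of the `detections` list (label, confidence, bounding box) is an assumption for illustration, not the actual AI Builder response format.

```python
from collections import Counter

# Hypothetical detection results: each prediction carries a label, a
# confidence score, and a normalized bounding box. This structure is an
# assumption, not the real AI Builder output schema.
detections = [
    {"label": "Sprite", "confidence": 0.97, "box": (0.10, 0.20, 0.30, 0.80)},
    {"label": "Coca-Cola", "confidence": 0.95, "box": (0.35, 0.21, 0.55, 0.81)},
    {"label": "Dr Pepper", "confidence": 0.94, "box": (0.60, 0.19, 0.80, 0.79)},
]

def count_inventory(detections, threshold=0.5):
    """Tally detected objects into inventory counts, ignoring low-confidence hits."""
    return Counter(d["label"] for d in detections if d["confidence"] >= threshold)

inventory = count_inventory(detections)
print(dict(inventory))
```

Filtering by a confidence threshold before counting keeps marginal detections from inflating the physical-inventory numbers.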

So think about how quickly I could do a physical inventory just using my camera. Now let’s take a look at how we set up this model and trained it to recognize these images. Here we are in AI Builder, and the first step is to choose the objects for the model to detect, giving it a scope of what it’s looking for. In this case, we’ve selected three objects: Coca-Cola, Sprite, and Dr Pepper. When we click Next, we have the opportunity to upload example images and tag them, so we can show what these objects look like from multiple angles and sides and in different lighting conditions. I’ll click Next, and now I can tag the objects in my images. Looking at this one, I can see it’s been tagged as a Dr Pepper.

You’ll note that the software advises us that we must tag at least 15 images for each object. But the more we tag, and the more angles and lighting conditions we cover, the better the model will be at finding a similar situation when it takes a picture in real life. Once we’re done tagging all of our images, we run that through. With 21 Sprite, 20 Dr Pepper, and 24 Coca-Cola images tagged, we click Next and train the model. At this point the model is looking for identifying characteristics, whether it’s color, logo, orientation of text, or what have you, to provide that guidance back to the overall model.
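The 15-images-per-object minimum mentioned above is easy to express as a readiness check. The counts below are the ones from this example; the helper function itself is just an illustrative sketch.

```python
# Tagged-image counts from the walkthrough; AI Builder requires at least
# 15 tagged images per object before training.
tagged_images = {"Sprite": 21, "Dr Pepper": 20, "Coca-Cola": 24}

MIN_IMAGES_PER_OBJECT = 15

def objects_needing_more_images(counts, minimum=MIN_IMAGES_PER_OBJECT):
    """Return the objects that still need more tagged images (empty means ready)."""
    return [name for name, n in counts.items() if n < minimum]

missing = objects_needing_more_images(tagged_images)
print("Ready to train" if not missing else f"Need more images for: {missing}")
```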

In this case, we can see our performance score is 94%, which means that 94% of the time the model correctly assigns the right object to the right image. We can then deploy the model, in our case through a Power App that simply uses the phone’s camera to take the image and then returns the data about the objects it found. So again, the key value and opportunity here is that there’s no code involved in creating this model; it can be built very quickly, and you get a feedback loop where the model gets better and better over time as you retrain it with more tagged examples.
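For intuition on what a 94% performance score means, here is a sketch of accuracy as the share of test images whose predicted label matches the true label. The tiny evaluation set below is made up for illustration.

```python
def accuracy(predicted, actual):
    """Fraction of images where the predicted label matches the true label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Tiny illustrative evaluation set (made-up labels, not real model output).
actual    = ["Sprite", "Coca-Cola", "Dr Pepper", "Sprite", "Coca-Cola"]
predicted = ["Sprite", "Coca-Cola", "Dr Pepper", "Coca-Cola", "Coca-Cola"]

print(f"{accuracy(predicted, actual):.0%}")  # 4 of 5 correct -> 80%
```

A 94% score on the real model means 94 of every 100 evaluation images were assigned the correct object.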

Ready to take the next step? Contact us, or reach us by telephone: 800.274.3978.

Collaborative leader, data scientist, and problem solver aligning clients with technology and process. Specialties include predictive analytics, marketing automation, CRM, and ERP.
