IBM’s Platform as a Service (PaaS), IBM Cloud, makes it super simple to build something like an image classifier. As I’m new to all of this, just the high-level concept and usage is fun enough to start playing around with. It’s almost too easy this way, though; perhaps I’ll reuse the same images in a future machine learning exercise where a bit more actual coding is done. This is my first attempt at building a service on the IBM Cloud platform.

The project will be image classification via machine learning. To accomplish this, we’ll need to define our classes and provide a set of images that fall within each class definition. We will then use these images to train IBM Watson’s Visual Recognition service. Once it’s completed its training, we’ll send it images and see what classification it replies with!

So for this example, I’m going to use the car I drive: a Subaru BRZ, also known as a Scion FRS or Toyota 86. I’ve written more about this car on my BRZ FRS 86 site. We’re going to take a bunch of pictures and classify them; I’ve made three classes: stock, race, and rice.

The more pictures the better, and the free IBM Cloud Lite account allows us up to 5,000! Take each class and put its pictures into a zip: stock.zip for all of the pictures of stock-looking cars, and so on. Head on over to the IBM Cloud console and it’s pretty easy to build the classifier. Once we’ve added the service to the account, we can easily find it under existing services.
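If you have a lot of pictures, bundling each class by hand gets tedious. Here’s a minimal sketch of that zipping step, assuming your pictures are already sorted into one folder per class (the folder and file names here are hypothetical):

```python
import zipfile
from pathlib import Path

def zip_class_images(folder: str, zip_name: str) -> int:
    """Bundle every jpg/png in `folder` into `zip_name`; returns image count."""
    count = 0
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(folder).iterdir()):
            if img.suffix.lower() in (".jpg", ".jpeg", ".png"):
                zf.write(img, arcname=img.name)
                count += 1
    return count

# e.g. run once per class: stock/, race/, rice/ (hypothetical folders)
# zip_class_images("stock", "stock.zip")
```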

The hyperlink to our named service takes us to an overview as well as an area with our credentials (API key). The project hyperlink takes us to our service(s) for that project type, which provides the credentials as well as a link to the API Reference for the Visual Recognition service. Finally, the launch tool button takes us to the Visual Recognition tool site.

I didn’t feel like destroying and remaking mine, but the process is pretty straightforward. On the Lite account you are able to create one classifier; you can create classes within it and upload your .zip files to train the service. You can also upload a set of negative examples to rule out false positives, although I’m not sure what those would be in our case. Not-cars or something. This doesn’t need to be done graphically; it could be done with programming languages that interact with the service as well. In theory you could have some method of collecting and categorizing pics and then constantly keep retraining the model based on that input, all done programmatically.
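As a rough sketch of what that programmatic route could look like: the v3 API reference describes a classifiers endpoint that accepts one zip of positive examples per class (multipart fields named `<class>_positive_examples`) plus an optional `negative_examples` zip. The classifier name and file paths below are made up for illustration; I haven’t rebuilt mine this way.

```python
import requests  # third-party HTTP client (pip install requests)

BASE = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3"

def build_training_files(class_zips: dict, negative_zip: str = None) -> dict:
    """Map each class's zip to the multipart field name the v3 API expects,
    e.g. {"stock": "stock.zip"} -> {"stock_positive_examples": <file>}."""
    files = {f"{cls}_positive_examples": open(path, "rb")
             for cls, path in class_zips.items()}
    if negative_zip:
        files["negative_examples"] = open(negative_zip, "rb")
    return files

def create_classifier(api_key: str, name: str, class_zips: dict,
                      negative_zip: str = None) -> dict:
    """POST the training zips; the response includes the new classifier_id."""
    resp = requests.post(
        f"{BASE}/classifiers",
        params={"api_key": api_key, "version": "2016-05-20"},
        data={"name": name},
        files=build_training_files(class_zips, negative_zip),
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage, once the zips from earlier exist:
# create_classifier(MY_API_KEY, "86classifier",
#                   {"stock": "stock.zip", "race": "race.zip", "rice": "rice.zip"})
```

A retraining loop would just be this call repeated (the API also has an update endpoint for existing classifiers) fed by whatever collection pipeline you rig up.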

Once the pictures are uploaded, Watson uses them to train the classifier. This did not take very long for me with around 140 images total. The API reference is pretty good, and I definitely needed it when my first attempts to call my classifier produced errors or unexpected results. Let’s break the call down here, using the pic at this URL:

https://tinyurl.com/y8bajrsg

URL to call: https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?threshold=0&api_key=163351bc58d25e0581b112d25c986f6247a57fe3&version=2016-05-20&classifier_ids=86xclassifier_802697860&url=https://tinyurl.com/y8bajrsg

threshold – the minimum confidence needed to get a result; the default is 0.5. With 0 we’ll see the score for every class.
api_key – our service credentials.
version – the release date of the API version we want to use.
classifier_ids – in this case, telling it to use only my classifier. We could also leave this out to use Watson’s built-in general classifier, which would return generic labels like car and wheels. But we’re just looking for rice or nice here!
url – finally, a URL to an image. tinyurl is fine; the service will follow redirects to the final image, at least most of the time. There were times when it would not work, namely some Facebook image URLs with multiple parameters on the end that appeared necessary to display their hosted images.

The result, at least at the time of writing with my service still up, is JSON output:

"classes": [
  {
    "class": "race",
    "score": 0.0434696
  },
  {
    "class": "rice",
    "score": 0.541787
  },
  {
    "class": "stock",
    "score": 0.0623504
  }
]

So, 54% rice, which would have crossed the default threshold as well. Not bad! You can try it out yourself with any publicly viewable URL to a jpg or png of an 86.
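Picking the winner out of that JSON is a one-liner once it’s parsed. This snippet hard-codes the scores from the response above just to show the shape:

```python
import json

# The "classes" fragment from the response above, wrapped as an object
response_fragment = """
{"classes": [
  {"class": "race",  "score": 0.0434696},
  {"class": "rice",  "score": 0.541787},
  {"class": "stock", "score": 0.0623504}
]}
"""

def top_class(fragment: str, threshold: float = 0.5):
    """Return the best-scoring class, or None if nothing clears `threshold`."""
    classes = json.loads(fragment)["classes"]
    best = max(classes, key=lambda c: c["score"])
    return best if best["score"] >= threshold else None

print(top_class(response_fragment))  # rice, at ~54% confidence
```

Note that with the default threshold of 0.5, rice is also the only class that would have appeared in the response at all.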
