Microsoft’s Project Oxford was released almost a year ago at the BUILD 2015 developer conference. It gives developers access to advanced machine learning APIs for building apps that can perform functions such as recognizing faces, interpreting natural language and automating tasks.
Developers build the interface into an app or web-based service, and Microsoft does the heavy lifting on the back end using Azure services.
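To make that division of labor concrete, here is a minimal sketch of what an Oxford-style request looks like from the developer's side: the app just packages an image with a subscription-key header and sends it off, while all the recognition work happens in Azure. The endpoint URL and key below are illustrative placeholders, not the actual Fetch! API.

```python
# Sketch of an Oxford-style image-recognition request. The endpoint and
# key are hypothetical; the pattern (key header + raw image body) is the
# typical shape of these REST calls.
API_ENDPOINT = "https://api.example.com/vision/v1.0/analyze"  # hypothetical
SUBSCRIPTION_KEY = "your-subscription-key"  # hypothetical

def build_request(image_bytes: bytes) -> dict:
    """Assemble the pieces of the HTTP POST an app would send:
    a subscription-key header plus the raw image as the body."""
    return {
        "url": API_ENDPOINT,
        "headers": {
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        "data": image_bytes,
    }

# The app only builds and sends this; the classification itself
# runs on Microsoft's servers.
request = build_request(b"\x89PNG...")  # placeholder image bytes
print(request["headers"]["Content-Type"])
```

The point of the sketch is how thin the client side is: everything interesting happens after the upload.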
Over the last year we have seen real-time demos of Project Oxford in action on sites such as Fetch!
The premise is simple. Upload an image of a dog and the machine learning back end will identify its breed. The main Fetch! website offers several sample pictures you can use to test the service.
However, it would seem that many on social media are not using the service as intended, to identify their dog's breed, but instead to see which breed they themselves might match.
The folks at Microsoft Garage had an inkling that we humans would use the service in this way and built the system to work with pictures of people:
“…if you take a picture of a person, it’ll kick into its hidden fun mode. And in a playful way, it’ll communicate to you not only what type of dog it thinks you are, but also why. It’s fun to see if the app knows it’s not a dog. A lot of the time, it’ll tell you what that image is. When there’s not a dog, you still want to use it.”
Of course, I could not resist having some fun with this and used a few images of myself from over the years.
Funnily enough, some of those characteristics fit my personality!
You can see all of the apps/services that have been built using Project Oxford in the Project Oxford App Gallery.