Difficulty: Beginner
Estimated Time: 5 minutes

Welcome

In this scenario, you will learn how to quickly deploy and query an object detection application built on YOLO, a pre-trained TensorFlow model, using the Skymind Intelligence Layer (SKIL).

Finished

You've finished yet another SKIL example, deploying and querying an advanced object detection application. All you needed to do was:

  • Download the model
  • Define a SKIL workspace and experiment
  • Register your model in SKIL and deploy it as a service
  • Use the SKIL service method annotate_image to find objects in an input image (recapped in the sketch below)
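
If you want to revisit those steps in code, here is a minimal sketch using the skil-python client. The class names (Skil, WorkSpace, Experiment, Model, Deployment) and the annotate_image call follow this scenario, but the exact constructor arguments and return values may differ between SKIL versions, so treat this as an outline rather than a drop-in script.

# Minimal sketch of the workflow above; check your SKIL version's
# skil-python documentation for exact signatures.
from skil import Skil, WorkSpace, Experiment, Model, Deployment

skil_server = Skil()                                # connect to the running SKIL server
work_space = WorkSpace(skil_server)                 # workspace to group experiments
experiment = Experiment(work_space)                 # experiment to track this model
model = Model('yolo_v2.pb', experiment=experiment)  # register the downloaded TensorFlow graph

deployment = Deployment(skil_server, 'yolo')        # deployment that will host the service
service = model.deploy(deployment)                  # spin up the model server endpoint

# Ask the deployed service to find objects in an image; the exact
# return value (annotated image vs. detection list) is an assumption here.
annotated = service.annotate_image('input.jpg')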

Deploy an Object Detection App With SKIL

Step 1 of 4

Downloading a pretrained object detection model

We've pretrained and uploaded a state-of-the-art object detection model called "You Only Look Once", or YOLO for short, so that you can easily download it into this scenario and deploy it from here. To do so, just run:

curl -o ./yolo_v2.pb https://github.com/deeplearning4j/dl4j-test-resources/raw/4fbca7f8286b7e0856903828193f50c08ceb1cee/src/main/resources/tf_graphs/examples/yolov2_608x608/frozen_model.pb

YOLO detects objects in images by giving you bounding boxes around them, each with a confidence score indicating how likely it is that the box actually contains the detected object. The version of YOLO we provide here is trained on the so-called COCO dataset, which covers 80 real-world categories such as person, dog, or cat. For instance, if you take the following image, you can expect this model to find all the people, cars, bikes, and umbrellas in it.

YOLO input
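
To make that output format concrete, here is a small, purely illustrative Python sketch of how YOLO-style detections are often represented and filtered by confidence. The dictionary layout and the example values are hypothetical and are not the exact structure returned by the SKIL service.

# Hypothetical detection records: one bounding box per detected object,
# with a COCO class label and a confidence score between 0 and 1.
detections = [
    {"label": "person",   "confidence": 0.92, "box": (112, 64, 310, 540)},   # (x_min, y_min, x_max, y_max)
    {"label": "umbrella", "confidence": 0.81, "box": (250, 20, 470, 180)},
    {"label": "car",      "confidence": 0.34, "box": (600, 300, 720, 420)},  # low-confidence guess
]

# Keep only the boxes the model is reasonably sure about.
CONFIDENCE_THRESHOLD = 0.5
confident = [d for d in detections if d["confidence"] >= CONFIDENCE_THRESHOLD]

for d in confident:
    print(f'{d["label"]}: {d["confidence"]:.0%} at {d["box"]}')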

In the next step, you'll see how to quickly deploy the model you just downloaded and get back a version of this input image with labeled bounding boxes. Go ahead and download the image:

curl -o input.jpg https://raw.githubusercontent.com/SkymindIO/skil-python/cc99a0d9bb67d63f21233fad264a0fa5c1eae4c9/examples/tensorflow-yolo/input.jpg
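
If you like, you can sanity-check both downloads before moving on. The short sketch below assumes only the Python standard library for the file check, plus a local TensorFlow installation for the optional graph check; neither step is required by the scenario.

import os

# Confirm both downloads are present and not empty.
for path in ('yolo_v2.pb', 'input.jpg'):
    size_mb = os.path.getsize(path) / (1024 * 1024)
    print(f'{path}: {size_mb:.1f} MB')

# Optional: parse the frozen TensorFlow graph to confirm it is intact.
# Requires TensorFlow; on TensorFlow 1.x use tf.GraphDef() instead.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open('yolo_v2.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print(f'frozen graph contains {len(graph_def.node)} nodes')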