Hacky Hour 18, 19, & 21: Identify a Package at the Doorstep
What is the Model Training Toolkit?
alwaysAI’s Model Training Toolkit is a full suite of tools designed to help developers create a computer vision model tailored to the needs of their application. For this Hacky Hour, Lila Mullany and Todd Gleed generated a dataset of packages at the doorstep, trained a model using the Model Training Toolkit, and built an anti-theft application that can notify users when a package has been taken from their doorstep.
The focus of this three-part Hacky Hour, Identify a Package at the Doorstep, was to demonstrate how users can quickly generate a dataset, train a model using alwaysAI, and deploy their application to production using Eyecloud's OpenNCC. The application notifies homeowners 1) that there is a package at their doorstep, and 2) whether the package was removed by a person.
You can access the GitHub repository for the applications demonstrated at Hacky Hour below:
Training the Model
To train a model that can identify packages at the doorstep, Todd first set up his data generation environment. In the Hacky Hour, he identified the environmental conditions a developer should consider when training a computer vision model; for this specific model, Todd leveraged data from his production environment. For details about camera mounting position, lighting, and optics, and to see the complete process of how Todd trained the model, watch the Model Training Hacky Hour – Identify a Package below.
Building the Application
The second part of the Hacky Hour was a demonstration of the CV application built using the custom-trained model. The application serves as an anti-theft service that notifies homeowners of the presence of a package and alerts them if someone picks it up. Lila Mullany demonstrated how to build such an application; see Identify a Package Part 2 for the full demo.
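To illustrate the idea behind such an application, here is a minimal sketch of the kind of frame-by-frame state logic an anti-theft app might use. The function names and the per-frame label lists are illustrative assumptions, not alwaysAI's actual implementation; in a real edgeIQ app the labels would come from the custom model's detection results.

```python
# Minimal sketch of anti-theft logic: raise an alert when a package
# that was previously detected disappears while a person is in frame.
# All names here are illustrative, not alwaysAI's actual code.

def update_state(state, labels):
    """Update tracking state from one frame's detected class labels.

    state: dict with 'package_seen' and 'alert' flags
    labels: labels detected in the current frame, e.g. ["person", "package"]
    """
    package_now = "package" in labels
    person_now = "person" in labels

    if package_now:
        state["package_seen"] = True
    elif state["package_seen"] and person_now:
        # Package was present, is now gone, and a person is in frame:
        # treat this as a possible pickup/theft event.
        state["alert"] = True
        state["package_seen"] = False
    return state

# Simulate three frames: delivery, person approaches, package gone.
state = {"package_seen": False, "alert": False}
for labels in [["package"], ["package", "person"], ["person"]]:
    state = update_state(state, labels)
```

In a real deployment, the alert flag would trigger the user notification described above (for example, via SMS or a push service).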
Deploying to Production
The third part of the Hacky Hour was a demonstration of how to convert a CV model and deploy it to Eyecloud's OpenNCC, a production-grade camera integrated with the Myriad accelerator. Together, alwaysAI and EyeCloud.AI expedite the process of developing and deploying computer vision applications for production. Before an application can be deployed to the OpenNCC, its model must first be compiled for the Myriad accelerator. To do this, Taiga Ishida, Software Engineer at alwaysAI, ran the conversion command and output the model in the Eyecloud format. To view the complete conversion process, watch Identify a Package Part 3 for the full demo.
QUESTION: How do I create a dataset to identify packages at night?
ANSWER (Todd): Developers can use an IR camera or the Raspberry Pi NoIR Camera to create the dataset, then use the same device in the production environment at night.
QUESTION: Is it possible to identify names and characters on packages using Optical Character Recognition (OCR)?
ANSWER (Lila): Yes and no. There are OCR libraries in Python, and we've shown that you can pass a portion of an image that contains text to one of these libraries and get OCR results (for instance, the license plate model detects license plates, and you can then pass the results to an OCR library method). The idea would be to train a model to look for labels on boxes, then use edgeiq to cut out that bounding box from the original image and pass it to an OCR library function; however, we (alwaysAI) do not have OCR built in.
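The crop-then-OCR idea above can be sketched as follows. The (x_min, y_min, x_max, y_max) box format and the `crop_box` helper are assumptions for illustration, not edgeiq's actual API; the image is modeled as a plain nested list so the example stays self-contained.

```python
# Sketch of the pipeline Lila describes: cut the detected bounding box
# out of the frame, then hand that region to an OCR library.
# Box format and helper name are illustrative assumptions.

def crop_box(image, box):
    """Crop a region from an image given as a nested list of pixel rows."""
    x_min, y_min, x_max, y_max = box
    return [row[x_min:x_max] for row in image[y_min:y_max]]

# 4x6 dummy "image"; pretend the shipping label spans rows 1-2, cols 2-4.
image = [[0] * 6 for _ in range(4)]
label_region = crop_box(image, (2, 1, 5, 3))

# In a real app you would now convert the crop to a PIL image and pass
# it to an OCR library, e.g.: text = pytesseract.image_to_string(crop)
```

With a real frame, the box coordinates would come from the label-detection model's results rather than being hard-coded.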
QUESTION: When you trained your model, how many classes did it have?
ANSWER (Todd): Just one, for “package.” I used classic object detection, and drew bounding boxes around objects (packages).
QUESTION: There’s a portion where the model misidentifies a person as a package. Why is that?
ANSWER (Todd): It is a false positive, but the confidence level of the bounding box is fairly low (56%). You can identify packages with greater accuracy by raising the confidence threshold in your app.py file.
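The effect of that threshold tweak can be sketched as a simple filter. The prediction dictionaries below are illustrative, not edgeiq's exact types; in an edgeIQ app the threshold is typically passed to the detection call itself rather than applied afterward.

```python
# Sketch of filtering detections by confidence: raising the threshold
# drops low-confidence false positives like the 56% "package" detection.
# The prediction structure here is an illustrative assumption.

def filter_predictions(predictions, confidence_level=0.7):
    """Keep only predictions at or above the confidence threshold."""
    return [p for p in predictions if p["confidence"] >= confidence_level]

predictions = [
    {"label": "package", "confidence": 0.91},  # a real package
    {"label": "package", "confidence": 0.56},  # the false positive
]
kept = filter_predictions(predictions, confidence_level=0.7)
```

Raising the threshold trades recall for precision: very high values may also drop genuine packages seen at odd angles or in poor light, so it is worth testing against the kinds of false positives seen in production.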
QUESTION: What is the OpenVINO version for the default edgeIQ container?
ANSWER (Taiga): In our environment, the model is compiled with OpenVINO version 1.1.
See below for the full videos of the Model Training Hacky Hours:
Identify a Package at the Doorstep Part 1:
Identify a Package at the Doorstep Part 2:
Identify a Package at the Doorstep Part 3:
Join us every Thursday at 10:30 AM PST for weekly Hacky Hour! Whether you are new to the community or an experienced user of alwaysAI, you are welcome to join, ask questions, and provide the community with information about what you're working on. Register here.