Hacky Hour 17: Build a Virtual Green Screen with Semantic Segmentation
What is Semantic Segmentation?
Semantic segmentation is a computer vision technique in which detection is performed pixel by pixel, rather than with bounding boxes. In semantic segmentation, each pixel in an image is assigned a class ID according to the object of interest it belongs to. While most object detection networks locate objects and ignore everything else, semantic segmentation models also label the background of an input image or video. Applications of semantic segmentation include self-driving cars, delivery robots, medical X-ray analysis, and any use case that requires fine-grained, pixel-level detection of objects. For this week’s Hacky Hour, Lila demonstrated how you can build a virtual green screen using semantic segmentation.
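To make the pixel-by-pixel idea concrete, here is a minimal NumPy sketch of what a segmentation model's output looks like. The tiny class map and the class IDs (0 for background, 15 for person, following the Pascal VOC convention) are illustrative assumptions, not output from a real model:

```python
import numpy as np

# A segmentation model returns a "class map": an array with the same
# height and width as the input image, where each entry is the class ID
# of the object that pixel belongs to (0 = background, 15 = person).
class_map = np.array([
    [0,  0,  0,  0],
    [0, 15, 15,  0],
    [0, 15, 15,  0],
    [0,  0,  0,  0],
])

# Unlike a bounding-box detector, the background itself is labeled,
# so selecting person (or background) pixels is a simple comparison:
person_pixels = (class_map == 15)
background_pixels = (class_map == 0)
print(person_pixels.sum())      # 4 pixels belong to the person
print(background_pixels.sum())  # 12 pixels are background
```

This per-pixel mask is exactly what makes the virtual green screen possible: any pixel not labeled as a person can be replaced or blurred.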
The focus of this Hacky Hour, Build a Virtual Green Screen using Semantic Segmentation, was to show how users can build their own green screens using the alwaysAI platform. You may have seen this feature in video conferencing apps, or in ‘behind-the-scenes’ footage from movies. This year a large number of people are working from home, learning from home, and using video chat to connect with friends and co-workers instead of meeting in person. Adding a green screen feature to video streaming applications gives users more privacy, or simply a more interesting backdrop, while still being visually present for online meetings, parties, or game nights.
This virtual green screen app uses semantic segmentation to separate a person from the background noise in a video stream, and then either replaces the background with an image or blurs it out. The app builds on a methodology for segmenting out areas of interest, which can be found here. It also demonstrates how to separate your app configuration into a standalone JSON file; for more details on this aspect of the app, please see the original blog.
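The core compositing step described above can be sketched in plain NumPy. This is an illustrative function, not the app's actual code: in the real app, the person mask would come from the alwaysAI segmentation results, and the blurred-background mode would first blur a copy of the frame and pass it in as the replacement image:

```python
import numpy as np

def apply_virtual_background(frame, person_mask, background):
    """Composite the segmented person over a replacement background.

    frame:       H x W x 3 image from the video stream
    person_mask: H x W boolean array, True where a person was segmented
                 (e.g. derived from the model's class map)
    background:  H x W x 3 replacement image -- a photo, a slide, or a
                 blurred copy of the frame for "blur background" mode
    """
    mask3 = person_mask[:, :, None]  # broadcast the 2-D mask across color channels
    return np.where(mask3, frame, background)

# Tiny synthetic example: a 4x4 "frame" with a person in the center,
# composited over a solid green background.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
person_mask = np.zeros((4, 4), dtype=bool)
person_mask[1:3, 1:3] = True
background = np.zeros((4, 4, 3), dtype=np.uint8)
background[:, :, 1] = 255  # pure green, BGR or RGB either way
out = apply_virtual_background(frame, person_mask, background)
```

Every pixel inside the mask keeps its original value, while everything else takes on the background, which is the whole trick behind the virtual green screen.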
Click here for the GitHub repository of the Virtual Green Screen project.
QUESTION: Is it possible to use a variable background? For instance, can you use a PowerPoint presentation as your background?
ANSWER (Lila): Yes! In the demo I just used a single image, but you can export your PowerPoint slides as JPEGs and add them to the image folder.
QUESTION: Is there a directory for all commands available for alwaysAI?
ANSWER (Lila): Yes, you can write “aai” into the CLI and this will provide you with all of the available commands for alwaysAI.
QUESTION: Is the poor performance a training problem?
ANSWER (Lila): Poor is a relative term. It may have poor performance for my application, but perhaps not for the use case the model was originally trained for.
QUESTION: How can one improve the performance of a semantic segmentation model?
ANSWER (Steve): Semantic segmentation is a very heavyweight type of model: it evaluates every pixel and assigns it a class ID. To improve performance, you might try deploying the model onto an NVIDIA Jetson device such as the Jetson Nano or Jetson Xavier.
QUESTION: What is alwaysAI?
ANSWER (Komal): alwaysAI brings deep learning computer vision (CV) to embedded devices. We provide developers an easy-to-use platform to quickly build and deploy deep learning CV applications on "IoT" devices like cameras, drones, wearables, robots and transportation units. We give 'intelligent sight' to these devices and enable them to autonomously make smart decisions in real-time.
See below for the full video of last week's Hacky Hour:
Join us every Thursday at 10:30 AM PST for weekly Hacky Hour! Whether you are new to the community or an experienced user of alwaysAI, you are welcome to join, ask questions, and provide the community with information about what you're working on. Register here.