Build a shareable object detection application with VDP and Streamlit
When YOLOv7 came out, we were excited to test it, so we built a web app to compare the classic YOLOv4 and the freshly released YOLOv7 side by side. Once it was complete, we shared the app with our team and then deployed it online to share with the community.
👉 Check the YOLOv4 vs. YOLOv7 live demo
The app was built with two best-in-class machine learning tools:
- VDP as the backbone of the Vision task solver, and
- Streamlit as the application framework to build beautiful UI components.
For anyone not familiar with VDP, it is an open-source unstructured data ETL tool that we’ve been working on. The goal of VDP is to streamline the end-to-end unstructured data flow, with a transform component that can flexibly import AI models to process unstructured data for specific tasks in Vision, Language and more.
We believe it is the future of unstructured data ETL: developers won’t need to build their own data connectors, maintain a model serving platform, or write their own pipeline automation tooling.
Streamlit removes the barriers for Data/ML practitioners to build shareable web apps. There is no need to write HTML, CSS or JavaScript to create beautiful UIs; you can write everything in pure Python.
This tutorial demonstrates how to replicate the YOLOv4 vs. YOLOv7 web app. It shows that VDP and Streamlit are a perfect match if you work with ML/Data and want to build AI prototypes quickly to share with your team, clients or the world.
Prerequisites
- Docker and Docker Compose
- Python 3.8+ with an environment-management tool such as Conda
Build object detection pipelines
VDP standardises output formats per AI task. A model is therefore modularised within a pipeline, and its outputs arrive in a standard format ready for use in data integration or an ETL pipeline.
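To make this concrete, a standardised object detection result can be sketched roughly as follows. The field names below are illustrative assumptions, not the exact VDP schema; consult the VDP docs for the authoritative format.

```python
# A hypothetical example of a standardised object detection output.
# Field names here are illustrative assumptions, not the exact VDP schema.
detection_output = {
    "detection_objects": [
        {
            "category": "dog",
            "score": 0.98,
            "bounding_box": {"top": 102, "left": 320, "width": 180, "height": 230},
        },
        {
            "category": "bicycle",
            "score": 0.91,
            "bounding_box": {"top": 80, "left": 120, "width": 410, "height": 300},
        },
    ]
}

# Because every detection model maps to one shape like this, downstream code
# can be written once and reused for YOLOv4, YOLOv7, or any future model.
categories = [obj["category"] for obj in detection_output["detection_objects"]]
print(categories)  # → ['dog', 'bicycle']
```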
Vision tasks focus on analysing and understanding the content of unstructured visual data in the same way as the human visual system does. Some classic Vision tasks include image classification, object detection, image segmentation and keypoint detection. These primitive Vision tasks are the foundation for building many real-world industrial computer vision applications.
In the following section, we will build two object detection pipelines with YOLOv4 and YOLOv7 in VDP, respectively. The pipelines will serve as the AI backbone for the Streamlit app.
Run VDP locally
$ git clone https://github.com/instill-ai/vdp.git && cd vdp
$ make all
Once the services are up, the Console is ready to go at http://localhost:3000.
Build a SYNC object detection pipeline with YOLOv4 via no-code Console
A pipeline in SYNC mode responds to a request synchronously. It is suitable for our Streamlit app to perform real-time inference where low latency matters. Check here for more details.
No matter where your models are stored, you can bring them into VDP without changes: VDP integrates with many model platforms and tools to make importing models as easy as possible.
After onboarding, you will be redirected to the Pipeline page on the left sidebar, where you can build a SYNC pipeline with YOLOv4. Please follow Build a SYNC classification pipeline with a few alterations:
- add an HTTP source,
- import a model from the GitHub repository instill-ai/model-yolov4-dvc with ID yolov4,
- deploy a model instance v1.0-cpu of the imported model,
- add an HTTP data destination, and
- set up a pipeline with ID yolov4.
Build a SYNC object detection pipeline with YOLOv7 via low-code
You could build a pipeline with YOLOv7 in the same way by importing instill-ai/model-yolov7-dvc via the no-code Console. Alternatively, you can build it via the REST API.
VDP is implemented with an API-first design principle, enabling seamless integration into your data stack at any scale.
Step 1: Add an HTTP data source
$ curl -X POST http://localhost:8082/v1alpha/source-connectors -d '{
"id": "source-http",
"source_connector_definition": "source-connector-definitions/source-http",
"connector": {
"configuration": {}
}
}'
Step 2: Import a model with ID yolov7 from the GitHub repository instill-ai/model-yolov7-dvc
$ curl -X POST http://localhost:8083/v1alpha/models -d '{
"id": "yolov7",
"model_definition": "model-definitions/github",
"configuration": {
"repository": "instill-ai/model-yolov7-dvc"
}
}'
Step 3: Deploy a model instance v1.0-cpu of the imported model
$ curl -X POST http://localhost:8083/v1alpha/models/yolov7/instances/v1.0-cpu:deploy
Step 4: Add an HTTP data destination
$ curl -X POST http://localhost:8082/v1alpha/destination-connectors -d '{
"id": "destination-http",
"destination_connector_definition": "destination-connector-definitions/destination-http",
"connector": {
"configuration": {}
}
}'
Step 5: Set up a pipeline with ID yolov7
$ curl -X POST http://localhost:8081/v1alpha/pipelines -d '{
"id": "yolov7",
"recipe": {
"source": "source-connectors/source-http",
"model_instances": [
"models/yolov7/instances/v1.0-cpu"
],
"destination": "destination-connectors/destination-http"
}
}'
Now you should see two pipelines, yolov4 and yolov7, in the Console.
In the next section, we will build a Streamlit app to send requests triggering the pipelines and visualise the detection outputs with a beautiful UI.
Build the app
Create a Python virtual environment
In this tutorial, we’ll use Conda as the package management system. You can install Conda via Anaconda or Miniconda. Using a virtual environment is not required, but it is recommended.
Create and activate an environment named vdp-streamlit with Python 3.8:
$ conda create --name vdp-streamlit python=3.8
$ conda activate vdp-streamlit
Once activated, you can run scripts from this environment.
Install app dependencies
Go to the examples/streamlit/yolov7 directory of the VDP project.
$ cd examples/streamlit/yolov7
The directory of the app will look like the following:
├── Dockerfile
├── README.md
├── main.py
├── requirements.txt
└── utils.py
where the requirements.txt file contains all the app dependencies. Install them from the activated virtual environment:
$ pip install -r requirements.txt
Trigger the VDP pipelines
In the main app script main.py, we use Streamlit’s st.text_input widget to let the user provide an image URL for inference.
The pipelines we built are SYNC with HTTP connectors, so we create a trigger_detection_pipeline function that triggers a pipeline by sending an HTTP request whose payload is constructed from the provided image_url.
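A minimal sketch of such a trigger function is shown below. The endpoint path and payload shape are assumptions based on the v1alpha API used in the curl commands above; check the VDP docs or the actual main.py for the authoritative details.

```python
import json

# Default local VDP pipeline backend, matching the curl examples above.
VDP_PIPELINE_BACKEND = "http://localhost:8081"

def build_trigger_payload(image_url: str) -> dict:
    # The SYNC trigger request carries a list of inputs; here a single image
    # referenced by URL. This shape is an assumption for illustration.
    return {"inputs": [{"image_url": image_url}]}

def trigger_detection_pipeline(pipeline_id: str, image_url: str) -> dict:
    # Hypothetical sketch: POST the payload to the pipeline's trigger endpoint
    # and return the parsed JSON response.
    import requests  # third-party; pip install requests
    resp = requests.post(
        f"{VDP_PIPELINE_BACKEND}/v1alpha/pipelines/{pipeline_id}:trigger",
        data=json.dumps(build_trigger_payload(image_url)),
    )
    resp.raise_for_status()
    return resp.json()

payload = build_trigger_payload("https://example.com/dog.jpg")
print(payload)
```

In the app, this function would be called once per pipeline (yolov4 and yolov7) with the same image URL.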
Since the pipeline output is standardised, we also create a parse_detection_response function to parse the response into lists of bounding boxes, categories and scores according to the standardised format. Learn more about standardising the object detection task.
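Such a parser might look like the following sketch. The response shape assumed here is illustrative; the authoritative schema lives in the VDP docs.

```python
from typing import Dict, List, Tuple

def parse_detection_response(resp: dict) -> Tuple[List[Dict], List[str], List[float]]:
    """Split a (hypothetical) standardised detection response into parallel
    lists of bounding boxes, categories and scores."""
    boxes, categories, scores = [], [], []
    for output in resp.get("detection_outputs", []):
        for obj in output.get("detection_objects", []):
            boxes.append(obj["bounding_box"])
            categories.append(obj["category"])
            scores.append(obj["score"])
    return boxes, categories, scores

# Usage with a mocked response:
sample = {
    "detection_outputs": [{
        "detection_objects": [
            {"category": "dog", "score": 0.98,
             "bounding_box": {"top": 102, "left": 320, "width": 180, "height": 230}},
        ]
    }]
}
boxes, categories, scores = parse_detection_response(sample)
print(categories, scores)  # → ['dog'] [0.98]
```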
In the main function, the input image is sent to trigger both pipelines for a side-by-side comparison.
Visualise the detections
Thanks to Streamlit’s powerful visualisation features, we create and use functions in utils.py to visualise the detections in different ways:
- draw the detections on the input image
- display the detections as a pandas.DataFrame in an interactive table
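As a sketch of the tabular view, the parsed detections can be flattened into a pandas.DataFrame. The function name and column choices below are illustrative assumptions, not the exact utils.py implementation.

```python
import pandas as pd

def detections_to_dataframe(categories, scores, boxes):
    # Flatten detections into a tabular form that Streamlit can render
    # with st.dataframe as an interactive, sortable table.
    rows = [
        {
            "category": cat,
            "score": round(score, 3),
            "top": box["top"],
            "left": box["left"],
            "width": box["width"],
            "height": box["height"],
        }
        for cat, score, box in zip(categories, scores, boxes)
    ]
    return pd.DataFrame(rows)

df = detections_to_dataframe(
    ["dog", "bicycle"],
    [0.981, 0.912],
    [{"top": 102, "left": 320, "width": 180, "height": 230},
     {"top": 80, "left": 120, "width": 410, "height": 300}],
)
print(df)

# In the Streamlit app you would then display it with:
# st.dataframe(df)
```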
Run the app
$ streamlit run main.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.0.10:8501
Now go to http://localhost:8501 in the browser and have some fun with your app!
Fill the input field with a random image URL and press Enter to see the detection results of YOLOv4 and YOLOv7 side-by-side.
Conclusion
🥳 Congratulations! You’ve built a beautiful app to showcase SOTA object detectors using Streamlit, powered by VDP.
What’s next
At the end of the demo, we hinted that you can manipulate the detection results with other structured data tools in the modern data stack. Check the building an ASYNC object detection pipeline tutorial to transform unstructured images into analysable structured insights and send them to a Postgres database.
If you enjoyed VDP, we’re building a fully managed service for VDP — Instill Cloud (Alpha):
- Painless setup
- Maintenance-free infrastructure
- Start for free, pay as you grow
We also invite you to join our Discord community to share your use cases and showcase your work with Data/AI practitioners.