This is an image fusion demo built with YOLOv8. You can use this project to paste a specific texture onto the left shoulder of pedestrian objects in the input image. Specifically, YOLOv8 detects keypoint information in the input image, and the size and position of the texture are adjusted based on those keypoints.
This has been tested and deployed on a reComputer Jetson J4012. However, you can use any NVIDIA Jetson device to deploy this demo.
Select the position coordinates of the nose and left shoulder from the inference results of YOLOv8-Pose. The size of the texture is determined by the distance between the nose and the left shoulder, and its position by the location of the left shoulder. Based on this mechanism, we can fuse two images and produce a very interesting application.
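The size-and-position rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the demo's actual code: the function name and the `scale` factor are hypothetical; only the keypoint indices (0 = nose, 5 = left shoulder) come from the standard COCO keypoint order that YOLOv8-Pose outputs.

```python
import numpy as np

# COCO keypoint order used by YOLOv8-Pose: index 0 = nose, index 5 = left shoulder
NOSE, LEFT_SHOULDER = 0, 5

def texture_size_and_position(keypoints_xy, scale=1.0):
    """Derive the texture's side length and top-left corner from one person's
    keypoints (shape (17, 2), pixel coordinates).

    Hypothetical helper: MakerFinding.py may use a different scaling rule.
    """
    nose = np.asarray(keypoints_xy[NOSE], dtype=float)
    shoulder = np.asarray(keypoints_xy[LEFT_SHOULDER], dtype=float)
    # Texture size is proportional to the nose-to-shoulder distance
    side = int(round(np.linalg.norm(nose - shoulder) * scale))
    # Texture is centered on the left shoulder
    x = int(round(shoulder[0] - side / 2))
    y = int(round(shoulder[1] - side / 2))
    return side, (x, y)
```

For example, with the nose at (100, 100) and the left shoulder at (100, 150), the distance is 50 px, so the texture is 50×50 and centered on the shoulder.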
- Step 1: Flash JetPack OS to the reComputer Jetson device (Refer to here).
- Step 2: Access the terminal of the Jetson device, then install pip and upgrade it
sudo apt update
sudo apt install -y python3-pip
pip3 install --upgrade pip
- Step 3: Clone the following repo
mkdir demo && cd demo
git clone https://github.com/ultralytics/ultralytics.git
- Step 4: Open requirements.txt
cd ultralytics
vi requirements.txt
- Step 5: Comment out the following lines. Press i first to enter insert mode; when done, press ESC, then type :wq to save and quit
# torch>=1.7.0
# torchvision>=0.8.1
Note: torch and torchvision are excluded for now because they will be installed later.
- Step 6: Install the necessary packages
pip3 install -e .
- Step 7: If there is a numpy version error, install the required version of numpy
pip3 install numpy==1.20.3
- Step 8: Install PyTorch and Torchvision (Refer to here).
- Step 9: Run the following command to make sure yolo is installed properly
yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
- Step 10: Clone the texture pasting demo
cd ..
git clone https://github.com/yuyoujiang/Paste-Texture-into-Target-Image-with-YOLOv8.git
YOLOv8-Pose pretrained models are PyTorch models, and you can use them directly for inference on the Jetson device. However, for better speed, you can convert the PyTorch models to TensorRT-optimized models by following the instructions below.
- Step 1: Download the model weights in PyTorch format (Refer to here).
- Step 2: Execute the following command to convert the PyTorch model into a TensorRT model
# TensorRT FP32 export
yolo export model=<path to model>/yolov8m-pose.pt format=engine device=0
# TensorRT FP16 export
yolo export model=<path to model>/yolov8m-pose.pt format=engine half=True device=0
Tip: Click here to learn more about yolo export
- Step 3: Organize the resource files (model weights and textures).
mkdir sources
demo
├── ultralytics
└── Paste-Texture-into-Target-Image-with-YOLOv8
    ├── MakerFinding.py
    └── sources
        ├── makey02.png
        ├── yolov8m-pose.pt
        └── ...
- Step 4: Run the demo
python3 MakerFinding.py --model_path ./sources/yolov8m-pose.pt --input 0 --texture_path ./sources/makey02.png
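As a rough illustration of what the pasting step does internally, the sketch below alpha-blends an RGBA texture onto an RGB frame at a given top-left corner. The function name and implementation are hypothetical assumptions, not the demo's actual code, which may handle blending differently:

```python
import numpy as np

def paste_texture(image, texture_rgba, top_left):
    """Alpha-blend an RGBA texture onto an RGB image in place.

    Hypothetical sketch of the pasting step. `image` is an HxWx3 uint8
    array, `texture_rgba` is an hxwx4 uint8 array, `top_left` is (x, y).
    """
    x, y = top_left
    h, w = texture_rgba.shape[:2]
    # Clip the paste region to the image bounds
    x0, y0 = max(x, 0), max(y, 0)
    x1 = min(x + w, image.shape[1])
    y1 = min(y + h, image.shape[0])
    if x0 >= x1 or y0 >= y1:
        return image  # texture falls entirely outside the frame
    tex = texture_rgba[y0 - y : y1 - y, x0 - x : x1 - x].astype(float)
    alpha = tex[..., 3:4] / 255.0  # per-pixel opacity in [0, 1]
    roi = image[y0:y1, x0:x1].astype(float)
    image[y0:y1, x0:x1] = (alpha * tex[..., :3] + (1 - alpha) * roi).astype(np.uint8)
    return image
```

A fully opaque texture replaces the underlying pixels, while transparent regions of the PNG leave the frame untouched, which is why an RGBA texture such as a PNG with transparency works well here.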