<div align="center">

ComfyUI Yolo Nas Pose TensorRT

python cuda trt by-nc-sa/4.0

</div> <p align="center"> <img src="assets/demo.PNG" /> </p>

This project is licensed under CC BY-NC-SA; everyone is free to access, use, modify, and redistribute it under the same license.

For commercial purposes, please contact me directly at yuvraj108c@gmail.com

If you like the project, please give me a star! ⭐


This repo provides a ComfyUI custom node implementation of YOLO-NAS-POSE, powered by TensorRT, for ultra-fast pose estimation. It has been adapted to work with the OpenPose ControlNet (experimental).

⏱️ Performance

The benchmarks below were performed on 1225 frames.

| Device | Model | Precision | Model Input (WxH) | Image Resolution (WxH) | FPS |
| :----: | :-------------: | :-------: | :---------------: | :--------------------: | --- |
| H100 | YOLO-NAS-POSE-L | FP32 | 640x640 | 1280x720 | 105 |
| H100 | YOLO-NAS-POSE-L | FP16 | 640x640 | 1280x720 | 115 |
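FPS in the table is throughput over the full run. A minimal sketch of how such a measurement can be reproduced (the `process_frame` callable below is a hypothetical stand-in for the TensorRT inference step, not the repo's actual benchmark code):

```python
import time

def measure_fps(process_frame, frames):
    """Run process_frame over every frame and return overall frames/sec."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")

# Dummy workload standing in for engine inference on 1225 frames
fps = measure_fps(lambda frame: sum(range(1000)), range(1225))
print(f"{fps:.0f} FPS")
```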

🚀 Installation

Navigate to the ComfyUI `/custom_nodes` directory and run:

```bash
git clone https://github.com/yuvraj108c/ComfyUI-YoloNasPose-Tensorrt
cd ./ComfyUI-YoloNasPose-Tensorrt
pip install -r requirements.txt
```

🛠️ Building TensorRT Engine

  1. Download one of the available ONNX models. The number at the end of the filename is the confidence threshold for pose detection (e.g. `yolo_nas_pose_l_0.5.onnx`)
  2. Edit the model paths inside `export_trt.py` accordingly, then run `python export_trt.py`
  3. Place the exported TensorRT engine inside the ComfyUI `/models/tensorrt/yolo-nas-pose` directory
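As noted in step 1, the trailing number in the ONNX filename encodes the pose-detection confidence threshold. A small sketch of how it could be recovered programmatically (a hypothetical helper, not part of this repo):

```python
import re
from pathlib import Path

def parse_confidence(onnx_path: str) -> float:
    """Extract the trailing confidence threshold from a model filename,
    e.g. 'yolo_nas_pose_l_0.5.onnx' -> 0.5."""
    stem = Path(onnx_path).stem  # e.g. 'yolo_nas_pose_l_0.5'
    match = re.search(r"_(\d+(?:\.\d+)?)$", stem)
    if match is None:
        raise ValueError(f"no confidence threshold in filename: {onnx_path}")
    return float(match.group(1))

print(parse_confidence("yolo_nas_pose_l_0.5.onnx"))  # 0.5
```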

☀️ Usage

  • Insert the node via Right Click -> tensorrt -> Yolo Nas Pose Tensorrt
  • Choose the appropriate engine from the dropdown

🤖 Environment tested

  • Ubuntu 22.04 LTS, CUDA 12.4, TensorRT 10.1.0, Python 3.10, H100 GPU
  • Windows (Not tested, but should work)

👏 Credits

License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)