[04/10/2024-15:24:17] [I] [TRT] [MemUsageChange] Init CUDA: CPU +221, GPU +0, now: CPU 249, GPU 16196 (MiB)
[04/10/2024-15:24:19] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +431, now: CPU 574, GPU 16646 (MiB)
[04/10/2024-15:24:19] [I] [TRT] ----------------------------------------------------------------
[04/10/2024-15:24:19] [I] [TRT] Input filename: ../weights/rtdetr/rtdetr_hgnetv2_x_6x_coco.onnx
[04/10/2024-15:24:19] [I] [TRT] ONNX IR version: 0.0.8
[04/10/2024-15:24:19] [I] [TRT] Opset version: 16
[04/10/2024-15:24:19] [I] [TRT] Producer name:
[04/10/2024-15:24:19] [I] [TRT] Producer version:
[04/10/2024-15:24:19] [I] [TRT] Domain:
[04/10/2024-15:24:19] [I] [TRT] Model version: 0
[04/10/2024-15:24:19] [I] [TRT] Doc string:
[04/10/2024-15:24:19] [I] [TRT] ----------------------------------------------------------------
[04/10/2024-15:24:20] [W] [TRT] onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[04/10/2024-15:24:48] [W] [TRT] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[04/10/2024-15:24:48] [E] [TRT] 4: [network.cpp::validate::3088] Error Code 4: Internal Error (im_shape: dynamic input is missing dimensions in profile 0.)
[04/10/2024-15:24:48] [E] [TRT] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
object_detection: /media/user/sdcard/orin/TensorRT_Inference_Demo/src/basemodel.cpp:57: void Model::OnnxToTRTModel(): Assertion `data' failed.
Aborted (core dumped)
Hi,
I tried to use your repo but was unable to convert the model to a TRT engine. The log above is from the RT-DETR conversion.
Can you tell me how to disable dynamic batch in the config file?
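For context, the relevant failure appears to be `Error Code 4: ... im_shape: dynamic input is missing dimensions in profile 0`, i.e. the builder has a dynamic input (`im_shape`) for which no optimization profile dimensions were supplied. Below is a minimal sketch (not the repo's actual code) of one way this could be handled in `Model::OnnxToTRTModel()` using the TensorRT C++ API: pin every dynamic (-1) dimension to a fixed value (min == opt == max), which effectively disables dynamic batch. The variable names `builder`, `config`, and `network` are assumptions about the surrounding builder code, and pinning dynamic dims to 1 is an assumption about the intended batch size.

```cpp
// Sketch only: register an optimization profile that fixes every dynamic
// input dimension, so dynamic inputs such as "im_shape" no longer lack
// dimensions in profile 0. Assumes TensorRT headers are on the include path
// and that builder/config/network come from the surrounding conversion code.
#include <NvInfer.h>

void AddFixedShapeProfile(nvinfer1::IBuilder* builder,
                          nvinfer1::IBuilderConfig* config,
                          nvinfer1::INetworkDefinition* network) {
    nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();
    for (int i = 0; i < network->getNbInputs(); ++i) {
        nvinfer1::ITensor* input = network->getInput(i);
        nvinfer1::Dims dims = input->getDimensions();
        // Replace each dynamic (-1) dimension with a fixed value (batch = 1 here).
        for (int d = 0; d < dims.nbDims; ++d) {
            if (dims.d[d] == -1) dims.d[d] = 1;
        }
        // min == opt == max => a single fixed shape, i.e. no dynamic batch.
        profile->setDimensions(input->getName(), nvinfer1::OptProfileSelector::kMIN, dims);
        profile->setDimensions(input->getName(), nvinfer1::OptProfileSelector::kOPT, dims);
        profile->setDimensions(input->getName(), nvinfer1::OptProfileSelector::kMAX, dims);
    }
    config->addOptimizationProfile(profile);
}
```

Alternatively, if the conversion goes through `trtexec`, the same effect can be had from the command line by passing identical `--minShapes`, `--optShapes`, and `--maxShapes` for every dynamic input.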