embeded/jetson  2022. 3. 29. 11:32

Come to think of it, in the case of the Deep-Stream-ONNX GitHub project,

libnvdsinfer_custom_bbox_tiny_yolo.so was built to work against tiny_yolov2,

so the output structure is different from SSD and (naturally?) doesn't match, which seems to be why it errors out and dies.

[link : https://github.com/thatbrguy/Deep-Stream-ONNX]

 

ERROR: [TRT]: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:08:17.537206102  9070     0x3f617730 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:08:17.545680634  9070     0x3f617730 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:08:17.545766053  9070     0x3f617730 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:08:17.546456543  9070     0x3f617730 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:08:17.546521285  9070     0x3f617730 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /home/jetson/work/Deep-Stream-ONNX/config/config_infer_custom_yolo.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:658>: Failed to set pipeline to PAUSED
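
Before blaming the config alone, it can help to dump what the exported ONNX actually exposes, since the custom parser from that project assumes tiny_yolov2's single grid-shaped output while SSD-style models expose separate box/score tensors. A minimal sketch (assuming onnxruntime is installed; model.onnx is just a placeholder path):

# Sketch: print an ONNX model's input/output names and shapes.
# onnxruntime is assumed to be installed; "model.onnx" is a placeholder path.
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

print("inputs:")
for i in sess.get_inputs():
    print(" ", i.name, i.shape, i.type)

print("outputs:")
for o in sess.get_outputs():
    print(" ", o.name, o.shape, o.type)

# tiny_yolov2 typically has one grid-shaped output (e.g. 1x125x13x13),
# while SSD exports expose box/score (or NMS) outputs, so a bbox parser
# written for one layout cannot decode the other.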

 

I went to the link listed in the Azure Custom Vision README.

[link : https://github.com/Azure-Samples/customvision-export-samples]

 

For the ONNX format, only Python and C# samples are provided,

and grabbing the Python one from that site and running it, it turns out to be an example that handles a single image (roughly what the sketch after the links shows).

[link : https://github.com/Azure-Samples/customvision-export-samples/tree/main/samples/python/onnx]

[link : https://github.com/Azure-Samples/customvision-export-samples/tree/main/samples/csharp/onnx]
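
That single-image flow is roughly: load the image, resize it to the model's input size, run the session, and read back the output tensors. A rough sketch of the idea (not the sample itself; onnxruntime, numpy and Pillow assumed, and the model/image paths and the 416x416 input size are placeholders):

# Sketch of single-image ONNX inference in the spirit of the Custom Vision sample.
# "model.onnx" / "test.jpg" are placeholder paths; the real sample also maps
# outputs back to labels and original-image coordinates.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("model input:", inp.name, inp.shape)

h, w = 416, 416  # assumed input size; check inp.shape above for the real export
img = Image.open("test.jpg").convert("RGB").resize((w, h))
data = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]  # NCHW

outputs = sess.run(None, {inp.name: data})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, np.asarray(out).shape)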

 

 

+

There is an SSD DeepStream example,

but apparently you have to feed the Python script an H.264 elementary stream.

[link : https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-ssd-parser]
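
The reason an elementary stream is needed seems to be that the pipeline front end goes straight into an H.264 parser, with no container demuxer in between. A minimal sketch of just that front end (assuming a Jetson with the DeepStream/L4T GStreamer plugins installed; the actual app goes on to nvstreammux, the inference element and the SSD output-parsing probe):

# Sketch: the kind of front end that expects a raw H.264 elementary stream.
# filesrc ! h264parse has no demuxer, so an .mp4 container will not play here.
# nvv4l2decoder assumes the Jetson/DeepStream plugins are available.
# usage: python3 check_h264.py sample_0.h264
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=%s ! h264parse ! nvv4l2decoder ! fakesink" % sys.argv[1]
)
pipeline.set_state(Gst.State.PLAYING)

bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
print(msg.type)  # EOS for a valid elementary stream, ERROR otherwise
pipeline.set_state(Gst.State.NULL)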

 

The -f h264 option seems to be the point (it writes a raw elementary stream rather than a container).

$ ffmpeg -f video4linux2 -s 320x240 -i /dev/video0 -vcodec libx264 -f h264 test.264

[link : https://stackoverflow.com/questions/27090114/what-does-elementary-stream-mean-in-terms-of-h264]

 

Looks like it got converted to the "JVT NAL sequence, H.264" type:

sample_0.h264: JVT NAL sequence, H.264 video @ L 31
sample_0.mp4:  ISO Media, MP4 v2 [ISO 14496-14]
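
A quick way to tell the two apart without file(1): an Annex-B H.264 elementary stream begins with a 00 00 (00) 01 start code, while an MP4 container carries the "ftyp" box signature at byte offset 4. A small check in plain Python:

# Sketch: distinguish a raw H.264 (Annex-B) elementary stream from an MP4
# container by inspecting the first bytes of the file.
# usage: python3 probe.py sample_0.h264
import sys

with open(sys.argv[1], "rb") as f:
    head = f.read(12)

if head.startswith(b"\x00\x00\x00\x01") or head.startswith(b"\x00\x00\x01"):
    print("looks like an H.264 (Annex-B) elementary stream")
elif head[4:8] == b"ftyp":
    print("looks like an MP4 / ISO media container")
else:
    print("unknown:", head.hex())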

 

JVT: Joint Video Team
NAL: Network Abstraction Layer

[link : http://iphome.hhi.de/suehring/tml/JM%20Reference%20Software%20Manual%20(JVT-AE010).pdf]

 

+

Is the sample_ssd_relu6.uff file an SSD Inception V2-based model?

[link : https://eva-support.adlinktech.com/docs/ssdnbspinception-v2-nbsp-nbsp-nbsp-nbspnbsp]

Posted by 구차니