embeded/raspberry pi - 2022. 5. 25. 12:25

Hmm.. it's around 100,000 won, but it's not to my personal taste, so I'll pass~

 

The size is tenkeyless.

 

The model is called the Raspberry Pi 400. It seems to be made in the UK.

 

They put in LAN, so why did they line those USB ports up in a single row -_-

I'd rather they had stacked the two USB 3.0 ports vertically and fit one full-size HDMI in the saved space! A design that leaves that kind of regret -_ㅠ

 

 

Out of laziness(!) I booted it from the card with the 64-bit OS I had installed for the Raspberry Pi 4; it did come up after two or three boots,

though I don't know whether that was a problem caused by not connecting to HDMI port 0.


embeded/jetson - 2022. 4. 22. 15:34

DeepStream 6.0 reportedly adds support for NHWC network input.

But I don't know what NHWC is, so I'm fumbling around...

So.. since NHWC wasn't supported before, does that mean only NCHW was supported?

Support for NHWC network input DS 6.0

 

The items in the nvinfer module that mention NCHW are the ones below: the order of the input layer, the order of the output layer, and the UFF-related input dimensions/order..

If you don't use UFF, it looks like only network-input-order and segmentation-output-order matter.

network-input-order
  Meaning: Order of the network input layer (ignored if input-tensor-meta enabled)
  Type/Values: Integer. 0: NCHW, 1: NHWC
  Example: network-input-order=1
  Network types: All / GIEs: Both

segmentation-output-order
  Meaning: Segmentation network output layer order
  Type/Values: Integer. 0: NCHW, 1: NHWC
  Example: segmentation-output-order=1
  Network types: Segmentation / GIEs: Both

uff-input-dims (DEPRECATED. Use infer-dims and uff-input-order instead.)
  Meaning: Dimensions of the UFF model: channel; height; width; input-order
  Type/Values: All integers, ≥0. Possible values for input-order: 0: NCHW, 1: NHWC
  Example: input-dims=3;224;224;0
  Network types: All / GIEs: Both

uff-input-order
  Meaning: UFF input layer order
  Type/Values: Integer. 0: NCHW, 1: NHWC, 2: NC
  Example: uff-input-order=1
  Network types: All / GIEs: Both

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html]

 

NHWC is said to be faster than NCHW for conv operations; is that because of memory adjacency from the data ordering?

[링크 : https://moon-walker.medium.com/train-faster-텐서플로우-성능-최적화-기법-d67d3faee959]
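
That adjacency is easy to see from array strides. A minimal NumPy sketch of my own (not from the link above): in NHWC the C values of one pixel sit next to each other in memory, while in NCHW they are a whole H*W plane apart.

import numpy as np

n, h, w, c = 1, 320, 320, 3
nhwc = np.zeros((n, h, w, c), dtype=np.float32)            # channels-last
nchw = np.ascontiguousarray(nhwc.transpose(0, 3, 1, 2))    # channels-first copy

# Strides in bytes per axis, (N, H, W, C) vs (N, C, H, W):
print(nhwc.strides)   # (1228800, 3840, 12, 4)    -> a pixel's 3 channels sit 4 bytes apart
print(nchw.strides)   # (1228800, 409600, 1280, 4) -> channels a 320*320 plane apart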

 

The abbreviations stand for:

N: number of images in the batch
H: height of the image
W: width of the image
C: number of channels of the image (ex: 3 for RGB, 1 for grayscale...)

[링크 : https://stackoverflow.com/questions/37689423/convert-between-nhwc-and-nchw-in-tensorflow]

[링크 : https://code-examples.net/ko/q/23f184f]
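
So the conversion itself is just an axis permutation. A quick NumPy sketch of my own (the tf.transpose answer in the stackoverflow link above uses the same axis orders):

import numpy as np

nhwc = np.random.rand(1, 320, 320, 3).astype(np.float32)

nchw = nhwc.transpose(0, 3, 1, 2)   # NHWC -> NCHW, like tf.transpose(x, [0, 3, 1, 2])
back = nchw.transpose(0, 2, 3, 1)   # NCHW -> NHWC, like tf.transpose(x, [0, 2, 3, 1])

print(nchw.shape)                   # (1, 3, 320, 320)
assert np.array_equal(back, nhwc)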

 

Anyway.. the tensor deepstream accepts is 3x320x320:

INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT image_tensor    3x320x320
1   OUTPUT kHALF  detected_boxes  256x4
2   OUTPUT kINT32 detected_classes 256
3   OUTPUT kHALF  detected_scores 256

 

When I change NCHW to NHWC, what should come in as 1x3x320x320 apparently comes in as 1x320x320x3, and it throws a warning that the dimensions are out of range.

WARNING: Backend context bufferIdx(0) request dims:1x320x320x3 is out of range, [min: 1x3x320x320, max: 1x3x320x320]
ERROR: [TRT]: 4: [network.cpp::validate::2959] Error Code 4: Internal Error (image_tensor: for dimension number 1 in profile 0 does not match network definition (got min=320, opt=320, max=320), expected min=opt=max=3).)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:05.708555452 30995     0x2a01fa70 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:05.713238769 30995     0x2a01fa70 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:05.713377574 30995     0x2a01fa70 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_SSD/config_infer_primary_ssd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

 


embeded/jetson - 2022. 4. 21. 11:54

Connecting the ETS320 to a Jetson Nano, it seems to get recognized as both a storage device and a v4l/UVC device.

$ dmesg
[264102.798221] usb 1-2.3: new high-speed USB device number 5 using tegra-xusb
[264102.825756] usb 1-2.3: New USB device found, idVendor=09cb, idProduct=1007
[264102.825770] usb 1-2.3: New USB device strings: Mfr=7, Product=8, SerialNumber=0
[264102.825780] usb 1-2.3: Product: FLIR Ex-Series
[264102.825790] usb 1-2.3: Manufacturer: FLIR Systems
[264102.852306] uvcvideo: Unknown video format 304d3746-0000-0010-8000-00aa00389b71
[264102.852439] uvcvideo: Found UVC 1.00 device FLIR Ex-Series (09cb:1007)
[264108.030765] uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
[264108.034120] uvcvideo 1-2.3:1.0: Entity type for entity Extension 6 was not initialized!
[264108.042461] uvcvideo 1-2.3:1.0: Entity type for entity Processing 5 was not initialized!
[264108.050726] uvcvideo 1-2.3:1.0: Entity type for entity Selector 4 was not initialized!
[264108.058830] uvcvideo 1-2.3:1.0: Entity type for entity Camera 1 was not initialized!
[264108.067201] input: FLIR Ex-Series as /devices/70090000.xusb/usb1/1-2/1-2.3/1-2.3:1.0/input/input4
[264108.067689] usb-storage 1-2.3:1.2: USB Mass Storage device detected
[264108.067964] scsi host0: usb-storage 1-2.3:1.2
[264109.087991] scsi 0:0:0:0: Direct-Access     FLIR     Removable        1.00 PQ: 0 ANSI: 0
[264109.100246] sd 0:0:0:0: [sda] 89664 2048-byte logical blocks: (184 MB/175 MiB)
[264109.110068] sd 0:0:0:0: [sda] Write Protect is off
[264109.115117] sd 0:0:0:0: [sda] Mode Sense: 00 06 00 00
[264109.116447] sd 0:0:0:0: [sda] Asking for cache data failed
[264109.122082] sd 0:0:0:0: [sda] Assuming drive cache: write through
[264109.151779]  sda:
[264109.170721] sd 0:0:0:0: [sda] Attached SCSI removable disk

 

The magnification and focus distance don't seem to be enough to fit a whole Jetson Nano in the frame..

[링크 : https://www.flirkorea.com/products/ets320]

 

Looking at the user manual, the FOV is fixed, and the focus distance is fixed at 7 cm -_ㅠ

Video is 9 frames per second.

 

But over UVC, the frame rate seems to change to 15 fps.

$ v4l2-ctl -d /dev/video1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'YUYV'
        Name        : YUYV 4:2:2
                Size: Discrete 320x240
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                        Interval: Discrete 0.267s (3.750 fps)

        Index       : 1
        Type        : Video Capture
        Pixel Format: ''
        Name        : 304d3746-0000-0010-8000-00aa003
                Size: Discrete 320x246
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                        Interval: Discrete 0.267s (3.750 fps)
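
For a quick look at the stream, a minimal OpenCV sketch of my own (assuming the camera enumerates as /dev/video1, as above):

import cv2

# Open the FLIR's UVC node (index 1 -> /dev/video1 here; adjust to your system)
cap = cv2.VideoCapture(1, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
cap.set(cv2.CAP_PROP_FPS, 15)

ok, frame = cap.read()
if ok:
    print("frame:", frame.shape)    # (240, 320, 3) after OpenCV's BGR conversion
cap.release()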

 

Wow.. the price is insane.. double what I expected.. ㄷㄷ

[링크 : http://prod.danawa.com/info/?pcode=5595571]


embeded/jetson - 2022. 4. 18. 15:12

Analyzing the source: you fill objectList with x, y, w, h results scaled to the inference input size.

In other words, you have to compute them as x = output * width.
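
A minimal sketch of that scaling (my own illustration; the box values and helper are hypothetical, assuming the network emits normalized corner coordinates):

# Hypothetical normalized output from the detector: (x1, y1, x2, y2) in [0, 1]
norm_box = (0.10, 0.25, 0.55, 0.80)
net_w, net_h = 300, 300   # inference input size (infer-dims=3;300;300)

def to_object(box, w, h):
    """Scale normalized corners to pixels, then to left, top, width, height."""
    x1, y1, x2, y2 = box
    left, top = x1 * w, y1 * h
    return left, top, x2 * w - left, y2 * h - top

print(to_object(norm_box, net_w, net_h))   # (30.0, 75.0, 135.0, 165.0)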

 

$ cat config_infer_primary_ssd.txt
num-detected-classes=91

The setting above is passed into the DeepStream plugin as the value below:
detectionParams.numClassesConfigured

 

 

$ gst-inspect-1.0 nvinfer
Factory Details:
  Rank                     primary (256)
  Long-name                NvInfer plugin
  Klass                    NvInfer Plugin
  Description              Nvidia DeepStreamSDK TensorRT plugin
  Author                   NVIDIA Corporation. Deepstream for Tesla forum: https://devtalk.nvidia.com/default/board/209

Plugin Details:
  Name                     nvdsgst_infer
  Description              NVIDIA DeepStreamSDK TensorRT plugin
  Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
  Version                  6.0.1
  License                  Proprietary
  Source module            nvinfer
  Binary package           NVIDIA DeepStreamSDK TensorRT plugin
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstNvInfer

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
  
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { (string)NV12, (string)RGBA }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvinfer0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  unique-id           : Unique ID for the element. Can be used to identify output of the element
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 15 
  process-mode        : Infer processing mode
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferProcessModeType" Default: 1, "primary"
                           (1): primary          - Primary (Full Frame)
                           (2): secondary        - Secondary (Objects)
  config-file-path    : Path to the configuration file for this instance of nvinfer
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID.
Set to -1 to infer on all metadata.
                        flags: readable, writable, changeable only in NULL or READY state
                        Integer. Range: -1 - 2147483647 Default: -1 
  infer-on-class-ids  : Operate on objects with specified class ids
Use string with values of class ids in ClassID (int) to set the property.
 e.g. 0:2:3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  filter-out-class-ids: Ignore metadata for objects of specified class ids
Use string with values of class ids in ClassID (int) to set the property.
 e.g. 0;2;3
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""
  model-engine-file   : Absolute path to the pre-generated serialized engine file for the model
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        String. Default: ""
  batch-size          : Maximum batch size for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 1024 Default: 1 
  interval            : Specifies number of consecutive batches to be skipped for inference
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0 
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  raw-output-file-write: Write raw inference output to file
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  raw-output-generated-callback: Pointer to the raw output generated callback funtion
(type: gst_nvinfer_raw_output_generated_callback in 'gstnvdsinfer.h')
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  raw-output-generated-userdata: Pointer to the userdata to be supplied with raw output generated callback
                        flags: readable, writable, changeable only in NULL or READY state
                        Pointer.
  output-tensor-meta  : Attach inference tensor outputs as buffer metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  output-instance-mask: Instance mask expected in network output and attach it to metadata
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  input-tensor-meta   : Use preprocessed input tensors attached as metadata instead of preprocessing inside the plugin
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false

Element Signals:
  "model-updated" :  void user_function (GstElement* object,
                                         gint arg0,
                                         gchararray arg1,
                                         gpointer user_data);

 

 

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html]

 

model-file (Caffe model)
proto-file (Caffe model)
uff-file (UFF models)
onnx-file (ONNX models)
model-engine-file, if already generated
int8-calib-file for INT8 mode
mean-file, if required
offsets, if required
maintain-aspect-ratio, if required
parse-bbox-func-name (detectors only)
parse-classifier-func-name (classifiers only)
custom-lib-path
output-blob-names (Caffe and UFF models)
network-type
model-color-format
process-mode
engine-create-func-name
infer-dims (UFF models)
uff-input-order (UFF models)

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html]


embeded/jetson - 2022. 4. 15. 18:04

 

 

Pre-requisites:
- Copy the model's label file "ssd_coco_labels.txt" from the data/ssd directory
  in TensorRT samples to this directory.
- Steps to generate the UFF model from ssd_inception_v2_coco TensorFlow frozen
  graph. These steps have been referred from TensorRT sampleUffSSD README:
  1. Make sure TensorRT's uff-converter-tf package is installed.
  2. Install tensorflow-gpu package for python:
     For dGPU:
       $ pip install tensorflow-gpu
     For Jetson, refer to https://elinux.org/Jetson_Zoo#TensorFlow
  3. Download and untar the ssd_inception_v2_coco TensorFlow trained model from
     http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
  4. Navigate to the extracted directory and convert the frozen graph to uff:
     $ cd ssd_inception_v2_coco_2017_11_17
     $ python /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py \
         frozen_inference_graph.pb -O NMS \
         -p /usr/src/tensorrt/samples/sampleUffSSD/config.py \
         -o sample_ssd_relu6.uff
  5. Copy sample_ssd_relu6.uff to this directory.
With python3, the converter lives at:
$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py

[링크 : https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_FAQ.html]

 

After converting and opening it in netron.app, huh... the output name is NMS.

And the type and tensor dimensions aren't shown?


embeded/jetson - 2022. 4. 15. 17:14

 

 

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer__custom__impl_8h.html]

 

NvDsInferLayerInfo Struct

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/structNvDsInferLayerInfo.html]

 

 

 

typedef struct
{
  unsigned int numDims;
  unsigned int d[NVDSINFER_MAX_DIMS];
  unsigned int numElements;
} NvDsInferDims;

typedef enum
{
  FLOAT = 0,
  HALF = 1,
  INT8 = 2,
  INT32 = 3
} NvDsInferDataType;

typedef struct
{
  NvDsInferDataType dataType;
  union {
      NvDsInferDims inferDims;
      NvDsInferDims dims _DS_DEPRECATED_("dims is deprecated. Use inferDims instead");
  };
  int bindingIndex;
  const char* layerName;
  void *buffer;
  int isInput;
} NvDsInferLayerInfo;

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/nvdsinfer_8h_source.html]

[링크 : https://docs.nvidia.com/metropolis/deepstream/sdk-api/group__ee__nvinf.html#ga6a35747b3bb45d13db9be3a2aa981e49]


embeded/jetson - 2022. 4. 13. 15:39

Viewing it with the web version of netron.

 

The file to download is at the link below..

[링크 : http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz]

 

Trying to find out which named layers produce the outputs.

There seem to be four main ones, with the following names:

detection_boxes, detection_scores, detection_classes, num_detections

I expected detection_boxes to be shown as four coordinates, but it's just labeled float32, so I'm at a loss..
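
For what it's worth, the reported shapes can be checked straight from the frozen graph. A sketch of my own using the TF1-style API (in the TF Object Detection API, detection_boxes is conventionally [batch, max_detections, 4]):

import tensorflow.compat.v1 as tf   # TF1-style API

# Load the frozen graph and print what each output tensor reports.
gd = tf.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    gd.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(gd, name="")
    for name in ("detection_boxes", "detection_scores",
                 "detection_classes", "num_detections"):
        t = g.get_tensor_by_name(name + ":0")
        print(name, t.dtype.name, t.shape)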


embeded/jetson - 2022. 4. 13. 11:48

Only the basic structure of the code is kept; the detailed code was deleted for analysis.

$ cat nvdsiplugin_ssd.cpp
#include "NvInferPlugin.h"
#include <vector>
#include "cuda_runtime_api.h"
#include <cassert>
#include <cublas_v2.h>
#include <functional>
#include <numeric>
#include <algorithm>
#include <iostream>

using namespace nvinfer1;

class FlattenConcat : public IPluginV2
{
public:
    FlattenConcat(int concatAxis, bool ignoreBatch)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
    {
        assert(mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3);
    }
    //clone constructor
    FlattenConcat(int concatAxis, bool ignoreBatch, int numInputs, int outputConcatAxis, int* inputConcatAxis)
        : mIgnoreBatch(ignoreBatch)
        , mConcatAxisID(concatAxis)
        , mOutputConcatAxis(outputConcatAxis)
        , mNumInputs(numInputs)
    {
        CHECK(cudaMallocHost((void**) &mInputConcatAxis, mNumInputs * sizeof(int)));
        for (int i = 0; i < mNumInputs; ++i)
            mInputConcatAxis[i] = inputConcatAxis[i];
    }

    FlattenConcat(const void* data, size_t length)     {    }
    ~FlattenConcat()    {    }
    int getNbOutputs() const noexcept override { return 1; }
    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) noexcept override    {    }
    int initialize() noexcept override    {    }
    void terminate() noexcept override    {    }
    size_t getWorkspaceSize(int) const noexcept override { return 0; }
    int enqueue(int batchSize, void const* const* inputs, void* const* outputs, void*, cudaStream_t stream) noexcept override    {    }
    size_t getSerializationSize() const noexcept override   {    }
    void serialize(void* buffer) const noexcept override    {   }
    void configureWithFormat(const Dims* inputs, int nbInputs, const Dims* outputDims, int nbOutputs, nvinfer1::DataType type, nvinfer1::PluginFormat format, int maxBatchSize) noexcept override   {    }
    bool supportsFormat(DataType type, PluginFormat format) const noexcept override    {    }
    const char* getPluginType() const noexcept override { return "FlattenConcat_TRT"; }
    const char* getPluginVersion() const noexcept override { return "1"; }
    void destroy() noexcept override { delete this; }
    IPluginV2* clone() const noexcept override    {    }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    template <typename T>    void write(char*& buffer, const T& val) const    {    }
    template <typename T>    T read(const char*& buffer)    {    }
    size_t* mCopySize = nullptr;
    bool mIgnoreBatch{false};
    int mConcatAxisID{0}, mOutputConcatAxis{0}, mNumInputs{0};
    int* mInputConcatAxis = nullptr;
    nvinfer1::Dims mCHW;
    cublasHandle_t mCublas;
    std::string mNamespace;
};

namespace
{
const char* FLATTENCONCAT_PLUGIN_VERSION{"1"};
const char* FLATTENCONCAT_PLUGIN_NAME{"FlattenConcat_TRT"};
} // namespace

class FlattenConcatPluginCreator : public IPluginCreator
{
public:
    FlattenConcatPluginCreator()
    {
        mPluginAttributes.emplace_back(PluginField("axis", nullptr, PluginFieldType::kINT32, 1));
        mPluginAttributes.emplace_back(PluginField("ignoreBatch", nullptr, PluginFieldType::kINT32, 1));
        mFC.nbFields = mPluginAttributes.size();
        mFC.fields = mPluginAttributes.data();
    }

    ~FlattenConcatPluginCreator() {}
    const char* getPluginName() const noexcept override { return FLATTENCONCAT_PLUGIN_NAME; }
    const char* getPluginVersion() const noexcept override { return FLATTENCONCAT_PLUGIN_VERSION; }
    const PluginFieldCollection* getFieldNames() noexcept override { return &mFC; }
    IPluginV2* createPlugin(const char* name, const PluginFieldCollection* fc) noexcept override    {    }
    IPluginV2* deserializePlugin(const char* name, const void* serialData, size_t serialLength) noexcept override    {        return new FlattenConcat(serialData, serialLength);    }
    void setPluginNamespace(const char* libNamespace) noexcept override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const noexcept override { return mNamespace.c_str(); }

private:
    static PluginFieldCollection mFC;
    bool mIgnoreBatch{false};
    int mConcatAxisID;
    static std::vector<PluginField> mPluginAttributes;
    std::string mNamespace = "";
};

PluginFieldCollection FlattenConcatPluginCreator::mFC{};
std::vector<PluginField> FlattenConcatPluginCreator::mPluginAttributes;

REGISTER_TENSORRT_PLUGIN(FlattenConcatPluginCreator);

 

$ cat nvdsparsebbox_ssd.cpp
#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
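  /* NOTE: keepCount, det[], classId and rectx1/recty1/rectx2/recty2 come from
   * the output-tensor parsing code that was elided above for analysis. */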
  for (int i = 0; i < keepCount; ++i)
  {
    NvDsInferObjectDetectionInfo object;
        object.classId = classId;
        object.detectionConfidence = det[2];
        object.left = CLIP(rectx1, 0, networkInfo.width - 1);
        object.top = CLIP(recty1, 0, networkInfo.height - 1);
        object.width = CLIP(rectx2, 0, networkInfo.width - 1) - object.left + 1;
        object.height = CLIP(recty2, 0, networkInfo.height - 1) - object.top + 1;
        objectList.push_back(object);
  }

  return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);
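
The CLIP logic above in Python terms, just to see the arithmetic (my own sketch; the corner values are hypothetical):

# clamp a box's corners to the network frame, then convert to left/top/width/height
def clip(v, lo, hi):
    return max(min(v, hi), lo)

net_w, net_h = 300, 300
rectx1, recty1, rectx2, recty2 = -12.0, 40.0, 310.5, 180.0   # hypothetical corners

left = clip(rectx1, 0, net_w - 1)
top = clip(recty1, 0, net_h - 1)
width = clip(rectx2, 0, net_w - 1) - left + 1
height = clip(recty2, 0, net_h - 1) - top + 1
print(left, top, width, height)   # 0 40.0 300 141.0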

 

 

+ ?

[링크 : https://github.com/AastaNV/eLinux_data/blob/main/deepstream/ssd-jetson_inference/ssd-jetson_inference.patch]

 

+

NvDsInferDataType  dataType
union {
   NvDsInferDims   inferDims
}; 
int  bindingIndex
const char *  layerName
void *  buffer
int  isInput

[링크 : https://docs.nvidia.com/metropolis/deepstream/5.0DP/dev-guide/DeepStream_Development_Guide/baggage/structNvDsInferLayerInfo.html]


embeded/jetson - 2022. 4. 13. 11:32

 

- With gst-launch-1.0
  For Jetson:
  $ gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! \
        decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
        nvinfer config-file-path= config_infer_primary_ssd.txt ! \
        nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

- With deepstream-app
  $ deepstream-app -c deepstream_app_config_ssd.txt

 

$ cat deepstream_app_config_ssd.txt
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../samples/streams/sample_1080p_h264.mp4
gpu-id=0
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0

labelfile-path=/home/nvidia/tmp_onnx/labels.txt
#labelfile-path=ssd_coco_labels.txt

model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
config-file=config_infer_primary_ssd.txt
nvbuf-memory-type=0

 

$ cat config_infer_primary_ssd.txt
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0

# yw
onnx-file=/home/nvidia/tmp_onnx/model.onnx
labelfile=/home/nvidia/tmp_onnx/labels.txt

model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-file=sample_ssd_relu6.uff
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800


embeded/jetson - 2022. 4. 7. 15:36

Saw something called AA64-EL3, so I searched for it.

[링크 : https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/bootflow_jetson_nano.html#wwpID0E02B0HA]

 

[링크 : http://egloos.zum.com/rousalome/v/9966116]

[링크 : https://m.blog.naver.com/eldkrpdla121/221522612959]
