Information about an output tensor's dimensions is stored in three member variables of TfLiteTensor.

TfLiteType type;
TfLiteIntArray* dims;
size_t bytes;

 

type stores the output format of each element (float or uint8, for example),

dims->size holds how many dimensions the tensor has,

and dims->data[] holds the extent of each dimension.

For example, a tensor may come out as dims->size = 4 and dims->data[] = [1 7 7 18]; since it is float32, that works out to 1*7*7*18*sizeof(float) = 3528 bytes,

and that value is stored in bytes.

 

TfLitePtrUnion data;

The buffer of that size sits behind data, so that is what you access..

but how am I supposed to index it by dimension.. -_- (a sketch follows the printf example below)

 

Roughly, printing it like this in the label_image.cc example works okay:

    printf("dims size[%d]\n",output_dims->size);
    for(int idx = 0; idx < output_dims->size; idx++)
    {
            printf("%d\t",output_dims->data[idx]);
    }

    printf("tensor pos bytes [%d]\n",interpreter->tensor(interpreter->outputs()[0])->bytes);
    printf("tensor per bytes [%d]\n",interpreter->tensor(interpreter->outputs()[1])->bytes);

 

Posted by 구차니

Got a segmentation fault and had quite a hard time figuring out where it dies -_-

(slow builds really are stressful..)

 

But I still can't figure out what role AllocateTensors() actually plays..

It feels like allocating once and then allocating again causes an error..

 

  // Allocate memory for the model's input `Tensor`s.
  try interpreter.allocateTensors()

[링크 : https://www.tensorflow.org/lite/guide/inference]

[링크 : https://www.tensorflow.org/lite/api_docs/cc/class/tflite/interpreter#allocatetensors]
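
For reference, the usual call order in the C++ API looks roughly like this. A minimal sketch, not taken from my actual code (the model path is a placeholder and a float32 model is assumed); AllocateTensors() has to run once after the interpreter is built, and again after any ResizeInputTensor(), before Invoke():

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// build the interpreter from a flatbuffer model
auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::InterpreterBuilder(*model, resolver)(&interpreter);

// allocate buffers for every input/output tensor (required before Invoke)
interpreter->AllocateTensors();

// fill the first input tensor (assuming float32 input)
float* input = interpreter->typed_input_tensor<float>(0);
// ... copy preprocessed data into input ...

interpreter->Invoke();

// read the first output tensor
float* output = interpreter->typed_output_tensor<float>(0);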

 

I referred to this source a lot.. but it is quite different from tensorflow's own label_image example,

so I'm not sure which guide to follow..

[링크 : https://github.com/Qengineering/TensorFlow_Lite_SSD_RPi_64-bits/blob/master/MobileNetV1.cpp]

Posted by 구차니

Looking at the label_image example, in get_top_n there are things like

(prediction[i] + 128) / 256.0 or prediction[i] / 255.0

that adjust values to the quantization range,

and it feels like this could be automated using TfLiteQuantizationParams.scale and TfLiteQuantizationParams.zero_point (a sketch of that follows the listing below).

template <class T>
void get_top_n(T* prediction, int prediction_size, size_t num_results,
               float threshold, std::vector<std::pair<float, int>>* top_results,
               TfLiteType input_type) {
  // Will contain top N results in ascending order.
  std::priority_queue<std::pair<float, int>, std::vector<std::pair<float, int>>,
                      std::greater<std::pair<float, int>>>
      top_result_pq;

  const long count = prediction_size;  // NOLINT(runtime/int)
  float value = 0.0;

  for (int i = 0; i < count; ++i) {
    switch (input_type) {
      case kTfLiteFloat32:
        value = prediction[i];
        break;
      case kTfLiteInt8:
        value = (prediction[i] + 128) / 256.0;
        break;
      case kTfLiteUInt8:
        value = prediction[i] / 255.0;
        break;
      default:
        break;
    }
    // Only add it if it beats the threshold and has a chance at being in
    // the top N.
    if (value < threshold) {
      continue;
    }

    top_result_pq.push(std::pair<float, int>(value, i));

    // If at capacity, kick the smallest value out.
    if (top_result_pq.size() > num_results) {
      top_result_pq.pop();
    }
  }

  // Copy to output vector and reverse into descending order.
  while (!top_result_pq.empty()) {
    top_results->push_back(top_result_pq.top());
    top_result_pq.pop();
  }
  std::reverse(top_results->begin(), top_results->end());
}
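
A minimal sketch of the automated version mentioned above, using the tensor's own quantization parameters and the real_value = scale * (quantized_value - zero_point) formula from common.h (this is my guess at how it would look, not code from label_image):

const TfLiteTensor* out = interpreter->tensor(interpreter->outputs()[0]);
TfLiteQuantizationParams qp = out->params;   // per-tensor scale and zero_point

int i = 0;                                   // element index
float value = 0.0f;
switch (out->type) {
  case kTfLiteFloat32:
    value = out->data.f[i];                  // already real-valued
    break;
  case kTfLiteUInt8:
    value = qp.scale * (out->data.uint8[i] - qp.zero_point);
    break;
  case kTfLiteInt8:
    value = qp.scale * (out->data.int8[i] - qp.zero_point);
    break;
  default:
    break;
}

For a typical uint8 classification output (scale around 1/255, zero_point 0) this should give practically the same numbers as the hard-coded divisions above.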

 

Netron shows the quantization range, so it must be stored somewhere; searching for where.

// SupportedQuantizationTypes.
typedef enum TfLiteQuantizationType {
  // No quantization.
  kTfLiteNoQuantization = 0,
  // Affine quantization (with support for per-channel quantization).
  // Corresponds to TfLiteAffineQuantization.
  kTfLiteAffineQuantization = 1,
} TfLiteQuantizationType;

// Structure specifying the quantization used by the tensor, if-any.
typedef struct TfLiteQuantization {
  // The type of quantization held by params.
  TfLiteQuantizationType type;
  // Holds a reference to one of the quantization param structures specified
  // below.
  void* params;
} TfLiteQuantization;

// Legacy. Will be deprecated in favor of TfLiteAffineQuantization.
// If per-layer quantization is specified this field will still be populated in
// addition to TfLiteAffineQuantization.
// Parameters for asymmetric quantization. Quantized values can be converted
// back to float using:
//     real_value = scale * (quantized_value - zero_point)
typedef struct TfLiteQuantizationParams {
  float scale;
  int32_t zero_point;
} TfLiteQuantizationParams;

// Parameters for asymmetric quantization across a dimension (i.e per output
// channel quantization).
// quantized_dimension specifies which dimension the scales and zero_points
// correspond to.
// For a particular value in quantized_dimension, quantized values can be
// converted back to float using:
//     real_value = scale * (quantized_value - zero_point)
typedef struct TfLiteAffineQuantization {
  TfLiteFloatArray* scale;
  TfLiteIntArray* zero_point;
  int32_t quantized_dimension;
} TfLiteAffineQuantization;

typedef struct TfLiteTensor {
  TfLiteType type;
  TfLitePtrUnion data;
  TfLiteIntArray* dims;
  TfLiteQuantizationParams params;
  TfLiteAllocationType allocation_type;
  size_t bytes;
  const void* allocation;
  const char* name;
  struct TfLiteDelegate* delegate;
  TfLiteBufferHandle buffer_handle;
  bool data_is_stale;
  bool is_variable;
  TfLiteQuantization quantization;
  TfLiteSparsity* sparsity;
  const TfLiteIntArray* dims_signature;
} TfLiteTensor;
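
So the legacy per-tensor scale/zero_point live in TfLiteTensor.params, while the newer (possibly per-channel) values hang off TfLiteTensor.quantization. Reading the latter would presumably look something like this; a sketch based only on the structs above:

const TfLiteTensor* t = interpreter->tensor(interpreter->outputs()[0]);

if (t->quantization.type == kTfLiteAffineQuantization) {
  const TfLiteAffineQuantization* aq =
      (const TfLiteAffineQuantization*)t->quantization.params;

  // one (scale, zero_point) pair per entry; size == 1 means per-tensor quantization
  for (int i = 0; i < aq->scale->size; i++)
    printf("channel %d: scale %f zero_point %d\n",
           i, aq->scale->data[i], aq->zero_point->data[i]);

  printf("quantized_dimension %d\n", aq->quantized_dimension);
}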

 

+

2021.05.26

typedef struct TfLiteIntArray {
  int size;
// gcc 6.1+ have a bug where flexible members aren't properly handled
// https://github.com/google/re2/commit/b94b7cd42e9f02673cd748c1ac1d16db4052514c
#if (!defined(__clang__) && defined(__GNUC__) && __GNUC__ == 6 && \
     __GNUC_MINOR__ >= 1) ||                                      \
    defined(HEXAGON) || (__clang_major__ == 7 && __clang_minor__ == 1)
  int data[0];
#else
  int data[];
#endif
} TfLiteIntArray;

// Fixed size list of floats. Used for per-channel quantization.
typedef struct TfLiteFloatArray {
  int size;
// gcc 6.1+ have a bug where flexible members aren't properly handled
// https://github.com/google/re2/commit/b94b7cd42e9f02673cd748c1ac1d16db4052514c
// This also applies to the toolchain used for Qualcomm Hexagon DSPs.
#if !defined(__clang__) && defined(__GNUC__) && __GNUC__ == 6 && \
    __GNUC_MINOR__ >= 1
  float data[0];
#else
  float data[];
#endif
} TfLiteFloatArray;

 

class Interpreter {
  /// Invoke the interpreter (run the whole graph in dependency order).
  ///
  /// NOTE: It is possible that the interpreter is not in a ready state
  /// to evaluate (i.e. if a ResizeTensor() has been performed without an
  /// AllocateTensors().
  /// Returns status of success or failure.
  TfLiteStatus Invoke();

  /// WARNING: Experimental interface, subject to change
  Subgraph& primary_subgraph() {
    return *subgraphs_.front();  /// Safe as subgraphs_ always has 1 entry.
  }
  
  /// Read only access to list of inputs.
  const std::vector<int>& inputs() const { return primary_subgraph().inputs(); }
  
  /// Read only access to list of outputs.
  const std::vector<int>& outputs() const {
    return primary_subgraph().outputs();
  }
  
  // Subgraphs
  std::vector<std::unique_ptr<Subgraph>> subgraphs_;
};

[링크 : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/interpreter.h]

 

class Subgraph {
  // Array of indices representing the tensors that are inputs to the
  // interpreter.
  std::vector<int> inputs_;

  // Array of indices representing the tensors that are outputs to the
  // interpreter.
  std::vector<int> outputs_;
  
  // Read only access to list of inputs.
  std::vector<int>& inputs() { return inputs_; }

  // Read only access to list of inputs.
  const std::vector<int>& inputs() const { return inputs_; }

  // Read only access to list of outputs.
  std::vector<int>& outputs() { return outputs_; }

  // Read only access to list of outputs.
  const std::vector<int>& outputs() const { return outputs_; }
};

[링크 : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/core/subgraph.h]

Posted by 구차니

Digging through the build options this time, I found some I had left out

(or rather, I just couldn't be bothered before...)

 

-mvectorize-with-neon-quad -ffast-math

[링크 : http://stackoverflow.com/questions/14962447/gcc-options-for-a-freescale-imx6q-arm-processor]

 

 

In addition GCC offers the -ffast-math flag which is a shortcut for several options, presenting the least conforming but fastest math mode. It enables -fno-trapping-math, -funsafe-math-optimizations, -ffinite-math-only, -fno-errno-math, -fno-signaling-nans, -fno-rounding-math, -fcx-limited-range and -fno-signed-zeros. Each of these flags violates IEEE in a different way. 

[링크 : http://gcc.gnu.org/wiki/FloatingPointMath]

 

 

Cortex-A9 -mcpu=cortex-a9 -mfpu=vfpv3-fp16 -mfpu=vfpv3-d16-fp16 -mfpu=neon-fp16

[링크 : http://www.programmersought.com/article/5320739655/]

 

-DENABLE_VFPV3=ON ARM VFPv3 floating 

[링크 : http://www.google.com/amp/s/learnopencv.com/build-and-install-opencv-4-for-raspberry-pi/amp/]
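
Putting these together, for the i.MX6Q (quad Cortex-A9 with NEON) the build would presumably want something like -mcpu=cortex-a9 -mfpu=neon -mfloat-abi=hard -mvectorize-with-neon-quad -ffast-math; the exact -mfpu and -mfloat-abi values depend on the toolchain and ABI, so this is only a guess to verify against the cross compiler.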

 

Posted by 구차니

 

Tracing the variables, it looks like these end up being the same thing?

  int output = interpreter->outputs()[0];
  TfLiteIntArray* output_dims = interpreter->tensor(output)->dims;
  // assume output dims to be something like (1, 1, ... ,size)
  auto output_size = output_dims->data[output_dims->size - 1];

 

    const float* detection_locations = interpreter->tensor(interpreter->outputs()[0])->data.f;
    const float* detection_classes=interpreter->tensor(interpreter->outputs()[1])->data.f;
    const float* detection_scores = interpreter->tensor(interpreter->outputs()[2])->data.f;
    const int    num_detections = *interpreter->tensor(interpreter->outputs()[3])->data.f;

    //there are ALWAYS 10 detections no matter how many objects are detectable
    //cout << "number of detections: " << num_detections << "\n";

    const float confidence_threshold = 0.5;
    for(int i = 0; i < num_detections; i++){
        if(detection_scores[i] > confidence_threshold){
            int  det_index = (int)detection_classes[i]+1;
            float y1=detection_locations[4*i  ]*cam_height;
            float x1=detection_locations[4*i+1]*cam_width;
            float y2=detection_locations[4*i+2]*cam_height;
            float x2=detection_locations[4*i+3]*cam_width;

            Rect rec((int)x1, (int)y1, (int)(x2 - x1), (int)(y2 - y1));
            rectangle(src,rec, Scalar(0, 0, 255), 1, 8, 0);
            putText(src, format("%s", Labels[det_index].c_str()), Point(x1, y1-5) ,FONT_HERSHEY_SIMPLEX,0.5, Scalar(0, 0, 255), 1, 8, 0);
        }
    }

 

 

typedef struct {
  int size;
#if !defined(__clang__) && defined(__GNUC__) && __GNUC__ == 6 && \
    __GNUC_MINOR__ >= 1
  int data[0];
#else
  int data[];
#endif
} TfLiteIntArray;

typedef union {
  int* i32;
  int64_t* i64;
  float* f;
  char* raw;
  const char* raw_const;
  uint8_t* uint8;
  bool* b;
  int16_t* i16;
  TfLiteComplex64* c64;
  int8_t* int8;
} TfLitePtrUnion;

typedef struct {
  TfLiteType type;
  TfLitePtrUnion data;
  TfLiteIntArray* dims;
  TfLiteQuantizationParams params;
  TfLiteAllocationType allocation_type;
  size_t bytes;
  const void* allocation;
  const char* name;
  TfLiteDelegate* delegate;
  TfLiteBufferHandle buffer_handle;
  bool data_is_stale;
  bool is_variable;
  TfLiteQuantization quantization;
} TfLiteTensor;

[링크 : https://android.googlesource.com/platform/external/tensorflow/.../tensorflow/lite/c/c_api_internal.h]

 

In the current source this seems to have moved to common.h

[링크 : https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/common.h]
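
A quick way to convince myself that "they end up being the same thing": typed_output_tensor<T>() is just a typed view over the same buffer that tensor(outputs()[i])->data points at. A sketch, assuming a float32 output at index 0:

int out_idx = interpreter->outputs()[0];
const float* a = interpreter->tensor(out_idx)->data.f;
const float* b = interpreter->typed_output_tensor<float>(0);
// a and b point at the same buffer, so the label_image style and the
// Qengineering style above are reading the same memory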

Posted by 구차니


While digging through tensorflow models I had seen the term LSTM before,

but kept skipping it out of laziness; it came up in a search again this time, so I looked into it.

 

It is a technique(?) used with RNNs (recurrent neural networks), and it seems to serve to preserve context over longer sequences.

 

[링크 : http://euzl.github.io/hackday_1/]

[링크 : https://en.wikipedia.org/wiki/Long_short-term_memory]

 

Posted by 구차니

When converting to tflite with uint8 quantization,

I expected the quantization range to vary a little from run to run since the representative inputs go in randomly,

but it always comes out as the same 0.003921568859368563 * q. Searching for that number,

it turns out that is what you get when you normalize the 0~255 range to float..

 

And indeed 0.00392 * 255 = 0.9996, so the scale is simply 1/255 (0.003921568859368563 is just 1/255 rounded to float32).

quantization of input tensor will be close to (0.003921568859368563, 0). mean is the integer value from 0 to 255 that maps to floating point 0.0f. std_dev is 255 / (float_max - float_min). This will fix one possible problem

[링크 : https://stackoverflow.com/questions/54830869/]

[링크 : https://github.com/majidghafouri/Object-Recognition-tf-lite/issues/1]

 

+

output_format: Output file format. Currently must be {TFLITE, GRAPHVIZ_DOT}. (default TFLITE)
quantized_input_stats: Dict of strings representing input tensor names mapped to tuple of floats representing the mean and standard deviation of the training data (e.g., {"foo" : (0., 1.)}). Only need if inference_input_type is QUANTIZED_UINT8. real_input_value = (quantized_input_value - mean_value) / std_dev_value. (default {})
default_ranges_stats: Tuple of integers representing (min, max) range values for all arrays without a specified range. Intended for experimenting with quantization via "dummy quantization". (default None)
post_training_quantize: Boolean indicating whether to quantize the weights of the converted float model. Model size will be reduced and there will be latency improvements (at the cost of accuracy). (default False)

[링크 : http://man.hubwiz.com/.../python/tf/lite/TFLiteConverter.html]
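
Working through the quoted formula: with mean = 0 and std_dev = 255 (the values the stackoverflow answer above suggests for inputs normalized from 0..255 down to 0.0..1.0), real_input_value = (q - 0) / 255, so the scale is 1/255 and the zero_point is 0; the float32 value of 1/255 is exactly 0.003921568859368563, which matches the constant in the title.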

 

TOCO (TensorFlow Lite Optimizing Converter)

[링크 : https://junimnjw.github.io/%EA%B0%9C%EB%B0%9C/2019/08/09/tensorflow-lite-2.html]

Posted by 구차니

Digging through this and that.. even the original source, but I can't find the kind of answer I'm looking for.

[링크 : https://www.tensorflow.org/model_optimization/guide/quantization/training]

[링크 : https://www.tensorflow.org/model_optimization/guide/quantization/training_example]

[링크 : https://github.com/tensorflow/.../lite/g3doc/performance/post_training_quantization.md]

[링크 : https://github.com/tensorflow/.../lite/g3doc/performance/quantization_spec.md]

 

util_test.py

def _generate_integer_tflite_model(quantization_type=dtypes.int8):
  """Define an integer post-training quantized tflite model."""
  # Load MNIST dataset
  n = 10  # Number of samples
  (train_images, train_labels), (test_images, test_labels) = \
      tf.keras.datasets.mnist.load_data()
  train_images, train_labels, test_images, test_labels = \
      train_images[:n], train_labels[:n], test_images[:n], test_labels[:n]

  # Normalize the input image so that each pixel value is between 0 to 1.
  train_images = train_images / 255.0
  test_images = test_images / 255.0

  # Define TF model
  model = tf.keras.Sequential([
      tf.keras.layers.InputLayer(input_shape=(28, 28)),
      tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
      tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation="relu"),
      tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(10)
  ])

  # Train
  model.compile(
      optimizer="adam",
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      metrics=["accuracy"])

  model.fit(
      train_images,
      train_labels,
      epochs=1,
      validation_split=0.1,
  )

  # Convert TF Model to an Integer Quantized TFLite Model
  converter = tf.lite.TFLiteConverter.from_keras_model(model)
  converter.optimizations = {tf.lite.Optimize.DEFAULT}
  def representative_dataset_gen():
    for _ in range(2):
      yield [
          np.random.uniform(low=0, high=1, size=(1, 28, 28)).astype(
              np.float32)
      ]
  converter.representative_dataset = representative_dataset_gen
  if quantization_type == dtypes.int8:
    converter.target_spec.supported_ops = {tf.lite.OpsSet.TFLITE_BUILTINS_INT8}
  else:
    converter.target_spec.supported_ops = {
        tf.lite.OpsSet
        .EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
    }
  tflite_model = converter.convert()

  return tflite_model

 

lite_v2_test.py

  def _getIntegerQuantizeModel(self):
    np.random.seed(0)

    root = tracking.AutoTrackable()

    @tf.function(
        input_signature=[tf.TensorSpec(shape=[1, 5, 5, 3], dtype=tf.float32)])
    def func(inp):
      conv = tf.nn.conv2d(
          inp, tf.ones([3, 3, 3, 16]), strides=[1, 1, 1, 1], padding='SAME')
      output = tf.nn.relu(conv, name='output')
      return output

    def calibration_gen():
      for _ in range(5):
        yield [np.random.uniform(-1, 1, size=(1, 5, 5, 3)).astype(np.float32)]

    root.f = func
    to_save = root.f.get_concrete_function()
    return (to_save, calibration_gen)


 def testInvalidIntegerQuantization(self, is_int16_quantize,
                                     inference_input_output_type):
    func, calibration_gen = self._getIntegerQuantizeModel()

    # Convert quantized model.
    quantized_converter = lite.TFLiteConverterV2.from_concrete_functions([func])
    quantized_converter.optimizations = [lite.Optimize.DEFAULT]
    quantized_converter.representative_dataset = calibration_gen
    if is_int16_quantize:
      quantized_converter.target_spec.supported_ops = [
          lite.OpsSet.\
          EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8,
          lite.OpsSet.TFLITE_BUILTINS
      ]
    with self.assertRaises(ValueError) as error:
      quantized_converter.inference_input_type = dtypes.int8
      quantized_converter.inference_output_type = dtypes.int8
      quantized_converter.convert()
    self.assertEqual(
        'The inference_input_type and inference_output_type '
        "must be in ['tf.float32', 'tf.int16'].", str(error.exception))


  def testCalibrateAndQuantizeBuiltinInt16(self):
    func, calibration_gen = self._getIntegerQuantizeModel()

    # Convert float model.
    float_converter = lite.TFLiteConverterV2.from_concrete_functions([func])
    float_tflite_model = float_converter.convert()
    self.assertIsNotNone(float_tflite_model)

    converter = lite.TFLiteConverterV2.from_concrete_functions([func])
    # TODO(b/156309549): We should add INT16 to the builtin types.
    converter.optimizations = [lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.representative_dataset = calibration_gen
    converter._experimental_calibrate_only = True
    calibrated_tflite = converter.convert()
    quantized_tflite_model = mlir_quantize(
        calibrated_tflite, inference_type=_types_pb2.QUANTIZED_INT16)

    self.assertIsNotNone(quantized_tflite_model)

    # The default input and output types should be float.
    interpreter = Interpreter(model_content=quantized_tflite_model)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    self.assertLen(input_details, 1)
    self.assertEqual(np.float32, input_details[0]['dtype'])
    output_details = interpreter.get_output_details()
    self.assertLen(output_details, 1)
    self.assertEqual(np.float32, output_details[0]['dtype'])

    # Ensure that the quantized weights tflite model is smaller.
    self.assertLess(len(quantized_tflite_model), len(float_tflite_model))

Posted by 구차니