[Phytium Pi (飞腾派) 4G Free Trial] Chapter 2: Training an Object Detection Model with TensorFlow2 on the PC

Red Linux · Source: Red Linux · Author: Red Linux · 2023-12-15 06:40

Training an Object Detection Model with TensorFlow2

My project plans to implement an object detection and tracking algorithm on the Phytium Pi, with the algorithm's output driving control signals that make a motor follow the target. Chapter 1 covered building and flashing the Ubuntu system; these past few days I have been researching how to train and deploy an object detection model. After a stretch of reading and experimenting, I have now successfully trained and tested a model with TensorFlow2. First, the PC configuration I used for testing:
[Image: pchw.png (PC hardware configuration)]

A single training step measured a bit over 2 s in practice, so to keep the test short I capped the run at 300 steps; at roughly 3 s per step that is about 900 s, which matches the ~15 minutes the training actually took. The step count appears in the configuration file later.

Key software configuration on the PC

Kernel: Linux fedora 6.6.4-100.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 18:11:27 UTC 2023 x86_64 GNU/Linux

Python: Python 3.8.18 (default, Aug 28 2023, 00:00:00)

References

  • [How to train your own Object Detector with TensorFlow’s Object Detector API]
  • [How to Train Your Own Object Detector Using TensorFlow Object Detection API]

Environment setup

To make training convenient, I recommend setting up a virtual Python environment. First create a new folder named demo, then change into the demo directory.

  1. Use Python's venv module to create a virtual environment:
python -m venv tf2_api_env
  2. Activate the virtual environment (the commands below are run from the workspace directory, hence the ../ prefix):
source ../tf2_api_env/bin/activate
  3. All remaining steps happen inside this virtual environment. Start by installing TensorFlow 2:
pip install tensorflow==2.*
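A quick way to confirm the installation (this check is my own suggestion; it should print a 2.x version):
python -c "import tensorflow as tf; print(tf.__version__)"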
  4. Download the models repository and compile the Protobuf definitions used by the Object Detection API (the protoc invocation below is the standard one, run from models/research):
git clone https://github.com/tensorflow/models.git
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
  5. Download and build the COCO API, then copy pycocotools into models/research (the copy path assumes cocoapi was cloned next to models; adjust it if your layout differs):
pip install cython
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools ../../models/research/
  6. Install the Object Detection API:
cd models/research
cp object_detection/packages/tf2/setup.py .
python3.8 -m pip install .
  7. Verify that the installation works:
python3.8 object_detection/builders/model_builder_tf2_test.py
2023-12-14 18:30:03.462617: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-12-14 18:30:03.463746: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-14 18:30:03.489237: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-14 18:30:03.489587: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-14 18:30:03.994817: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-12-14 18:30:04.975870: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-12-14 18:30:04.976136: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Running tests under Python 3.8.18: /home/red/Projects/ai_track_feiteng/demo2/tf2_api_env/bin/python3.8
[ RUN      ] ModelBuilderTF2Test.test_create_center_net_deepmac
WARNING:tensorflow:`tf.keras.layers.experimental.SyncBatchNormalization` endpoint is deprecated and will be removed in a future release. Please use `tf.keras.layers.BatchNormalization` with parameter `synchronized` set to True.
W1214 18:30:05.009487 140273879242560 batch_normalization.py:1531] `tf.keras.layers.experimental.SyncBatchNormalization` endpoint is deprecated and will be removed in a future release. Please use `tf.keras.layers.BatchNormalization` with parameter `synchronized` set to True.
/home/red/Projects/ai_track_feiteng/demo2/tf2_api_env/lib64/python3.8/site-packages/object_detection/builders/model_builder.py:1112: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn(('Building experimental DeepMAC meta-arch.'
...... (output omitted) ......
[ RUN      ] ModelBuilderTF2Test.test_session
[  SKIPPED ] ModelBuilderTF2Test.test_session
[ RUN      ] ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor): 0.0s
I1214 18:30:21.144221 140273879242560 test_util.py:2462] time(__main__.ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor): 0.0s
[       OK ] ModelBuilderTF2Test.test_unknown_faster_rcnn_feature_extractor
[ RUN      ] ModelBuilderTF2Test.test_unknown_meta_architecture
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_unknown_meta_architecture): 0.0s
I1214 18:30:21.144374 140273879242560 test_util.py:2462] time(__main__.ModelBuilderTF2Test.test_unknown_meta_architecture): 0.0s
[       OK ] ModelBuilderTF2Test.test_unknown_meta_architecture
[ RUN      ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
INFO:tensorflow:time(__main__.ModelBuilderTF2Test.test_unknown_ssd_feature_extractor): 0.0s
I1214 18:30:21.144848 140273879242560 test_util.py:2462] time(__main__.ModelBuilderTF2Test.test_unknown_ssd_feature_extractor): 0.0s
[       OK ] ModelBuilderTF2Test.test_unknown_ssd_feature_extractor
----------------------------------------------------------------------
Ran 24 tests in 16.167s

OK (skipped=1)
  8. Data preparation. To keep the focus on the training process itself, we take the already-annotated dataset from the [raccoon_dataset] repository
    and place it in the workspace/data directory, as shown below:
$ ls workspace/data/
object-detection.pbtxt  raccoon_labels.csv  test_labels.csv  test.record  train_labels.csv  train.record
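Before moving on it is worth sanity-checking the downloaded records. Here is a minimal sketch (assuming TensorFlow 2 is installed and it is run from the demo directory) that counts the examples in train.record and lists the feature keys of the first one:

#!/bin/python3.8
import tensorflow as tf

# iterate over the raw TFRecord file and count the serialized examples
dataset = tf.data.TFRecordDataset("workspace/data/train.record")
print("train.record contains", sum(1 for _ in dataset), "examples")

# parse the first record to see which features the dataset provides
for raw in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw.numpy())
    print(sorted(example.features.feature.keys()))
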
  9. Model selection and training parameter configuration (the important part!). To keep the demo focused I won't explain every parameter in detail; individual parameters can be looked up in the references above.
  • Model selection. Many ready-made models are available to speed up our training; we only need to fine-tune on top of one of them. The existing TensorFlow2 object detection models are listed in the [tf2_detection_zoo]; download one of them to train from. For this chapter I chose [efficientdet_d0_coco17_tpu-32.tar.gz]. Extract the archive into the demo/workspace/pre_trained_models directory.
$ tree -L 3 workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/
workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/
├── checkpoint
│   ├── checkpoint
│   ├── ckpt-0.data-00000-of-00001
│   └── ckpt-0.index
├── pipeline.config
└── saved_model
    ├── assets
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

5 directories, 7 files

The key pieces here are the checkpoint directory and pipeline.config: checkpoint holds the weights that fine-tuning starts from, and pipeline.config is the training configuration file we adjust next.
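To confirm what is in that file before editing it, here is a minimal sketch using the object_detection config_util helper (the same helper the inference script later in this article relies on):

#!/bin/python3.8
from object_detection.utils import config_util

# load the pipeline.config shipped with the pre-trained model
configs = config_util.get_configs_from_pipeline_file(
    "workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/pipeline.config")

print(configs["model"].ssd.num_classes)    # 90 classes in the stock COCO model
print(configs["train_config"].batch_size)  # 128 before our edits
print(configs["train_config"].num_steps)   # 300000 before our edits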

  • Training parameter fine-tuning. To keep this walkthrough brief, the diff below shows at a glance exactly which changes were made to the file:
--- workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/pipeline.config	2020-07-11 08:12:31.000000000 +0800
+++ workspace/models/efficientdet_d0/v2/pipeline.config	2023-12-14 14:10:58.998130084 +0800
@@ -1,6 +1,6 @@
 model {
   ssd {
-    num_classes: 90
+    num_classes: 1
     image_resizer {
       keep_aspect_ratio_resizer {
         min_dimension: 512
@@ -131,7 +131,7 @@
   }
 }
 train_config {
-  batch_size: 128
+  batch_size: 8
   data_augmentation_options {
     random_horizontal_flip {
     }
@@ -149,29 +149,29 @@
       learning_rate {
         cosine_decay_learning_rate {
           learning_rate_base: 0.07999999821186066
-          total_steps: 300000
+          total_steps: 300
           warmup_learning_rate: 0.0010000000474974513
-          warmup_steps: 2500
+          warmup_steps: 25
         }
       }
       momentum_optimizer_value: 0.8999999761581421
     }
     use_moving_average: false
   }
-  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED"
-  num_steps: 300000
+  fine_tune_checkpoint: "/home/red/Projects/ai_track_feiteng/demo2/workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"
+  num_steps: 300
   startup_delay_steps: 0.0
   replicas_to_aggregate: 8
   max_number_of_boxes: 100
   unpad_groundtruth_tensors: false
-  fine_tune_checkpoint_type: "classification"
-  use_bfloat16: true
+  fine_tune_checkpoint_type: "detection"
+  use_bfloat16: false
   fine_tune_checkpoint_version: V2
 }
 train_input_reader: {
-  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.txt"
+  label_map_path: "/home/red/Projects/ai_track_feiteng/demo2/workspace/data/object-detection.pbtxt"
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/train2017-?????-of-00256.tfrecord"
+    input_path: "/home/red/Projects/ai_track_feiteng/demo2/workspace/data/train.record"
   }
 }
 
@@ -182,10 +182,10 @@
 }
 
 eval_input_reader: {
-  label_map_path: "PATH_TO_BE_CONFIGURED/label_map.txt"
+  label_map_path: "/home/red/Projects/ai_track_feiteng/demo2/workspace/data/object-detection.pbtxt"
   shuffle: false
   num_epochs: 1
   tf_record_input_reader {
-    input_path: "PATH_TO_BE_CONFIGURED/val2017-?????-of-00032.tfrecord"
+    input_path: "/home/red/Projects/ai_track_feiteng/demo2/workspace/data/test.record"
   }
 }

The key changes (a programmatic alternative is sketched after this list):

num_classes: 1                          # detect a single object class
batch_size: 8                           # this parameter drives how much memory training consumes
fine_tune_checkpoint_type: "detection"  # fine-tune for object detection
use_bfloat16: false                     # not training on a TPU
fine_tune_checkpoint: "/home/red/Projects/ai_track_feiteng/demo2/workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"  # checkpoint that fine-tuning starts from
num_steps: 300                          # total number of training steps
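The same edits can also be applied from Python rather than by hand. A sketch, assuming the object_detection package installed above (the cosine-decay total_steps and warmup_steps fields can be set the same way):

#!/bin/python3.8
from object_detection.utils import config_util

src = "workspace/pre_trained_models/efficientdet_d0_coco17_tpu-32/pipeline.config"
configs = config_util.get_configs_from_pipeline_file(src)

# the same key changes as in the diff above
configs["model"].ssd.num_classes = 1
configs["train_config"].batch_size = 8
configs["train_config"].num_steps = 300
configs["train_config"].fine_tune_checkpoint = (
    "/home/red/Projects/ai_track_feiteng/demo2/workspace/"
    "pre_trained_models/efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0")
configs["train_config"].fine_tune_checkpoint_type = "detection"
configs["train_config"].use_bfloat16 = False

# write the modified pipeline.config into the training directory
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "workspace/models/efficientdet_d0/v2")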
  10. Model training and export.
    With the groundwork done, everything needed for training is in place. Run the following script to start training; it took roughly 15 minutes on my machine:
#!/bin/sh
python3.8 model_main_tf2.py \
  --pipeline_config_path=./models/efficientdet_d0/v2/pipeline.config \
  --model_dir=./models/efficientdet_d0/v2 \
  --checkpoint_every_n=8 \
  --num_workers=12 \
  --alsologtostderr

After training completes, export the model with the following command:

python3.8 exporter_main_v2.py \
  --pipeline_config_path=./models/efficientdet_d0/v2/pipeline.config \
  --trained_checkpoint_dir=./models/efficientdet_d0/v2 \
  --output_directory=./exported_models/efficientdet_d0 \
  --input_type=image_tensor

The command exports the model to the ./exported_models/efficientdet_d0 directory; on success you will see the following layout:

$ tree -L 3 workspace/exported_models/efficientdet_d0/
workspace/exported_models/efficientdet_d0/
├── checkpoint
│   ├── checkpoint
│   ├── ckpt-0.data-00000-of-00001
│   └── ckpt-0.index
├── pipeline.config
└── saved_model
    ├── assets
    ├── fingerprint.pb
    ├── saved_model.pb
    └── variables
        ├── variables.data-00000-of-00001
        └── variables.index

5 directories, 8 files

As you can see, the structure of our freshly trained model closely mirrors that of the downloaded efficientdet_d0_coco17_tpu-32.tar.gz from earlier, once extracted.
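Before wiring it into a larger script, the exported SavedModel can be smoke-tested on a dummy input. A minimal sketch, assuming the export above (with --input_type=image_tensor the exported signature takes a uint8 image batch):

#!/bin/python3.8
import numpy as np
import tensorflow as tf

# load the exported detection function
detect = tf.saved_model.load("workspace/exported_models/efficientdet_d0/saved_model")

# a dummy all-black 512x512 RGB image, batched; a real image works the same way
image = tf.constant(np.zeros((1, 512, 512, 3), dtype=np.uint8))
detections = detect(image)

print(int(detections["num_detections"][0]))           # number of raw detections
print(detections["detection_scores"][0, :5].numpy())  # top-5 confidence scores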
  11. Finally, a demonstration of the trained model's accuracy. I took a sample script from the web and adapted it slightly to my directory layout (the code originally lived in a .ipynb file, so I also wrote a small Python script, included at the end of this article, to extract the Python code from it). The script runs detection on test images and draws a box around each recognized target. First, the test script itself:

#!/bin/python3.8

import os # importing OS in order to make GPU visible
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # do not change anything in here

# specify which device you want to work on.
# Use "-1" to work on a CPU. Default value "0" stands for the 1st GPU that will be used
os.environ["CUDA_VISIBLE_DEVICES"]="0" # TODO: specify your computational device
import tensorflow as tf # import tensorflow

# checking that GPU is found
if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")
# other import
import numpy as np
from PIL import Image
import matplotlib
from matplotlib import pyplot as plt
from tqdm import tqdm
import sys # importing sys in order to access scripts located in a different folder

print(matplotlib.get_backend())

path2scripts = ['../models/research/', '../models/'] # TODO: provide path to the research folder
sys.path.insert(0, path2scripts[0]) # making scripts in models/research available for import
sys.path.insert(0, path2scripts[1]) # making scripts in models/ available for import
print(sys.path)
# importing all scripts that will be needed to export your model and use it for inference
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
# NOTE: your current working directory should be the workspace directory.

# TODO: specify two paths: to the pipeline.config file and to the folder with the trained model.
path2config ='exported_models/efficientdet_d0/pipeline.config'
path2model = 'exported_models/efficientdet_d0/'
# do not change anything in this block
configs = config_util.get_configs_from_pipeline_file(path2config) # importing config
model_config = configs['model'] # recreating model config
detection_model = model_builder.build(model_config=model_config, is_training=False) # importing model
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(path2model, 'checkpoint/ckpt-0')).expect_partial()
path2label_map = 'data/object-detection.pbtxt' # TODO: provide a path to the label map file
category_index = label_map_util.create_category_index_from_labelmap(path2label_map,use_display_name=True)
def detect_fn(image):
    """
    Detect objects in image.

    Args:
      image: (tf.tensor): 4D input image

    Returns:
      detections (dict): predictions that model made
    """

    image, shapes = detection_model.preprocess(image)
    prediction_dict = detection_model.predict(image, shapes)
    detections = detection_model.postprocess(prediction_dict, shapes)

    return detections
def load_image_into_numpy_array(path):
    """Load an image from file into a numpy array.

    Puts image into numpy array to feed into tensorflow graph.
    Note that by convention we put it into a numpy array with shape
    (height, width, channels), where channels=3 for RGB.

    Args:
      path: the file path to the image

    Returns:
      numpy array with shape (img_height, img_width, 3)
    """

    return np.array(Image.open(path))
def inference_with_plot(path2images, box_th=0.25):
    """
    Function that performs inference and plots resulting b-boxes

    Args:
      path2images: an array with paths to images
      box_th: (float) value that defines threshold for model prediction.

    Returns:
      None
    """
    for image_path in path2images:

        print('Running inference for {}... '.format(image_path), end='')

        image_np = load_image_into_numpy_array(image_path)

        input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
        detections = detect_fn(input_tensor)

        # All outputs are batched tensors.
        # Convert to numpy arrays, and take index [0] to remove the batch dimension.
        # We're only interested in the first num_detections.
        num_detections = int(detections.pop('num_detections'))
        detections = {key: value[0, :num_detections].numpy()
                      for key, value in detections.items()}

        detections['num_detections'] = num_detections

        # detection_classes should be ints.
        detections['detection_classes'] = detections['detection_classes'].astype(np.int64)

        label_id_offset = 1
        image_np_with_detections = image_np.copy()

        viz_utils.visualize_boxes_and_labels_on_image_array(
                image_np_with_detections,
                detections['detection_boxes'],
                detections['detection_classes']+label_id_offset,
                detections['detection_scores'],
                category_index,
                use_normalized_coordinates=True,
                max_boxes_to_draw=200,
                min_score_thresh=box_th,
                agnostic_mode=False,
                line_thickness=5)

        plt.figure(figsize=(15,10))
        plt.imshow(image_np_with_detections)
        print('Done')
        marked_file_name="marked_"+image_path
        plt.savefig(marked_file_name)
        print('Saved {} Done'.format(marked_file_name))
    matplotlib.use('TkAgg')
    plt.show()
def nms(rects, thd=0.5):
    """
    Filter rectangles
    rects is array of oblects ([x1,y1,x2,y2], confidence, class)
    thd - intersection threshold (intersection divides min square of rectange)
    """
    out = []

    remove = [False] * len(rects)

    for i in range(0, len(rects) - 1):
        if remove[i]:
            continue
        inter = [0.0] * len(rects)
        for j in range(i, len(rects)):
            if remove[j]:
                continue
            inter[j] = intersection(rects[i][0], rects[j][0]) / min(square(rects[i][0]), square(rects[j][0]))

        max_prob = 0.0
        max_idx = 0
        for k in range(i, len(rects)):
            if inter[k] >= thd:
                if rects[k][1] > max_prob:
                    max_prob = rects[k][1]
                    max_idx = k

        for k in range(i, len(rects)):
            if (inter[k] >= thd) & (k != max_idx):
                remove[k] = True

    for k in range(0, len(rects)):
        if not remove[k]:
            out.append(rects[k])

    boxes = [box[0] for box in out]
    scores = [score[1] for score in out]
    classes = [cls[2] for cls in out]
    return boxes, scores, classes


def intersection(rect1, rect2):
    """
    Calculates square of intersection of two rectangles
    rect: list with coords of top-right and left-boom corners [x1,y1,x2,y2]
    return: square of intersection
    """
    x_overlap = max(0, min(rect1[2], rect2[2]) - max(rect1[0], rect2[0]));
    y_overlap = max(0, min(rect1[3], rect2[3]) - max(rect1[1], rect2[1]));
    overlapArea = x_overlap * y_overlap;
    return overlapArea


def square(rect):
    """
    Calculates square of rectangle
    """
    return abs(rect[2] - rect[0]) * abs(rect[3] - rect[1])
def inference_as_raw_output(path2images,
                            box_th = 0.25,
                            nms_th = 0.5,
                            to_file = False,
                            data = None,
                            path2dir = False):
    """
    Function that performs inference and return filtered predictions

    Args:
      path2images: an array with pathes to images
      box_th: (float) value that defines threshold for model prediction. Consider 0.25 as a value.
      nms_th: (float) value that defines threshold for non-maximum suppression. Consider 0.5 as a value.
      to_file: (boolean). When passed as True = > results are saved into a file. Writing format is
      path2image + (x1abs, y1abs, x2abs, y2abs, score, conf) for box in boxes
      data: (str) name of the dataset you passed in (e.g. test/validation)
      path2dir: (str). Should be passed if path2images has only basenames. If full pathes provided = > set False.

    Returs:
      detections (dict): filtered predictions that model made
    """
    print (f'Current data set is {data}')
    print (f'Ready to start inference on {len(path2images)} images!')

    for image_path in tqdm(path2images):

        if path2dir: # if a path to a directory where images are stored was passed in
            image_path = os.path.join(path2dir, image_path.strip())

        image_np = load_image_into_numpy_array(image_path)

        input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
        detections = detect_fn(input_tensor)

        # checking how many detections we got
        num_detections = int(detections.pop('num_detections'))

        # keeping only the first num_detections entries, which are the actual detections
        detections = {key: value[0, :num_detections].numpy() for key, value in detections.items()}

        # detection_classes should be ints.
        detections['detection_classes'] = detections['detection_classes'].astype(np.int64)

        # defining what we need from the resulting detection dict that we got from model output
        key_of_interest = ['detection_classes', 'detection_boxes', 'detection_scores']

        # filtering out detection dict in order to get only boxes, classes and scores
        detections = {key: value for key, value in detections.items() if key in key_of_interest}

        if box_th: # filtering detection if a confidence threshold for boxes was given as a parameter
            for key in key_of_interest:
                scores = detections['detection_scores']
                current_array = detections[key]
                filtered_current_array = current_array[scores > box_th]
                detections[key] = filtered_current_array

        if nms_th: # filtering rectangles if nms threshold was passed in as a parameter
            # creating a zip object that will contain model output info as
            output_info = list(zip(detections['detection_boxes'],
                                   detections['detection_scores'],
                                   detections['detection_classes']
                                  )
                              )
            boxes, scores, classes = nms(output_info)

            detections['detection_boxes'] = boxes # format: [y1, x1, y2, x2]
            detections['detection_scores'] = scores
            detections['detection_classes'] = classes

        if to_file and data: # if saving to txt file was requested

            image_h, image_w, _ = image_np.shape
            file_name = f'pred_result_{data}.txt'

            line2write = list()
            line2write.append(os.path.basename(image_path))

            with open(file_name, 'a+') as text_file:
                # iterating over boxes
                for b, s, c in zip(boxes, scores, classes):

                    y1abs, x1abs = b[0] * image_h, b[1] * image_w
                    y2abs, x2abs = b[2] * image_h, b[3] * image_w

                    list2append = [x1abs, y1abs, x2abs, y2abs, s, c]
                    line2append = ','.join([str(item) for item in list2append])

                    line2write.append(line2append)

                line2write = ' '.join(line2write)
                text_file.write(line2write + os.linesep)

    return detections  # detections for the last processed image
inference_with_plot(["1.jpg", "2.jpg"], 0.6)
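As a quick check of the nms() helper defined above (run, for example, in the same interpreter session after loading the script), two heavily overlapping boxes collapse to the higher-confidence one:

rects = [([0, 0, 10, 10], 0.9, 1),  # ([x1, y1, x2, y2], confidence, class)
         ([1, 1, 9, 9], 0.8, 1)]    # overlaps the first box almost entirely
boxes, scores, classes = nms(rects, thd=0.5)
print(boxes, scores, classes)       # -> [[0, 0, 10, 10]] [0.9] [1]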

This script runs detection on 1.jpg and 2.jpg in the current directory, draws a box around every target detected with probability greater than 0.6 (the box_th value passed in), and saves the results as marked_1.jpg and marked_2.jpg respectively. The original images:
[Image: 2.jpg]
[Image: 1.jpg]

Run the script to perform the detection (with the virtual environment still active):

$ ./b.py
2023-12-15 06:28:50.519691: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-12-15 06:28:50.520813: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-15 06:28:50.545707: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-12-15 06:28:50.546025: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-12-15 06:28:50.990588: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-12-15 06:28:51.480008: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2023-12-15 06:28:51.480053: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:168] retrieving CUDA diagnostic information for host: fedora
2023-12-15 06:28:51.480057: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:175] hostname: fedora
2023-12-15 06:28:51.480104: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:199] libcuda reported version is: 535.146.2
2023-12-15 06:28:51.480114: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:203] kernel reported version is: 535.146.2
2023-12-15 06:28:51.480117: I tensorflow/compiler/xla/stream_executor/cuda/cuda_diagnostics.cc:309] kernel version seems to match DSO: 535.146.2
No GPU found
TkAgg
['../models/', '../models/research/', '/home/red/Projects/ai_track_feiteng/demo2/workspace', '/usr/lib64/python38.zip', '/usr/lib64/python3.8', '/usr/lib64/python3.8/lib-dynload', '/home/red/.local/lib/python3.8/site-packages', '/usr/lib64/python3.8/site-packages', '/usr/lib/python3.8/site-packages']
Running inference for 1.jpg... Done
Saved marked_1.jpg Done
Running inference for 2.jpg... Done
Saved marked_2.jpg Done

The images after detection and annotation:
[Image: marked_2.jpg]
[Image: marked_1.jpg]

Because the model was trained for so few steps, its detection accuracy is not especially high, but the full training and demo workflow is complete end to end. I hope this helps you get started with object detection using TensorFlow2.

As promised, here is the sample script that extracts the Python code from a .ipynb file:

#!/bin/python3.8

import json
import sys
import os
from pathlib import Path

out_file_name=Path(sys.argv[1]).stem+'.py'

with open(sys.argv[1],'r') as f:
    text=json.load(f)

if len(sys.argv) > 2:
    out_file_name = sys.argv[2]

print('args:{}\nout_file:{}'.format(sys.argv[1:], out_file_name))
with open(out_file_name, 'w') as fp:
    fp.write("#!/bin/python3.8\n\n")
    for x in text['cells']:
        if x['cell_type'] == "code":
            fp.writelines([i.rstrip() + '\n' for i in x['source']])
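Assuming the extractor is saved as nb2py.py (a name chosen here purely for illustration), it takes the notebook path and an optional output name:
python3.8 nb2py.py your_notebook.ipynb b.py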

In the next chapter I will show how to capture image data, annotate the images by hand, and then train and test the model on them. Stay tuned.

Review editor: 黃宇
