  • Tried. Failed. Logged.
๐Ÿค–๋จธ์‹ ๋Ÿฌ๋‹/YOLO

YOLO - A Simple Object Detection Example (YOLOv5, Colab)

by Janger 2023. 3. 12.

YOLO (You Only Look Once) is a deep-learning-based object detection framework. Thanks to its popularity, many versions (v3, v4, v5, ...) keep appearing. The example here uses YOLOv5.

 

GitHub

https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

 


์œ„ ๊นƒํ—ˆ๋ธŒ ํŽ˜์ด์ง€๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋”ฐ๋ผ ํ•˜์˜€๋‹ค. 

 

Roboflow - Creating a Custom Dataset

https://app.roboflow.com/

 


Roboflow์—์„  ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ๋งˆ์šฐ์Šค๋กœ ๋ผ๋ฒจ๋ง์„ ํ•˜์—ฌ ๋‚˜๋งŒ์˜ ๋ฐ์ดํ„ฐ์…‹์„ ๋งŒ๋“ค ์ˆ˜๊ฐ€ ์žˆ๋‹ค. 

๋ฌผ๋ก  ๋‹ค๋ฅธ ์‚ฌ๋žŒ์ด ๋ฏธ๋ฆฌ ๋งŒ๋“  ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜๋„ ์žˆ์Œ

 

์ƒ์„ฑ์„ ํ•œ ๋ฐ์ดํ„ฐ์…‹์€ Export ๋ฒ„ํŠผ์„ ๋ˆŒ๋Ÿฌ ํฌ๋งท์„ YOLO v5 PyTorch๋กœ ์ง€์ •ํ•ด์•ผ ํ•œ๋‹ค. 

 

 

 

Colab Example Notebook (YOLOv5)

https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb

 


 

 

* There is a path issue partway through the notebook, so a small fix was needed.

 

ํŠธ๋ ˆ์ด๋‹(train.py)
!python train.py --img 416 --batch 16 --epochs 150 --data data.yaml --weights yolov5s.pt --cache

YOLOv5 ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋ช…๋ น์–ด์ด๋‹ค. 

 

--img 416 : sets the input image size to 416x416 pixels.

--batch 16 : sets the batch size used during training to 16.

--epochs 150 : sets the number of full passes over the dataset to 150.

--data data.yaml : specifies the dataset configuration to train on.

--weights yolov5s.pt : specifies a pretrained YOLOv5 weights file. You can load a previously trained model here, or pick a YOLOv5 model size (yolov5s.pt, yolov5m.pt, yolov5l.pt, yolov5x.pt).

--cache : caches the dataset during training and reuses it, which speeds training up.
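As a quick sanity check on these flags, the batch size and epoch count together determine how many optimizer steps a run performs. A small sketch, assuming a hypothetical dataset of 800 labeled images:

```python
import math

# Hypothetical numbers matching the flags above
num_images = 800   # size of the (hypothetical) dataset
batch_size = 16    # --batch 16
epochs = 150       # --epochs 150

steps_per_epoch = math.ceil(num_images / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 50 7500
```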

 

 

 

๊ฐ์ง€(detect.py)

 

 

!python detect.py --weights /content/yolov5/runs/train/exp/weights/best.pt --img 416 --conf 0.1 --source /content/yolov5/test/images

์œ„์˜ ํŠธ๋ ˆ๋‹ˆ์ž‰์„ ๋‹ค ๋งˆ์น˜๊ฒŒ ๋˜๋ฉด /content/yolov5/runs/train/exp/weights/best.pt๋ผ๋Š” ์ตœ์ข… ๋ชจ๋ธ์ด ์ƒ์„ฑ์ด ๋œ๋‹ค. (.pt๋Š” ํŒŒ์ดํ† ์น˜์˜ ๋ชจ๋ธ ํ™•์žฅ์ž)

 

--source ์ธ์ž๋กœ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹œ์ผœ์ค„ ์›๋ณธ ์ด๋ฏธ์ง€๋“ค์˜ ํด๋” ๊ฒฝ๋กœ๋ฅผ ์ž…๋ ฅํ•œ๋‹ค. 

 

 

๊ฐ์ง€ ๊ฒฐ๊ณผ ํ™•์ธ
import glob
from IPython.display import Image, display

for imageName in glob.glob('/content/yolov5/runs/detect/exp/*.jpg'): #assuming JPG
    display(Image(filename=imageName))
    print("\n")

๊ฐ์ง€๊ฐ€ ์™„๋ฃŒ ๋˜์—ˆ๋‹ค๋ฉด /content/yolov5/runs/detect/exp/ ํด๋”(exp2, exp3๋กœ ์ฆ๊ฐ€ํ•˜๊ธฐ๋„ ํ•œ๋‹ค. ) ์•ˆ์—๋Š” ์ถœ๋ ฅ๋œ ์ด๋ฏธ์ง€๊ฐ€ ์žˆ์œผ๋‹ˆ ํŒŒ์ด์ฌ ์Šคํฌ๋ฆฝํŠธ๋กœ ์ถœ๋ ฅ๋œ ์ด๋ฏธ์ง€๋“ค์„ ํ™•์ธํ•œ๋‹ค. 

 

 

 

 

pypi - Running via the yolov5 Module

 

!pip install yolov5

(installs the extra yolov5 module)

 

https://pypi.org/project/yolov5/

 


 

Alternatively, you can simply install the yolov5 module and load the .pt model file directly, which allows all sorts of customization: adjusting settings as you like, or editing the output image with cv2 before displaying it, and so on.

 

import yolov5
import cv2
from IPython.display import display, Image


model = yolov5.load('/content/yolov5/runs/train/exp2/weights/best.pt')

# set model parameters
model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.max_det = 1000  # maximum number of detections per image

# set image
img = '/content/image.jpg'

# perform inference
results = model(img)

# inference with larger input size
results = model(img, size=1280)

# inference with test time augmentation
results = model(img, augment=True)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

# ์ด๋ฏธ์ง€์— ๊ฒฝ๊ณ„ ์ƒ์ž ๊ทธ๋ฆฌ๊ธฐ
img = cv2.imread(img)
for box, score, label in zip(boxes, scores, categories):
    x1, y1, x2, y2 = box.int().tolist()
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.putText(img, str(round(score.item(), 2) * 100) + "%", (x1, y1-10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

display(Image(data=cv2.imencode('.png', img)[1]))


# show detection bounding boxes on image
# results.show()
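The categories tensor above only holds class indices. To draw readable labels instead of bare percentages, the index can be mapped to a class name; the loaded model exposes such a mapping as model.names. A sketch, with a hypothetical two-class dataset standing in for model.names:

```python
# Stand-in for model.names from yolov5.load(); class names are hypothetical
names = {0: 'face', 1: 'mask'}

def label_for(category_index, score):
    """Format a class index and confidence score as e.g. 'face 87%'."""
    return f"{names[int(category_index)]} {score * 100:.0f}%"

print(label_for(0, 0.87))  # face 87%
```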

 

 

 

 

 

A face detection Colab notebook I made for practice

https://colab.research.google.com/drive/1GdaLVQF9XJuGUDdo21uTxVvXkv6oIPwe

 


 
