[PYTHON] Try Object detection with Raspberry Pi 4 + Coral

1) Install OpenCV

Reference page: Easy installation of OpenCV on Raspberry Pi 4

First, install OpenCV so that the USB camera can be used.

Install the dependencies

Open a terminal and set up the environment with the following commands:

LXterminal


sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-103
sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5
sudo apt-get install python3-dev

Install OpenCV with the pip command:

LXterminal


pip3 install opencv-contrib-python==4.1.0.25

This completes the camera and image setup!
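
A quick way to confirm that OpenCV is installed and can actually grab frames from the USB camera is a short check like the one below (a minimal sketch, assuming the camera shows up as video device index 0; change the index if you have more than one camera).

camera_check.py


import cv2

# Open the first video device; change the index if your camera is elsewhere.
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open the USB camera")

ret, frame = cap.read()
print("OpenCV version:", cv2.__version__)
print("Frame grabbed:", ret, "shape:", None if frame is None else frame.shape)

cap.release()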

2) Set up Coral

Set up the environment by following the Coral documentation: https://coral.ai/docs/edgetpu/api-intro/

LXterminal


echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

sudo apt-get install libedgetpu1-std

Install the Edge TPU API

LXterminal


wget https://dl.google.com/coral/edgetpu_api/edgetpu_api_latest.tar.gz -O edgetpu_api.tar.gz --trust-server-names
tar xzf edgetpu_api.tar.gz
cd edgetpu_api
bash ./install.sh

**Errors occur frequently here.** In my case, the bash command failed and the cause is unknown. (An alternative approach is described in (3) below.) (If you know the cause, I would appreciate it if you could let me know.)

After that, install the edgetpu Python package with apt-get.

LXterminal


sudo apt-get install python3-edgetpu
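
To check that the Edge TPU Python API actually installed, a short import test like the following can be run (a sketch; the ListEdgeTpuPaths call is my assumption about the legacy edgetpu API and is wrapped in try/except in case your version does not expose it).

edgetpu_check.py


# Minimal check that the edgetpu Python API can be imported.
import edgetpu.detection.engine  # noqa: F401
print("edgetpu API imported OK")

# Optionally list attached Edge TPU devices (assumption: ListEdgeTpuPaths is
# available at this location in the legacy edgetpu API).
try:
    from edgetpu.basic import edgetpu_utils
    print(edgetpu_utils.ListEdgeTpuPaths(edgetpu_utils.EDGE_TPU_STATE_NONE))
except (ImportError, AttributeError):
    print("Could not list Edge TPU devices with this API version")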

3) Run the sample script

First, download the sample from GitHub:

LXterminal


git clone https://github.com/leswright1977/RPi4-Google-Coral

There is an easy-to-understand sample script in **src** in this folder.

- If you hit the bash error in (2), copying install.sh from src into the "edgetpu_api" folder and running it there worked for me. (The cause is still unknown...)

After that, you can run object detection simply by connecting the USB camera and the Coral accelerator and running the sample.

Please leave a comment if you notice any issues. (I will correct them as they come up.)
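
The core of the sample below is just two edgetpu API calls: create a DetectionEngine from the Edge TPU compiled .tflite model, then pass each frame to DetectWithImage as a PIL image. Stripped of the camera loop and the drawing code, the detection part looks roughly like this (a sketch using the same model path as the sample; test.jpg is just a placeholder image name).

detect_single_image.py


from PIL import Image
import edgetpu.detection.engine

# Load the Edge TPU compiled SSD MobileNet v2 model (same file the sample uses).
engine = edgetpu.detection.engine.DetectionEngine(
    'tpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite')

# Run detection on a single image instead of a camera frame.
img = Image.open('test.jpg')  # placeholder: any image on hand
results = engine.DetectWithImage(img, threshold=0.4, keep_aspect_ratio=True,
                                 relative_coord=False, top_k=10)

for obj in results:
    xmin, ymin, xmax, ymax = obj.bounding_box.flatten().tolist()
    print(obj.label_id, round(obj.score, 2), int(xmin), int(ymin), int(xmax), int(ymax))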

python_sample


import cv2
import time
import numpy as np
from multiprocessing import Process
from multiprocessing import Queue

import edgetpu.detection.engine
from edgetpu.utils import image_processing
from PIL import Image

#Misc vars
font = cv2.FONT_HERSHEY_SIMPLEX
queuepulls = 0.0
detections = 0
fps = 0.0
qfps = 0.0
confThreshold = 0.6

#init video
cap = cv2.VideoCapture(0)
print("[info] W, H, FPS")
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print(cap.get(cv2.CAP_PROP_FPS))

frameWidth = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frameHeight = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

labels_file = 'tpu_models/coco_labels.txt'
# Read labels from the text file. Some IDs are missing from coco_labels.txt,
# so the list is padded with None and entries are inserted by ID to keep
# list indices aligned with label IDs.
labels = [None] * 10
with open(labels_file, 'r') as f:
    lines = f.readlines()
for line in lines:
    parts = line.strip().split(maxsplit=1)
    labels.insert(int(parts[0]), str(parts[1]))
print(labels)

#define the function that handles our processing thread
def classify_frame(img, inputQueue, outputQueue):
    engine = edgetpu.detection.engine.DetectionEngine(\
    'tpu_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite')
    # keep looping
    while True:
        # check to see if there is a frame in our input queue
        if not inputQueue.empty():
            # grab the frame from the input queue
            img = inputQueue.get()
            results = engine.DetectWithImage(img, threshold=0.4,\
            keep_aspect_ratio=True, relative_coord=False, top_k=10)
            data_out = []

            if results:
                for obj in results:
                    inference = []
                    box = obj.bounding_box.flatten().tolist()
                    xmin = int(box[0])
                    ymin = int(box[1])
                    xmax = int(box[2])
                    ymax = int(box[3])

                    inference.extend((obj.label_id,obj.score,xmin,ymin,xmax,ymax))
                    data_out.append(inference)
            outputQueue.put(data_out)
# initialize the input queue (frames), output queue (out),
# and the list of actual detections returned by the child process
inputQueue = Queue(maxsize=1)
outputQueue = Queue(maxsize=1)
img = None
out = None

# construct a child process *independent* from our main process of
# execution
print("[INFO] starting process...")
p = Process(target=classify_frame, args=(img,inputQueue,outputQueue,))
p.daemon = True
p.start()

print("[INFO] starting capture...")

#time the frame rate....
timer1 = time.time()
frames = 0
queuepulls = 0
timer2 = 0
t2secs = 0

while(cap.isOpened()):
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret == True:
        if queuepulls ==1:
            timer2 = time.time()
        # Capture frame-by-frame
        #frame = frame.array
        img = Image.fromarray(frame)
        # if the input queue *is* empty, give the current frame to
        # classify
        if inputQueue.empty():
            inputQueue.put(img)
        # if the output queue *is not* empty, grab the detections
        if not outputQueue.empty():
            out = outputQueue.get()
        if out is not None:
            # loop over the detections
            for detection in out:
                objID = detection[0]
                labeltxt = labels[objID]
                confidence = detection[1]
                xmin = detection[2]
                ymin = detection[3]
                xmax = detection[4]
                ymax = detection[5]
                if confidence > confThreshold:
                    #bounding box
                    cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 255))
                    #label
                    labLen = len(labeltxt)*5+40
                    cv2.rectangle(frame, (xmin-1, ymin-1),\
                    (xmin+labLen, ymin-10), (0,255,255), -1)
                    #labeltext
                    cv2.putText(frame,' '+labeltxt+' '+str(round(confidence,2)),\
                    (xmin,ymin-2), font, 0.3,(0,0,0),1,cv2.LINE_AA)
                    detections +=1 #positive detections    
            queuepulls += 1
        # Display the resulting frame
        cv2.rectangle(frame, (0,0),\
        (frameWidth,20), (0,0,0), -1)

        cv2.rectangle(frame, (0,frameHeight-20),\
        (frameWidth,frameHeight), (0,0,0), -1)
        cv2.putText(frame,'Threshold: '+str(round(confThreshold,1)), (10, 10),\
        cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0, 255, 255), 1, cv2.LINE_AA)

        cv2.putText(frame,'VID FPS: '+str(fps), (frameWidth-80, 10),\
        cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0, 255, 255), 1, cv2.LINE_AA)

        cv2.putText(frame,'TPU FPS: '+str(qfps), (frameWidth-80, 20),\
        cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0, 255, 255), 1, cv2.LINE_AA)

        cv2.putText(frame,'Positive detections: '+str(detections), (10, frameHeight-10),\
        cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0, 255, 255), 1, cv2.LINE_AA)

        cv2.putText(frame,'Elapsed time: '+str(round(t2secs,2)), (150, frameHeight-10),\
        cv2.FONT_HERSHEY_SIMPLEX, 0.3,(0, 255, 255), 1, cv2.LINE_AA)
        

        cv2.namedWindow('Coral',cv2.WINDOW_NORMAL)
        #cv2.resizeWindow('Coral',frameWidth,frameHeight)
        cv2.imshow('Coral',frame)
        
        # FPS calculation
        
        frames += 1
        if frames >= 1:
            end1 = time.time()
            t1secs = end1-timer1
            fps = round(frames/t1secs,2)
        if queuepulls > 1:
            end2 = time.time()
            t2secs = end2-timer2
            qfps = round(queuepulls/t2secs,2)
        

        keyPress = cv2.waitKey(1) & 0xFF # Altering the waitKey value can alter the frame rate for video files.
        if keyPress == ord('q'):
            break
        if keyPress == ord('r'):
            confThreshold += 0.1
        if keyPress == ord('t'):
            confThreshold -= 0.1
        if confThreshold >1:
            confThreshold = 1
        if confThreshold <0.4:
            confThreshold = 0.4
    # Break the loop
    else: 
        break
#Everything done, release the vid
cap.release()

cv2.destroyAllWindows()
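
A few notes on running the script: it expects the tpu_models folder (the .tflite model and coco_labels.txt) relative to the working directory, so run it from the repository's src folder. While the video window has focus, 'q' quits, 'r' raises the confidence threshold by 0.1 and 't' lowers it (clamped between 0.4 and 1.0), and the overlay shows the camera FPS, the TPU inference FPS, and the running count of positive detections.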
