I tried out OpenPose's Python API, so this is a memorandum of what I learned.
The procedure for installing the Python API is explained in the following article. https://qiita.com/hac-chi/items/0e6f910a9b463438fa81
The official Python API sample code is available here. This post just walks through that sample code, so if reading the source directly is faster for you, refer to it instead: https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples/tutorial_api_python
# Starting OpenPose
opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()
Start OpenPose with the code above.
The `params` passed here is a dictionary; the various OpenPose settings are passed through it.
For example, the path to the models folder is specified as follows:
params = dict()
params["model_folder"] = "../../../models/"
The full list of parameters and their default values is in flags.hpp: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/flags.hpp
Picking out a few that seem important, based purely on my own judgment and bias (a sketch of setting them through `params` follows the list):
DEFINE_int32(number_people_max, -1, "This parameter will limit the maximum number of people detected, by keeping the people with"
" top scores. The score is based in person area over the image, body part score, as well as"
" joint score (between each pair of connected body parts). Useful if you know the exact"
" number of people in the scene, so it can remove false positives (if all the people have"
" been detected. However, it might also include false negatives by removing very small or"
" highly occluded people. -1 will keep them all.");
DEFINE_string(model_pose, "BODY_25", "Model to be used. E.g., `BODY_25` (fastest for CUDA version, most accurate, and includes"
" foot keypoints), `COCO` (18 keypoints), `MPI` (15 keypoints, least accurate model but"
" fastest on CPU), `MPI_4_layers` (15 keypoints, even faster but less accurate).");
DEFINE_bool(3d, false, "Running OpenPose 3-D reconstruction demo: 1) Reading from a stereo camera system."
" 2) Performing 3-D reconstruction from the multiple views. 3) Displaying 3-D reconstruction"
" results. Note that it will only display 1 person. If multiple people is present, it will"
" fail.");
DEFINE_string(write_json, "", "Directory to write OpenPose output in JSON format. It includes body, hand, and face pose"
" keypoints (2-D and 3-D), as well as pose candidates (if `--part_candidates` enabled).");
DEFINE_string(udp_host, "", "Experimental, not available yet. IP for UDP communication. E.g., `192.168.0.1`.");
DEFINE_string(udp_port, "8051", "Experimental, not available yet. Port number for UDP communication.");
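The flag names correspond to keys of the `params` dictionary, so the flags above can be set the same way as `model_folder`. The values below are only illustrative:

```python
params = dict()
params["model_folder"] = "../../../models/"
params["number_people_max"] = 2          # keep only the 2 highest-scoring people
params["model_pose"] = "BODY_25"         # BODY_25 / COCO / MPI / MPI_4_layers
params["write_json"] = "./output_json/"  # write keypoints as JSON files into this directory
```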
# Loading the image
Please note that **PIL cannot be used; read images with OpenCV**.
The official documentation explains it as follows:
Do not use PIL In order to read images in Python, make sure to use OpenCV (do not use PIL). We found that feeding a PIL image format to OpenPose results in the input image appearing in grey and duplicated 9 times (so the output skeleton appear 3 times smaller than they should be, and duplicated 9 times). Source: OpenPose Python Module and Demo
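If you already have a PIL image for some reason, one workaround (my own sketch, not from the official docs) is to convert it into the BGR NumPy array that OpenCV, and therefore OpenPose, expects:

```python
import cv2
import numpy as np
from PIL import Image

pil_image = Image.open("image_path").convert("RGB")
# PIL gives RGB; OpenCV (and OpenPose) expect a BGR uint8 array
cv_image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
```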
Here is how to read and process an image.
First, create an object for data transfer with `op.Datum()`. Store the image read by OpenCV in `datum.cvInputData`, then pass it inside a list to `opWrapper.emplaceAndPop`.
`opWrapper.emplaceAndPop` has no return value; instead, the analysis results (the output image, joint positions, and so on) are written back into the `datum` you passed in.
import cv2

datum = op.Datum()
imageToProcess = cv2.imread("image_path")
datum.cvInputData = imageToProcess
opWrapper.emplaceAndPop([datum])

# Joint coordinates
print("Body keypoints: " + str(datum.poseKeypoints))

# Image with the detected joints drawn on it
cv2.imshow("Output Image", datum.cvOutputData)
cv2.waitKey(0)
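`datum.poseKeypoints` comes back as a NumPy array of shape (number of people, number of keypoints, 3), where each keypoint is (x, y, confidence) in pixel coordinates (25 keypoints per person with the default BODY_25 model). Below is a small sketch of walking over it; note that, depending on the OpenPose version, the attribute may be None or empty when nobody is detected, so it is worth guarding against that:

```python
keypoints = datum.poseKeypoints
# Guard: can be None (or an empty array) when no person was detected
if keypoints is not None and keypoints.ndim == 3:
    for person_id, person in enumerate(keypoints):
        for part_id, (x, y, confidence) in enumerate(person):
            if confidence > 0.0:  # undetected parts have confidence 0
                print(f"person {person_id}, part {part_id}: ({x:.1f}, {y:.1f}), confidence {confidence:.2f}")
```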
In addition, `datum` contains outputs such as:
datum.faceKeypoints     # coordinates of the face keypoints
datum.handKeypoints[0]  # coordinates of the left-hand keypoints
datum.handKeypoints[1]  # coordinates of the right-hand keypoints
and various other fields.
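Note that the face and hand keypoints are only computed when face/hand detection is enabled; the corresponding flags in flags.hpp are `face` and `hand`. Here is a sketch of turning them on through `params` (same assumption as before that flag names map directly to dictionary keys):

```python
params = dict()
params["model_folder"] = "../../../models/"
params["face"] = True  # enable face keypoint detection
params["hand"] = True  # enable hand keypoint detection

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("image_path")
opWrapper.emplaceAndPop([datum])

print("Face keypoints: " + str(datum.faceKeypoints))
print("Left hand keypoints: " + str(datum.handKeypoints[0]))
print("Right hand keypoints: " + str(datum.handKeypoints[1]))
```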
See the following for more details: https://cmu-perceptual-computing-lab.github.io/openpose/html/structop_1_1_datum.html