vitis::ai::OpenPose

The OpenPose model; its input size is 368x368.

Base class for detecting poses of people.

Input is an image (cv::Mat).

Output is an OpenPoseResult.

Sample code:

 auto image = cv::imread("sample_openpose.jpg");
 if (image.empty()) {
   std::cerr << "cannot load image" << std::endl;
   abort();
 }
 auto det = vitis::ai::OpenPose::create("openpose_pruned_0_3");
 int width = det->getInputWidth();
 int height = det->getInputHeight();
 std::vector<std::vector<int>> limbSeq = {
     {0,1}, {1,2}, {2,3},  {3,4},  {1,5},   {5,6},   {6,7},
     {1,8}, {8,9}, {9,10}, {1,11}, {11,12}, {12,13}};
 float scale_x = float(image.cols) / float(width);
 float scale_y = float(image.rows) / float(height);
 auto results = det->run(image);
 for (size_t k = 1; k < results.poses.size(); ++k) {
   // Scale every valid keypoint (type == 1) back to the original image size and draw it.
   for (size_t i = 0; i < results.poses[k].size(); ++i) {
     if (results.poses[k][i].type == 1) {
       results.poses[k][i].point.x *= scale_x;
       results.poses[k][i].point.y *= scale_y;
       cv::circle(image, results.poses[k][i].point, 5, cv::Scalar(0, 255, 0), -1);
     }
   }
   // Connect the keypoint pairs listed in limbSeq when both endpoints are valid.
   for (size_t i = 0; i < limbSeq.size(); ++i) {
     auto a = results.poses[k][limbSeq[i][0]];
     auto b = results.poses[k][limbSeq[i][1]];
     if (a.type == 1 && b.type == 1) {
       cv::line(image, a.point, b.point, cv::Scalar(255, 0, 0), 3, 4);
     }
   }
 }
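
To keep the annotated frame for inspection, the drawn image can be written out with standard OpenCV (this write step is not part of the library API, just an illustration):

 // Save the annotated image; the file name matches the result figure below.
 cv::imwrite("sample_openpose_result.jpg", image);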

Display of the openpose model results:

Figure 1: OpenPose result image (sample_openpose_result.jpg)

Quick Function Reference

The following table lists all the functions defined in the vitis::ai::OpenPose class:

Table 1. Quick Function Reference
Type                           Name             Arguments
std::unique_ptr< OpenPose >    create           const std::string &model_name, bool need_preprocess
OpenPoseResult                 run              const cv::Mat &image
std::vector< OpenPoseResult >  run              const std::vector< cv::Mat > &images
int                            getInputWidth    void
int                            getInputHeight   void
size_t                         get_input_batch  void

create

Factory function to get an instance of a class derived from OpenPose.

Prototype

std::unique_ptr< OpenPose > create(const std::string &model_name, bool need_preprocess=true);

Parameters

The following table lists the create function arguments.

Table 2. create Arguments
Type Name Description
const std::string & model_name Name of the model to load, for example "openpose_pruned_0_3".
bool need_preprocess Whether to normalize the input with mean/scale inside the library; the default value is true.

Returns

An instance of the OpenPose class.
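
A minimal creation sketch. Treating an empty pointer as a failed model load is an assumption here, not documented behavior:

 auto det = vitis::ai::OpenPose::create("openpose_pruned_0_3");
 if (!det) {  // assumption: an empty unique_ptr indicates the model could not be loaded
   std::cerr << "cannot create model openpose_pruned_0_3" << std::endl;
   abort();
 }
 // Pass need_preprocess=false only if the input is already normalized
 // with the model's mean/scale values:
 // auto det_raw = vitis::ai::OpenPose::create("openpose_pruned_0_3", false);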

run

Function to get the running result of the openpose neural network.

Prototype


OpenPoseResult run(const cv::Mat &image)=0;

Parameters

The following table lists the run function arguments.

Table 3. run Arguments
Type Name Description
const cv::Mat & image Input image (cv::Mat).

Returns

The OpenPoseResult for the input image.
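
A minimal single-image sketch (det and image as created in the sample above); as in that sample, keypoint coordinates are relative to the network input size until they are rescaled:

 auto result = det->run(image);
 for (size_t k = 1; k < result.poses.size(); ++k) {
   for (const auto &p : result.poses[k]) {
     if (p.type == 1) {  // type == 1 marks a valid keypoint, as in the sample above
       std::cout << "pose " << k << " keypoint at " << p.point << std::endl;
     }
   }
 }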

run

Function to get the running results of the openpose neural network in batch mode.

Prototype

std::vector< OpenPoseResult > run(const std::vector< cv::Mat > &images)=0;

Parameters

The following table lists the run function arguments.

Table 4. run Arguments
Type Name Description
const std::vector< cv::Mat > & images Batch of input images (vector<cv::Mat>). The number of input images must equal the batch size obtained by get_input_batch.

Returns

A vector of OpenPoseResult, one per input image.
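
A batch-mode sketch; the file names are placeholders, and det is assumed to be an instance obtained from create as in the sample above:

 std::vector<cv::Mat> images;
 for (size_t i = 0; i < det->get_input_batch(); ++i) {
   // Placeholder file names, one image per batch slot.
   auto img = cv::imread("sample_openpose_" + std::to_string(i) + ".jpg");
   if (img.empty()) {
     std::cerr << "cannot load image " << i << std::endl;
     abort();
   }
   images.push_back(img);
 }
 auto batch_results = det->run(images);  // one OpenPoseResult per input image
 for (size_t b = 0; b < batch_results.size(); ++b) {
   std::cout << "image " << b << ": " << batch_results[b].poses.size()
             << " pose entries" << std::endl;
 }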

getInputWidth

Function to get the input width of the openpose network (number of input image columns).

Prototype

int getInputWidth() const =0;

Returns

Input width of the openpose network.

getInputHeight

Function to get the input height of the openpose network (number of input image rows).

Prototype

int getInputHeight() const =0;

Returns

Input height of the openpose network.

get_input_batch

Function to get the number of images processed by the DPU at one time.

Note: The batch size may differ between DPU cores; it depends on the DPU IP used.

Prototype

size_t get_input_batch() const =0;

Returns

Batch size.
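
If only one frame is available, one way to satisfy the batch-size requirement (a sketch, assuming det and image from the sample above) is to replicate the frame:

 auto batch = det->get_input_batch();
 std::vector<cv::Mat> batch_images(batch, image);  // replicate one frame to fill the batch
 auto padded_results = det->run(batch_images);     // only padded_results[0] is of interest here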