Face Detection Results
The following XML shows a single record produced by face detection.
<output>
  <record>
    ...
    <trackname>FaceDetect.Result</trackname>
    <FaceResult>
      <id>4895aeee-6a8f-44f9-915c-b86eff702118</id>
      <face>
        <region>
          <left>282</left>
          <top>84</top>
          <width>236</width>
          <height>236</height>
        </region>
        <outOfPlaneAngleX>0</outOfPlaneAngleX>
        <outOfPlaneAngleY>0</outOfPlaneAngleY>
        <percentageInImage>100</percentageInImage>
        <confidence>100</confidence>
        <ellipse>
          <center>
            <x>398.5</x>
            <y>194.25</y>
          </center>
          <a>106.25</a>
          <b>148.75</b>
          <angle>0</angle>
        </ellipse>
        <lefteye>
          <center>
            <x>441</x>
            <y>173</y>
          </center>
          <radius>16</radius>
        </lefteye>
        <righteye>
          <center>
            <x>356</x>
            <y>173</y>
          </center>
          <radius>16</radius>
        </righteye>
      </face>
      <parentID>4d69390f-a8c4-4c5d-a0b0-705a3f98aa9b</parentID>
    </FaceResult>
  </record>
</output>
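The record is ordinary XML, so any XML parser can read it. The following is a minimal sketch, assuming the record above has been saved to a file named face_result.xml (a hypothetical name); it uses Python's standard xml.etree.ElementTree module to extract the detection ID, the bounding rectangle, and the confidence score.

import xml.etree.ElementTree as ET

# Sketch: read a FaceDetect.Result record and pull out a few fields.
# face_result.xml is a hypothetical file containing the XML shown above.
tree = ET.parse("face_result.xml")
for result in tree.getroot().iter("FaceResult"):
    face_id = result.findtext("id")
    region = result.find("face/region")
    left = int(region.findtext("left"))
    top = int(region.findtext("top"))
    width = int(region.findtext("width"))
    height = int(region.findtext("height"))
    confidence = float(result.findtext("face/confidence"))
    print(f"Face {face_id}: rectangle ({left}, {top}) {width}x{height}, "
          f"confidence {confidence:.0f}")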
The record contains the following information:
- The id element provides a unique identifier for the detected face. The face detection engine issues an ID for each detected appearance of a face. If you are detecting faces in video and consecutive frames show the same face in a near-identical location, all records related to that appearance will have the same ID. For example, if a face appears in the same location for a hundred consecutive video frames, the engine uses the same ID for each record in the data track and the single record in the result track. The record in the result track will have a timestamp that covers all of the frames.
  If the face disappears and then reappears, the engine considers this a new detection and produces a new ID and a new record in the result track.
- The face element contains the location of the detected face:
  - region describes the location of the face within the image or video frame. The left and top elements provide the position of the top-left corner of a rectangle that surrounds the face, and the width and height elements provide its size.
  - outOfPlaneAngleX indicates how far the person is looking to the left or right. outOfPlaneAngleY indicates how far the person is looking up or down. When both of these angles are zero, the person is looking directly at the camera.
  - percentageInImage indicates how much of the face is within the image boundary. If a face appears on the edge of an image and is only partially visible, this value will be less than 100.
  - confidence is the face detection confidence score, from 0 to 100, where 100 indicates the greatest confidence. For Media Server to output a detected face, the confidence score must meet the threshold set by the DetectionThreshold parameter.
  - The ellipse element describes the location of the detected face as a circle or ellipse. When DetectEyes=FALSE, Media Server returns a circle that describes the approximate position of the face. When DetectEyes=TRUE and the person is looking towards the camera (so that outOfPlaneAngleX is less than 90), Media Server returns an ellipse that should describe the position more accurately.
  - The lefteye and righteye elements describe the locations of the eyes. These are returned only if DetectEyes=TRUE and the person is looking towards the camera (so that outOfPlaneAngleX is less than 90).
  TIP: Face detection can return co-ordinates that are negative. For example, in the region element the values for left and top can be negative if a face is detected on the edge of an image. In cases where a face fills the source image, the values for width and height might also exceed the image dimensions. A sketch that clips such a region to the image boundary follows this list.
- The parentID element is empty unless you configure the analysis engine with Region=Input, in which case it contains the UUID of the input record. This provides a way to link the result with other records (from another analysis task) that supplied the region to analyze. To generate a single record combining the information, you can use the Combine ESP engine and the example Lua script parentuuidMatch.lua. A simplified illustration of this matching also follows this list.
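The TIP above notes that the region rectangle can extend beyond the image: left and top can be negative, and width and height can exceed the image dimensions. The following is a small sketch of one way to clip such a rectangle to the image boundary before, for example, cropping the face from the frame; the image dimensions used in the example are assumptions that you would replace with the real size of your source image or video frame.

def clip_region(left, top, width, height, image_width, image_height):
    # Intersect the face rectangle with the image. The returned width and
    # height are zero if the rectangle lies entirely outside the image.
    x1 = max(left, 0)
    y1 = max(top, 0)
    x2 = min(left + width, image_width)
    y2 = min(top + height, image_height)
    return x1, y1, max(x2 - x1, 0), max(y2 - y1, 0)

# Example: a face detected partly off the left edge of a 1920x1080 frame.
print(clip_region(-40, 84, 236, 236, 1920, 1080))  # (0, 84, 196, 236)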
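Inside Media Server, the Combine ESP engine and the example parentuuidMatch.lua script perform the matching described for parentID. Purely as an illustration of the idea, the sketch below joins face-detection results with the input records that supplied the regions, keyed on parentID; the dictionaries are simplified stand-ins, not the Media Server record schema.

# Illustration only: match each face result to the input record whose UUID
# appears in its parentID element, and merge the two into one record.
input_records = [
    {"id": "4d69390f-a8c4-4c5d-a0b0-705a3f98aa9b",
     "data": "region supplied by another task"},
]
face_results = [
    {"id": "4895aeee-6a8f-44f9-915c-b86eff702118",
     "parentID": "4d69390f-a8c4-4c5d-a0b0-705a3f98aa9b"},
]

inputs_by_id = {record["id"]: record for record in input_records}
for result in face_results:
    parent = inputs_by_id.get(result["parentID"])
    if parent is not None:
        # One record with fields from both (result fields win on a name clash).
        combined = {**parent, **result}
        print(combined)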