Face State Analysis
Face State analysis provides additional information about detected faces, for example:
- The facial expression.
- Whether the person's eyes are open.
- Whether the person is wearing spectacles.
Configuration Parameter | Description |
---|---|
Input | The track that you want to process. |
NumParallel | The maximum number of video frames to analyze simultaneously. |
SegmentDuration | The maximum duration of a record in the SegmentedResult or SegmentedResultWithSource track. |
Type | The analysis engine to use. Set this parameter to FaceState. |
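For example, the following is a minimal sketch of an analysis task that runs face state analysis. The task names (FaceStateAnalysis, FaceDetect), the input track, and the SegmentDuration value are assumptions for illustration only; substitute the names and values used in your own configuration, and check the parameter reference for the accepted duration formats.

```
// A sketch of a face state analysis task.
// "FaceStateAnalysis" and "FaceDetect" are example task names, not required values.
[FaceStateAnalysis]
Type=FaceState
// Process the faces found by an earlier face detection task
// (FaceDetect.ResultWithSource is an assumed track name).
Input=FaceDetect.ResultWithSource
// Optional: analyze up to two video frames at the same time.
NumParallel=2
// Optional: limit records in the segmented output tracks to 30 seconds.
SegmentDuration=30s
```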
Output Tracks
The following table describes the tracks that are generated by this engine. The Output column indicates whether the information contained in the track is included by default in the output created by an output task (when you do not set the Input parameter for the output task); see the sketch after the table.
Output track | Description | Output |
---|---|---|
Data | Contains one record, describing the analysis results, for each input record. | No |
DataWithSource | The same as the Data track, except that each record also includes the source image. | No |
Result | Contains one record, describing the analysis results, for each detected face. | Yes |
ResultWithSource | The same as the Result track, except that each record also includes the source image. | No |
SegmentedResult | (Available only when processing video). The same as the Result track, except that records are split into segments with a maximum duration set by the SegmentDuration parameter. | Yes |
SegmentedResultWithSource | (Available only when processing video). The same as the SegmentedResult track, except that each record also includes the source image. | No |
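The following is a sketch of an output task that selects tracks explicitly rather than relying on the defaults shown in the Output column. The task names (XmlOutput, FaceStateAnalysis), the use of an XML output engine, and the comma-separated track list are illustrative assumptions; adapt them to your own configuration.

```
// A sketch of an output task that selects tracks explicitly.
// "XmlOutput" and "FaceStateAnalysis" are example task names.
[XmlOutput]
Type=XML
// Request a non-default track (ResultWithSource) alongside the segmented results.
Input=FaceStateAnalysis.ResultWithSource,FaceStateAnalysis.SegmentedResult
// ...other settings for the output engine (destination path, format, and so on)
// are omitted from this sketch.
```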
For more information, see Face Expression Analysis Results, or use the action GetExampleRecord.