Face State Analysis

Face State analysis provides additional information about detected faces, for example:

  • The facial expression.
  • Whether the person's eyes are open.
  • Whether the person is wearing spectacles.

To run face state analysis, create an analysis task with the following configuration parameters:

Configuration Parameter    Description
Input                      The track that you want to process.
NumParallel                The maximum number of video frames to analyze simultaneously.
SegmentDuration            The maximum duration of a record in the SegmentedResult or
                           SegmentedResultWithSource track.
Type                       The analysis engine to use. Set this parameter to FaceState.
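
For example, the following configuration fragment is a minimal sketch of a face state analysis task. It assumes a face detection task named FaceDetect that produces a ResultWithSource track; the section name FaceStateAnalysis and the parameter values are placeholders, and the lines that add the task to the [Session] section are omitted:

[FaceStateAnalysis]
Type=FaceState
Input=FaceDetect.ResultWithSource
NumParallel=1
SegmentDuration=5s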

Output Tracks

The following table describes the tracks that are generated by this engine. The Output column indicates whether the information contained in the track is included by default in the output created by an output task (that is, when you do not set the Input parameter for the output task). To output a track that is not included by default, set the Input parameter of the output task explicitly, as shown in the example after the table.

Output track               Description                                               Output
Data                       Contains one record, describing the analysis results,     No
                           for each input record.
DataWithSource             The same as the Data track, but each record also          No
                           includes the source frame.
Result                     Contains one record, describing the analysis results,     Yes
                           for each detected face.
ResultWithSource           The same as the Result track, but each record also        No
                           includes the best source frame.
SegmentedResult            (Available only when processing video). The same as       Yes
                           the Result track, except the duration of each record
                           is limited by the parameter SegmentDuration. If a
                           result exceeds the specified duration, Media Server
                           begins a new record with the same ID.
SegmentedResultWithSource  (Available only when processing video). The same as       No
                           the ResultWithSource track, except the duration of
                           each record is limited by the parameter
                           SegmentDuration. If a result exceeds the specified
                           duration, Media Server begins a new record with the
                           same ID.
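
To include a track that is not output by default, set the Input parameter of the output task to the name of that track, as noted above. The following fragment is a sketch only; it assumes the XML output engine and the FaceStateAnalysis task name used earlier, and the output path (including the %session.token% macro) is illustrative:

[FaceStateXML]
Type=XML
Input=FaceStateAnalysis.ResultWithSource
XMLOutputPath=./output/%session.token%/faceState.xml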

For more information, see Face Expression Analysis Results, or use the action GetExampleRecord.
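
For example, you can send the GetExampleRecord action to a running Media Server to see the structure of a record from a particular track. The following request is a sketch that assumes the default ACI port 14000 and the TrackName parameter; check the GetExampleRecord action reference for the exact parameter names:

http://localhost:14000/action=GetExampleRecord&TrackName=FaceState.Result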