Draw

The Draw transformation task draws regions on images or video frames.
To draw regions on images and video frames
1. Create a new configuration to send to HPE Media Server with the process action, or open an existing configuration that you want to modify.

2. In the [Transform] section, add a new transformation task by setting the TransformEngineN parameter. You can give the task any name, for example:

   [Transform]
   TransformEngine0=Draw
3. Create a new configuration section to contain the task settings, and set the following parameters:

   Type
       The transformation engine to use. Set this parameter to Draw.
   Input
       The name of the track that contains the images to draw on, with region data. The track must supply records that contain both an image and at least one region.
   Color
       (Optional) The line color to use when drawing.
   Thickness
       (Optional) The line thickness to use when drawing. Specify the thickness in pixels.

   For example:

   [Draw]
   Type=Draw
   Input=CombineESP.Output
   Color=Red
   Thickness=1
4. Save and close the configuration file. HPE recommends that you save your configuration files in the location specified by the ConfigDirectory parameter.
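You can then run the configuration by sending the process action to HPE Media Server. The sketch below is an assumption-laden example, not part of this procedure: the host, port, source file, and configuration file name are all placeholders, and it assumes the configuration was saved (as recommended above) in the directory specified by ConfigDirectory so that it can be referenced by name. Substitute the values for your deployment:

   http://localhost:14000/action=process&source=./video/input.mp4&configName=draw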
The following example configuration ingests a video file or stream and encodes one image for each detected face. Each image shows a red ellipse around the detected face.
[Ingest]
IngestEngine=AV

[AV]
Type=libav

[Analysis]
AnalysisEngine0=FaceDetect

[FaceDetect]
Type=FaceDetect
NumParallel=6
SizeUnit=percent
MinSize=10

[Transform]
TransformEngine0=Draw

[Draw]
Type=Draw
Input=FaceDetect.ResultWithSource
Color=Red
Thickness=1

[Encoding]
EncodingEngine0=Images

[Images]
Type=ImageEncoder
ImageInput=Draw.Output
OutputPath=./_output/%session.token%/%record.starttime.iso8601%.png
The following example configuration ingests a video file or stream and encodes a video that shows detected faces by surrounding them with a red ellipse.
In this example, an ESP task combines the Image_1 track and the FaceDetect.Data track. This is necessary because the tracks produced by face detection contain only those frames with detected faces, but to encode the video correctly we need to encode every frame.
[Ingest]
IngestEngine=AV

[AV]
Type=libav

[Analysis]
AnalysisEngine0=FaceDetect

[FaceDetect]
Type=FaceDetect
NumParallel=6
SizeUnit=percent
MinSize=10

[EventProcessing]
EventProcessingEngine0=Combine

[Combine]
Type=Combine
Input0=Image_1
Input1=FaceDetect.Data
MaxTimeInterval=50ms

[Transform]
TransformEngine0=Draw

[Draw]
Type=Draw
Input=Combine.Output
Color=Red
Thickness=1

[Encoding]
EncodingEngine0=MPEG

[MPEG]
Type=mpeg
VideoProfile=mpeg4video_h264_720p
ImageInput=Draw.Output
OutputPath=./_output/%session.token%/%segment.starttime.iso8601%.ts