Recognize Video Clips

To recognize video clips

  1. Create a new configuration to send to Media Server with the process action, or open an existing configuration that you want to modify.

  2. In the [Session] section, add a new analysis task by setting the EngineN parameter. You can give the task any name, for example:

    [Session]
    Engine0=Ingest
    Engine1=VideoMatch

  3. Create a new section to contain the settings for the task and set the following parameters:

    Type The analysis engine to use. Set this parameter to VideoMatch.
    Input (Optional) The image track to analyze. If you do not specify an input track, Media Server processes the first track of the correct type produced by the ingest engine.
    Database (Optional) The name of the video match database to use for recognizing video clips. If you do not set this parameter, Media Server uses all video match databases.
    MinimumMatchDuration (Optional) By default, video matching reports a match only when the whole of the trained clip is present in the ingested video. To detect partial matches, set the MinimumMatchDuration parameter.

    For example, if you train a video clip with a duration of 10 minutes and set MinimumMatchDuration=60s, Media Server reports a match when the analyzed video contains at least one minute of the trained clip.

    For example:

    [VideoMatch]
    Type=VideoMatch
    Database=movies

    For more information about the parameters that you can use to configure video matching, refer to the Media Server Reference.
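    A task section that combines these settings might look like the following sketch. The database name movies and the MinimumMatchDuration value are illustrative; substitute the name of your own training database and a duration that suits your clips:

    [VideoMatch]
    Type=VideoMatch
    Database=movies
    MinimumMatchDuration=60s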

  4. Save and close the configuration file. OpenText recommends that you save your configuration files in the location specified by the ConfigDirectory parameter.
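
After you save the configuration, you can run it by sending the process action to Media Server. The following sketch assumes Media Server is listening on its default ACI port (14000), that the configuration was saved as videomatch.cfg in the configurations directory, and that the source is a video file; the host name and file path are placeholders:

    http://localhost:14000/action=Process&Source=./video/clip.mp4&ConfigName=videomatch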