Query Manipulation Server

Query Manipulation Server is an ACI server. For details of changes that affect all ACI servers, see ACI Server Framework.

24.1.0

New Features

  • The new ModelSummarize action has been added. This action allows you to use a third-party generative model to summarize a document or set of documents. To use this action, you must configure a generative model in the [Generative] configuration section. For more information, refer to the QMS Help.
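
    For example (a sketch only; the contents of the [Generative] section and the ModelSummarize action parameters shown here are hypothetical placeholders, so refer to the QMS Help for the supported settings):

    [Generative]
    // Hypothetical settings that identify the generative model to use.
    Endpoint=https://models.example.com/generate

    action=ModelSummarize&Reference=MyDocument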

  • The vector embeddings that the vector generator returns now include offset information. Each embedding has the start and end byte offsets of the chunk of text that corresponds to the embedding, and the length of the chunk, in bytes.

    This change is always enabled for the SentenceTransformer module. For the Lua module, you can choose whether to return the offset information from the generateembeddings Lua function. To return offsets, the generateembeddings function must return two tables: the first contains the embeddings, and the second contains the offsets (see the sketch below).

    The ModelEncode action returns the offset information as attributes of the vector in the response.
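
    For the Lua module, a minimal sketch of a generateembeddings function that returns both tables might look like the following. The chunking logic, the embed helper, the exact function signature, and the offset field names are illustrative assumptions; refer to the QMS Help for the real Lua interface.

    -- A sketch only: split the input text into fixed-size byte chunks and
    -- return one embedding per chunk, plus the matching offset information.
    local function embed(chunk)
        -- Placeholder for whatever produces the vector for a chunk of text.
        return { 0.0, 0.0, 0.0 }
    end

    function generateembeddings(text)
        local embeddings = {}
        local offsets = {}
        local chunksize = 128    -- illustrative chunk size, in bytes

        local first = 1
        while first <= #text do
            local last = math.min(first + chunksize - 1, #text)

            embeddings[#embeddings + 1] = embed(string.sub(text, first, last))

            -- Each offsets entry describes the chunk that produced the
            -- corresponding embedding: the start and end byte offsets, and
            -- the length of the chunk in bytes (field names and the
            -- zero-based start are illustrative).
            offsets[#offsets + 1] = {
                start = first - 1,
                finish = last,
                length = last - first + 1,
            }

            first = last + 1
        end

        -- The first table contains the embeddings, the second the offsets.
        return embeddings, offsets
    end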

  • The new configuration parameters ModelSequenceOverlap and ModelMinimumFinalSequenceLength have been added to allow you to control how QMS splits the text used to generate embeddings when multiple embeddings are required. For more information, refer to the QMS Help.
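
    For example, assuming that these parameters are set in the model configuration section (the section shown here matches the 23.4.0 example below, and the values are illustrative only):

    [SentenceTransformer]
    Type=Transformer
    ModelPath=C:\modelfiles\model.pt
    TokenizerPath=C:\modelfiles\tokenizer.spiece.model
    ModelMaxSequenceLength=128
    ModelSequenceOverlap=16
    ModelMinimumFinalSequenceLength=32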

Resolved Issues

  • When QMS processed query text with synonym rules (that is, when ExpandQuery was set to True), it removed terms from the query text that were followed by a colon.

23.4.0

New Features

  • QMS can now provide a spell-checked version of the original query text, rather than of the expanded query, when you send a query with the ExpandQuery and SpellCheck parameters set to True. Previously, QMS returned a corrected version of the expanded query.

    To use this feature, you must set the SpellCheckShowOriginal parameter to True in the Content component configuration.
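
    For example, you might then send a query such as the following (the query text is illustrative); QMS returns a spell-checked version of the original text rather than of the expanded query:

    action=Query&Text=strawbery%20ice%20cream&ExpandQuery=True&SpellCheck=True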

  • You can now configure QMS to create embeddings to use in your IDOL Content component index for vector searches. The new [Embeddings] configuration section allows you to set the location of your model files. For example:

    [Embeddings]
    0=SentenceTransformer

    [SentenceTransformer]
    Type=Transformer
    ModelPath=C:\modelfiles\model.pt
    TokenizerPath=C:\modelfiles\tokenizer.spiece.model
    ModelMaxSequenceLength=128

    You then generate the embeddings by using the new ModelEncode action. For example:

    action=ModelEncode&Model=SentenceTransformer&Text=my%20text

    For more information, refer to the QMS Help.

Resolved Issues

There were no resolved issues in this release.

23.3.0

New in this Release

There were no new features in this release.

Resolved Issues

  • When using QMS through IDOL Admin, the ExpandNames parameter did not work, because the responseformat=json parameter was incorrectly passed from IDOL Admin through to the name variant library.

  • When IntentRankedQuery was activated, if Content did not return any results, QMS incorrectly returned a BADPARAMETER error.

23.2.0

New in this Release

  • QMS can now expand names in query text to include other variants of the same name, such as variants that use initials or titles, nicknames, phonetically similar names, and translations.

    NOTE: This feature requires the IDOL Eduction combined_names.ecr grammar and pii_postprocessing.lua script, which you must obtain from the IDOL Eduction Grammars package.

    To use the new expansions, you must configure the locations of the required Eduction grammar and of the data files that are included in your QMS package. For example:

    [NameVariants]
    GrammarDirectory=grammars/pii
    DataDirectory=staticdata

    You can then send a query with the new ExpandNames parameter to expand any names in the original query with the matching variants. For example:

    action=Query&Text=John%20Smith&ExpandNames=True

    For more information, refer to the Query Manipulation Server Help.

Resolved Issues

  • When queries were sent through QMS, the <autn:predicted> tag (which indicates whether the reported totalresults value is estimated or exact) always had the value false, regardless of whether prediction was used.