Answer Server

Answer Server is an ACI server. For details of changes that affect all ACI servers, see ACI Server Framework.

24.1.0

New Features

  • You can now configure Passage Extractor systems to use an LLM to extract or generate answers. To configure such a system, set the Type to PassageExtractorLLM. As with a standard passage extractor, you must configure the location of the IDOL index to use to find answers, and the classifier files that describe the different types of questions.

    For an LLM passage extractor, you must also configure the location of the model and tokenizer files that the LLM uses to generate or extract answers.

    You can also use these models in a Lua script, for example to access an LLM through an HTTP endpoint.

    For example:

    [passageextractorLLM]
    // Data store IDOL
    IdolHost=localhost
    IdolAciport=6002
    Type=PassageExtractorLLM
    // Classifier Files
    ClassifierFile=./passageextractor/classifiertraining/svm_en.dat
    LabelFile=./passageextractor/classifiertraining/labels_en.dat
    // Module to use
    ModuleID=LLMExtractiveQuestionAnswering-Small
    
    [LLMExtractiveQuestionAnswering-Small]
    Type=ExtractiveQuestionAnsweringLLM
    ModelPath=modelfiles/model.pt
    TokenizerPath=modelfiles/tokenizer.spiece.model

    For more information, refer to the Answer Server Help.
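The Lua integration itself is not shown in this release note, but the kind of logic such a script might wrap, posting a question and a candidate passage to an LLM served over HTTP, can be sketched in Python. The endpoint URL and payload field names below are hypothetical, not an Answer Server or LLM server API:

```python
import json
from urllib import request

# Hypothetical LLM HTTP endpoint; substitute your own service.
LLM_ENDPOINT = "http://localhost:8000/generate"

def build_payload(question, passage):
    """Build a JSON payload asking the LLM to answer from a passage.

    The prompt wording and field names are illustrative only.
    """
    return {
        "prompt": (
            "Answer the question using only the passage.\n"
            f"Passage: {passage}\nQuestion: {question}"
        ),
        "max_tokens": 128,
    }

def ask_llm(question, passage):
    """POST the payload to the LLM endpoint and return the response text."""
    data = json.dumps(build_payload(question, passage)).encode("utf-8")
    req = request.Request(
        LLM_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

A Lua script used by Answer Server would perform the same steps: build a prompt from the question and passage, POST it to the endpoint, and parse the generated answer out of the response.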

  • When you use a Passage Extractor LLM system, the Ask action returns a highlighted paragraph in the response metadata, showing the passage that the answer was extracted from. This allows you to verify automatically generated answers.
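As a sketch of how you might call the Ask action against such a system, the following Python builds an ACI-style action URL. The port (7700) and system name are examples; match them to your own deployment:

```python
from urllib.parse import urlencode

def ask_url(host, port, question, system="passageextractorLLM"):
    """Build an Ask action URL for Answer Server.

    The system name matches the configuration section of the
    Passage Extractor LLM system to query.
    """
    params = urlencode({"text": question, "systemnames": system})
    return f"http://{host}:{port}/action=Ask&{params}"

url = ask_url("localhost", 7700, "Who wrote Hamlet?")
# The XML response to this action includes the answer and, for a
# Passage Extractor LLM system, metadata highlighting the passage
# that the answer was extracted from.
```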

  • You can now configure a Passage Extractor or Passage Extractor LLM system to run vector queries against the IDOL Content component to identify candidate documents that might contain answers to an input question. You can use this option when you index vectors in your IDOL Content component and want to use vector search to retrieve answers.

    To use this option, set the AnswerCandidateEmbeddingsSettings parameter in your system configuration section to the name of a configuration section that specifies the Content vector field and an embeddings configuration describing how to generate the embeddings to send to Content. For example:

    [PassageExtractorSystem]
    IdolHost=localhost
    IdolAciport=6002
    Type=PassageExtractor
    ...
    AnswerCandidateEmbeddingsSettings=VectorSettings
    
    [VectorSettings]
    EmbeddingsConfig=EmbeddingsSystem
    VectorField=VECTORA
    
    [EmbeddingsSystem]
    Type=Transformer
    ModelPath=path/to/model.pt
    TokenizerPath=path/to/tokenizer.spiece.model
    ModelMaxSequenceLength=128

    For more information, refer to the Answer Server Help.
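Conceptually, vector retrieval ranks candidate documents by the similarity between the question embedding and each document's stored embedding. The exact scoring is internal to the IDOL Content component, but the idea can be sketched with cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(question_vec, doc_vecs):
    """Return document IDs sorted by similarity to the question vector."""
    scores = {doc_id: cosine_similarity(question_vec, vec)
              for doc_id, vec in doc_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy two-dimensional embeddings; real models produce hundreds of dimensions.
docs = {"doc1": [1.0, 0.0], "doc2": [0.6, 0.8], "doc3": [0.0, 1.0]}
print(rank_candidates([0.9, 0.1], docs))  # doc1 ranks highest
```

In the configuration above, the EmbeddingsSystem section plays the role of producing the question vector, and Content performs the similarity search over the VECTORA field.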

Resolved Issues

There were no resolved issues in this release.

23.4.0

There were no new features or resolved issues in this release.

23.3.0

There were no new features or resolved issues in this release.

23.2.0

There were no new features or resolved issues in this release.