TopicTemplatePath

The path to the topic template file. This parameter applies only when Type is set to RAG.

The topic template allows you to use the answer LLM to generate additional context to send in the query to the Content component. The template must include the token {{history}}, which Answer Server replaces with the conversation history. The LLM uses this topic prompt and conversation history to return the topics in the conversation. Answer Server uses these topics in the query to Content, which might help provide more relevant candidate document summaries to use for the RAG query.

You must design your topic prompt template so that it generates answers that contain a landmark string, for example Topics, immediately followed by an array of strings. You can configure the landmark that Answer Server looks for by setting TopicLandmark. Answer Server then parses the strings to use as topics in its candidate generation query. It ignores any text before the landmark, so you can optionally use that space for generated explanations of the answers (chain-of-thought prompting).

If the content that follows the landmark is not a valid array, Answer Server does not add any contextual topics for the query.
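The parsing behavior that this section describes can be sketched as follows. This is an illustration only, not Answer Server's actual implementation; the function name and the JSON-based array parsing are assumptions:

```python
import json
import re

def extract_topics(llm_output: str, landmark: str = "Topics:") -> list[str]:
    """Illustrative sketch of landmark-based topic extraction.

    Text before the landmark (for example a chain-of-thought
    explanation) is ignored. If the content after the landmark is
    not a valid array of strings, no topics are returned.
    """
    idx = llm_output.find(landmark)
    if idx == -1:
        return []
    remainder = llm_output[idx + len(landmark):]
    # Take the first bracketed span after the landmark.
    match = re.search(r"\[.*?\]", remainder, re.DOTALL)
    if not match:
        return []
    try:
        topics = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    # Accept only an array whose elements are all strings.
    if isinstance(topics, list) and all(isinstance(t, str) for t in topics):
        return topics
    return []
```

For example, given the output `Explanation: ...\nTopics: ["parrots", "pet care"]`, this sketch returns the two topic strings, and returns an empty list when the text after the landmark is not a valid array.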

When you create your topic template, you must consider any PromptTokenLimit that you set for your RAG system. Answer Server applies the prompt token limit to all tokens it sends to the LLM as part of a single Ask action, so the size of your template also affects the number of candidate document summaries that Answer Server sends to the LLM to generate the final response.
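To make this trade-off concrete, the following sketch estimates how much of a PromptTokenLimit budget remains for candidate document summaries after the template and conversation history are accounted for. The word-based token estimate is a rough assumption; real token counts depend on your LLM's tokenizer:

```python
def estimate_summary_budget(prompt_token_limit: int,
                            template_text: str,
                            history_text: str) -> int:
    """Rough sketch: tokens left over for candidate document summaries.

    The prompt token limit applies to all tokens sent to the LLM in a
    single Ask action, so a larger template leaves less room for
    summaries.
    """
    def rough_tokens(text: str) -> int:
        # Crude heuristic: about 1.3 tokens per whitespace-separated word.
        return int(len(text.split()) * 1.3)

    used = rough_tokens(template_text) + rough_tokens(history_text)
    return max(prompt_token_limit - used, 0)
```

A longer template or conversation history reduces the returned budget, which in practice means fewer or shorter summaries in the final RAG prompt.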

For example:

A snippet of a conversation will be shown below after >>>>>Input:<<<<<. 
Read the conversation carefully and decide the current topic of the conversation. In 
particular, you should decide what words like "it", "they", "he" or "she" might be 
referring to. Be aware that topics can change during a conversation, and if it has 
changed then you should choose the topic that is related to the most recent question 
that the user sent.

Use the following format for the output, listing the topics in an array:

Explanation: <Your explanation>
Topics: ["<First topic>", "<Second topic>"]

Do not attempt to continue the conversation yourself.
Please keep your answer as brief as possible.

>>>>>Input:<<<<<
{{history}}
Output:
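For illustration, a RAG system that uses a template like the one above might be configured as follows. TopicTemplatePath, TopicLandmark, and PromptTokenLimit are the parameters described in this documentation; the section name and the values shown are examples only:

```
[MySystem]
Type=RAG
TopicTemplatePath=./rag/prompts/rag_topics.txt
TopicLandmark=Topics:
PromptTokenLimit=4000
```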

CAUTION: Answer Server substitutes user-supplied input, in the form of the question text, into the template before it passes the resulting text to the LLM.

You must design your prompt templates to mitigate prompt engineering attacks, where malicious users craft input that changes the LLM's behavior. Best practice varies according to the model that you use, so you must consult the recommendations for your chosen model.

Type: String
Default:  
Required: No
Configuration Section: MySystem
Example: TopicTemplatePath=./rag/prompts/rag_topics.txt
See Also: