ModelPath
The path to the exported model file to use. This parameter applies when Type is set to ExtractiveQuestionAnsweringLLM or GenerativeLLM.
You can generate an appropriate model file by using the export_transformers_model.py script, which is provided in your Answer Server installation.
- When Type is set to ExtractiveQuestionAnsweringLLM, you must use an ALBERT model (see https://huggingface.co/docs/transformers/model_doc/albert).
- When Type is set to GenerativeLLM, you must use a T5 model (see https://huggingface.co/docs/transformers/model_doc/t5).
The script converts these models into a format that Answer Server can use. For more information about how to use the script, see Create the Model Files.
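The shipped script performs the conversion for you, but as a rough illustration of this kind of export, the following sketch traces a Hugging Face ALBERT model to a TorchScript .pt file. This is not the actual export_transformers_model.py script; the model name, the example input, and the assumption that the exported format is TorchScript are all illustrative.

```python
# Illustrative sketch only -- use the export_transformers_model.py script
# shipped with Answer Server to produce model files for actual use.
import torch
from transformers import AlbertModel, AlbertTokenizerFast

# Assumption: an ALBERT base model, as required for ExtractiveQuestionAnsweringLLM.
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2", torchscript=True)
model.eval()

# Trace the model with a representative input and save it as a .pt file.
inputs = tokenizer("What is the capital of France?", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
traced.save("model.pt")
```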
Type: | String |
Default: | |
Required: | Yes, when Type is set to ExtractiveQuestionAnsweringLLM or GenerativeLLM |
Configuration Section: | MyLLMModule |
Example: | ModelPath=C:\Models\model.pt |
See Also: | TokenizerPath |
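A minimal sketch of how this parameter might appear in an Answer Server configuration file, assuming a GenerativeLLM module. The section name MyLLMModule and all path values are illustrative:

```
[MyLLMModule]
Type=GenerativeLLM
ModelPath=C:\Models\model.pt
TokenizerPath=C:\Models\tokenizer
```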