ChunkSize
The maximum number of tokens, counted over the full tokenized question and context, that IDOL provides to the generative model in a single request. If the total number of tokens exceeds this value, IDOL sends the input context in multiple chunks and combines the results from each chunk.
This parameter has an effect only when Type is set to GenerativeLLM.
Type: String
Default: 512
Required: No
Configuration Section: MyLLMModule
Example: ChunkSize=120
See Also: Type
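For illustration, a minimal sketch of a configuration section that sets this parameter, assuming the MyLLMModule section named above and a task of type GenerativeLLM; any other keys the section may require (model location, prompt settings, and so on) are omitted here:

    [MyLLMModule]
    // Type must be GenerativeLLM for ChunkSize to take effect.
    Type=GenerativeLLM
    // Split the tokenized question and context into chunks of at most 512 tokens.
    ChunkSize=512

Because results from each chunk are combined afterwards, a smaller ChunkSize can reduce per-request input size at the cost of more requests to the model.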