ChunkSize
The maximum number of tokens of the tokenized input text to provide to the generative model in a single request. If the total number of tokens is larger, QMS sends the input context in multiple chunks and combines the results from each chunk.
This parameter has an effect only when Type is set to GenerativeLLM.
Type: String
Default: 512
Required: No
Configuration Section: MyGenerativeModel
Example: ChunkSize=120
See Also: Type
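A minimal configuration sketch showing how ChunkSize might appear alongside Type in its configuration section. The section name MyGenerativeModel and the parameter values shown here are illustrative, taken from the example and default above rather than from a complete product configuration:

    [MyGenerativeModel]
    // This section configures the generative model (illustrative sketch).
    Type=GenerativeLLM
    // ChunkSize applies only when Type is GenerativeLLM; 512 is the documented default.
    ChunkSize=512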