author     Mohamed Bassem <me@mbassem.com>    2025-04-13 17:03:58 +0000
committer  Mohamed Bassem <me@mbassem.com>    2025-04-13 17:03:58 +0000
commit     1373a7b21d7b04f0fe5ea2a008c88b6a85665fe0 (patch)
tree       eb88bb3c6f04d8d4dea1be889cb8a8e552ca91ba /docs
parent     f3c525b7f7dd360f654d8621bbf64e31ad5ff48e (diff)
download   karakeep-1373a7b21d7b04f0fe5ea2a008c88b6a85665fe0.tar.zst
fix: Allow using JSON mode for ollama users. Fixes #1160
Diffstat (limited to 'docs')
-rw-r--r--  docs/docs/03-configuration.md  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/docs/docs/03-configuration.md b/docs/docs/03-configuration.md
index 51ee23a5..3790d289 100644
--- a/docs/docs/03-configuration.md
+++ b/docs/docs/03-configuration.md
@@ -63,7 +63,8 @@ Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic taggin
| INFERENCE_LANG | No | english | The language in which the tags will be generated. |
| INFERENCE_JOB_TIMEOUT_SEC | No | 30 | How long to wait for the inference job to finish before timing out. If you're running ollama without powerful GPUs, you might want to increase the timeout a bit. |
| INFERENCE_FETCH_TIMEOUT_SEC | No | 300 | \[Ollama Only\] The timeout of the fetch request to the ollama server. If your inference requests take longer than the default 5mins, you might want to increase this timeout. |
-| INFERENCE_SUPPORTS_STRUCTURED_OUTPUT | No | true | Whether the inference model supports structured output or not. |
+| INFERENCE_SUPPORTS_STRUCTURED_OUTPUT | No | Not set | \[DEPRECATED\] Whether the inference model supports structured output or not. Use INFERENCE_OUTPUT_SCHEMA instead. Setting it to true translates to INFERENCE_OUTPUT_SCHEMA=structured, and setting it to false translates to INFERENCE_OUTPUT_SCHEMA=plain. |
+| INFERENCE_OUTPUT_SCHEMA | No | structured | Possible values are "structured", "json", and "plain". "structured" is the preferred option, but if your model doesn't support it, you can use "json" if your model supports JSON mode, or otherwise "plain", which works with all models, though the model might not output the data in the correct format. |
:::info
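
For illustration only, a minimal .env sketch of the JSON-mode fallback this commit documents for Ollama users; the base URL value is an assumption (Ollama's common local default), not taken from this commit:

```
# Sketch: an Ollama setup whose model lacks structured output but supports JSON mode.
OLLAMA_BASE_URL=http://localhost:11434   # assumed local Ollama endpoint
INFERENCE_OUTPUT_SCHEMA=json             # new option: fall back to JSON mode
# The deprecated flag maps onto the new one: true -> structured, false -> plain.
# INFERENCE_SUPPORTS_STRUCTURED_OUTPUT=false
```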