author    MohamedBassem <me@mbassem.com>  2024-10-12 17:29:35 +0000
committer MohamedBassem <me@mbassem.com>  2024-10-12 17:37:42 +0000
commit    5e44cfbc45fd09caa414f9b676e2458758df4e5d (patch)
tree      6eae3519abc6564139915d2b62091b05444a1765 /docs
parent    6035dff9ee3aae4dc945f77c56b2f736dcd8d0e1 (diff)
download  karakeep-5e44cfbc45fd09caa414f9b676e2458758df4e5d.tar.zst
docs: Remove the warning about ollama being new
Diffstat (limited to 'docs')
-rw-r--r--  docs/docs/02-Installation/01-docker.md  7
-rw-r--r--  docs/docs/03-configuration.md           2
2 files changed, 5 insertions, 4 deletions
diff --git a/docs/docs/02-Installation/01-docker.md b/docs/docs/02-Installation/01-docker.md
index 4745529a..fc46eb6c 100644
--- a/docs/docs/02-Installation/01-docker.md
+++ b/docs/docs/02-Installation/01-docker.md
@@ -52,15 +52,16 @@ OPENAI_API_KEY=<key>
Learn more about the costs of using openai [here](/openai).
<details>
- <summary>[EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
+ <summary>If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
- **Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle tested as using openai, so proceed with care (and potentially expect a bunch of inference failures).
+ **Note:** The quality of the tags you'll get will depend on the quality of the model you choose.
- Make sure ollama is running.
- Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
- - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `mistral`)
+ - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
- Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
- Make sure that you `ollama pull`-ed the models that you want to use.
+ - You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.
</details>
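For reference, the Ollama-related settings described in the hunk above could be combined in an `.env` file along these lines. This is a hypothetical sketch: the base URL and the context-length value are assumptions, not values from the commit, so adjust them for your own setup.

```shell
# Hypothetical .env snippet for local inference with Ollama.
# The base URL assumes Ollama runs on the Docker host at its default port;
# change it to wherever your Ollama API is reachable.
OLLAMA_BASE_URL=http://host.docker.internal:11434
INFERENCE_TEXT_MODEL=llama3.1
INFERENCE_IMAGE_MODEL=llava
# Optional: a larger context length can improve tag quality at the cost of
# more compute per request; 2048 here is only an illustrative value.
INFERENCE_CONTEXT_LENGTH=2048
```

As the diff notes, the models must already be present locally, e.g. via `ollama pull llama3.1` and `ollama pull llava`.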
diff --git a/docs/docs/03-configuration.md b/docs/docs/03-configuration.md
index 98fa7a1a..9abd6fb2 100644
--- a/docs/docs/03-configuration.md
+++ b/docs/docs/03-configuration.md
@@ -45,7 +45,7 @@ Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic taggin
:::warning
- The quality of the tags you'll get will depend on the quality of the model you choose.
-- Running local models is a recent addition and not as battle tested as using OpenAI, so proceed with care (and potentially expect a bunch of inference failures).
+- You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be (money-wise on OpenAI and resource-wise on ollama).
:::
| Name | Required | Default | Description |