Diffstat (limited to 'docs')
-rw-r--r--  docs/docs/01-intro.md          |  2 +-
-rw-r--r--  docs/docs/02-installation.md   | 11 ++++++++++-
-rw-r--r--  docs/docs/03-configuration.md  | 15 +++++++++++++--
3 files changed, 24 insertions, 4 deletions
diff --git a/docs/docs/01-intro.md b/docs/docs/01-intro.md
index e5eac1dc..7413e163 100644
--- a/docs/docs/01-intro.md
+++ b/docs/docs/01-intro.md
@@ -15,7 +15,7 @@ Hoarder is an open source "Bookmark Everything" app that uses AI for automatical
 - ⬇️ Automatic fetching for link titles, descriptions and images.
 - 📋 Sort your bookmarks into lists.
 - 🔎 Full text search of all the content stored.
-- ✨ AI-based (aka chatgpt) automatic tagging.
+- ✨ AI-based (aka chatgpt) automatic tagging. With support for local models using ollama!
 - 🔖 [Chrome plugin](https://chromewebstore.google.com/detail/hoarder/kgcjekpmcjjogibpjebkhaanilehneje) for quick bookmarking.
 - 📱 An iOS app that's pending apple's review.
 - 🌙 Dark mode support.
diff --git a/docs/docs/02-installation.md b/docs/docs/02-installation.md
index 0a25c7bf..50069e31 100644
--- a/docs/docs/02-installation.md
+++ b/docs/docs/02-installation.md
@@ -46,9 +46,18 @@ To enable automatic tagging, you'll need to configure OpenAI. This is optional t
 
 Learn more about the costs of using openai [here](/openai).
 
+<details>
+  <summary>If you want to use Ollama (https://ollama.com/) for local inference instead.</summary>
 
-### 5. Start the service
+  - Make sure ollama is running.
+  - Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
+  - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama2`).
+  - Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`).
+  - Make sure that you've `ollama pull`-ed the models that you want to use.
+
+</details>
+### 5. Start the service
 
 Start the service by running:
 
diff --git a/docs/docs/03-configuration.md b/docs/docs/03-configuration.md
index 585d25b5..bba81b70 100644
--- a/docs/docs/03-configuration.md
+++ b/docs/docs/03-configuration.md
@@ -8,6 +8,17 @@ The app is mainly configured by environment variables. All the used environment
 | NEXTAUTH_SECRET | Yes | Not set | Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. |
 | REDIS_HOST | Yes | localhost | The address of redis used by background jobs |
 | REDIS_POST | Yes | 6379 | The port of redis used by background jobs |
-| OPENAI_API_KEY | No | Not set | The OpenAI key used for automatic tagging. If not set, automatic tagging won't be enabled. More on that in [here](/openai). |
-| MEILI_ADDR | No | Not set | The address of meilisearch. If not set, Search will be disabled. E.g. (`http://meilisearch:7700`) |
+| MEILI_ADDR | No | Not set | The address of meilisearch. If not set, Search will be disabled. E.g. (`http://meilisearch:7700`) |
 | MEILI_MASTER_KEY | Only in Prod and if search is enabled | Not set | The master key configured for meilisearch. Not needed in development environment. Generate one with `openssl rand -base64 36` |
+
+## Inference Configs (For automatic tagging)
+
+Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` needs to be set for automatic tagging to be enabled. Otherwise, automatic tagging will be skipped.
+
+| Name                  | Required | Default              | Description                                                                                                                                                                                      |
+| --------------------- | -------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| OPENAI_API_KEY        | No       | Not set              | The OpenAI key used for automatic tagging. More on that [here](/openai).                                                                                                                         |
+| OPENAI_BASE_URL       | No       | Not set              | If you just want to use OpenAI you don't need to pass this variable. If, however, you want to use some other OpenAI-compatible API (e.g. azure openai service), set this to the URL of the API. |
+| OLLAMA_BASE_URL       | No       | Not set              | If you want to use ollama for local inference, set this to the address of the ollama API.                                                                                                        |
+| INFERENCE_TEXT_MODEL  | No       | gpt-3.5-turbo-0125   | The model to use for text inference. You'll need to change this to some other model if you're using ollama.                                                                                      |
+| INFERENCE_IMAGE_MODEL | No       | gpt-4-vision-preview | The model to use for image inference. You'll need to change this to some other model if you're using ollama, and that model needs to support vision APIs (e.g. llava).                           |
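Taken together, the new installation steps and the configuration table boil down to a handful of environment variables. Below is a minimal sketch of the Ollama variant in `.env` form; the `host.docker.internal` hostname and port `11434` (Ollama's usual default) are assumptions about a typical Docker setup, not values taken from the diff.

```bash
# Hypothetical .env excerpt: automatic tagging via a local Ollama server.
# host.docker.internal:11434 is an assumed address for an Ollama instance
# running on the Docker host; adjust it to wherever your Ollama API listens.
OLLAMA_BASE_URL=http://host.docker.internal:11434

# Point the inference models at ones available in your local Ollama library.
INFERENCE_TEXT_MODEL=llama2
INFERENCE_IMAGE_MODEL=llava
```

The referenced models need to exist locally first, e.g. `ollama pull llama2` and `ollama pull llava`, as the installation step above notes.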

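For the hosted path, an equivalent sketch uses the `OPENAI_*` variables from the new table; the key and base URL below are placeholders, and `OPENAI_BASE_URL` only matters when targeting an OpenAI-compatible service rather than OpenAI itself.

```bash
# Hypothetical .env excerpt: automatic tagging via OpenAI or a compatible API.
OPENAI_API_KEY=sk-placeholder-key    # placeholder, use your real key
# Leave OPENAI_BASE_URL unset for OpenAI itself; set it only for a compatible
# service such as Azure OpenAI (the URL below is a placeholder).
OPENAI_BASE_URL=https://my-azure-endpoint.example.com/v1
# Text and image models default to gpt-3.5-turbo-0125 and gpt-4-vision-preview.
```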