From 1aa7e2d49c4b4b6558ab0c9d9734359cd0ea04af Mon Sep 17 00:00:00 2001
From: MohamedBassem
Date: Sun, 13 Oct 2024 14:59:49 +0000
Subject: docs: Release the 0.18 docs
---
 .../02-Installation/04-kubernetes.md | 71 ++++++++++++++++++++++
 1 file changed, 71 insertions(+)
 create mode 100644 docs/versioned_docs/version-v0.18.0/02-Installation/04-kubernetes.md

# Kubernetes

### Requirements

- A Kubernetes cluster
- kubectl
- kustomize

### 1. Get the deployment manifests

You can clone the repository and copy the `/kubernetes` directory into another directory of your choice.

### 2. Populate the environment variables

To configure the app, edit the configuration in `.env`.

You **should** change the random strings. You can use `openssl rand -base64 36` to generate them. You should also change the `NEXTAUTH_URL` variable to point to your server address.

Using `HOARDER_VERSION=release` will pull the latest stable version. You might want to pin the version instead to control upgrades (e.g. `HOARDER_VERSION=0.10.0`). Check the latest versions [here](https://github.com/hoarder-app/hoarder/pkgs/container/hoarder-web).

### 3. Setup OpenAI

To enable automatic tagging, you'll need to configure OpenAI. This is optional, but highly recommended.

- Follow [OpenAI's help](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key) to get an API key.
- Add the OpenAI API key to the `.env` file:

```
OPENAI_API_KEY=
```

Learn more about the costs of using OpenAI [here](/openai).
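For illustration, the secret generation and a filled-in `.env` can be sketched as follows. All values are placeholders, and the exact secret variable names (`NEXTAUTH_SECRET`, `MEILI_MASTER_KEY` here) are assumptions; use whichever keys the `.env` file you copied actually contains.

```shell
# Generate one fresh random string per secret your .env expects.
NEXTAUTH_SECRET="$(openssl rand -base64 36)"
MEILI_MASTER_KEY="$(openssl rand -base64 36)"

# Sketch of a filled-in .env (hostname and variable names are
# illustrative assumptions, not the authoritative list):
cat <<EOF
HOARDER_VERSION=0.18.0
NEXTAUTH_URL=http://hoarder.example.com:3000
NEXTAUTH_SECRET=${NEXTAUTH_SECRET}
MEILI_MASTER_KEY=${MEILI_MASTER_KEY}
EOF
```

Generate a separate string for each secret rather than reusing one value.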
<details>
<summary>[EXPERIMENTAL] If you want to use <a href="https://ollama.com/">Ollama</a> instead for local inference</summary>

**Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle-tested as using OpenAI, so proceed with care (and potentially expect a bunch of inference failures).

- Make sure Ollama is running.
- Set the `OLLAMA_BASE_URL` env variable to the address of the Ollama API.
- Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in Ollama (for example: `mistral`).
- Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in Ollama (for example: `llava`).
- Make sure that you `ollama pull`-ed the models that you want to use.

</details>
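Put together, the Ollama-related `.env` entries might look like this. The base URL assumes Ollama is running on its default port (11434) on the same host, and `mistral`/`llava` are just the example models from above; adjust all three to your setup.

```
OLLAMA_BASE_URL=http://localhost:11434
INFERENCE_TEXT_MODEL=mistral
INFERENCE_IMAGE_MODEL=llava
```

Remember to `ollama pull mistral` and `ollama pull llava` (or whichever models you picked) before relying on tagging.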
### 4. Deploy the service

Deploy the service by running:

```
make deploy
```

### 5. Access the service

By default, these manifests expose the application as a LoadBalancer Service. You can run `kubectl get services` to identify the IP of the load balancer for your service.

Then visit `http://<loadbalancer-ip>:3000` and you should be greeted with the Sign In page.

> Note: Depending on your setup, you might want to expose the service via an Ingress, or have a different means to access it.

### [Optional] 6. Setup quick sharing extensions

Go to the [quick sharing page](/quick-sharing) to install the mobile apps and the browser extensions. Those will help you hoard things faster!

## Updating

Edit the `HOARDER_VERSION` variable in the `kustomization.yaml` file and run `make clean deploy`.
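The update step can be sketched end to end. This sketch simulates the edit on a throwaway copy; the assumption that `kustomization.yaml` carries a literal `HOARDER_VERSION=...` line is illustrative, and on a real deployment you would edit the actual file and then run `make clean deploy`.

```shell
# Work on a throwaway copy so nothing real is touched
# (on a real deployment, edit kustomization.yaml itself).
printf 'HOARDER_VERSION=release\n' > /tmp/kustomization-example.yaml

# Pin the version, i.e. the same edit you would make by hand:
sed -i 's/^HOARDER_VERSION=.*/HOARDER_VERSION=0.18.0/' /tmp/kustomization-example.yaml

cat /tmp/kustomization-example.yaml
```

Pinning an explicit version, rather than `release`, keeps upgrades deliberate: the cluster only moves when you change this line and redeploy.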