# Kubernetes

### Requirements

- A Kubernetes cluster
- `kubectl`
- `kustomize`

### 1. Get the deployment manifests

You can clone the repository and copy the `/kubernetes` directory into another directory of your choice.
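For example, a minimal sketch of that step (the target directory name `hoarder-k8s` is just a placeholder):

```
git clone https://github.com/hoarder-app/hoarder.git
cp -r hoarder/kubernetes hoarder-k8s
cd hoarder-k8s
```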

### 2. Populate the environment variables

To configure the app, edit the configuration in `.env`.


You **should** change the random strings; `openssl rand -base64 36` generates suitable values. You should also set the `NEXTAUTH_URL` variable to point to your server address.

Using `HOARDER_VERSION=release` will pull the latest stable version. You might want to pin the version instead to control the upgrades (e.g. `HOARDER_VERSION=0.10.0`). Check the latest versions [here](https://github.com/hoarder-app/hoarder/pkgs/container/hoarder-web).
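For illustration, a populated `.env` might look like this (the secret names shown here are assumptions; keep whatever keys the file you copied already defines):

```
# Random secrets; generate each with: openssl rand -base64 36
NEXTAUTH_SECRET=<random string>
MEILI_MASTER_KEY=<random string>

# Address your users will reach the app on
NEXTAUTH_URL=http://hoarder.example.com:3000

# Pin a specific version, or use "release" for the latest stable
HOARDER_VERSION=0.18.0
```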

### 3. Setup OpenAI

To enable automatic tagging, you'll need to configure OpenAI. This is optional, but highly recommended.

- Follow [OpenAI's help](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key) to get an API key.
- Add the OpenAI API key to the `.env` file:

```
OPENAI_API_KEY=<key>
```

Learn more about the costs of using OpenAI [here](/openai).

<details>
    <summary>[EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) for local inference instead.</summary>

    **Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle-tested as using OpenAI, so proceed with care (and potentially expect some inference failures).

    - Make sure Ollama is running.
    - Set the `OLLAMA_BASE_URL` env variable to the address of the Ollama API.
    - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in Ollama (for example: `mistral`).
    - Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in Ollama (for example: `llava`).
    - Make sure that you've `ollama pull`-ed the models that you want to use. A combined example follows after this section.

</details>
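
If you opt for the Ollama route, the combined `.env` additions might look like this (the address and model names are placeholders for your setup):

```
# Address of your Ollama API (placeholder)
OLLAMA_BASE_URL=http://ollama.internal:11434

# Models you've already pulled with `ollama pull`
INFERENCE_TEXT_MODEL=mistral
INFERENCE_IMAGE_MODEL=llava
```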

### 4. Deploy the service

Deploy the service by running:

```
make deploy
```
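
If you'd rather not use `make`, the deployment presumably boils down to rendering the manifests with kustomize and applying them; something like the following should be equivalent (run from the directory containing the manifests):

```
kustomize build . | kubectl apply -f -
```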

### 5. Access the service

By default, these manifests expose the application as a LoadBalancer Service. You can run `kubectl get services` to identify the IP of the loadbalancer for your service.

Then visit `http://<loadbalancer-ip>:3000` and you should be greeted with the Sign In page.
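
If you want the IP without eyeballing the table, a `jsonpath` query works too (the Service name `web` is an assumption; use the name shown by `kubectl get services`):

```
kubectl get service web -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```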

> Note: Depending on your setup, you might want to expose the service via an Ingress or access it by some other means.
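
For the Ingress route, a minimal sketch might look like the following (the host, ingress class, and Service name `web` are all placeholders for your environment):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hoarder
spec:
  ingressClassName: nginx   # placeholder; use your cluster's ingress class
  rules:
    - host: hoarder.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web         # assumed Service name; check your manifests
                port:
                  number: 3000
```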

### [Optional] 6. Setup quick sharing extensions

Go to the [quick sharing page](/quick-sharing) to install the mobile apps and the browser extensions. Those will help you hoard things faster!

## Updating

Edit the `HOARDER_VERSION` variable in the `kustomization.yaml` file and run `make clean deploy`.
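
For example, moving to a hypothetical `0.19.0` release (this assumes the version appears as a `HOARDER_VERSION=...` literal in `kustomization.yaml`; edit the file by hand if yours differs):

```
sed -i 's/HOARDER_VERSION=.*/HOARDER_VERSION=0.19.0/' kustomization.yaml
make clean deploy
```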