Diffstat (limited to 'docs/versioned_docs')
-rw-r--r--  docs/versioned_docs/version-v0.23.1/01-intro.md  49
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/01-docker.md  99
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/02-unraid.md  19
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/03-archlinux.md  48
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/04-kubernetes.md  112
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/05-pikapods.md  32
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/06-debuntu.md  70
-rw-r--r--  docs/versioned_docs/version-v0.23.1/02-Installation/07-minimal-install.md  49
-rw-r--r--  docs/versioned_docs/version-v0.23.1/03-configuration.md  133
-rw-r--r--  docs/versioned_docs/version-v0.23.1/04-screenshots.md  34
-rw-r--r--  docs/versioned_docs/version-v0.23.1/05-quick-sharing.md  18
-rw-r--r--  docs/versioned_docs/version-v0.23.1/06-openai.md  11
-rw-r--r--  docs/versioned_docs/version-v0.23.1/07-Development/01-setup.md  122
-rw-r--r--  docs/versioned_docs/version-v0.23.1/07-Development/02-directories.md  28
-rw-r--r--  docs/versioned_docs/version-v0.23.1/07-Development/03-database.md  11
-rw-r--r--  docs/versioned_docs/version-v0.23.1/07-Development/04-architecture.md  9
-rw-r--r--  docs/versioned_docs/version-v0.23.1/08-security-considerations.md  14
-rw-r--r--  docs/versioned_docs/version-v0.23.1/09-command-line.md  109
-rw-r--r--  docs/versioned_docs/version-v0.23.1/10-import.md  49
-rw-r--r--  docs/versioned_docs/version-v0.23.1/11-FAQ.md  60
-rw-r--r--  docs/versioned_docs/version-v0.23.1/12-troubleshooting.md  33
-rw-r--r--  docs/versioned_docs/version-v0.23.1/13-community-projects.md  47
-rw-r--r--  docs/versioned_docs/version-v0.23.1/14-Guides/01-legacy-container-upgrade.md  66
-rw-r--r--  docs/versioned_docs/version-v0.23.1/14-Guides/02-search-query-language.md  69
-rw-r--r--  docs/versioned_docs/version-v0.23.1/14-Guides/03-singlefile.md  30
-rw-r--r--  docs/versioned_docs/version-v0.23.1/14-Guides/04-hoarder-to-karakeep-migration.md  18
26 files changed, 1339 insertions, 0 deletions
diff --git a/docs/versioned_docs/version-v0.23.1/01-intro.md b/docs/versioned_docs/version-v0.23.1/01-intro.md
new file mode 100644
index 00000000..f590df74
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/01-intro.md
@@ -0,0 +1,49 @@
+---
+slug: /
+---
+
+# Introduction
+
+Karakeep (previously Hoarder) is an open source "Bookmark Everything" app that uses AI to automatically tag the content you throw at it. The app is built with self-hosting as a first-class citizen.
+
+![Screenshot](https://raw.githubusercontent.com/hoarder-app/hoarder/main/screenshots/homepage.png)
+
+
+## Features
+
+- 🔗 Bookmark links, take simple notes and store images and pdfs.
+- ⬇️ Automatic fetching for link titles, descriptions and images.
+- 📋 Sort your bookmarks into lists.
+- 🔎 Full text search of all the content stored.
+- ✨ AI-based (aka ChatGPT) automatic tagging, with support for local models using Ollama!
+- 🎆 OCR for extracting text from images.
+- 🔖 [Chrome plugin](https://chromewebstore.google.com/detail/hoarder/kgcjekpmcjjogibpjebkhaanilehneje) and [Firefox addon](https://addons.mozilla.org/en-US/firefox/addon/hoarder/) for quick bookmarking.
+- 📱 An [iOS app](https://apps.apple.com/us/app/hoarder-app/id6479258022), and an [Android app](https://play.google.com/store/apps/details?id=app.hoarder.hoardermobile&pcampaignid=web_share).
+- 📰 Auto hoarding from RSS feeds.
+- 🔌 REST API.
+- 🌐 Multi-language support.
+- 🖍️ Mark and store highlights from your hoarded content.
+- 🗄️ Full page archival (using [monolith](https://github.com/Y2Z/monolith)) to protect against link rot. Auto video archiving using [youtube-dl](https://github.com/marado/youtube-dl).
+- ☑️ Bulk actions support.
+- 🔐 SSO support.
+- 🌙 Dark mode support.
+- 💾 Self-hosting first.
+- [Planned] Downloading the content for offline reading in the mobile app.
+
+**⚠️ This app is under heavy development and it's far from stable.**
+
+
+## Demo
+
+You can access the demo at [https://try.karakeep.app](https://try.karakeep.app). Log in with the following credentials:
+
+```
+email: demo@karakeep.app
+password: demodemo
+```
+
+The demo is seeded with some content, but it's in read-only mode to prevent abuse.
+
+## About the name
+
+The name Karakeep is inspired by the Arabic word "كراكيب" (karakeeb), a colloquial term commonly used to refer to miscellaneous clutter, odds and ends, or items that may seem disorganized but often hold personal value or hidden usefulness. It evokes the image of a messy drawer or forgotten box, full of stuff you can't quite throw away, because somehow, it matters (or more likely, because you're a hoarder!).
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/01-docker.md b/docs/versioned_docs/version-v0.23.1/02-Installation/01-docker.md
new file mode 100644
index 00000000..ab5e7123
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/01-docker.md
@@ -0,0 +1,99 @@
+# Docker Compose [Recommended]
+
+### Requirements
+
+- Docker
+- Docker Compose
+
+### 1. Create a new directory
+
+Create a new directory to host the compose file and env variables.
+
+This is where you'll place the `docker-compose.yml` file from the next step and the environment variables.
+
+For example, you could make a new directory called "hoarder-app" with the following command:
+```
+mkdir hoarder-app
+```
+
+
+### 2. Download the compose file
+
+Download the docker compose file provided [here](https://github.com/hoarder-app/hoarder/blob/main/docker/docker-compose.yml) directly into your new directory.
+
+```
+wget https://raw.githubusercontent.com/hoarder-app/hoarder/main/docker/docker-compose.yml
+```
+
+### 3. Populate the environment variables
+
+To configure the app, create a `.env` file in the directory and add this minimal env file:
+
+```
+HOARDER_VERSION=release
+NEXTAUTH_SECRET=super_random_string
+MEILI_MASTER_KEY=another_random_string
+NEXTAUTH_URL=http://localhost:3000
+```
+
+You **should** change the random strings. You can generate them with `openssl rand -base64 36` in a separate terminal window. You should also change the `NEXTAUTH_URL` variable to point to your server address.
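+
+For example, a quick way to generate both secrets is to print them and paste the output into the `.env` file (a sketch; the variable names match the snippet above):
+
+```shell
+# print two freshly generated secrets; copy them into .env
+echo "NEXTAUTH_SECRET=$(openssl rand -base64 36)"
+echo "MEILI_MASTER_KEY=$(openssl rand -base64 36)"
+```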
+
+Using `HOARDER_VERSION=release` will pull the latest stable version. You might want to pin the version instead to control the upgrades (e.g. `HOARDER_VERSION=0.10.0`). Check the latest versions [here](https://github.com/hoarder-app/hoarder/pkgs/container/hoarder-web).
+
+Persistent storage and the wiring between the different services are already taken care of in the docker compose file.
+
+Keep in mind that every time you change the `.env` file, you'll need to re-run `docker compose up`.
+
+If you want more config params, check the config documentation [here](/configuration).
+
+### 4. Setup OpenAI
+
+To enable automatic tagging, you'll need to configure OpenAI. This is optional, but highly recommended.
+
+- Follow [OpenAI's help](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key) to get an API key.
+- Add the OpenAI API key to the env file:
+
+```
+OPENAI_API_KEY=<key>
+```
+
+Learn more about the costs of using OpenAI [here](/openai).
+
+<details>
+ <summary>If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
+
+ **Note:** The quality of the tags you'll get will depend on the quality of the model you choose.
+
+ - Make sure ollama is running.
+ - Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
+ - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `llama3.1`)
+ - Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
+ - Make sure that you `ollama pull`-ed the models that you want to use.
+ - You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be.
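+
+ For example, a matching Ollama block in the `.env` file might look like this (a sketch; the host and model names are illustrative and depend on your setup):
+
+ ```shell
+ OLLAMA_BASE_URL=http://host.docker.internal:11434
+ INFERENCE_TEXT_MODEL=llama3.1
+ INFERENCE_IMAGE_MODEL=llava
+ INFERENCE_CONTEXT_LENGTH=4096
+ ```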
+
+</details>
+
+### 5. Start the service
+
+Start the service by running:
+
+```
+docker compose up -d
+```
+
+Then visit `http://localhost:3000` and you should be greeted with the Sign In page.
+
+### [Optional] 6. Enable optional features
+
+Check the [configuration docs](/configuration) for extra features to enable such as full page archival, full page screenshots, inference languages, etc.
+
+### [Optional] 7. Setup quick sharing extensions
+
+Go to the [quick sharing page](/quick-sharing) to install the mobile apps and the browser extensions. Those will help you hoard things faster!
+
+## Updating
+
+Updating hoarder will depend on what you used for the `HOARDER_VERSION` env variable.
+
+- If you pinned the app to a specific version, bump the version and re-run `docker compose up -d`. This should pull the new version for you.
+- If you used `HOARDER_VERSION=release`, you'll need to force docker to pull the latest version by running `docker compose up --pull always -d`.
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/02-unraid.md b/docs/versioned_docs/version-v0.23.1/02-Installation/02-unraid.md
new file mode 100644
index 00000000..42323a5f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/02-unraid.md
@@ -0,0 +1,19 @@
+# Unraid
+
+## Docker Compose Manager Plugin (Recommended)
+
+You can use [Docker Compose Manager](https://forums.unraid.net/topic/114415-plugin-docker-compose-manager/) plugin to deploy Hoarder using the official docker compose file provided [here](https://github.com/hoarder-app/hoarder/blob/main/docker/docker-compose.yml). After creating the stack, you'll need to setup some env variables similar to that from the docker compose installation docs [here](/Installation/docker#3-populate-the-environment-variables).
+
+## Community Apps
+
+:::info
+The community application template is maintained by the community.
+:::
+
+Hoarder can be installed on Unraid using the community application plugins. Hoarder is a multi-container service, and because Unraid doesn't natively support that, you'll have to install the different pieces as separate applications and wire them together manually (an example of the relevant environment variables follows the list below).
+
+Here's a high level overview of the services you'll need:
+
+- **Hoarder** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's main web app.
+- **Browserless** ([Support post](https://forums.unraid.net/topic/130163-support-template-masterwishxbrowserless/)): The headless Chrome service used for fetching the content. Hoarder's official docker compose doesn't use Browserless, but it's currently the only headless Chrome service available on Unraid, so you'll have to use it.
+- **MeiliSearch** ([Support post](https://forums.unraid.net/topic/164847-support-collectathon-meilisearch/)): The search engine used by Hoarder. It's optional but highly recommended. If you don't have it set up, search will be disabled.
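+
+Once the three containers are running, the wiring happens through Hoarder's environment variables. A sketch of the relevant values (hostnames and ports are illustrative and must match how you configured the Browserless and MeiliSearch containers on your Unraid box):
+
+```shell
+# point Hoarder at the Browserless container (websocket endpoint)
+BROWSER_WEBSOCKET_URL=ws://<unraid-ip>:<browserless-port>
+# point Hoarder at the MeiliSearch container
+MEILI_ADDR=http://<unraid-ip>:7700
+MEILI_MASTER_KEY=<same master key you configured in the MeiliSearch container>
+```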
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/03-archlinux.md b/docs/versioned_docs/version-v0.23.1/02-Installation/03-archlinux.md
new file mode 100644
index 00000000..37ada2fa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/03-archlinux.md
@@ -0,0 +1,48 @@
+# Arch Linux
+
+## Installation
+
+> [Hoarder on AUR](https://aur.archlinux.org/packages/hoarder) is not maintained by the official Hoarder project.
+
+1. Install hoarder
+
+ ```shell
+ paru -S hoarder
+ ```
+
+2. (**Optional**) Install optional dependencies
+
+ ```shell
+ # meilisearch: for full text search
+ paru -S meilisearch
+
+ # ollama: for automatic tagging
+ paru -S ollama
+
+ # hoarder-cli: hoarder cli tool
+ paru -S hoarder-cli
+ ```
+
+   You can use OpenAI instead of `ollama`. If you use `ollama`, you need to download the ollama models you want to use. Please refer to [https://ollama.com/library](https://ollama.com/library).
+
+3. Set up
+
+   Environment variables can be set in `/etc/hoarder/hoarder.env` according to the [configuration page](/configuration). **Environment variables that are not already present in `/etc/hoarder/hoarder.env` need to be added yourself** (see the example after this list).
+
+4. Enable service
+
+ ```shell
+ sudo systemctl enable --now hoarder.target
+ ```
+
+   Then visit `http://localhost:3000` and you should be greeted with the Sign In page.
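+
+As an example, a `/etc/hoarder/hoarder.env` for a setup with local meilisearch and ollama might look like the following (a sketch; all values are illustrative and should be adapted to your machine):
+
+```shell
+NEXTAUTH_SECRET=<generate with: openssl rand -base64 36>
+NEXTAUTH_URL=http://localhost:3000
+MEILI_ADDR=http://127.0.0.1:7700
+MEILI_MASTER_KEY=<generate with: openssl rand -base64 36>
+OLLAMA_BASE_URL=http://127.0.0.1:11434
+INFERENCE_TEXT_MODEL=llama3.1
+```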
+
+## Services and Ports
+
+`hoarder.target` includes 3 services: `hoarder-web.service`, `hoarder-works.service`, `hoarder-browser.service`.
+
+- `hoarder-web.service`: Provides the Hoarder web UI, using port `3000` by default.
+
+- `hoarder-works.service`: Provides the Hoarder workers service; no port.
+
+- `hoarder-browser.service`: Provides the headless browser service, using port `9222` by default.
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/04-kubernetes.md b/docs/versioned_docs/version-v0.23.1/02-Installation/04-kubernetes.md
new file mode 100644
index 00000000..76a84483
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/04-kubernetes.md
@@ -0,0 +1,112 @@
+# Kubernetes
+
+### Requirements
+
+- A kubernetes cluster
+- kubectl
+- kustomize
+
+### 1. Get the deployment manifests
+
+You can clone the repository and copy the `/kubernetes` directory into another directory of your choice.
+
+### 2. Populate the environment variables and secrets
+
+To configure the app, copy the `.env_sample` file to `.env` and change it to your specific needs.
+
+You should also change the `NEXTAUTH_URL` variable to point to your server address.
+
+Using `HOARDER_VERSION=release` will pull the latest stable version. You might want to pin the version instead to control the upgrades (e.g. `HOARDER_VERSION=0.10.0`). Check the latest versions [here](https://github.com/hoarder-app/hoarder/pkgs/container/hoarder-web).
+
+To see all available configuration options check the [documentation](https://docs.hoarder.app/configuration).
+
+To configure the necessary secrets for the application, copy the `.secrets_sample` file to `.secrets` and change the sample secrets to your generated secrets.
+
+> Note: You **should** change the random strings. You can use `openssl rand -base64 36` to generate the random strings.
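+
+A minimal sketch of this step, assuming the sample files live in the copied `kubernetes` directory:
+
+```shell
+cp .env_sample .env
+cp .secrets_sample .secrets
+# generate fresh values and paste them over the sample secrets in .secrets
+openssl rand -base64 36
+openssl rand -base64 36
+```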
+
+### 3. Setup OpenAI
+
+To enable automatic tagging, you'll need to configure OpenAI. This is optional, but highly recommended.
+
+- Follow [OpenAI's help](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key) to get an API key.
+- Add the OpenAI API key to the `.env` file:
+
+```
+OPENAI_API_KEY=<key>
+```
+
+Learn more about the costs of using OpenAI [here](/openai).
+
+<details>
+ <summary>[EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
+
+ **Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle tested as using openai, so proceed with care (and potentially expect a bunch of inference failures).
+
+ - Make sure ollama is running.
+ - Set the `OLLAMA_BASE_URL` env variable to the address of the ollama API.
+ - Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in ollama (for example: `mistral`)
+ - Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in ollama (for example: `llava`)
+ - Make sure that you `ollama pull`-ed the models that you want to use.
+
+
+</details>
+
+### 4. Deploy the service
+
+Deploy the service by running:
+
+```
+make deploy
+```
+
+### 5. Access the service
+
+#### via LoadBalancer IP
+
+By default, these manifests expose the application as a LoadBalancer Service. You can run `kubectl get services` to identify the IP of the loadbalancer for your service.
+
+Then visit `http://<loadbalancer-ip>:3000` and you should be greeted with the Sign In page.
+
+> Note: Depending on your setup you might want to expose the service via an Ingress, or have a different means to access it.
+
+#### Via Ingress
+
+If you want to use an ingress, you can customize the sample ingress in the kubernetes folder and change the host to the DNS name of your choice.
+
+After that, you have to change the web service's type to ClusterIP so it is only reachable via the ingress.
+
+If you have already deployed the service you can patch the web service to the type ClusterIP with the following command:
+
+` kubectl -n hoarder patch service web -p '{"spec":{"type":"ClusterIP"}}' `
+
+Afterwards you can apply the ingress and access the service via your chosen URL.
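+
+For example, assuming you saved your customized ingress manifest as `ingress.yaml` (the filename is illustrative), applying it could look like this:
+
+```shell
+kubectl -n hoarder apply -f ingress.yaml
+```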
+
+#### Setting up HTTPS access to the Service
+
+To access hoarder securely you can configure the ingress to use a preconfigured TLS certificate. This requires that you already have the needed files, namely your .crt and .key file, on hand.
+
+After you have deployed the hoarder manifests, you can deploy your certificate for hoarder in the `hoarder` namespace with this example command. You can name the secret however you want, but be aware that the secret name in the ingress definition has to match the name you choose.
+
+` $ kubectl --namespace hoarder create secret tls hoarder-web-tls --cert=/path/to/crt --key=/path/to/key `
+
+If the secret is successfully created, you can now configure the Ingress to use TLS via these changes to the spec:
+
+```` yaml
+ spec:
+ tls:
+ - hosts:
+ - hoarder.example.com
+ secretName: hoarder-web-tls
+````
+
+> Note: Be aware that the hosts have to match between the tls spec and the HTTP spec.
+
+### [Optional] 6. Setup quick sharing extensions
+
+Go to the [quick sharing page](/quick-sharing) to install the mobile apps and the browser extensions. Those will help you hoard things faster!
+
+## Updating
+
+Edit the `HOARDER_VERSION` variable in the `kustomization.yaml` file and run `make clean deploy`.
+
+If you have chosen `release` as the image tag, you can also delete the web pod. Since the deployment's ImagePullPolicy is set to always, the replacement pod will pull the image from the registry again, which ensures that the newest release image is used.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/05-pikapods.md b/docs/versioned_docs/version-v0.23.1/02-Installation/05-pikapods.md
new file mode 100644
index 00000000..f954645a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/05-pikapods.md
@@ -0,0 +1,32 @@
+# PikaPods [Paid Hosting]
+
+:::info
+Note: PikaPods shares some of its revenue from hosting hoarder with the maintainer of this project.
+:::
+
+[PikaPods](https://www.pikapods.com/) offers managed paid hosting for many open source apps, including Hoarder.
+Server administration, updates, migrations and backups are all taken care of, which makes it well suited
+for less technical users. As of Nov 2024, running Hoarder there will cost you ~$3 per month.
+
+### Requirements
+
+- A _PikaPods_ account. Can be created for free [here](https://www.pikapods.com/register). You get an initial welcome credit of $5.
+
+### 1. Choose app
+
+Choose _Hoarder_ from their [list of apps](https://www.pikapods.com/apps) or use this [direct link](https://www.pikapods.com/pods?run=hoarder). This will either
+open a new dialog to add a new _Hoarder_ pod or ask you to log in.
+
+### 2. Add settings
+
+There are a few settings to configure in the dialog:
+
+- **Basics**: Give the pod a name and choose a region that's near you.
+- **Env Vars**: Here you can disable signups or set an OpenAI API key. All settings are optional.
+- **Resources**: The resources your _Hoarder_ pod can use. The defaults are fine, unless you have a very large collection.
+
+### 3. Start pod and add user
+
+After hitting _Add pod_ it will take about a minute for the app to fully start. After this you can visit
+the pod's URL and add an initial user under _Sign Up_. After this you may want to disable further sign-ups
+by setting the pod's `DISABLE_SIGNUPS` _Env Var_ to `true`.
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/06-debuntu.md b/docs/versioned_docs/version-v0.23.1/02-Installation/06-debuntu.md
new file mode 100644
index 00000000..2c9c1901
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/06-debuntu.md
@@ -0,0 +1,70 @@
+# Debian 12/Ubuntu 24.04
+
+:::warning
+This script is a stripped-down version of those found in the [Proxmox Community Scripts](https://github.com/community-scripts/ProxmoxVE) repo. It has been adapted to work on bare-metal Debian 12 or Ubuntu 24.04 installs **only**. Any other use is not supported and you use this script at your own risk.
+:::
+
+### Requirements
+
+- **Debian 12** (Bookworm) or
+- **Ubuntu 24.04** (Noble Numbat)
+
+The script will download and install all dependencies (except for Ollama), install Hoarder, do a basic configuration of Hoarder and Meilisearch (the search app used by Hoarder), and create and enable the systemd service files needed to run Hoarder on startup. Hoarder and Meilisearch are run in the context of their low-privilege user environments for more security.
+
+The script functions as an update script in addition to an installer. See **[Updating](#updating)**.
+
+### 1. Download the script from the [Hoarder repository](https://github.com/hoarder-app/hoarder/blob/main/hoarder-linux.sh).
+
+```
+wget https://raw.githubusercontent.com/hoarder-app/hoarder/main/hoarder-linux.sh
+```
+
+### 2. Run the script
+
+> This script must be run as `root`, or as a user with `sudo` privileges.
+
+ If this is a fresh install, then run the installer by using the following command:
+
+ ```shell
+ bash hoarder-linux.sh install
+ ```
+
+### 3. Create an account/sign in!
+
+ Then visit `http://localhost:3000` and you should be greeted with the Sign In page.
+
+## Updating
+
+> This script must be run as `root`, or as a user with `sudo` privileges.
+
+ If Hoarder has previously been installed using this script, then run the updater like so:
+
+ ```shell
+ bash hoarder-linux.sh update
+ ```
+
+## Services and Ports
+
+`hoarder.target` includes 4 services: `meilisearch.service`, `hoarder-web.service`, `hoarder-workers.service`, `hoarder-browser.service`.
+
+- `meilisearch.service`: Provides full-text search; the Hoarder workers service connects to it. Uses port `7700` by default.
+
+- `hoarder-web.service`: Provides the hoarder web service, uses port `3000` by default.
+
+- `hoarder-workers.service`: Provides the hoarder workers service, no port.
+
+- `hoarder-browser.service`: Provides the headless browser service, uses port `9222` by default.
+
+## Configuration, ENV file, database locations
+
+During installation, the script creates a configuration file for `meilisearch` and an `ENV` file for Hoarder, and keeps config and database paths separate from the installation path of Hoarder, so as to allow for easier updating. Their names/locations are as follows:
+
+- `/etc/meilisearch.toml` - a basic configuration for meilisearch that sets the database location, disables analytics, and sets a master key, which prevents unauthorized connections.
+- `/var/lib/meilisearch` - Meilisearch DB location.
+- `/etc/hoarder/hoarder.env` - The Hoarder `ENV` file. Edit this file to configure Hoarder beyond the default. The web service and the workers service need to be restarted after editing this file:
+
+ ```shell
+ sudo systemctl restart hoarder-workers hoarder-web
+ ```
+- `/var/lib/hoarder` - The Hoarder database location. If you delete the contents of this folder you will lose all your data.
+
diff --git a/docs/versioned_docs/version-v0.23.1/02-Installation/07-minimal-install.md b/docs/versioned_docs/version-v0.23.1/02-Installation/07-minimal-install.md
new file mode 100644
index 00000000..147c1621
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/02-Installation/07-minimal-install.md
@@ -0,0 +1,49 @@
+# Minimal Installation
+
+:::warning
+Unless necessary, prefer the [full installation](/Installation/docker) to leverage all the features of hoarder. You'll be sacrificing a lot of functionality if you go with the minimal installation route.
+:::
+
+Hoarder's default installation has a dependency on Meilisearch for the full text search, Chrome for crawling and OpenAI/Ollama for AI tagging. You can however run hoarder without those dependencies if you're willing to sacrifice those features.
+
+- If you run without meilisearch, the search functionality will be completely disabled.
+- If you run without chrome, crawling will still work, but you'll lose the ability to take screenshots of websites, and websites with javascript content won't get crawled correctly.
+- If you don't setup OpenAI/Ollama, AI tagging will be disabled.
+
+Those features are important for leveraging hoarder's full potential, but if you're running in constrained environments, you can use the following minimal docker compose to skip all those dependencies:
+
+```yaml
+services:
+ web:
+ image: ghcr.io/hoarder-app/hoarder:release
+ restart: unless-stopped
+ volumes:
+ - data:/data
+ ports:
+ - 3000:3000
+ environment:
+ DATA_DIR: /data
+ NEXTAUTH_SECRET: super_random_string
+volumes:
+ data:
+```
+
+Or just with the following docker command:
+
+```bash
+docker run -d \
+ --restart unless-stopped \
+ -v data:/data \
+ -p 3000:3000 \
+ -e DATA_DIR=/data \
+ -e NEXTAUTH_SECRET=super_random_string \
+ ghcr.io/hoarder-app/hoarder:release
+```
+
+:::warning
+You **MUST** change the `super_random_string` to a true random string which you can generate with `openssl rand -hex 32`.
+:::
+
+Check the [configuration docs](/configuration) for extra features to enable such as full page archival, full page screenshots, inference languages, etc.
+
+
diff --git a/docs/versioned_docs/version-v0.23.1/03-configuration.md b/docs/versioned_docs/version-v0.23.1/03-configuration.md
new file mode 100644
index 00000000..51ee23a5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/03-configuration.md
@@ -0,0 +1,133 @@
+# Configuration
+
+The app is mainly configured by environment variables. All the used environment variables are listed in [packages/shared/config.ts](https://github.com/hoarder-app/hoarder/blob/main/packages/shared/config.ts). The most important ones are:
+
+| Name | Required | Default | Description |
+| ------------------------- | ------------------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| DATA_DIR | Yes | Not set | The path for the persistent data directory. This is where the db lives. Assets are stored here by default unless `ASSETS_DIR` is set. |
+| ASSETS_DIR | No | Not set | The path where crawled assets will be stored. If not set, defaults to `${DATA_DIR}/assets`. |
+| NEXTAUTH_URL | Yes | Not set | Should point to the address of your server. The app will function without it, but will redirect you to wrong addresses on signout for example. |
+| NEXTAUTH_SECRET | Yes | Not set | Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. |
+| MEILI_ADDR | No | Not set | The address of meilisearch. If not set, Search will be disabled. E.g. (`http://meilisearch:7700`) |
+| MEILI_MASTER_KEY | Only in Prod and if search is enabled | Not set | The master key configured for meilisearch. Not needed in development environment. Generate one with `openssl rand -base64 36` |
+| MAX_ASSET_SIZE_MB | No | 50 | Sets the maximum allowed asset size (in MB) to be uploaded |
+| DISABLE_NEW_RELEASE_CHECK | No | false | If set to true, latest release check will be disabled in the admin panel. |
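+
+As a point of reference, a minimal configuration covering the required variables plus search might look like this (a sketch; secrets and addresses are placeholders):
+
+```shell
+DATA_DIR=/data
+NEXTAUTH_URL=https://hoarder.example.com
+NEXTAUTH_SECRET=<output of: openssl rand -base64 36>
+MEILI_ADDR=http://meilisearch:7700
+MEILI_MASTER_KEY=<output of: openssl rand -base64 36>
+```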
+
+## Authentication / Signup
+
+By default, Hoarder uses the database to store users, but it is possible to also use OAuth.
+The flags need to be provided to the `web` container.
+
+:::info
+Only OIDC compliant OAuth providers are supported! For information on how to set it up, consult the documentation of your provider.
+:::
+
+:::info
+When setting up OAuth, the allowed redirect URLs configured at the provider should be set to `<HOARDER_ADDRESS>/api/auth/callback/custom` where `<HOARDER_ADDRESS>` is the address you configured in `NEXTAUTH_URL` (for example: `https://try.hoarder.app/api/auth/callback/custom`).
+:::
+
+| Name | Required | Default | Description |
+| ------------------------------------------- | -------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| DISABLE_SIGNUPS | No | false | If enabled, no new signups will be allowed and the signup button will be disabled in the UI |
+| DISABLE_PASSWORD_AUTH | No | false | If enabled, only signups and logins using OAuth are allowed and the signup button and login form for local accounts will be disabled in the UI |
+| OAUTH_WELLKNOWN_URL | No | Not set | The "wellknown Url" for openid-configuration as provided by the OAuth provider |
+| OAUTH_CLIENT_SECRET | No | Not set | The "Client Secret" as provided by the OAuth provider |
+| OAUTH_CLIENT_ID | No | Not set | The "Client ID" as provided by the OAuth provider |
+| OAUTH_SCOPE | No | "openid email profile" | "Full list of scopes to request (space delimited)" |
+| OAUTH_PROVIDER_NAME | No | "Custom Provider" | The name of your provider. Will be shown on the signup page as "Sign in with `<name>`" |
+| OAUTH_ALLOW_DANGEROUS_EMAIL_ACCOUNT_LINKING | No | false | Whether existing accounts in hoarder stored in the database should automatically be linked with your OAuth account. Only enable it if you trust the OAuth provider! |
+| OAUTH_TIMEOUT | No | 3500 | The wait time in milliseconds for the OAuth provider response. Increase this if you are having `outgoing request timed out` errors |
+
+For more information on `OAUTH_ALLOW_DANGEROUS_EMAIL_ACCOUNT_LINKING`, check the [next-auth.js documentation](https://next-auth.js.org/configuration/providers/oauth#allowdangerousemailaccountlinking-option).
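+
+For illustration, an OIDC setup could look roughly like this (all values are placeholders for whatever your provider hands you):
+
+```shell
+OAUTH_WELLKNOWN_URL=https://auth.example.com/.well-known/openid-configuration
+OAUTH_CLIENT_ID=hoarder
+OAUTH_CLIENT_SECRET=<client secret from your provider>
+OAUTH_PROVIDER_NAME="My SSO"
+# optional: disable local password logins once OAuth works
+DISABLE_PASSWORD_AUTH=true
+```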
+
+## Inference Configs (For automatic tagging)
+
+Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic tagging to be enabled. Otherwise, automatic tagging will be skipped.
+
+:::warning
+
+- The quality of the tags you'll get will depend on the quality of the model you choose.
+- You might want to tune the `INFERENCE_CONTEXT_LENGTH` as the default is quite small. The larger the value, the better the quality of the tags, but the more expensive the inference will be (money-wise on OpenAI and resource-wise on ollama).
+ :::
+
+| Name | Required | Default | Description |
+| ------------------------------------ | -------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| OPENAI_API_KEY | No | Not set | The OpenAI key used for automatic tagging. More on that in [here](/openai). |
+| OPENAI_BASE_URL | No | Not set | If you just want to use OpenAI you don't need to pass this variable. If, however, you want to use some other openai compatible API (e.g. azure openai service), set this to the url of the API. |
+| OLLAMA_BASE_URL | No | Not set | If you want to use ollama for local inference, set the address of ollama API here. |
+| OLLAMA_KEEP_ALIVE | No | Not set | Controls how long the model will stay loaded into memory following the request (example value: "5m"). |
+| INFERENCE_TEXT_MODEL | No | gpt-4o-mini | The model to use for text inference. You'll need to change this to some other model if you're using ollama. |
+| INFERENCE_IMAGE_MODEL | No | gpt-4o-mini | The model to use for image inference. You'll need to change this to some other model if you're using ollama and that model needs to support vision APIs (e.g. llava). |
+| EMBEDDING_TEXT_MODEL | No | text-embedding-3-small | The model to be used for generating embeddings for the text. |
+| INFERENCE_CONTEXT_LENGTH | No | 2048 | The max number of tokens that we'll pass to the inference model. If your content is larger than this size, it'll be truncated to fit. The larger this value, the more of the content will be used in tag inference, but the more expensive the inference will be (money-wise on openAI and resource-wise on ollama). Check the model you're using for its max supported content size. |
+| INFERENCE_LANG | No | english | The language in which the tags will be generated. |
+| INFERENCE_JOB_TIMEOUT_SEC | No | 30 | How long to wait for the inference job to finish before timing out. If you're running ollama without powerful GPUs, you might want to increase the timeout a bit. |
+| INFERENCE_FETCH_TIMEOUT_SEC | No | 300 | \[Ollama Only\] The timeout of the fetch request to the ollama server. If your inference requests take longer than the default 5mins, you might want to increase this timeout. |
+| INFERENCE_SUPPORTS_STRUCTURED_OUTPUT | No | true | Whether the inference model supports structured output or not. |
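+
+For example, pointing tagging at an OpenAI-compatible endpoint (such as Azure OpenAI or a self-hosted gateway) might look like this; the URL and values are illustrative:
+
+```shell
+OPENAI_API_KEY=<key issued by your endpoint>
+OPENAI_BASE_URL=https://my-openai-compatible-endpoint.example.com/v1
+INFERENCE_TEXT_MODEL=gpt-4o-mini
+INFERENCE_CONTEXT_LENGTH=4096
+```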
+
+:::info
+
+- You can append additional instructions to the prompt used for automatic tagging, in the `AI Settings` (in the `User Settings` screen)
+- You can use the placeholders `$tags`, `$aiTags`, `$userTags` in the prompt. These placeholders will be replaced with all tags, ai generated tags or human created tags when automatic tagging is performed (e.g. `[hoarder, computer, ai]`)
+ :::
+
+## Crawler Configs
+
+| Name | Required | Default | Description |
+| ---------------------------------- | -------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| CRAWLER_NUM_WORKERS | No | 1 | Number of allowed concurrent crawling jobs. By default, we're only doing one crawling request at a time to avoid consuming a lot of resources. |
+| BROWSER_WEB_URL | No | Not set | The browser's http debugging address. The worker will talk to this endpoint to resolve the debugging console's websocket address. If you already have the websocket address, use `BROWSER_WEBSOCKET_URL` instead. If neither `BROWSER_WEB_URL` nor `BROWSER_WEBSOCKET_URL` are set, the worker will use plain http requests skipping screenshotting and javascript execution. |
+| BROWSER_WEBSOCKET_URL | No | Not set | The websocket address of browser's debugging console. If you want to use [browserless](https://browserless.io), use their websocket address here. If neither `BROWSER_WEB_URL` nor `BROWSER_WEBSOCKET_URL` are set, the worker will use plain http requests skipping screenshotting and javascript execution. |
+| BROWSER_CONNECT_ONDEMAND | No | false | If set to false, the crawler will proactively connect to the browser instance and always maintain an active connection. If set to true, the browser will be launched on demand only whenever a crawling is requested. Set to true if you're using a service that provides you with browser instances on demand. |
+| CRAWLER_DOWNLOAD_BANNER_IMAGE | No | true | Whether to cache the banner image used in the cards locally or fetch it each time directly from the website. Caching it consumes more storage space, but is more resilient against link rot and rate limits from websites. |
+| CRAWLER_STORE_SCREENSHOT | No | true | Whether to store a screenshot from the crawled website or not. Screenshots act as a fallback for when we fail to extract an image from a website. You can also view the stored screenshots for any link. |
+| CRAWLER_FULL_PAGE_SCREENSHOT | No | false | Whether to store a screenshot of the full page or not. Disabled by default, as it can lead to much higher disk usage. If disabled, the screenshot will only include the visible part of the page |
+| CRAWLER_SCREENSHOT_TIMEOUT_SEC | No | 5 | How long to wait for the screenshot finish before timing out. If you are capturing full-page screenshots of long webpages, consider increasing this value. |
+| CRAWLER_FULL_PAGE_ARCHIVE | No | false | Whether to store a full local copy of the page or not. Disabled by default, as it can lead to much higher disk usage. If disabled, only the readable text of the page is archived. |
+| CRAWLER_JOB_TIMEOUT_SEC | No | 60 | How long to wait for the crawler job to finish before timing out. If you have a slow internet connection or a low powered device, you might want to bump this up a bit |
+| CRAWLER_NAVIGATE_TIMEOUT_SEC | No | 30 | How long to spend navigating to the page (along with its redirects). Increase this if you have a slow internet connection |
+| CRAWLER_VIDEO_DOWNLOAD | No | false | Whether to download videos from the page or not (using yt-dlp) |
+| CRAWLER_VIDEO_DOWNLOAD_MAX_SIZE | No | 50 | The maximum file size for the downloaded video. The quality will be chosen accordingly. Use -1 to disable the limit. |
+| CRAWLER_VIDEO_DOWNLOAD_TIMEOUT_SEC | No | 600 | How long to wait for the video download to finish |
+| CRAWLER_ENABLE_ADBLOCKER | No | true | Whether to enable an adblocker in the crawler or not. If you're facing troubles downloading the adblocking lists on worker startup, you can disable this. |
+| CRAWLER_YTDLP_ARGS | No | [] | Include additional yt-dlp arguments to be passed at crawl time separated by %%: https://github.com/yt-dlp/yt-dlp?tab=readme-ov-file#general-options |
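+
+As an illustration, a crawler setup that uses a remote browserless instance and enables full page archival could look like this (all values are placeholders):
+
+```shell
+BROWSER_WEBSOCKET_URL=ws://browserless.example.internal:3000
+BROWSER_CONNECT_ONDEMAND=true
+CRAWLER_FULL_PAGE_ARCHIVE=true
+CRAWLER_JOB_TIMEOUT_SEC=120
+```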
+
+## OCR Configs
+
+Hoarder uses [tesseract.js](https://github.com/naptha/tesseract.js) to extract text from images.
+
+| Name | Required | Default | Description |
+| ------------------------ | -------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| OCR_CACHE_DIR | No | $TEMP_DIR | The dir where tesseract will download its models. By default, those models are not persisted and stored in the OS' temp dir. |
+| OCR_LANGS | No | eng | Comma separated list of the language codes that you want tesseract to support. You can find the language codes [here](https://tesseract-ocr.github.io/tessdoc/Data-Files-in-different-versions.html). Set to empty string to disable OCR. |
+| OCR_CONFIDENCE_THRESHOLD | No       | 50        | A number between 0 and 100 indicating the minimum acceptable confidence from tesseract. If tesseract's confidence is lower than this value, extracted text won't be stored.                                                                 |
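+
+For instance, to persist the tesseract models and add German on top of English (language codes per the list linked above; the cache path is illustrative):
+
+```shell
+OCR_LANGS=eng,deu
+OCR_CACHE_DIR=/data/ocr-cache
+```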
+
+## Webhook Configs
+
+You can use webhooks to trigger actions when bookmarks are created, changed or crawled.
+
+| Name | Required | Default | Description |
+| ------------------- | -------- | ------- | ------------------------------------------------- |
+| WEBHOOK_TIMEOUT_SEC | No | 5 | The timeout for the webhook request in seconds. |
+| WEBHOOK_RETRY_TIMES | No | 3 | The number of times to retry the webhook request. |
+
+:::info
+
+- The WEBHOOK_TOKEN is used for authentication. It will appear in the Authorization header as Bearer token.
+ ```
+ Authorization: Bearer <WEBHOOK_TOKEN>
+ ```
+- The webhook will be triggered with the job id (used for idempotence), bookmark id, bookmark type, the user id, the url and the operation in JSON format in the body.
+
+ ```json
+ {
+ "jobId": "123",
+ "type": "link",
+ "bookmarkId": "exampleBookmarkId",
+ "userId": "exampleUserId",
+ "url": "https://example.com",
+ "operation": "crawled"
+ }
+ ```
+
+ :::
diff --git a/docs/versioned_docs/version-v0.23.1/04-screenshots.md b/docs/versioned_docs/version-v0.23.1/04-screenshots.md
new file mode 100644
index 00000000..07830566
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/04-screenshots.md
@@ -0,0 +1,34 @@
+# Screenshots
+
+## Homepage
+
+![Homepage](/img/screenshots/homepage.png)
+
+## Homepage (Dark Mode)
+
+![Homepage](/img/screenshots/homepage-dark.png)
+
+## Tags
+
+![All Tags](/img/screenshots/all-tags.png)
+
+## Lists
+
+![All Lists](/img/screenshots/all-lists.png)
+
+## Bookmark Preview
+
+![Bookmark Preview](/img/screenshots/bookmark-preview.png)
+
+## Settings
+
+![Settings](/img/screenshots/settings.png)
+
+## Admin Panel
+
+![Admin](/img/screenshots/admin.png)
+
+
+## iOS Sharing
+
+<img src="/img/screenshots/share-sheet.png" width="400px" />
diff --git a/docs/versioned_docs/version-v0.23.1/05-quick-sharing.md b/docs/versioned_docs/version-v0.23.1/05-quick-sharing.md
new file mode 100644
index 00000000..9488cb69
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/05-quick-sharing.md
@@ -0,0 +1,18 @@
+# Quick Sharing Extensions
+
+The whole point of Hoarder is making it easy to hoard the content. That's why there are a couple of quick-sharing options beyond the web app: mobile apps and browser extensions.
+
+## Mobile Apps
+
+<img src="/img/quick-sharing/mobile.png" alt="mobile screenshot" width="300"/>
+
+
+- **iOS app**: [App Store Link](https://apps.apple.com/us/app/hoarder-app/id6479258022).
+- **Android App**: [Play Store link](https://play.google.com/store/apps/details?id=app.hoarder.hoardermobile&pcampaignid=web_share).
+
+## Browser Extensions
+
+<img src="/img/quick-sharing/extension.png" alt="mobile screenshot" width="300"/>
+
+- **Chrome extension**: [here](https://chromewebstore.google.com/detail/hoarder/kgcjekpmcjjogibpjebkhaanilehneje).
+- **Firefox addon**: [here](https://addons.mozilla.org/en-US/firefox/addon/hoarder/).
diff --git a/docs/versioned_docs/version-v0.23.1/06-openai.md b/docs/versioned_docs/version-v0.23.1/06-openai.md
new file mode 100644
index 00000000..289f44c2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/06-openai.md
@@ -0,0 +1,11 @@
+# OpenAI Costs
+
+This service uses OpenAI for automatic tagging. This means that you'll incur some costs if automatic tagging is enabled. There are two types of inference that we do:
+
+## Text Tagging
+
+For text tagging, we use the `gpt-4o-mini` model. This model is [extremely cheap](https://openai.com/api/pricing). Cost per inference varies depending on the content size of each article. Roughly though, you'll be able to generate tags for 3000+ bookmarks for less than $1.
+
+## Image Tagging
+
+For image uploads, we use the `gpt-4o-mini` model for extracting tags from the image. You can learn more about the costs of using this model [here](https://platform.openai.com/docs/guides/vision/calculating-costs). To lower the costs, we're using the low resolution mode (fixed number of tokens regardless of image size). You'll be able to run inference for 1000+ images for less than $1.
diff --git a/docs/versioned_docs/version-v0.23.1/07-Development/01-setup.md b/docs/versioned_docs/version-v0.23.1/07-Development/01-setup.md
new file mode 100644
index 00000000..41e0a902
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/07-Development/01-setup.md
@@ -0,0 +1,122 @@
+# Setup
+
+## Manual Setup
+
+Hoarder uses `node` version 22. To install it, you can use `nvm` [^1]
+
+```
+$ nvm install 22
+```
+
+Verify node version using this command:
+```
+$ node --version
+v22.14.0
+```
+
+Hoarder also makes use of `corepack`[^2]. If you have `node` installed, then `corepack` should already be
+installed on your machine, and you don't need to do anything. To verify that `corepack` is installed, run:
+
+```
+$ command -v corepack
+/home/<user>/.nvm/versions/node/v22.14.0/bin/corepack
+```
+
+To enable `corepack` run the following command:
+
+```
+$ corepack enable
+```
+
+Then install the packages and dependencies using:
+
+```
+$ pnpm install
+```
+
+Output of a successful `pnpm install` run should look something like:
+
+```
+Scope: all 20 workspace projects
+Lockfile is up to date, resolution step is skipped
+Packages: +3129
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Progress: resolved 0, reused 2699, downloaded 0, added 3129, done
+
+devDependencies:
++ @hoarder/prettier-config 0.1.0 <- tooling/prettier
+
+. prepare$ husky
+└─ Done in 45ms
+Done in 5.5s
+```
+
+You can now continue with the rest of this documentation.
+
+### First Setup
+
+- You'll need to prepare the environment variables for the dev env.
+- The easiest approach is to set it up once in the root of the repo and then symlink it in each app directory (e.g. `/apps/web`, `/apps/workers`) and also `/packages/db`.
+- Start by copying the template with `cp .env.sample .env`.
+- The most important env variables to set are:
+ - `DATA_DIR`: Where the database and assets will be stored. This is the only required env variable. You can use an absolute path so that all apps point to the same dir.
+ - `NEXTAUTH_SECRET`: Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. Logging in will not work if this is missing!
+ - `MEILI_ADDR`: If not set, search will be disabled. You can set it to `http://127.0.0.1:7700` if you run meilisearch using the command below.
+ - `OPENAI_API_KEY`: If you want to enable auto tag inference in the dev env.
+- Run `pnpm run db:migrate` in the root of the repo to set up the database. A minimal example `.env` is sketched below.
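+
+A minimal dev `.env`, assuming meilisearch runs locally via the command in the next section, might look like this (paths and secrets are placeholders):
+
+```shell
+DATA_DIR=/absolute/path/to/hoarder-dev-data
+NEXTAUTH_SECRET=<output of: openssl rand -base64 36>
+MEILI_ADDR=http://127.0.0.1:7700
+# optional, only if you want auto tagging in dev
+# OPENAI_API_KEY=<your key>
+```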
+
+### Dependencies
+
+#### Meilisearch
+
+Meilisearch is the provider for the full text search. You can get it running with `docker run -p 7700:7700 getmeili/meilisearch:v1.11.1`.
+
+Mount a persistent volume if you want to keep index data across restarts. You can trigger a re-index for the entire items collection in the admin panel in the web app.
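+
+For example, a run with a named volume so the index survives restarts might look like this (the `/meili_data` mount path is an assumption based on the official image defaults):
+
+```shell
+docker run -p 7700:7700 -v meili_data:/meili_data getmeili/meilisearch:v1.11.1
+```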
+
+#### Chrome
+
+The worker app will automatically start headless chrome on startup for crawling pages. You don't need to do anything there.
+
+### Web App
+
+- Run `pnpm web` in the root of the repo.
+- Go to `http://localhost:3000`.
+
+> NOTE: The web app kinda works without any dependencies. However, search won't work unless meilisearch is running. Also, new items added won't get crawled/indexed unless workers are running.
+
+### Workers
+
+- Run `pnpm workers` in the root of the repo.
+
+### iOS Mobile App
+
+- `cd apps/mobile`
+- `pnpm exec expo prebuild --no-install` to build the app.
+- Start the ios simulator.
+- `pnpm exec expo run:ios`
+- The app will be installed and started in the simulator.
+
+Changing the code will hot reload the app. However, installing new packages requires restarting the expo server.
+
+### Browser Extension
+
+- `cd apps/browser-extension`
+- `pnpm dev`
+- This will generate a `dist` package
+- Go to extension settings in chrome and enable developer mode.
+- Press `Load unpacked` and point it to the `dist` directory.
+- The plugin will pop up in the plugin list.
+
+In dev mode, opening and closing the plugin menu should reload the code.
+
+
+## Docker Dev Env
+
+If the manual setup is too much hassle for you, you can use a docker based dev environment by running `docker compose -f docker/docker-compose.dev.yml up` in the root of the repo. This setup wasn't super reliable for me though.
+
+
+[^1]: [nvm](https://github.com/nvm-sh/nvm) is a node version manager. You can install it following [these
+instructions](https://github.com/nvm-sh/nvm?tab=readme-ov-file#installing-and-updating).
+
+[^2]: [corepack](https://nodejs.org/api/corepack.html) is an experimental tool to help with managing versions of your
+package managers.
diff --git a/docs/versioned_docs/version-v0.23.1/07-Development/02-directories.md b/docs/versioned_docs/version-v0.23.1/07-Development/02-directories.md
new file mode 100644
index 00000000..54552402
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/07-Development/02-directories.md
@@ -0,0 +1,28 @@
+# Directory Structure
+
+## Apps
+
+| Directory | Description |
+| ------------------------ | ------------------------------------------------------ |
+| `apps/web` | The main web app |
+| `apps/workers` | The background workers logic |
+| `apps/mobile` | The react native based mobile app |
+| `apps/browser-extension` | The browser extension |
+| `apps/landing` | The landing page of [hoarder.app](https://hoarder.app) |
+
+## Shared Packages
+
+| Directory | Description |
+| ----------------- | ---------------------------------------------------------------------------- |
+| `packages/db` | The database schema and migrations |
+| `packages/trpc` | Where most of the business logic lies built as TRPC routes |
+| `packages/shared` | Some shared code between the different apps (e.g. loggers, configs, assetdb) |
+
+## Toolings
+
+| Directory | Description |
+| -------------------- | ----------------------- |
+| `tooling/typescript` | The shared tsconfigs |
+| `tooling/eslint` | ESlint configs |
+| `tooling/prettier` | Prettier configs |
+| `tooling/tailwind` | Shared tailwind configs |
diff --git a/docs/versioned_docs/version-v0.23.1/07-Development/03-database.md b/docs/versioned_docs/version-v0.23.1/07-Development/03-database.md
new file mode 100644
index 00000000..40e2d164
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/07-Development/03-database.md
@@ -0,0 +1,11 @@
+# Database Migrations
+
+- The database schema lives in `packages/db/schema.ts`.
+- Changing the schema requires a migration.
+- You can generate the migration by running `pnpm drizzle-kit generate:sqlite` in the `packages/db` dir.
+- You can then apply the migration by running `pnpm run migrate`.
+
+
+## Drizzle Studio
+
+You can start the drizzle studio by running `pnpm db:studio` in the root of the repo.
diff --git a/docs/versioned_docs/version-v0.23.1/07-Development/04-architecture.md b/docs/versioned_docs/version-v0.23.1/07-Development/04-architecture.md
new file mode 100644
index 00000000..5a864034
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/07-Development/04-architecture.md
@@ -0,0 +1,9 @@
+# Architecture
+
+![Architecture Diagram](/img/architecture/arch.png)
+
+- Webapp: NextJS based using sqlite for data storage.
+- Workers: Consume jobs from the sqlite-based job queue and execute them. There are three job types:
+ 1. Crawling: Fetches the content of links using a headless chrome browser running in the workers container.
+ 2. OpenAI: Uses OpenAI APIs to infer the tags of the content.
+ 3. Indexing: Indexes the content in meilisearch for faster retrieval during search.
diff --git a/docs/versioned_docs/version-v0.23.1/08-security-considerations.md b/docs/versioned_docs/version-v0.23.1/08-security-considerations.md
new file mode 100644
index 00000000..5a295526
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/08-security-considerations.md
@@ -0,0 +1,14 @@
+# Security Considerations
+
+If you're going to give untrusted users access to the app, there are some security considerations that you'll need to be aware of given how the crawler works. The crawler is basically running a browser to fetch the content of the bookmarks. Any untrusted user can submit bookmarks to be crawled from your server and they'll be able to see the crawling result. This can be abused in multiple ways:
+
+1. Untrusted users can submit crawl requests to arbitrary websites, and those requests will be coming out of your IPs.
+2. Crawling user controlled websites can expose your origin IP (and location) even if your service is hosted behind cloudflare for example.
+3. The crawling requests will be coming out from your own network, which untrusted users can leverage to crawl internal non-internet exposed endpoints.
+
+To mitigate those risks, you can do one of the following:
+
+1. Limit access to trusted users
+2. Let the browser traffic go through some VPN with restricted network policies.
+3. Host the browser container outside of your network.
+4. Use a hosted browser as a service (e.g. [browserless](https://browserless.io)). Note: I've never used them before.
diff --git a/docs/versioned_docs/version-v0.23.1/09-command-line.md b/docs/versioned_docs/version-v0.23.1/09-command-line.md
new file mode 100644
index 00000000..5d404914
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/09-command-line.md
@@ -0,0 +1,109 @@
+# Command Line Tool (CLI)
+
+Hoarder comes with a simple CLI for those users who want to do more advanced manipulation.
+
+## Features
+
+- Manipulate bookmarks, lists and tags
+- Mass import/export of bookmarks
+
+## Installation (NPM)
+
+```
+npm install -g @hoarderapp/cli
+```
+
+
+## Installation (Docker)
+
+```
+docker run --rm ghcr.io/hoarder-app/hoarder-cli:release --help
+```
+
+## Usage
+
+```
+hoarder
+```
+
+```
+Usage: hoarder [options] [command]
+
+A CLI interface to interact with the hoarder api
+
+Options:
+ --api-key <key> the API key to interact with the API (env: HOARDER_API_KEY)
+ --server-addr <addr> the address of the server to connect to (env: HOARDER_SERVER_ADDR)
+ -V, --version output the version number
+ -h, --help display help for command
+
+Commands:
+ bookmarks manipulating bookmarks
+ lists manipulating lists
+ tags manipulating tags
+ whoami returns info about the owner of this API key
+ help [command] display help for command
+```
+
+And some of the subcommands:
+
+```
+hoarder bookmarks
+```
+
+```
+Usage: hoarder bookmarks [options] [command]
+
+Manipulating bookmarks
+
+Options:
+ -h, --help display help for command
+
+Commands:
+ add [options] creates a new bookmark
+ get <id> fetch information about a bookmark
+ update [options] <id> updates bookmark
+ list [options] list all bookmarks
+ delete <id> delete a bookmark
+ help [command] display help for command
+
+```
+
+```
+hoarder lists
+```
+
+```
+Usage: hoarder lists [options] [command]
+
+Manipulating lists
+
+Options:
+ -h, --help display help for command
+
+Commands:
+ list lists all lists
+ delete <id> deletes a list
+ add-bookmark [options] add a bookmark to list
+ remove-bookmark [options] remove a bookmark from list
+ help [command] display help for command
+```
+
+## Obtaining an API Key
+
+To use the CLI, you'll need to get an API key from your hoarder settings. You can validate that it's working by running:
+
+```
+hoarder --api-key <key> --server-addr <addr> whoami
+```
+
+For example:
+
+```
+hoarder --api-key mysupersecretkey --server-addr https://try.hoarder.app whoami
+{
+ id: 'j29gnbzxxd01q74j2lu88tnb',
+ name: 'Test User',
+ email: 'test@gmail.com'
+}
+```
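+
+Once the key checks out, a typical call might look like the following; it relies on the `HOARDER_API_KEY` and `HOARDER_SERVER_ADDR` environment variables shown in the help output above, and the URL is just an example:
+
+```shell
+export HOARDER_API_KEY=mysupersecretkey
+export HOARDER_SERVER_ADDR=https://try.hoarder.app
+hoarder bookmarks add --link "https://example.com"
+```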
diff --git a/docs/versioned_docs/version-v0.23.1/10-import.md b/docs/versioned_docs/version-v0.23.1/10-import.md
new file mode 100644
index 00000000..5de36e5a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/10-import.md
@@ -0,0 +1,49 @@
+# Importing Bookmarks
+
+
+Hoarder supports importing bookmarks using the Netscape HTML Format, Pocket's new CSV format & Omnivore's JSONs. Titles, tags and addition dates will be preserved during the import. An automatically created list will contain all the imported bookmarks.
+
+:::info
+All the URLs in the bookmarks file will be added automatically; you will not be able to pick and choose which bookmarks to import!
+:::
+
+## Import from Chrome
+
+- Open Chrome and go to `chrome://bookmarks`
+- Click on the three dots on the top right corner and choose `Export bookmarks`
+- This will download an html file with all of your bookmarks.
+- To import the bookmark file, go to the settings and click "Import Bookmarks from HTML file".
+
+## Import from Firefox
+- Open Firefox and click on the menu button (☰) in the top right corner.
+- Navigate to Bookmarks > Manage bookmarks (or press Ctrl + Shift + O / Cmd + Shift + O to open the Bookmarks Library).
+- In the Bookmarks Library, click the Import and Backup button at the top. Select Export Bookmarks to HTML... to save your bookmarks as an HTML file.
+- To import a bookmark file, go back to the Import and Backup menu, then select Import Bookmarks from HTML... and choose your saved HTML file.
+
+## Import from Pocket
+
+- Go to the [Pocket export page](https://getpocket.com/export) and follow the instructions to export your bookmarks.
+- After a couple of minutes, Pocket will email you a zip file with all the bookmarks.
+- Unzip the file and you'll get a CSV file.
+- To import the bookmark file, go to the settings and click "Import Bookmarks from Pocket export".
+
+## Import from Omnivore
+
+- Follow Omnivore's [documentation](https://docs.omnivore.app/using/exporting.html) to export your bookmarks.
+- This will give you a zip file with all your data.
+- The zip file contains a lot of JSONs in the format `metadata_*.json`. You can either import every JSON file manually, or merge the JSONs into a single JSON file and import that.
+- To merge the JSONs into a single JSON file, you can use the following command in the unzipped directory: `jq -r '.[]' metadata_*.json | jq -s > omnivore.json` and then import the `omnivore.json` file. You'll need to have the [jq](https://github.com/jqlang/jq) tool installed.
+
+## Import using the CLI
+
+:::warning
+Importing bookmarks using the CLI requires some technical knowledge and might not be very straightforward for non-technical users. Don't hesitate to ask questions in the GitHub discussions or on Discord though.
+:::
+
+If you can get your bookmarks in a text file with one link per line, you can use the following command to import them using the [hoarder cli](https://docs.hoarder.app/command-line):
+
+```
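+# Read one URL per line from all_links.txt and add each one as a link bookmark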
+while IFS= read -r url; do
+ hoarder --api-key "<KEY>" --server-addr "<SERVER_ADDR>" bookmarks add --link "$url"
+done < all_links.txt
+```
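+
+If your links are currently in a Netscape HTML export rather than a plain text file, one rough way to produce `all_links.txt` is to extract the `HREF` attributes. This is only a sketch: it assumes GNU grep with `-P` support, and `bookmarks_export.html` stands in for whatever file your browser produced:
+
+```
+# Pull every HREF="..." value out of the export, one URL per line (GNU grep)
+grep -oP 'HREF="\K[^"]+' bookmarks_export.html > all_links.txt
+```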
diff --git a/docs/versioned_docs/version-v0.23.1/11-FAQ.md b/docs/versioned_docs/version-v0.23.1/11-FAQ.md
new file mode 100644
index 00000000..5a9a1098
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/11-FAQ.md
@@ -0,0 +1,60 @@
+# Frequently Asked Questions (FAQ)
+
+## User Management
+
+### Lost password
+
+#### If you are not an administrator
+
+Administrators can reset the password of any user, so contact an administrator to reset it for you. The administrator can then do the following:
+
+* Navigate to the `Admin Settings` page
+* Find the user in the `Users List`
+* In the `Actions` column, there is a button to reset the password
+* Enter a new password and press `Reset`
+* The new password is now set
+* If required, you can change your password again (so the admin does not know your password) in the `User Settings`
+
+#### If you are an administrator
+
+If you are an administrator and lost your password, you have to reset the password in the database.
+
+To reset the password:
+
+* Acquire a tool that lets you connect to the database:
+  * `sqlite3` on Linux: install it with `apt-get install sqlite3` (or the equivalent for your package manager)
+  * a GUI tool such as `dbeaver` on Windows
+* Shut down hoarder
+* Connect to the `db.db` database, which is located in the `data` directory you have mounted into your docker container:
+  * e.g. by running `sqlite3 db.db` from within your `data` directory
+  * or by locating the file in the data directory through e.g. the `dbeaver` UI and connecting to it
+* Update the password in the database by running (see the example session after this list):
+  * `update user set password='$2a$10$5u40XUq/cD/TmLdCOyZ82ePENE6hpkbodJhsp7.e/BgZssUO5DDTa' where email='<YOUR_EMAIL_HERE>';`
+  * (don't forget to put your own email address into the command)
+* The new password for your user is now `adminadmin`.
+* Start hoarder again
+* Log in with your email address and the password `adminadmin` and change the password to whatever you want in the `User Settings`
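+
+For reference, a complete session might look like the following sketch. Run it from inside the mounted `data` directory while Hoarder is stopped; `admin@example.com` is a placeholder for your own email address:
+
+```
+# Open the database with the sqlite3 CLI
+sqlite3 db.db
+
+# Inside the sqlite3 prompt, set the password column to the bcrypt hash of "adminadmin"
+sqlite> update user set password='$2a$10$5u40XUq/cD/TmLdCOyZ82ePENE6hpkbodJhsp7.e/BgZssUO5DDTa' where email='admin@example.com';
+sqlite> .quit
+```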
+
+### Adding another administrator
+
+By default, the first user to sign up gets promoted to administrator automatically.
+
+In case you want to grant those permissions to another user:
+
+* Navigate to the `Admin Settings` page
+* Find the user in the `Users List`
+* In the `Actions` column, there is a button to change the Role
+* Change the Role to `Admin`
+* Press `Change`
+* The new administrator has to log out and log in again to get the new role assigned
+
+### Adding new users, when signups are disabled
+
+Administrators can create new accounts any time:
+
+* Navigate to the `Admin Settings` page
+* Go to the `Users List`
+* Press the `Create User` button.
+* Enter the information for the user
+* Press `create`
+* The new user can now log in
diff --git a/docs/versioned_docs/version-v0.23.1/12-troubleshooting.md b/docs/versioned_docs/version-v0.23.1/12-troubleshooting.md
new file mode 100644
index 00000000..15356309
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/12-troubleshooting.md
@@ -0,0 +1,33 @@
+# Troubleshooting
+
+## SqliteError: no such table: user
+
+This usually means that there's something wrong with the database setup (more concretely, it means that the database is not initialized). This can be caused by multiple problems:
+1. **Wiped DATA_DIR:** Your `DATA_DIR` got wiped (or the backing storage dir changed). If you did this intentionally, restart the container so that it can re-initialize the database.
+2. **Missing DATA_DIR:** You're not using the default docker compose file, and you forgot to configure the `DATA_DIR` env var. This results in the database being set up in a different directory than the one used by the service.
+
+## Chrome Failed to Read DnsConfig
+
+If you see this error in the logs of the chrome container, it's benign and you can safely ignore it. Whatever problem you're having is unrelated to this error.
+
+## AI Tagging not working (when using OpenAI)
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. A typo in the `OPENAI_API_KEY` env variable name, resulting in logs saying something like "skipping inference as it's not configured".
+2. You forgot to run `docker compose up` after configuring OpenAI.
+3. OpenAI requires pre-charging your account with credits before use; otherwise you'll get an error like "insufficient funds".
+
+## AI Tagging not working (when using Ollama)
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. A typo in the `OLLAMA_BASE_URL` env variable name, resulting in logs saying something like "skipping inference as it's not configured".
+2. You forgot to run `docker compose up` after configuring Ollama.
+3. You didn't change the `INFERENCE_TEXT_MODEL` env variable, resulting in Hoarder attempting to use GPT models with Ollama, which won't work.
+4. The Ollama server is not reachable from the Hoarder container (see the compose sketch after this list). This can be caused by:
+   1. The Ollama server being in a different docker network than the Hoarder container.
+   2. Using `localhost` as the `OLLAMA_BASE_URL` instead of the actual address of the Ollama server. `localhost` points to the container itself, not the docker host. Check this [stackoverflow answer](https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach) to find out how to correctly point to the docker host address instead.
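+
+For instance, if Ollama runs directly on the docker host, an override along these lines can make it reachable. This is only a sketch: `11434` is Ollama's default port, and the `extra_hosts` entry is only needed on Linux, where `host.docker.internal` does not resolve by default:
+
+```
+services:
+  web:
+    environment:
+      # Point Hoarder at an Ollama instance listening on the docker host
+      OLLAMA_BASE_URL: http://host.docker.internal:11434
+    extra_hosts:
+      # Make host.docker.internal resolve to the host gateway (Linux only)
+      - "host.docker.internal:host-gateway"
+```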
+
+## Crawling not working
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. You changed the name of the chrome container but didn't change the `BROWSER_WEB_URL` env variable to match.
diff --git a/docs/versioned_docs/version-v0.23.1/13-community-projects.md b/docs/versioned_docs/version-v0.23.1/13-community-projects.md
new file mode 100644
index 00000000..ab3c5ab0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/13-community-projects.md
@@ -0,0 +1,47 @@
+# Community Projects
+
+This page lists community projects that are built around Hoarder, but not officially supported by the development team.
+
+:::warning
+This list comes with no guarantees about security, performance, reliability, or accuracy. Use at your own risk.
+:::
+
+### Raycast Extension
+
+_By [@luolei](https://github.com/foru17)._
+
+A user-friendly Raycast extension that seamlessly integrates with Hoarder, bringing powerful bookmark management to your fingertips. Quickly save, search, and organize your bookmarks, texts, and imagesโ€”all through Raycast's intuitive interface.
+
+Get it [here](https://www.raycast.com/luolei/hoarder).
+
+### Alfred Workflow
+
+_By [@yinan-c](https://github.com/yinan-c)_
+
+An Alfred workflow to quickly hoard stuff or access your hoarded bookmarks!
+
+Get it [here](https://www.alfredforum.com/topic/22528-hoarder-workflow-for-self-hosted-bookmark-management/).
+
+### Obsidian Plugin
+
+_By [@jhofker](https://github.com/jhofker)_
+
+An Obsidian plugin that syncs your Hoarder bookmarks with Obsidian, creating markdown notes for each bookmark in a designated folder.
+
+Get it [here](https://github.com/jhofker/obsidian-hoarder/), or install it directly from Obsidian's community plugin store ([link](https://obsidian.md/plugins?id=hoarder-sync)).
+
+### Telegram Bot
+
+_By [@Madh93](https://github.com/Madh93)_
+
+A Telegram Bot for saving bookmarks to Hoarder directly through Telegram.
+
+Get it [here](https://github.com/Madh93/hoarderbot).
+
+### Hoarder's Pipette
+
+_By [@DanSnow](https://github.com/DanSnow)_
+
+A Chrome extension that injects Hoarder's bookmarks into your search results.
+
+Get it [here](https://dansnow.github.io/hoarder-pipette/guides/installation/).
diff --git a/docs/versioned_docs/version-v0.23.1/14-Guides/01-legacy-container-upgrade.md b/docs/versioned_docs/version-v0.23.1/14-Guides/01-legacy-container-upgrade.md
new file mode 100644
index 00000000..3c86705a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/14-Guides/01-legacy-container-upgrade.md
@@ -0,0 +1,66 @@
+# Legacy Container Upgrade
+
+Hoarder's 0.16 release consolidated the web and worker containers into a single container and also dropped the need for the redis container. The legacy containers will stop being supported soon. To upgrade to the new container, do the following:
+
+1. Remove the redis container and its volume if it had one.
+2. Move the environment variables that you've set exclusively to the `workers` container to the `web` container.
+3. Delete the `workers` container.
+4. Rename the web container image from `hoarder-app/hoarder-web` to `hoarder-app/hoarder`.
+
+```diff
+diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
+index cdfc908..6297563 100644
+--- a/docker/docker-compose.yml
++++ b/docker/docker-compose.yml
+@@ -1,7 +1,7 @@
+ version: "3.8"
+ services:
+ web:
+- image: ghcr.io/hoarder-app/hoarder-web:${HOARDER_VERSION:-release}
++ image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
+ restart: unless-stopped
+ volumes:
+ - data:/data
+@@ -10,14 +10,10 @@ services:
+ env_file:
+ - .env
+ environment:
+- REDIS_HOST: redis
+ MEILI_ADDR: http://meilisearch:7700
++ BROWSER_WEB_URL: http://chrome:9222
++ # OPENAI_API_KEY: ...
+ DATA_DIR: /data
+- redis:
+- image: redis:7.2-alpine
+- restart: unless-stopped
+- volumes:
+- - redis:/data
+ chrome:
+ image: gcr.io/zenika-hub/alpine-chrome:123
+ restart: unless-stopped
+@@ -37,24 +33,7 @@ services:
+ MEILI_NO_ANALYTICS: "true"
+ volumes:
+ - meilisearch:/meili_data
+- workers:
+- image: ghcr.io/hoarder-app/hoarder-workers:${HOARDER_VERSION:-release}
+- restart: unless-stopped
+- volumes:
+- - data:/data
+- env_file:
+- - .env
+- environment:
+- REDIS_HOST: redis
+- MEILI_ADDR: http://meilisearch:7700
+- BROWSER_WEB_URL: http://chrome:9222
+- DATA_DIR: /data
+- # OPENAI_API_KEY: ...
+- depends_on:
+- web:
+- condition: service_started
+
+ volumes:
+- redis:
+ meilisearch:
+ data:
+```
diff --git a/docs/versioned_docs/version-v0.23.1/14-Guides/02-search-query-language.md b/docs/versioned_docs/version-v0.23.1/14-Guides/02-search-query-language.md
new file mode 100644
index 00000000..b0d8ffd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/14-Guides/02-search-query-language.md
@@ -0,0 +1,69 @@
+# Search Query Language
+
+Hoarder provides a search query language to filter and find bookmarks. Here are all the supported qualifiers and how to use them:
+
+## Basic Syntax
+
+- Use spaces to separate multiple conditions (implicit AND)
+- Use `and`/`or` keywords for explicit boolean logic
+- Use parentheses `()` for grouping conditions
+- Prefix qualifiers with `-` to negate them
+
+## Qualifiers
+
+Here's a comprehensive table of all supported qualifiers:
+
+| Qualifier                        | Description                                        | Example Usage         |
+| -------------------------------- | -------------------------------------------------- | --------------------- |
+| `is:fav`                         | Favorited bookmarks                                | `is:fav`              |
+| `is:archived`                    | Archived bookmarks                                 | `-is:archived`        |
+| `is:tagged`                      | Bookmarks that have one or more tags               | `is:tagged`           |
+| `is:inlist`                      | Bookmarks that are in one or more lists            | `is:inlist`           |
+| `is:link`, `is:text`, `is:media` | Bookmarks that are of type link, text or media     | `is:link`             |
+| `url:<value>`                    | Match bookmarks with URL substring                 | `url:example.com`     |
+| `#<tag>`                         | Match bookmarks with specific tag                  | `#important`          |
+|                                  | Supports quoted strings for tags with spaces       | `#"work in progress"` |
+| `list:<name>`                    | Match bookmarks in specific list                   | `list:reading`        |
+|                                  | Supports quoted strings for list names with spaces | `list:"to review"`    |
+| `after:<date>`                   | Bookmarks created on or after date (YYYY-MM-DD)    | `after:2023-01-01`    |
+| `before:<date>`                  | Bookmarks created on or before date (YYYY-MM-DD)   | `before:2023-12-31`   |
+
+### Examples
+
+```plaintext
+# Find favorited bookmarks from 2023 that are tagged "important"
+is:fav after:2023-01-01 before:2023-12-31 #important
+
+# Find archived bookmarks that are either in "reading" list or tagged "work"
+is:archived and (list:reading or #work)
+
+# Find bookmarks that are not tagged or not in any list
+-is:tagged or -is:inlist
+```
+
+## Combining Conditions
+
+You can combine multiple conditions using boolean logic:
+
+```plaintext
+# Find favorited bookmarks from 2023 that are tagged "important"
+is:fav after:2023-01-01 before:2023-12-31 #important
+
+# Find archived bookmarks that are either in "reading" list or tagged "work"
+is:archived and (list:reading or #work)
+
+# Find bookmarks that are not favorited and not archived
+-is:fav -is:archived
+```
+
+## Text Search
+
+Any text not part of a qualifier will be treated as a full-text search:
+
+```plaintext
+# Search for "machine learning" in bookmark content
+machine learning
+
+# Combine text search with qualifiers
+machine learning is:fav
+```
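+
+A few more sketches that combine free-text search with the `url:` qualifier and quoted tag/list names from the table above:
+
+```plaintext
+# Search for "rust" among bookmarks whose URL contains "github.com" in the "to review" list
+rust url:github.com list:"to review"
+
+# Search for "docker" among link bookmarks tagged "work in progress"
+docker is:link #"work in progress"
+```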
diff --git a/docs/versioned_docs/version-v0.23.1/14-Guides/03-singlefile.md b/docs/versioned_docs/version-v0.23.1/14-Guides/03-singlefile.md
new file mode 100644
index 00000000..522caf50
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/14-Guides/03-singlefile.md
@@ -0,0 +1,30 @@
+# Using Hoarder with SingleFile Extension
+
+Hoarder supports being a destination for the [SingleFile extension](https://github.com/gildas-lormeau/SingleFile). This has the benefit of allowing you to use the SingleFile extension to hoard pages exactly as you're seeing them in the browser. This is perfect for websites that don't like to get crawled, have annoying cookie banners, or require authentication.
+
+## Setup
+
+1. Install the [SingleFile extension](https://github.com/gildas-lormeau/SingleFile).
+2. In the extension settings, select `Destinations`.
+3. Select `upload to a REST Form API`.
+4. In the URL, insert the address: `https://YOUR_SERVER_ADDRESS/api/v1/bookmarks/singlefile`.
+5. In the `authorization token` field, paste an API key that you can get from your hoarder settings.
+6. Set `data field name` to `file`.
+7. Set `URL field name` to `url`.
+
+Now, go to any page and click the SingleFile extension icon. Once the upload is done, the bookmark should show up in your Hoarder instance. Note that the SingleFile extension doesn't show any upload progress; since archives are typically large, it might take 30+ seconds until the upload completes and the bookmark shows up in Hoarder.
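+
+If you want to sanity-check the endpoint outside of the extension, an upload can be simulated with `curl`. This is only a sketch: it assumes the API key is accepted as a standard `Bearer` token and that `page.html` is a page you previously saved with SingleFile:
+
+```
+curl \
+  -H "Authorization: Bearer YOUR_API_KEY" \
+  -F "file=@page.html" \
+  -F "url=https://example.com/original-page" \
+  https://YOUR_SERVER_ADDRESS/api/v1/bookmarks/singlefile
+```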
+
+
+## Recommended settings
+
+In the SingleFile extension, you will probably want to change the following settings for a better experience:
+* Stylesheets > compress CSS content: on
+* Stylesheets > group duplicate stylesheets together: on
+* HTML content > remove frames: on
+
+Also, you will most likely want to increase the default `MAX_ASSET_SIZE_MB` in Hoarder, for example to `100`.
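+
+In a docker compose setup, that could look something like the following sketch (only the relevant part of the `web` service is shown):
+
+```
+services:
+  web:
+    environment:
+      # Raise the per-asset size limit so large SingleFile archives are accepted
+      MAX_ASSET_SIZE_MB: 100
+```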
+
+:::info
+Currently, we don't support screenshots for singlefile uploads, but this will change in the future.
+:::
+
diff --git a/docs/versioned_docs/version-v0.23.1/14-Guides/04-hoarder-to-karakeep-migration.md b/docs/versioned_docs/version-v0.23.1/14-Guides/04-hoarder-to-karakeep-migration.md
new file mode 100644
index 00000000..1ac50c7c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.23.1/14-Guides/04-hoarder-to-karakeep-migration.md
@@ -0,0 +1,18 @@
+# Hoarder to Karakeep Migration
+
+Hoarder is rebranding to Karakeep. Due to GitHub limitations, the old docker image might not receive new updates after the rebranding. You might need to update your docker image to point to the new Karakeep image instead, by applying the following change to your docker compose file.
+
+```diff
+diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
+index cdfc908..6297563 100644
+--- a/docker/docker-compose.yml
++++ b/docker/docker-compose.yml
+@@ -1,7 +1,7 @@
+ version: "3.8"
+ services:
+ web:
+- image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
++ image: ghcr.io/karakeep-app/karakeep:${HOARDER_VERSION:-release}
+```
+
+You can also change the `HOARDER_VERSION` environment variable, but if you do so, remember to change it in the `.env` file as well.
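+
+For reference, the corresponding `.env` entry might look like the following sketch (the `0.23.1` tag is only an example; check the published image tags before pinning a version):
+
+```
+# .env next to your docker-compose.yml
+HOARDER_VERSION=0.23.1
+```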