author     Mohamed Bassem <me@mbassem.com>  2026-01-01 10:58:22 +0000
committer  Mohamed Bassem <me@mbassem.com>  2026-01-01 10:58:22 +0000
commit     4b54eeec0ebdd60d28ce865d56e079c4d11384ae (patch)
tree       8180003c92358d74a9796b48d3fd341fae8f8338 /docs/versioned_docs/version-v0.30.0/06-administration
parent     d472a3a1c428bad8ce2ddc0822fb5b327e9465d4 (diff)
download   karakeep-4b54eeec0ebdd60d28ce865d56e079c4d11384ae.tar.zst
release(docs): release the 0.30 docs
Diffstat (limited to 'docs/versioned_docs/version-v0.30.0/06-administration')
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/01-security-considerations.md  25
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/02-FAQ.md  67
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/03-openai.md  19
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/05-troubleshooting.md  67
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/06-server-migration.md  79
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/07-legacy-container-upgrade.md  66
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/08-hoarder-to-karakeep-migration.md  28
-rw-r--r--  docs/versioned_docs/version-v0.30.0/06-administration/_category_.json  4
8 files changed, 355 insertions, 0 deletions
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/01-security-considerations.md b/docs/versioned_docs/version-v0.30.0/06-administration/01-security-considerations.md
new file mode 100644
index 00000000..5a295526
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/01-security-considerations.md
@@ -0,0 +1,25 @@
+# Security Considerations
+
+If you're going to give untrusted users access to the app, there are some security considerations you'll need to be aware of, given how the crawler works. The crawler essentially runs a browser to fetch the content of bookmarks. Any untrusted user can submit bookmarks to be crawled from your server and see the crawling result. This can be abused in multiple ways:
+
+1. Untrusted users can trigger crawl requests, coming from your IPs, to websites you don't want associated with you.
+2. Crawling user-controlled websites can expose your origin IP (and location), even if your service is hosted behind Cloudflare, for example.
+3. Crawl requests originate from inside your own network, which untrusted users can leverage to crawl internal endpoints that aren't exposed to the internet.
+
+To mitigate those risks, you can do one of the following:
+
+1. Limit access to trusted users.
+2. Route the browser traffic through a VPN with restricted network policies.
+3. Host the browser container outside of your network (see the sketch below).
+4. Use a hosted browser as a service (e.g. [browserless](https://browserless.io)). Note: I've never used them before.
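+
+For the options that take the browser out of your network (3 and 4), the karakeep-side change boils down to pointing the crawler at the external browser through the `BROWSER_WEB_URL` environment variable. A minimal sketch, assuming a remote Chrome DevTools endpoint (the hostname here is a placeholder):
+
+```yaml
+services:
+  web:
+    environment:
+      # point the crawler at an externally hosted browser instead of a
+      # chrome container on the local docker network
+      BROWSER_WEB_URL: http://browser.example.com:9222
+```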
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/02-FAQ.md b/docs/versioned_docs/version-v0.30.0/06-administration/02-FAQ.md
new file mode 100644
index 00000000..e6f59801
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/02-FAQ.md
@@ -0,0 +1,67 @@
+# Frequently Asked Questions (FAQ)
+
+## User Management
+
+### Lost password
+
+#### If you are not an administrator
+
+Administrators can reset the password of any user, so ask an administrator to reset it for you. The administrator then has to:
+
+- Navigate to the `Admin Settings` page
+- Find the user in the `Users List`
+- In the `Actions` column, there is a button to reset the password
+- Enter a new password and press `Reset`
+- The new password is now set
+- If required, you can change your password again (so the admin does not know your password) in the `User Settings`
+
+#### If you are an administrator
+
+If you are an administrator and lost your password, you have to reset the password in the database.
+
+To reset the password:
+
+- Acquire a tool that lets you connect to the database:
+  - `sqlite3` on Linux: install it with e.g. `apt-get install sqlite3` (depending on your package manager)
+  - e.g. `dbeaver` on Windows
+- Shut down Karakeep
+- Connect to the `db.db` database, which is located in the `data` directory you have mounted to your docker container:
+ - by e.g. running `sqlite3 db.db` (in your `data` directory)
+ - or going through e.g. the `dbeaver` UI to locate the file in the data directory and connecting to it
+- Update the password in the database by running:
+ - `update user set password='$2a$10$5u40XUq/cD/TmLdCOyZ82ePENE6hpkbodJhsp7.e/BgZssUO5DDTa', salt='' where email='<YOUR_EMAIL_HERE>';`
+ - (don't forget to put your email address into the command)
+- The new password for your user is now `adminadmin`.
+- Start Karakeep again
+- Log in with your email address and the password `adminadmin` and change the password to whatever you want in the `User Settings`
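+
+If you'd rather set your own password directly instead of going through `adminadmin`, you can generate a bcrypt hash yourself and use it in the same `update` statement. This is a sketch, not an official tool: it assumes Node.js is available and uses the `bcryptjs` package, with cost factor `10` to match the hash shown above (`$2a$10$...`):
+
+```bash
+npm install bcryptjs
+node -e "console.log(require('bcryptjs').hashSync('<YOUR_NEW_PASSWORD>', 10))"
+```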
+
+### Adding another administrator
+
+By default, the first user to sign up gets promoted to administrator automatically.
+
+If you want to grant these permissions to another user:
+
+- Navigate to the `Admin Settings` page
+- Find the user in the `Users List`
+- In the `Actions` column, there is a button to change the Role
+- Change the Role to `Admin`
+- Press `Change`
+- The new administrator has to log out and log in again to get the new role assigned
+
+### Adding new users when signups are disabled
+
+Administrators can create new accounts at any time:
+
+- Navigate to the `Admin Settings` page
+- Go to the `Users List`
+- Press the `Create User` button
+- Enter the information for the user
+- Press `create`
+- The new user can now log in
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/03-openai.md b/docs/versioned_docs/version-v0.30.0/06-administration/03-openai.md
new file mode 100644
index 00000000..9247d065
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/03-openai.md
@@ -0,0 +1,19 @@
+# Tagging Costs
+
+This service uses OpenAI for automatic tagging, which means you'll incur some costs if automatic tagging is enabled. There are two types of inference that we do:
+
+## Text Tagging
+
+For text tagging, we use the `gpt-4.1-mini` model. This model is [extremely cheap](https://openai.com/api/pricing). The cost per inference varies with the content size of each article, but roughly, you'll be able to generate tags for 3000+ bookmarks for less than $1.
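+
+As a rough back-of-the-envelope example (the per-bookmark token counts below are assumptions, not measurements), with gpt-4.1-mini's listed rates of $0.40 per 1M input tokens and $1.60 per 1M output tokens at the time of writing:
+
+```text
+~700 input tokens  x $0.40 / 1M ≈ $0.000280
+ ~30 output tokens x $1.60 / 1M ≈ $0.000048
+ total per bookmark             ≈ $0.000328  →  ~3000 bookmarks per $1
+```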
+
+## Image Tagging
+
+For image uploads, we use the `gpt-4o-mini` model for extracting tags from the image. You can learn more about the costs of using this model [here](https://platform.openai.com/docs/guides/images?api-mode=chat#calculating-costs). To lower the costs, we use the low resolution mode (a fixed number of tokens regardless of image size). You'll be able to run inference for 1000+ images for less than $1.
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/05-troubleshooting.md b/docs/versioned_docs/version-v0.30.0/06-administration/05-troubleshooting.md
new file mode 100644
index 00000000..4072442b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/05-troubleshooting.md
@@ -0,0 +1,67 @@
+# Troubleshooting
+
+## SqliteError: no such table: user
+
+This usually means that there's something wrong with the database setup (more concretely, that the database is not initialized). This can be caused by multiple problems:
+1. **Wiped DATA_DIR:** Your `DATA_DIR` got wiped (or the backing storage dir changed). If you did this intentionally, restart the container so that it can re-initialize the database.
+2. **Missing DATA_DIR:** You're not using the default docker compose file and forgot to configure the `DATA_DIR` env var. This results in the database being set up in a different directory than the one used by the service.
+
+## Chrome Failed to Read DnsConfig
+
+If you see this error in the logs of the chrome container, it's benign and you can safely ignore it. Whatever problem you're having is unrelated to this error.
+
+## AI Tagging not working (when using OpenAI)
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. A typo in the `OPENAI_API_KEY` env variable name, resulting in logs saying something like "skipping inference as it's not configured".
+2. You forgot to run `docker compose up` after configuring OpenAI.
+3. OpenAI requires pre-charging your account with credits before use; otherwise you'll get an error like "insufficient funds".
+
+## AI Tagging not working (when using Ollama)
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. A typo in the `OLLAMA_BASE_URL` env variable name, resulting in logs saying something like "skipping inference as it's not configured".
+2. You forgot to run `docker compose up` after configuring Ollama.
+3. You didn't change the `INFERENCE_TEXT_MODEL` env variable, resulting in karakeep attempting to use GPT models with Ollama, which won't work.
+4. The Ollama server is not reachable from the karakeep container. This can be caused by:
+   1. The Ollama server being on a different docker network than the karakeep container.
+   2. Using `localhost` as the `OLLAMA_BASE_URL` instead of the actual address of the Ollama server. `localhost` points to the container itself, not the docker host. Check this [stackoverflow answer](https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach) to learn how to correctly point to the docker host instead.
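+
+A common fix for the `localhost` case is to address the docker host explicitly. A sketch, assuming Ollama runs directly on the docker host on its default port 11434:
+
+```yaml
+services:
+  web:
+    environment:
+      OLLAMA_BASE_URL: http://host.docker.internal:11434
+    extra_hosts:
+      # needed on Linux so host.docker.internal resolves to the docker host
+      - "host.docker.internal:host-gateway"
+```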
+
+## Crawling not working
+
+Check the logs of the container; they will usually tell you what's wrong. Common problems are:
+1. You changed the name of the chrome container but didn't update the `BROWSER_WEB_URL` env variable.
+
+## Upgrading Meilisearch - Migrating the Meilisearch db version
+
+[Meilisearch](https://www.meilisearch.com/) is the database karakeep uses to search your bookmarks. The version used by karakeep is `1.13.3`, and it is advised not to upgrade it without good reason. If you do, you might see errors like `Your database version (1.11.1) is incompatible with your current engine version (1.13.3). To migrate data between Meilisearch versions, please follow our guide on https://www.meilisearch.com/docs/learn/update_and_migration/updating.`.
+
+Luckily, we can easily work around this:
+1. Stop the Meilisearch container.
+2. Inside the Meilisearch volume bound to `/meili_data`, erase/rename the folder called `data.ms`.
+3. Launch Meilisearch again.
+4. Log in to karakeep as an administrator and go to (as of v0.24.1) `Admin Settings > Background Jobs`, then click on `Reindex All Bookmarks`.
+5. When the reindexing has finished, Meilisearch should be working as usual.
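+
+In shell terms, steps 1–3 look roughly like this (the service name and volume path are assumptions based on the default compose setup; adjust them to yours):
+
+```bash
+docker compose stop meilisearch
+# rename rather than delete, so you can roll back if something goes wrong
+mv /path/to/meili_data/data.ms /path/to/meili_data/data.ms.bak
+docker compose start meilisearch
+```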
+
+If you run into issues, the official documentation can be found [here](https://www.meilisearch.com/docs/learn/update_and_migration/updating).
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/06-server-migration.md b/docs/versioned_docs/version-v0.30.0/06-administration/06-server-migration.md
new file mode 100644
index 00000000..147ae1ec
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/06-server-migration.md
@@ -0,0 +1,79 @@
+# Migrating Between Servers
+
+This guide explains how to migrate all of your data from one Karakeep server to another using the official CLI.
+
+## What the command does
+
+The migration copies user-owned data from a source server to a destination server in this order:
+
+- User settings
+- Lists (preserving hierarchy and settings)
+- RSS feeds
+- AI prompts (custom prompts and their enabled state)
+- Webhooks (URL and events)
+- Tags (ensures tags by name exist)
+- Rule engine rules (IDs remapped to destination equivalents)
+- Bookmarks (links, text, and assets)
+ - After creation, attaches the correct tags and adds to the correct lists
+
+Notes:
+- Webhook tokens cannot be read via the API, so tokens are not migrated. Re‑add them on the destination if needed.
+- Asset bookmarks are migrated by downloading the original asset and re‑uploading it to the destination. Only images and PDFs are supported for asset bookmarks.
+- Link bookmarks on the destination may be de‑duplicated if the same URL already exists.
+
+## Prerequisites
+
+- Install the CLI:
+ - NPM: `npm install -g @karakeep/cli`
+ - Docker: `docker run --rm ghcr.io/karakeep-app/karakeep-cli:release --help`
+- Collect API keys and base URLs for both servers:
+ - Source: `--server-addr`, `--api-key`
+ - Destination: `--dest-server`, `--dest-api-key`
+
+## Quick start
+
+```bash
+karakeep --server-addr https://src.example.com --api-key <SOURCE_API_KEY> migrate \
+ --dest-server https://dest.example.com \
+ --dest-api-key <DEST_API_KEY>
+```
+
+The command is long‑running and shows live progress for each phase. You will be prompted for confirmation; pass `--yes` to skip the prompt.
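+
+If you installed the CLI via Docker instead of NPM, the same invocation should work through `docker run` (a sketch based on the image shown in the prerequisites):
+
+```bash
+docker run --rm ghcr.io/karakeep-app/karakeep-cli:release \
+  --server-addr https://src.example.com --api-key <SOURCE_API_KEY> migrate \
+  --dest-server https://dest.example.com --dest-api-key <DEST_API_KEY>
+```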
+
+### Options
+
+- `--server-addr <url>`: Source server base URL
+- `--api-key <key>`: API key for the source server
+- `--dest-server <url>`: Destination server base URL
+- `--dest-api-key <key>`: API key for the destination server
+- `--batch-size <n>`: Page size for bookmark migration (default 50, max 100)
+- `-y`, `--yes`: Skip the confirmation prompt
+
+## What to expect
+
+- Lists are recreated parent‑first and retain their hierarchy.
+- Feeds, prompts, webhooks, and tags are recreated by value.
+- Rules are recreated after IDs (tags, lists, feeds) are remapped to their corresponding destination IDs.
+- After each bookmark is created, the command attaches the correct tags and adds it to the correct lists.
+
+## Caveats and tips
+
+- Webhook auth tokens must be re‑entered on the destination after migration.
+- If your destination already contains data, links that already exist there may be de‑duplicated; tags and list membership are still applied to the existing bookmark.
+
+## Troubleshooting
+
+- If the command exits early, you can re‑run it, but note:
+ - Tags and lists that already exist are reused.
+  - Link de‑duplication avoids duplicate link bookmarks; note and asset bookmarks, however, will get re-created.
+  - Rules, webhooks, and RSS feeds will get re-created, and you'll have to clean up the duplicates manually afterwards.
+ - The progress log indicates how far it got.
+- Use a smaller `--batch-size` if your source or destination is under heavy load.
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/07-legacy-container-upgrade.md b/docs/versioned_docs/version-v0.30.0/06-administration/07-legacy-container-upgrade.md
new file mode 100644
index 00000000..d95c1c1e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/07-legacy-container-upgrade.md
@@ -0,0 +1,66 @@
+# Legacy Container Upgrade
+
+Karakeep's 0.16 release consolidated the web and worker containers into a single container and also dropped the need for the redis container. The legacy containers will stop being supported soon. To upgrade to the new container, do the following:
+
+1. Remove the redis container and its volume if it had one.
+2. Move any environment variables that you had set exclusively on the `workers` container to the `web` container.
+3. Delete the `workers` container.
+4. Point the web container's image at `ghcr.io/karakeep-app/karakeep` instead of `ghcr.io/hoarder-app/hoarder-web` (as shown in the diff below).
+
+```diff
+diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
+index cdfc908..6297563 100644
+--- a/docker/docker-compose.yml
++++ b/docker/docker-compose.yml
+@@ -1,7 +1,7 @@
+ version: "3.8"
+ services:
+ web:
+- image: ghcr.io/hoarder-app/hoarder-web:${KARAKEEP_VERSION:-release}
++ image: ghcr.io/karakeep-app/karakeep:${KARAKEEP_VERSION:-release}
+ restart: unless-stopped
+ volumes:
+ - data:/data
+@@ -10,14 +10,10 @@ services:
+ env_file:
+ - .env
+ environment:
+- REDIS_HOST: redis
+ MEILI_ADDR: http://meilisearch:7700
++ BROWSER_WEB_URL: http://chrome:9222
++ # OPENAI_API_KEY: ...
+ DATA_DIR: /data
+- redis:
+- image: redis:7.2-alpine
+- restart: unless-stopped
+- volumes:
+- - redis:/data
+ chrome:
+ image: gcr.io/zenika-hub/alpine-chrome:123
+ restart: unless-stopped
+@@ -37,24 +33,7 @@ services:
+ MEILI_NO_ANALYTICS: "true"
+ volumes:
+ - meilisearch:/meili_data
+- workers:
+- image: ghcr.io/hoarder-app/hoarder-workers:${KARAKEEP_VERSION:-release}
+- restart: unless-stopped
+- volumes:
+- - data:/data
+- env_file:
+- - .env
+- environment:
+- REDIS_HOST: redis
+- MEILI_ADDR: http://meilisearch:7700
+- BROWSER_WEB_URL: http://chrome:9222
+- DATA_DIR: /data
+- # OPENAI_API_KEY: ...
+- depends_on:
+- web:
+- condition: service_started
+
+ volumes:
+- redis:
+ meilisearch:
+ data:
+```
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/08-hoarder-to-karakeep-migration.md b/docs/versioned_docs/version-v0.30.0/06-administration/08-hoarder-to-karakeep-migration.md
new file mode 100644
index 00000000..4e309408
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/08-hoarder-to-karakeep-migration.md
@@ -0,0 +1,28 @@
+# Hoarder to Karakeep Migration
+
+Hoarder is rebranding to Karakeep. Due to GitHub limitations, the old docker image might not get new updates after the rebranding. You might need to point your docker image to the new karakeep image instead by applying the following change in the docker compose file:
+
+```diff
+diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
+index cdfc908..6297563 100644
+--- a/docker/docker-compose.yml
++++ b/docker/docker-compose.yml
+@@ -1,7 +1,7 @@
+ version: "3.8"
+ services:
+ web:
+- image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
++ image: ghcr.io/karakeep-app/karakeep:${HOARDER_VERSION:-release}
+```
+
+You can also rename the `HOARDER_VERSION` environment variable, but if you do so, remember to change it in the `.env` file as well.
+
+## Migrating a Baremetal Installation
+
+If you previously used the [Debian/Ubuntu install script](../installation/debuntu) to install Hoarder, there is an option to migrate your installation to Karakeep.
+
+```bash
+bash karakeep-linux.sh migrate
+```
+
+This will migrate your installation with no user input required. After the migration, the script will also check for an update.
diff --git a/docs/versioned_docs/version-v0.30.0/06-administration/_category_.json b/docs/versioned_docs/version-v0.30.0/06-administration/_category_.json
new file mode 100644
index 00000000..9847810a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.30.0/06-administration/_category_.json
@@ -0,0 +1,4 @@
+{
+ "label": "🛠️ Administration",
+ "position": 6
+}