author    MohamedBassem <me@mbassem.com>    2024-08-31 21:08:52 +0000
committer MohamedBassem <me@mbassem.com>    2024-08-31 21:08:52 +0000
commit    83bc5bdab4127b5bdbee708fcbf57297fff62fc0 (patch)
tree      2f139ff15737fc2cd75d1345dbbcd6fcbd05ca36
parent    25b61cced098a49ce8fe318099f55616a57c6806 (diff)
download  karakeep-83bc5bdab4127b5bdbee708fcbf57297fff62fc0.tar.zst
docs: Remove references to redis and workers from docs
Diffstat
-rw-r--r--  docs/docs/02-Installation/02-unraid.md                                 | 2
-rw-r--r--  docs/docs/07-Development/01-setup.md                                   | 9
-rw-r--r--  docs/docs/07-Development/04-architecture.md                            | 3
-rw-r--r--  docs/versioned_docs/version-v0.16.0/02-Installation/02-unraid.md       | 2
-rw-r--r--  docs/versioned_docs/version-v0.16.0/07-Development/01-setup.md         | 9
-rw-r--r--  docs/versioned_docs/version-v0.16.0/07-Development/04-architecture.md  | 3
6 files changed, 4 insertions, 24 deletions
diff --git a/docs/docs/02-Installation/02-unraid.md b/docs/docs/02-Installation/02-unraid.md
index b879f900..42323a5f 100644
--- a/docs/docs/02-Installation/02-unraid.md
+++ b/docs/docs/02-Installation/02-unraid.md
@@ -15,7 +15,5 @@ Hoarder can be installed on Unraid using the community application plugins. Hoar
Here's a high level overview of the services you'll need:
- **Hoarder** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's main web app.
-- **hoarder-worker** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's background workers (for running the AI tagging, fetching the content, etc).
-- **Redis**: Currently used for communication between the web app and the background workers.
- **Browserless** ([Support post](https://forums.unraid.net/topic/130163-support-template-masterwishxbrowserless/)): The chrome headless service used for fetching the content. Hoarder's official docker compose doesn't use browserless, but it's currently the only headless chrome service available on unraid, so you'll have to use it.
- **MeiliSearch** ([Support post](https://forums.unraid.net/topic/164847-support-collectathon-meilisearch/)): The search engine used by Hoarder. It's optional but highly recommended. If you don't have it set up, search will be disabled.
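
To see how the non-Hoarder pieces fit together, here is a minimal sketch of starting the two supporting services with plain Docker (image names, tags, and ports are common defaults and an assumption here, not taken from this commit or the Unraid templates):

```shell
# Hedged sketch: supporting services referenced above, with assumed default ports.

# MeiliSearch -- full text search (same image tag the dev setup docs below use)
docker run -d --name meilisearch -p 7700:7700 getmeili/meilisearch:v1.6

# Browserless -- headless Chrome used for fetching page content
# (browserless/chrome listens on port 3000 by default; assumed, verify for your setup)
docker run -d --name browserless -p 3000:3000 browserless/chrome
```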
diff --git a/docs/docs/07-Development/01-setup.md b/docs/docs/07-Development/01-setup.md
index 94a2ce67..3bf9caf1 100644
--- a/docs/docs/07-Development/01-setup.md
+++ b/docs/docs/07-Development/01-setup.md
@@ -9,17 +9,12 @@
- The most important env variables to set are:
- `DATA_DIR`: Where the database and assets will be stored. This is the only required env variable. You can use an absolute path so that all apps point to the same dir.
- `NEXTAUTH_SECRET`: Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. Logging in will not work if this is missing!
- - `REDIS_HOST` and `REDIS_PORT` default to `localhost` and `6379` change them if redis is running on a different address.
- `MEILI_ADDR`: If not set, search will be disabled. You can set it to `http://127.0.0.1:7700` if you run meilisearch using the command below.
- `OPENAI_API_KEY`: If you want to enable auto tag inference in the dev env.
- run `pnpm run db:migrate` in the root of the repo to set up the database.
### Dependencies
-#### Redis
-
-Redis is used as the background job queue. The easiest way to get it running is with docker `docker run -p 6379:6379 redis:alpine`.
-
#### Meilisearch
Meilisearch is the provider for the full text search. You can get it running with `docker run -p 7700:7700 getmeili/meilisearch:v1.6`.
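
Pulling the variables and commands above together, a hedged sketch of a minimal dev bootstrap (paths and keys are illustrative placeholders, not values from this repo):

```shell
# Hedged sketch; the DATA_DIR path and OPENAI_API_KEY value are placeholders.
export DATA_DIR=/absolute/path/to/hoarder-data      # required: database and assets live here
export NEXTAUTH_SECRET=$(openssl rand -base64 36)   # required for login (signs JWT tokens)
export MEILI_ADDR=http://127.0.0.1:7700             # optional: leave unset to disable search
export OPENAI_API_KEY=sk-...                        # optional: enables auto tag inference

# Optional: run meilisearch locally (command from the section below)
docker run -d -p 7700:7700 getmeili/meilisearch:v1.6

# Set up the database
pnpm run db:migrate
```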
@@ -35,14 +30,12 @@ The worker app will automatically start headless chrome on startup for crawling
- Run `pnpm web` in the root of the repo.
- Go to `http://localhost:3000`.
-> NOTE: The web app kinda works without any dependencies. However, search won't work unless meilisearch is running. Also, new items added won't get crawled/indexed unless redis is running.
+> NOTE: The web app mostly works without any dependencies. However, search won't work unless meilisearch is running, and newly added items won't be crawled/indexed unless the workers are running.
### Workers
- Run `pnpm workers` in the root of the repo.
-> NOTE: The workers package requires having redis working as it's the queue provider.
-
### iOS Mobile App
- `cd apps/mobile`
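
For reference, the day-to-day dev loop described in the two sections above boils down to two commands run from the repo root, each in its own terminal (a sketch of the workflow, not an official script):

```shell
# Terminal 1: the web app, served at http://localhost:3000
pnpm web

# Terminal 2: the background workers (crawling, tagging, indexing)
pnpm workers
```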
diff --git a/docs/docs/07-Development/04-architecture.md b/docs/docs/07-Development/04-architecture.md
index df69376a..5a864034 100644
--- a/docs/docs/07-Development/04-architecture.md
+++ b/docs/docs/07-Development/04-architecture.md
@@ -3,8 +3,7 @@
![Architecture Diagram](/img/architecture/arch.png)
- Webapp: NextJS based using sqlite for data storage.
-- Redis: Used with BullMQ for scheduling background jobs for the workers.
-- Workers: Consume the jobs from redis and executes them, there are three job types:
+- Workers: Consume the jobs from the sqlite-based job queue and execute them. There are three job types:
1. Crawling: Fetches the content of links using a headless chrome browser running in the workers container.
2. OpenAI: Uses OpenAI APIs to infer the tags of the content.
3. Indexing: Indexes the content in meilisearch for faster retrieval during search.
diff --git a/docs/versioned_docs/version-v0.16.0/02-Installation/02-unraid.md b/docs/versioned_docs/version-v0.16.0/02-Installation/02-unraid.md
index b879f900..42323a5f 100644
--- a/docs/versioned_docs/version-v0.16.0/02-Installation/02-unraid.md
+++ b/docs/versioned_docs/version-v0.16.0/02-Installation/02-unraid.md
@@ -15,7 +15,5 @@ Hoarder can be installed on Unraid using the community application plugins. Hoar
Here's a high level overview of the services you'll need:
- **Hoarder** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's main web app.
-- **hoarder-worker** ([Support post](https://forums.unraid.net/topic/165108-support-collectathon-hoarder/)): Hoarder's background workers (for running the AI tagging, fetching the content, etc).
-- **Redis**: Currently used for communication between the web app and the background workers.
- **Browserless** ([Support post](https://forums.unraid.net/topic/130163-support-template-masterwishxbrowserless/)): The chrome headless service used for fetching the content. Hoarder's official docker compose doesn't use browserless, but it's currently the only headless chrome service available on unraid, so you'll have to use it.
- **MeiliSearch** ([Support post](https://forums.unraid.net/topic/164847-support-collectathon-meilisearch/)): The search engine used by Hoarder. It's optional but highly recommended. If you don't have it set up, search will be disabled.
diff --git a/docs/versioned_docs/version-v0.16.0/07-Development/01-setup.md b/docs/versioned_docs/version-v0.16.0/07-Development/01-setup.md
index 94a2ce67..3bf9caf1 100644
--- a/docs/versioned_docs/version-v0.16.0/07-Development/01-setup.md
+++ b/docs/versioned_docs/version-v0.16.0/07-Development/01-setup.md
@@ -9,17 +9,12 @@
- The most important env variables to set are:
- `DATA_DIR`: Where the database and assets will be stored. This is the only required env variable. You can use an absolute path so that all apps point to the same dir.
- `NEXTAUTH_SECRET`: Random string used to sign the JWT tokens. Generate one with `openssl rand -base64 36`. Logging in will not work if this is missing!
- - `REDIS_HOST` and `REDIS_PORT` default to `localhost` and `6379` change them if redis is running on a different address.
- `MEILI_ADDR`: If not set, search will be disabled. You can set it to `http://127.0.0.1:7700` if you run meilisearch using the command below.
- `OPENAI_API_KEY`: If you want to enable auto tag inference in the dev env.
- run `pnpm run db:migrate` in the root of the repo to set up the database.
### Dependencies
-#### Redis
-
-Redis is used as the background job queue. The easiest way to get it running is with docker `docker run -p 6379:6379 redis:alpine`.
-
#### Meilisearch
Meilisearch is the provider for the full text search. You can get it running with `docker run -p 7700:7700 getmeili/meilisearch:v1.6`.
@@ -35,14 +30,12 @@ The worker app will automatically start headless chrome on startup for crawling
- Run `pnpm web` in the root of the repo.
- Go to `http://localhost:3000`.
-> NOTE: The web app kinda works without any dependencies. However, search won't work unless meilisearch is running. Also, new items added won't get crawled/indexed unless redis is running.
+> NOTE: The web app mostly works without any dependencies. However, search won't work unless meilisearch is running, and newly added items won't be crawled/indexed unless the workers are running.
### Workers
- Run `pnpm workers` in the root of the repo.
-> NOTE: The workers package requires having redis working as it's the queue provider.
-
### iOS Mobile App
- `cd apps/mobile`
diff --git a/docs/versioned_docs/version-v0.16.0/07-Development/04-architecture.md b/docs/versioned_docs/version-v0.16.0/07-Development/04-architecture.md
index df69376a..5a864034 100644
--- a/docs/versioned_docs/version-v0.16.0/07-Development/04-architecture.md
+++ b/docs/versioned_docs/version-v0.16.0/07-Development/04-architecture.md
@@ -3,8 +3,7 @@
![Architecture Diagram](/img/architecture/arch.png)
- Webapp: NextJS based using sqlite for data storage.
-- Redis: Used with BullMQ for scheduling background jobs for the workers.
-- Workers: Consume the jobs from redis and executes them, there are three job types:
+- Workers: Consume the jobs from the sqlite-based job queue and execute them. There are three job types:
1. Crawling: Fetches the content of links using a headless chrome browser running in the workers container.
2. OpenAI: Uses OpenAI APIs to infer the tags of the content.
3. Indexing: Indexes the content in meilisearch for faster retrieval during search.