authorkamtschatka <sschatka@gmail.com>2024-05-12 14:06:41 +0200
committerGitHub <noreply@github.com>2024-05-12 13:06:41 +0100
commitd33be149e661945fe67a9b6c4ff0d1e47917b8cd (patch)
tree1f05995e401e00d71b016bc98131d503625afb36 /docs
parentcbc8dded9970121d003fdcfb71099308acd4f09f (diff)
downloadkarakeep-d33be149e661945fe67a9b6c4ff0d1e47917b8cd.tar.zst
feature: Take full page screenshots #143 (#148)
Added the fullPage flag to take full page screenshots and updated the UI accordingly to properly show the screenshots instead of scaling them down.

Co-authored-by: kamtschatka <simon.schatka@gmx.at>
Diffstat (limited to 'docs')
-rw-r--r-- docs/docs/03-configuration.md | 1
1 file changed, 1 insertion, 0 deletions
diff --git a/docs/docs/03-configuration.md b/docs/docs/03-configuration.md
index 3d44f359..47bd115a 100644
--- a/docs/docs/03-configuration.md
+++ b/docs/docs/03-configuration.md
@@ -42,5 +42,6 @@ Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` needs to be set for automatic taggin
| CRAWLER_NUM_WORKERS | No | 1 | Number of allowed concurrent crawling jobs. By default, we're only doing one crawling request at a time to avoid consuming a lot of resources. |
| CRAWLER_DOWNLOAD_BANNER_IMAGE | No | true | Whether to cache the banner image used in the cards locally or fetch it each time directly from the website. Caching it consumes more storage space, but is more resilient against link rot and rate limits from websites. |
| CRAWLER_STORE_SCREENSHOT | No | true | Whether to store a screenshot from the crawled website or not. Screenshots act as a fallback for when we fail to extract an image from a website. You can also view the stored screenshots for any link. |
+| CRAWLER_FULL_PAGE_SCREENSHOT | No | false | Whether to store a screenshot of the full page or not. Disabled by default, as it can lead to much higher disk usage. If disabled, the screenshot will only include the visible part of the page |
| CRAWLER_JOB_TIMEOUT_SEC | No | 60 | How long to wait for the crawler job to finish before timing out. If you have a slow internet connection or a low powered device, you might want to bump this up a bit |
| CRAWLER_NAVIGATE_TIMEOUT_SEC | No | 30 | How long to spend navigating to the page (along with its redirects). Increase this if you have a slow internet connection |
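The table above documents the crawler-related environment variables, including the new `CRAWLER_FULL_PAGE_SCREENSHOT` flag added by this commit. As a minimal sketch (not taken from the repository), assuming these variables are supplied through an env file as the configuration docs describe, opting in to full-page screenshots might look like this:

```
# Illustrative .env excerpt -- variable names come from the table above;
# the values shown here are examples, not the repository's defaults.
CRAWLER_STORE_SCREENSHOT=true        # keep screenshots as a fallback image for links
CRAWLER_FULL_PAGE_SCREENSHOT=true    # opt in to full-page capture (higher disk usage)
CRAWLER_JOB_TIMEOUT_SEC=120          # give slow pages more time before the crawl times out
```

With `CRAWLER_FULL_PAGE_SCREENSHOT` left at its default of `false`, only the visible part of the page is captured, which keeps disk usage lower.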