This reverts commit a04d3c35fc9082e529a713605a038d236bb072c7.
* fix: Support nested smart lists and prevent infinite loops
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat(mobile): Add animated UI feedback to sharing modal
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat(ai): Support restricting AI tags to a subset of existing tags
Co-authored-by: Claude <noreply@anthropic.com>
* feat(mcp): Support custom configurable HTTP headers
* docs(mcp): Add KARAKEEP_CUSTOM_HEADERS documentation
* fix(mcp): Prioritize default headers and safely parse custom headers
* docs(mcp): Correct capitalization of Cloudflare headers
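A rough sketch of the "safely parse custom headers, but let defaults win" behaviour described in these commits. The KARAKEEP_CUSTOM_HEADERS name is from the commits; the JSON value format, the buildHeaders helper, and the default header set are illustrative assumptions, not the actual MCP server code.

```ts
// Hypothetical sketch: parse KARAKEEP_CUSTOM_HEADERS (assumed here to be a
// JSON object of header name -> value) and merge it so defaults win on conflict.
const DEFAULT_HEADERS: Record<string, string> = {
  "Content-Type": "application/json",
};

function buildHeaders(raw: string | undefined): Record<string, string> {
  let custom: Record<string, string> = {};
  if (raw) {
    try {
      const parsed = JSON.parse(raw);
      if (parsed && typeof parsed === "object" && !Array.isArray(parsed)) {
        custom = Object.fromEntries(
          Object.entries(parsed).filter(([, v]) => typeof v === "string"),
        ) as Record<string, string>;
      }
    } catch {
      // Malformed input is ignored rather than crashing the MCP client.
    }
  }
  // Spreading the defaults last gives them priority over custom headers.
  return { ...custom, ...DEFAULT_HEADERS };
}

const headers = buildHeaders(process.env.KARAKEEP_CUSTOM_HEADERS);
```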
* Added Instapaper import
* Fixes #1444: Added Instapaper import support
* feat: add drag-and-drop of bookmark cards into sidebar lists
Co-authored-by: Claude <noreply@anthropic.com>
feat(crawler): write metadata to DB early for faster user feedback (#2467)
* feat(crawler): write metadata to DB early for faster user feedback
Split the single DB transaction in crawlAndParseUrl into two phases:
- Phase 1: Write metadata (title, description, favicon, author, etc.)
immediately after extraction, before downloading assets
- Phase 2: Write content and asset references after all assets are
stored (banner image, screenshot, pdf, html content)
This gives users near-instant feedback with bookmark metadata while
the slower asset downloads and uploads happen in the background.
https://claude.ai/code/session_013vKTXDcb5CEve3WMszQJmZ
* fix(crawler): move crawledAt to phase 2 DB write
crawledAt should only be set once all assets are fully stored, not
during the early metadata write.
https://claude.ai/code/session_013vKTXDcb5CEve3WMszQJmZ
---------
Co-authored-by: Claude <noreply@anthropic.com>
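The two-phase split described above might look roughly like the sketch below. The phase boundaries follow the commit message; the helper names (extractMetadata, downloadAssets, writeMetadata, writeContent) and their shapes are hypothetical, not the actual crawler code.

```ts
// Illustrative two-phase write; all dependencies are injected placeholders.
interface PageMetadata {
  title?: string;
  description?: string;
  favicon?: string;
  author?: string;
}

interface StoredAssets {
  htmlContent?: string;
  bannerAssetId?: string;
  screenshotAssetId?: string;
}

async function crawlAndParseUrl(
  bookmarkId: string,
  url: string,
  deps: {
    extractMetadata: (url: string) => Promise<PageMetadata>;
    downloadAssets: (bookmarkId: string, url: string) => Promise<StoredAssets>;
    writeMetadata: (id: string, meta: PageMetadata) => Promise<void>;
    writeContent: (id: string, assets: StoredAssets, crawledAt: Date) => Promise<void>;
  },
) {
  // Phase 1: persist metadata immediately so the bookmark shows up populated
  // in the UI while the slow asset downloads are still running.
  const meta = await deps.extractMetadata(url);
  await deps.writeMetadata(bookmarkId, meta);

  // Slow part: banner image, screenshot, pdf, html content.
  const assets = await deps.downloadAssets(bookmarkId, url);

  // Phase 2: persist content and asset references, and only now set
  // crawledAt, since the bookmark is only fully crawled once assets exist.
  await deps.writeContent(bookmarkId, assets, new Date());
}
```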
* feat: add source filter to query language
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* autocomplete source
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
(#2464)
When a bookmark is deleted before the rule engine worker processes its
event, the worker would throw an error, triggering failure metrics,
error logging, and retries. This changes both the worker and
RuleEngine.forBookmark to gracefully skip processing with an info log
instead.
Co-authored-by: Claude <noreply@anthropic.com>
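A minimal sketch of the "skip instead of throw" behaviour described above, assuming a worker that looks the bookmark up before running rules. getBookmarkById, runRules, and the logger interface are illustrative placeholders, not the actual RuleEngine code.

```ts
interface Logger {
  info(msg: string): void;
}

async function processBookmarkEvent(
  bookmarkId: string,
  deps: {
    getBookmarkById: (id: string) => Promise<{ id: string } | null>;
    runRules: (bookmarkId: string) => Promise<void>;
    logger: Logger;
  },
) {
  const bookmark = await deps.getBookmarkById(bookmarkId);
  if (!bookmark) {
    // Bookmark was deleted before the worker got to the event: log and return
    // successfully instead of throwing, so no failure metrics, error logging,
    // or retries are triggered.
    deps.logger.info(`Bookmark ${bookmarkId} no longer exists, skipping rule engine run`);
    return;
  }
  await deps.runRules(bookmark.id);
}
```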
* feat: add separate queue for import link crawling
---------
Co-authored-by: Claude <noreply@anthropic.com>
Track the time from bookmark creation to crawl completion as a histogram
(karakeep_bookmark_crawl_latency_seconds). This measures the end-to-end
latency users experience when adding bookmarks via extension, web, etc.
Excludes recrawls (crawledAt already set) and imports (low priority jobs).
https://claude.ai/code/session_019jTGGXGWzK9C5aTznQhdgz
Co-authored-by: Claude <noreply@anthropic.com>
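Roughly how such a metric could be recorded, assuming a prom-client style histogram (the actual metrics stack may differ). Only the metric name comes from the commit; the buckets and the guard logic are illustrative.

```ts
import { Histogram } from "prom-client";

const crawlLatency = new Histogram({
  name: "karakeep_bookmark_crawl_latency_seconds",
  help: "Time from bookmark creation to crawl completion",
  buckets: [1, 5, 15, 30, 60, 120, 300, 600], // guessed buckets
});

function recordCrawlLatency(bookmark: {
  createdAt: Date;
  crawledAt: Date | null;
  isImport: boolean;
}) {
  // Skip recrawls (crawledAt was already set) and low-priority import jobs.
  if (bookmark.crawledAt !== null || bookmark.isImport) {
    return;
  }
  const seconds = (Date.now() - bookmark.createdAt.getTime()) / 1000;
  crawlLatency.observe(seconds);
}
```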
Instruments the better-sqlite3 driver so that every prepared statement
execution (run/get/all) produces an OTel span with db.system,
db.statement, and db.operation attributes. The instrumentation is a
no-op when no TracerProvider is registered (i.e. tracing is disabled).
https://claude.ai/code/session_01JZut7LqeHPUKAFbFLfVP8F
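A sketch of what wrapping better-sqlite3 prepared statements could look like, using the @opentelemetry/api tracer (which hands back no-op spans when no TracerProvider is registered). The wrapper below is illustrative, not the actual Karakeep instrumentation.

```ts
import { trace, SpanStatusCode } from "@opentelemetry/api";
import Database from "better-sqlite3";

const tracer = trace.getTracer("better-sqlite3");

export function instrumentDatabase(db: Database.Database): Database.Database {
  const originalPrepare = db.prepare.bind(db);
  db.prepare = ((sql: string) => {
    const stmt = originalPrepare(sql);
    // Wrap each execution method so every run produces a span.
    for (const op of ["run", "get", "all"] as const) {
      const original = (stmt as any)[op].bind(stmt);
      (stmt as any)[op] = (...args: unknown[]) =>
        tracer.startActiveSpan(`sqlite.${op}`, (span) => {
          span.setAttribute("db.system", "sqlite");
          span.setAttribute("db.statement", sql);
          span.setAttribute("db.operation", op);
          try {
            return original(...args);
          } catch (err) {
            span.setStatus({ code: SpanStatusCode.ERROR });
            throw err;
          } finally {
            span.end();
          }
        });
    }
    return stmt;
  }) as typeof db.prepare;
  return db;
}
```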
* feat(import): new import details page
* fix typecheck
* review comments
The catch block in processOneBookmark was storing raw error strings via
String(error) in the resultReason field, which is exposed to users through
the getImportSessionResults tRPC route. This could leak internal details
like database constraint errors, file paths, stack traces, or connection
strings.
Replace String(error) with getSafeErrorMessage() that only allows through:
- TRPCError client errors (designed to be user-facing)
- Known safe validation messages from the import worker
- A generic fallback for all other errors
The full error is still logged server-side for debugging.
https://claude.ai/code/session_01F1NHE9dqio5LJ177vmSCvt
Co-authored-by: Claude <noreply@anthropic.com>
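The allow-list approach might look something like this sketch. The getSafeErrorMessage name and the three categories come from the commit; the example safe messages, the generic fallback text, and the exact TRPCError check are assumptions.

```ts
import { TRPCError } from "@trpc/server";

// Hypothetical examples of validation messages the import worker throws on purpose.
const KNOWN_SAFE_MESSAGES = new Set([
  "Unsupported file type",
  "Bookmark limit reached",
]);

const GENERIC_MESSAGE = "Import failed due to an internal error";

export function getSafeErrorMessage(error: unknown): string {
  // TRPC client errors are written to be user-facing.
  if (error instanceof TRPCError && error.code !== "INTERNAL_SERVER_ERROR") {
    return error.message;
  }
  // Known safe validation messages pass through unchanged.
  if (error instanceof Error && KNOWN_SAFE_MESSAGES.has(error.message)) {
    return error.message;
  }
  // Everything else (DB constraints, file paths, stack traces, connection
  // strings) is replaced by a generic message; the full error is logged
  // server-side instead.
  return GENERIC_MESSAGE;
}
```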
* fix: backfill old sessions and apply queue backpressure
* fix typo
* feat: import workflow v3
* batch stage
* revert migration
* cleanups
* pr comments
* move to models
* add allowed workers
* e2e tests
* import list ids
* add missing indices
* merge test
* more fixes
* add resume/pause to UI
* fix ui states
* fix tests
* simplify progress tracking
* remove backpressure
* fix list imports
* fix race on claiming bookmarks
* remove the codex file
* feat(ocr): add LLM-based OCR support alongside Tesseract
Add support for using configured LLM inference providers (OpenAI or Ollama)
for OCR text extraction from images as an alternative to Tesseract.
Changes:
- Add OCR_USE_LLM environment variable flag (default: false)
- Add buildOCRPrompt function for LLM-based text extraction
- Add readImageTextWithLLM function in asset preprocessing worker
- Update extractAndSaveImageText to route between Tesseract and LLM OCR
- Update documentation with the new configuration option
When OCR_USE_LLM is enabled, the system uses the configured inference model
to extract text from images. If no inference provider is configured, it
falls back to Tesseract.
https://claude.ai/code/session_01Y7h7kDAmqXKXEWDmWbVkDs
* format
---------
Co-authored-by: Claude <noreply@anthropic.com>
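The routing between the two OCR backends could look roughly like this. The function names (buildOCRPrompt, readImageTextWithLLM, extractAndSaveImageText) and the OCR_USE_LLM flag come from the commit; the inference-client interface, prompt text, and Tesseract wrapper are illustrative assumptions.

```ts
interface InferenceClient {
  inferFromImage(prompt: string, image: Buffer): Promise<string>;
}

function buildOCRPrompt(): string {
  return "Extract all visible text from this image. Return only the text.";
}

async function readImageTextWithLLM(
  client: InferenceClient,
  image: Buffer,
): Promise<string> {
  return client.inferFromImage(buildOCRPrompt(), image);
}

async function extractAndSaveImageText(
  image: Buffer,
  opts: {
    ocrUseLlm: boolean; // OCR_USE_LLM, default false
    inferenceClient: InferenceClient | null;
    tesseract: (image: Buffer) => Promise<string>;
  },
): Promise<string> {
  // Use the LLM only when the flag is on and a provider is configured;
  // otherwise fall back to Tesseract.
  if (opts.ocrUseLlm && opts.inferenceClient) {
    return readImageTextWithLLM(opts.inferenceClient, image);
  }
  return opts.tesseract(image);
}
```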
* feat: batch meilisearch requests
* more fixes
* feat: add support for redirectUrl after signup
* pr review
* more fixes
* format
* another fix
* refactor: migrate trpc to the new react query integration mode
* more fixes
* more migrations
* upgrade trpc client
* refactor(web): centralize next-auth client-side utilities
Create lib/auth/client.ts to re-export all next-auth/react APIs (useSession,
signIn, signOut, SessionProvider) from a single location. This prepares
for future auth provider replacement by isolating the next-auth dependency.
https://claude.ai/code/session_01RLLL6SquzmegG6wKHdT3Fm
* format
---------
Co-authored-by: Claude <noreply@anthropic.com>
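Such a re-export module is small; a sketch of what lib/auth/client.ts could contain, based on the APIs listed in the commit (the real file may export more or wrap these differently):

```ts
// lib/auth/client.ts: single re-export point for next-auth client APIs, so a
// future auth provider swap only needs to touch this file.
export {
  useSession,
  signIn,
  signOut,
  SessionProvider,
} from "next-auth/react";
```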