diff options
| author | Mohamed Bassem <me@mbassem.com> | 2026-02-01 22:57:11 +0000 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2026-02-01 22:57:11 +0000 |
| commit | 3fcccb858ee3ef22fe9ce479af4ce458ac9a0fe1 | |
| tree | 0d6ae299126a581f0ccc58afa89b2dd16a9a0925 /packages/shared | |
| parent | 54243b8cc5ccd76fe23821f6e159b954a2166578 | |
| download | karakeep-3fcccb858ee3ef22fe9ce479af4ce458ac9a0fe1.tar.zst | |
feat: Add LLM-based OCR as alternative to Tesseract (#2442)
* feat(ocr): add LLM-based OCR support alongside Tesseract
Add support for using configured LLM inference providers (OpenAI or Ollama)
for OCR text extraction from images as an alternative to Tesseract.
Changes:
- Add OCR_USE_LLM environment variable flag (default: false)
- Add buildOCRPrompt function for LLM-based text extraction
- Add readImageTextWithLLM function in asset preprocessing worker
- Update extractAndSaveImageText to route between Tesseract and LLM OCR
- Update documentation with the new configuration option
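The new flag is parsed with the repo's `stringBool` helper (a zod schema in `config.ts`). A dependency-free sketch of the same parsing semantics, purely for illustration (the real helper is a zod schema, not this function):

```typescript
// Illustrative stand-in for config.ts's `stringBool` zod helper: turn an
// optional env-var string into a boolean, falling back to a default.
function stringBool(defaultValue: string): (raw?: string) => boolean {
  return (raw) => (raw ?? defaultValue).toLowerCase() === "true";
}

// OCR_USE_LLM defaults to "false", so the LLM path is strictly opt-in.
const parseOcrUseLlm = stringBool("false");
```

Because the default is `"false"`, deployments that never set `OCR_USE_LLM` keep using Tesseract unchanged.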
When OCR_USE_LLM is enabled, the system uses the configured inference model
to extract text from images. If no inference provider is configured, it
falls back to Tesseract.
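That routing decision can be sketched as a small pure helper. The config shape and the name `pickOcrEngine` are assumptions for illustration, not the actual `extractAndSaveImageText` code:

```typescript
type OcrEngine = "llm" | "tesseract";

// Minimal slice of the server config relevant to OCR routing (illustrative).
interface OcrRoutingConfig {
  ocr: { useLLM: boolean }; // from OCR_USE_LLM
  inference: { imageModel: string | null }; // null when no provider is configured
}

// Use the LLM only when the flag is on AND an inference provider exists;
// otherwise fall back to Tesseract.
function pickOcrEngine(config: OcrRoutingConfig): OcrEngine {
  if (config.ocr.useLLM && config.inference.imageModel !== null) {
    return "llm";
  }
  return "tesseract";
}
```

Keeping the decision in one pure function makes the fallback behavior trivially testable without touching either OCR backend.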
https://claude.ai/code/session_01Y7h7kDAmqXKXEWDmWbVkDs
* format
---------
Co-authored-by: Claude <noreply@anthropic.com>
Diffstat (limited to 'packages/shared')
| -rw-r--r-- | packages/shared/config.ts | 2 |
| -rw-r--r-- | packages/shared/prompts.ts | 16 |
2 files changed, 18 insertions, 0 deletions
diff --git a/packages/shared/config.ts b/packages/shared/config.ts
index 7238e90c..cfcf1532 100644
--- a/packages/shared/config.ts
+++ b/packages/shared/config.ts
@@ -82,6 +82,7 @@ const allEnv = z.object({
     .default("eng")
     .transform((val) => val.split(",")),
   OCR_CONFIDENCE_THRESHOLD: z.coerce.number().default(50),
+  OCR_USE_LLM: stringBool("false"),
   CRAWLER_HEADLESS_BROWSER: stringBool("true"),
   BROWSER_WEB_URL: z.string().optional(),
   BROWSER_WEBSOCKET_URL: z.string().optional(),
@@ -337,6 +338,7 @@ const serverConfigSchema = allEnv.transform((val, ctx) => {
       langs: val.OCR_LANGS,
       cacheDir: val.OCR_CACHE_DIR,
       confidenceThreshold: val.OCR_CONFIDENCE_THRESHOLD,
+      useLLM: val.OCR_USE_LLM,
     },
     search: {
       numWorkers: val.SEARCH_NUM_WORKERS,
diff --git a/packages/shared/prompts.ts b/packages/shared/prompts.ts
index 00963550..e878a18b 100644
--- a/packages/shared/prompts.ts
+++ b/packages/shared/prompts.ts
@@ -106,3 +106,19 @@ export function buildSummaryPromptUntruncated(
     preprocessContent(content),
   );
 }
+
+/**
+ * Build OCR prompt for extracting text from images using LLM
+ */
+export function buildOCRPrompt(): string {
+  return `You are an OCR (Optical Character Recognition) expert. Your task is to extract ALL text from this image.
+
+Rules:
+- Extract every piece of text visible in the image, including titles, body text, captions, labels, watermarks, and any other textual content.
+- Preserve the original structure and formatting as much as possible (e.g., paragraphs, lists, headings).
+- If text appears in multiple columns, read from left to right, top to bottom.
+- If text is partially obscured or unclear, make your best attempt and indicate uncertainty with [unclear] if needed.
+- Do not add any commentary, explanations, or descriptions of non-text elements.
+- If there is no text in the image, respond with an empty string.
+- Output ONLY the extracted text, nothing else.`;
+}
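`buildOCRPrompt` returns a plain string, so the preprocessing worker can pair it with the image in a single multimodal chat request. A sketch of how that pairing could look, assuming an OpenAI-compatible vision payload with an inline data URL (`buildOcrMessages` and the shortened prompt below are illustrative, not the actual worker code):

```typescript
// Illustrative: combine the OCR prompt with an image as an OpenAI-style
// chat-completions `messages` payload. Not karakeep's actual worker code.
function buildOCRPrompt(): string {
  // Shortened stand-in for the full prompt added in packages/shared/prompts.ts.
  return "Extract ALL text from this image. Output ONLY the extracted text.";
}

function buildOcrMessages(imageBase64: string, mimeType: string) {
  return [
    {
      role: "user",
      content: [
        { type: "text", text: buildOCRPrompt() },
        {
          type: "image_url",
          image_url: { url: `data:${mimeType};base64,${imageBase64}` },
        },
      ],
    },
  ];
}
```

The same message shape works for both OpenAI and Ollama's OpenAI-compatible endpoint, which is what lets one code path serve either configured provider.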
