Commit messages
This reverts commit 4ba3e8047a5b1f160169617187436c09e91662ec.
* WIP: public lists
* Drop viewing modes
* Add the public endpoint for assets
* regen the openapi spec
* proper handling for different asset types
* Add num bookmarks and a no bookmark banner
* Correctly set page title
* Add a not-found page
* merge the RSS and public list endpoints
* Add e2e tests for the public endpoints
* Redesign the share list modal
* Make NEXTAUTH_SECRET not required
* properly render text bookmarks
* rebase migration
* fix public token tests
* Add more tests
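The public-endpoint and asset-type items above amount to token-gated access plus a per-type content type when serving assets. A minimal sketch under assumed names; PublicList, resolvePublicAsset and contentTypeFor are all illustrative, not the project's real schema or endpoint code.

```ts
type AssetType = "image" | "pdf" | "video";

interface PublicAsset {
  id: string;
  type: AssetType;
}

interface PublicList {
  publicToken: string | null; // null when the list has not been shared
  assets: PublicAsset[];
}

function contentTypeFor(type: AssetType): string {
  switch (type) {
    case "image":
      return "image/jpeg";
    case "pdf":
      return "application/pdf";
    case "video":
      return "video/mp4";
  }
}

// Resolves a public asset request; a null result maps to a 404.
export function resolvePublicAsset(
  list: PublicList,
  token: string,
  assetId: string,
): { assetId: string; contentType: string } | null {
  if (!list.publicToken || list.publicToken !== token) {
    return null; // list is private or the token does not match
  }
  const asset = list.assets.find((a) => a.id === assetId);
  if (!asset) {
    return null;
  }
  return { assetId: asset.id, contentType: contentTypeFor(asset.type) };
}
```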
* fix typo
* implementation
* bug fix and refactoring
* Use nuqs for searchParam management
* remove the todo about the tests
* fix tests
---------
Co-authored-by: Mohamed Bassem <me@mbassem.com>
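For the "Use nuqs for searchParam management" item, a minimal sketch of how nuqs keeps component state in the URL; the parameter names ("q", "page") and the component are illustrative, not the actual code.

```tsx
"use client";

import { parseAsInteger, parseAsString, useQueryState } from "nuqs";

// Hypothetical component: the real code's param names and UI differ.
export function SearchControls() {
  // ?q=... survives reloads and can be shared as part of the URL.
  const [query, setQuery] = useQueryState("q", parseAsString.withDefault(""));
  // Typed parser: ?page=2 round-trips as a number, defaulting to 1.
  const [page, setPage] = useQueryState("page", parseAsInteger.withDefault(1));

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <button onClick={() => setPage(page + 1)}>Next page ({page})</button>
    </div>
  );
}
```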
* refactor: Move bookmark utils from shared-react to shared
* Expose RSS feeds for lists
* Add e2e tests
* Slightly improve the look of the share dialog
* allow specifying a limit in the rss endpoint
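The RSS feed for a list is plain XML over the list's bookmarks, with ?limit= capping the number of items. A sketch with hand-rolled XML; the clamp bounds, defaults and field names are assumptions.

```ts
interface FeedItem {
  title: string;
  url: string;
  createdAt: Date;
}

const DEFAULT_LIMIT = 20; // assumed defaults, not the real values
const MAX_LIMIT = 100;

// Parses ?limit=... defensively: bad input falls back, huge input is capped.
export function clampLimit(raw: string | null): number {
  const parsed = Number(raw);
  if (!Number.isFinite(parsed) || parsed <= 0) return DEFAULT_LIMIT;
  return Math.min(Math.floor(parsed), MAX_LIMIT);
}

const XML_ESCAPES: Record<string, string> = {
  "<": "&lt;",
  ">": "&gt;",
  "&": "&amp;",
  '"': "&quot;",
  "'": "&apos;",
};
const escapeXml = (s: string) => s.replace(/[<>&"']/g, (c) => XML_ESCAPES[c]);

export function renderListFeed(listName: string, items: FeedItem[], limit: number): string {
  const entries = items
    .slice(0, limit)
    .map(
      (i) =>
        `    <item><title>${escapeXml(i.title)}</title>` +
        `<link>${escapeXml(i.url)}</link>` +
        `<pubDate>${i.createdAt.toUTCString()}</pubDate></item>`,
    )
    .join("\n");
  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<rss version="2.0">`,
    `  <channel>`,
    `    <title>${escapeXml(listName)}</title>`,
    entries,
    `  </channel>`,
    `</rss>`,
  ].join("\n");
}
```

A route handler would then respond with renderListFeed(list.name, bookmarks, clampLimit(searchParams.get("limit"))) and an application/rss+xml content type.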
* Add schema for the new rule engine
* Add rule engine backend logic
* Implement the worker logic and event firing
* Implement the UI changes for the rule engine
* Ensure that when a referenced list or tag is deleted, the corresponding event/action is removed as well
* Don't show smart lists in rule engine events
* Add privacy validations for attached tag and list ids
* Move the rules logic into models
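The rule engine items describe an event, condition and action pipeline evaluated by a worker when events fire. An illustrative data shape and matcher; every name here is an assumption rather than the actual schema.

```ts
// All of these types and the matcher are assumptions for illustration.
type RuleEvent =
  | { type: "bookmarkAdded" }
  | { type: "tagAdded"; tagId: string }
  | { type: "addedToList"; listId: string };

type RuleAction =
  | { type: "addTag"; tagId: string }
  | { type: "addToList"; listId: string }
  | { type: "favourite" };

interface Rule {
  id: string;
  name: string;
  enabled: boolean;
  event: RuleEvent; // the trigger this rule listens for
  actions: RuleAction[]; // executed by the worker when the trigger fires
}

// Worker-side dispatch, simplified to matching on the event type only; real
// matching would also compare the referenced tag/list ids.
export function matchRules(rules: Rule[], fired: RuleEvent): RuleAction[] {
  return rules
    .filter((r) => r.enabled && r.event.type === fired.type)
    .flatMap((r) => r.actions);
}
```

Under this shape, deleting a referenced tag or list reduces to removing every rule event/action that mentions that id, which is what the cleanup item above refers to.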
* feat(web): Optionally add short description to lists
* regenerate openapi spec
---------
Co-authored-by: Mohamed Bassem <me@mbassem.com>
* Updated pdf2json to 3.1.5
* Extract and store a screenshot from PDF files using pdf2pic
* Installing graphicsmagick and ghostscript
* Generate missing PDF screenshots with the tidyAssets worker for backward compatibility
* Display the PDF screenshot instead of the PDF on the web if it exists.
* Display the PDF screenshot in the mobile app if it exists.
* Updated pnpm-lock.yaml
* Removed console.log
* Revert the unnecessary changes in package.json
* Revert pnpm-lock changes
* Prevent rendering PDF files if the screenshot is not generated
* refactor: replace useEffect with useMemo for section initialization
* feat: show PDF file download button and handle large PDFs by defaulting to screenshot view
* feat: add file size to openapi spec
* feature: Add Assets preprocessing in fix mode to admin actions
* i18n: add reprocess_assets_fix_mode translation
* i18n: Add missing ar translations
* A bunch of fixes
* Fix openapi schema
---------
Co-authored-by: Mohamed Bassem <me@mbassem.com>
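The screenshot step uses pdf2pic, which shells out to GraphicsMagick and Ghostscript (hence the install step above). A minimal sketch of rendering page 1 to a PNG buffer, assuming pdf2pic's fromBuffer API and buffer response type; the density and dimensions are arbitrary example values.

```ts
import { fromBuffer } from "pdf2pic";

// Renders the first page of a PDF to a PNG buffer.
export async function pdfScreenshot(pdf: Buffer): Promise<Buffer | null> {
  const convert = fromBuffer(pdf, {
    density: 150,
    format: "png",
    width: 1024,
    height: 1448,
  });
  try {
    const page = await convert(1, { responseType: "buffer" });
    return page.buffer ?? null;
  } catch {
    // GraphicsMagick/Ghostscript missing or an unreadable PDF: no screenshot,
    // so the UI falls back to rendering (or just offering) the PDF itself.
    return null;
  }
}
```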
* feat: Add support for smart lists
* i18n
* Fix update list endpoint
* Add a test for smart lists
* Add header to the query explainer
* Hide "remove from list" in the smart list context menu
* Add proper validation to list form
---------
Co-authored-by: Deepak Kapoor <41769111+orthdron@users.noreply.github.com>
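A smart list stores a search query instead of explicit members, which is also what the extra form validation guards against. Illustrative shape only; the real field names, query syntax and messages differ.

```ts
// Illustrative only: field names and messages are assumptions.
interface ListForm {
  name: string;
  type: "manual" | "smart";
  query?: string; // e.g. "is:fav #reading", only meaningful for smart lists
}

export function validateListForm(form: ListForm): string | null {
  if (form.type === "smart" && !form.query?.trim()) {
    return "A smart list needs a non-empty search query";
  }
  if (form.type === "manual" && form.query?.trim()) {
    return "Only smart lists can have a search query";
  }
  return null; // valid
}
```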
Fixes #169
* Allow downloading more content from a webpage and index it #215
Added a worker that allows downloading videos depending on the environment variables
refactored the code a bit
added new video asset
updated documentation
* Some tweaks
* Drop the dependency on the yt-dlp wrapper
* Update openapi specs
* Don't log an error when the URL is not supported
* Better handle supported websites that don't download anything
---------
Co-authored-by: Mohamed Bassem <me@mbassem.com>
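With the wrapper package dropped, one way to keep the download step is to invoke the yt-dlp binary directly, gated by environment variables as the first item describes. A sketch; the env variable names and the size cap are assumptions, while the flags used are standard yt-dlp options.

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Env-gated download step; DOWNLOAD_VIDEOS / MAX_VIDEO_SIZE_MB are placeholder
// names, not the project's real configuration keys.
export async function maybeDownloadVideo(url: string, outPath: string): Promise<boolean> {
  if (process.env.DOWNLOAD_VIDEOS !== "true") {
    return false; // feature stays off unless explicitly enabled
  }
  const maxSizeMb = process.env.MAX_VIDEO_SIZE_MB ?? "50";
  try {
    await execFileAsync("yt-dlp", [
      "--no-playlist",
      "--max-filesize",
      `${maxSizeMb}m`,
      "-o",
      outPath,
      url,
    ]);
    return true;
  } catch {
    // Unsupported URLs or sites with nothing to download are not errors;
    // the bookmark simply ends up without a video asset.
    return false;
  }
}
```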
Fixes #448
* refactoring asset types
Extracted out functions to silently delete assets and to update them after crawling
Generalized the mapping of assets to bookmark fields to make extending them easier
* Added the bookmark type to the database
Introduced an enum to have better type safety
cleaned up the code and based some code on the type directly
* add BookmarkType.UNKNOWN
* lint and remove unused function
---------
Co-authored-by: MohamedBassem <me@mbassem.com>
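Storing the bookmark type in the database and exposing it through an enum lets code branch on the type directly instead of sniffing nested fields. Member names below are illustrative; the real enum may differ.

```ts
// Illustrative members only.
export enum BookmarkType {
  LINK = "link",
  TEXT = "text",
  ASSET = "asset",
  UNKNOWN = "unknown", // fallback for rows written before the column existed
}

// Code can now key decisions off the enum with compile-time safety:
export function isCrawlable(type: BookmarkType): boolean {
  return type === BookmarkType.LINK;
}
```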
* Allow downloading more content from a webpage and index it #215
added a new table that contains the information about assets for link bookmarks
created migration code that transfers the existing data into the new table
* Allow downloading more content from a webpage and index it #215
removed the old asset columns from the database
updated the UI to use the data from the linkBookmarkAssets array
* generalize the assets table so it is not tied specifically to link bookmarks
* fix migrations post merge
* fix missing asset ids in the getBookmarks call
---------
Co-authored-by: MohamedBassem <me@mbassem.com>
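The new table keys each asset row to a bookmark and tags it with an asset type, replacing the per-column approach on link bookmarks. A drizzle-style SQLite sketch; the ORM choice here is an assumption and the column names are illustrative.

```ts
import { integer, sqliteTable, text } from "drizzle-orm/sqlite-core";

// Column names are illustrative; the point is one row per asset, keyed to any
// bookmark rather than to link bookmarks specifically.
export const assets = sqliteTable("assets", {
  id: text("id").primaryKey(),
  bookmarkId: text("bookmarkId").notNull(),
  // e.g. "screenshot", "fullPageArchive", "bannerImage", "video"
  assetType: text("assetType").notNull(),
  contentType: text("contentType"),
  size: integer("size"),
});
```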
* feature request: pdf support #28
Added a new sourceUrl column to the asset bookmarks
Added transforming a link bookmark pointing at a pdf to an asset bookmark
made sure the "View Original" link is also shown for asset bookmarks that have a sourceURL
updated gitignore for IDEA
* remove pdf parsing from the crawler
* extract the http logic into its own function to avoid duplicating the post-processing actions (openai/index)
* Add 5s timeout to the content type fetch
---------
Co-authored-by: MohamedBassem <me@mbassem.com>
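Detecting that a link actually points at a PDF means probing the content type before crawling, now capped at 5 seconds. A sketch using the standard fetch timeout; the helper name and the use of a HEAD request are assumptions, only the 5s cap comes from the commit.

```ts
export async function probeContentType(url: string): Promise<string | null> {
  try {
    const res = await fetch(url, {
      method: "HEAD",
      signal: AbortSignal.timeout(5_000), // give up after 5 seconds
    });
    return res.headers.get("content-type");
  } catch {
    return null; // timeouts and network errors fall back to normal crawling
  }
}

// A link bookmark whose probe starts with "application/pdf" can then be
// converted into an asset bookmark instead of being crawled.
```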