From edbd98d7841388d1169a3a3b159367487bda431e Mon Sep 17 00:00:00 2001
From: MohamedBassem
Date: Sun, 21 Jul 2024 13:19:58 +0000
Subject: [docs] Change the docs to versioned

---
 docs/versioned_docs/version-v0.15.0/06-openai.md | 11 +++++++++++
 1 file changed, 11 insertions(+)
 create mode 100644 docs/versioned_docs/version-v0.15.0/06-openai.md

diff --git a/docs/versioned_docs/version-v0.15.0/06-openai.md b/docs/versioned_docs/version-v0.15.0/06-openai.md
new file mode 100644
index 00000000..fa2a83ef
--- /dev/null
+++ b/docs/versioned_docs/version-v0.15.0/06-openai.md
@@ -0,0 +1,11 @@
+# OpenAI Costs
+
+This service uses OpenAI for automatic tagging, which means you'll incur some costs if automatic tagging is enabled. There are two types of inference that we do:
+
+## Text Tagging
+
+For text tagging, we use the `gpt-3.5-turbo-0125` model. This model is [extremely cheap](https://openai.com/pricing). The cost per inference varies with the size of each article's content, but roughly, you'll be able to generate tags for 1,000+ bookmarks for less than $1.
+
+## Image Tagging
+
+For image uploads, we use the `gpt-4-turbo` model to extract tags from the image. You can learn more about the costs of using this model [here](https://platform.openai.com/docs/guides/vision/calculating-costs). To lower costs, we use the low-resolution mode (a fixed number of tokens regardless of image size). The `gpt-4-turbo` model, however, is much more expensive than `gpt-3.5-turbo`. Currently, we use around 350 tokens per image inference, which ends up costing around $0.01 per inference, roughly 10x more expensive than text tagging.
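The low-resolution mode mentioned above is just a parameter on the OpenAI vision request. Below is a minimal sketch, using the official `openai` Node.js client, of what such an image-tagging call can look like; the function name, prompt, and comma-separated tag format are assumptions for illustration, not the project's actual implementation.

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Sketch of an image-tagging request in low-detail mode.
// `detail: "low"` caps the image at a fixed token cost regardless of its size.
async function suggestTagsForImage(imageUrl: string): Promise<string[]> {
  const response = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Suggest a few short tags for this image as a comma-separated list.",
          },
          { type: "image_url", image_url: { url: imageUrl, detail: "low" } },
        ],
      },
    ],
  });

  // Split the comma-separated reply into individual tags.
  const raw = response.choices[0].message.content ?? "";
  return raw.split(",").map((tag) => tag.trim()).filter(Boolean);
}

// Example usage (hypothetical URL):
// suggestTagsForImage("https://example.com/photo.jpg").then(console.log);
```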