
GPTCache

A Library for Creating Semantic Cache for LLM Queries

🚀 What is GPTCache?

ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests.

To tackle this challenge, we have created GPTCache, a project dedicated to building a semantic cache for storing LLM responses.
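
To make "semantic" concrete: instead of matching queries by exact text, GPTCache embeds each query, searches a vector store for previously answered queries with similar embeddings, and serves the cached response when the match is close enough. The sketch below shows a similarity-based configuration; the module paths follow the GPTCache documentation and may vary between versions:

```python
from gptcache import cache
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Embed queries with a local ONNX model; store responses in SQLite
# and index the query embeddings in FAISS for similarity search.
onnx = Onnx()
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=onnx.dimension),
)

# Cache hits are decided by vector distance, not exact string equality,
# so rephrased questions can still be served from the cache.
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```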

😊 Quick Start

Note:

  • You can quickly try GPTCache and put it into a production environment without extensive custom work. However, please note that the repository itself is still under heavy development.
  • By default, only a limited number of libraries are installed to support the basic cache functionalities. When you need to use additional features, the related libraries will be automatically installed.
  • Make sure your Python version is 3.8.1 or higher; check with `python --version`.
  • If you encounter issues installing a library due to an outdated pip version, run: `python -m pip install --upgrade pip`.
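
With those prerequisites in place, a minimal setup looks roughly like this: install with `pip install gptcache`, then initialize the cache and route OpenAI calls through the GPTCache adapter (the names below follow the GPTCache docs; the adapter mirrors the pre-1.0 openai client interface):

```python
from gptcache import cache
from gptcache.adapter import openai  # drop-in stand-in for the openai module

# The default init uses exact string matching, so only the
# base dependencies are needed.
cache.init()
cache.set_openai_key()  # picks up OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is GitHub?"}],
)

# Repeating the same question is now answered from the cache,
# skipping the network round trip to the LLM API.
```

The first call goes to the API as usual; subsequent identical queries hit the cache, which is what saves both cost and latency as traffic grows.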