
Add support for more granular rate limit per API route #21

Open
spikelu2016 opened this issue Dec 2, 2023 · 0 comments
spikelu2016 (Contributor) commented:
Is it possible to define rate limits at the granularity of the OpenAI API, i.e., a different RPM/TPM for each model? The context is that we want to give 100 students simultaneous access to the API through our tier 3 token, so every key needs 35 RPM / 1600 TPM for gpt-3.5-turbo and 50 RPM / 5000 TPM for text-embedding-ada-002. If I understand the documentation correctly, we can currently only set the minimum of each limit?
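For illustration, here is a minimal sketch of what per-model RPM/TPM enforcement could look like, using the limits from the comment above. The names (`FixedWindow`, `LIMITS`, `check_request`) are hypothetical and not part of this project's API; it uses simple fixed one-minute windows rather than whatever algorithm the project actually implements:

```python
import time
from dataclasses import dataclass, field

@dataclass
class FixedWindow:
    """A per-minute budget that resets 60 seconds after its first use."""
    limit: int
    used: int = 0
    start: float = field(default_factory=time.monotonic)

    def allow(self, amount: int = 1) -> bool:
        now = time.monotonic()
        if now - self.start >= 60:  # new minute: reset the budget
            self.start, self.used = now, 0
        if self.used + amount > self.limit:
            return False
        self.used += amount
        return True

# Hypothetical per-model limits matching the numbers in the issue.
LIMITS = {
    "gpt-3.5-turbo": {"rpm": FixedWindow(35), "tpm": FixedWindow(1600)},
    "text-embedding-ada-002": {"rpm": FixedWindow(50), "tpm": FixedWindow(5000)},
}

def check_request(model: str, tokens: int) -> bool:
    """Allow a request only if both the model's RPM and TPM budgets permit it.

    Simplification: if the RPM check passes but the TPM check fails,
    the RPM slot is still consumed; a real implementation would check
    both budgets before committing either.
    """
    buckets = LIMITS[model]
    return buckets["rpm"].allow() and buckets["tpm"].allow(tokens)
```

With these limits, a burst of small chat requests is cut off by the RPM budget (35 per minute), while a few large embedding requests exhaust the TPM budget first, which is exactly the per-model behavior the issue asks for.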
@spikelu2016 spikelu2016 added the enhancement New feature or request label Dec 2, 2023
@spikelu2016 spikelu2016 self-assigned this Dec 2, 2023
@spikelu2016 spikelu2016 reopened this Dec 14, 2023