
ERROR HTTPStatus Exception: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions' #61

Open
ataur39n-sharif opened this issue Oct 9, 2023 · 17 comments

Comments

@ataur39n-sharif

ataur39n-sharif commented Oct 9, 2023

I am using this with Docker. Here I am providing some error messages.

My command:

docker run -it \
  -e OPENAI_API_KEY=API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  readmeai -o readme-ai.md -r https://github.com/ataur39n-sharif/book-catelog-backend

Error:

ERROR HTTPStatus Exception:
Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://httpstatuses.com/429

(Screenshots attached: doc-command, doc-0, doc-1, doc-2.)

@ataur39n-sharif changed the title from "ERROR HTTPStatus Exception: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions' For more information check: https://httpstatuses.com/429" to "ERROR HTTPStatus Exception: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions'" on Oct 9, 2023
@eli64s
Owner

eli64s commented Oct 23, 2023

Hi @ataur39n-sharif, I just pulled the latest image and ran your repo. Can you try once more with the latest and let me know if you still experience this?

Thanks!

@Cro22

Cro22 commented Oct 23, 2023

Same here! I tried with the latest image.

(Screenshots attached.)

@eli64s
Owner

eli64s commented Oct 23, 2023

@Cro22 @ataur39n-sharif Are you using a free OpenAI account, or one with a payment method attached?

@Cro22

Cro22 commented Oct 24, 2023

@eli64s I use a paid OpenAI account.

@ataur39n-sharif
Author

@eli64s I am using a free OpenAI account.

@jatolentino

I'm also getting the same 429 error for my README when using https://readmeai.streamlit.app/

@Aviksaikat

Why not add rate limiting?

@eli64s
Owner

eli64s commented Nov 5, 2023

@Aviksaikat There is a default rate limit setting in the config file.

This seems like a common issue for unpaid accounts, but still happens for paid accounts occasionally. I may need to work on a more robust API implementation to solve this problem for everyone.
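In the meantime, one crude client-side workaround (just a sketch on my end, not something built into readme-ai) is to rerun the CLI with increasing delays between attempts. This assumes readmeai exits with a non-zero status after the 429 error, which may not hold for every failure; substitute your own repository URL.

# Rough retry wrapper around the CLI -- adjust the number of attempts and delays to taste.
for attempt in 1 2 3 4 5; do
    readmeai -o readme-ai.md -r https://github.com/eli64s/readme-ai && break
    echo "Attempt $attempt failed; waiting $((attempt * 30))s before retrying..."
    sleep $((attempt * 30))
done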

@jatolentino

@eli64s should we increase or decrease the rate_limit variable?

@Aviksaikat

It should be decreased.

@Aviksaikat

How can we update the config file?

@Aviksaikat

That rate_limit field is totally useless.

@daan-ef2

Got the same error while using a paid API key. Is there any workaround?

@abhi245y

I encountered the same issue and resolved it by switching the model to gpt-4-1106-preview. After forking the repository and reviewing the code, it appears that the issue stems from a limitation with the default gpt-4 model. The README also indicates that it uses gpt-4-1106-preview. I've implemented these changes in my local files and added a troubleshooting section to the README.

As a temporary fix, you can use the following command:

readmeai --output readme-ai.md --model gpt-4-1106-preview --repository https://github.com/eli64s/readme-ai
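If you're running through Docker like the original report, the same flag should in principle pass through to the container's readmeai invocation. A sketch based on the docker run command at the top of this issue (I haven't verified it against the image):

# Sketch only: the original docker run command from this issue with --model added; not verified.
docker run -it \
  -e OPENAI_API_KEY=API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  readmeai --output readme-ai.md --model gpt-4-1106-preview --repository https://github.com/eli64s/readme-ai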

However, this is my first time contributing to a project, and I'm not entirely sure about the proper procedure.

@eli64s
Owner

eli64s commented Nov 27, 2023

@abhi245y that is correct, using the model engine gpt-4-1106-preview would be a temporary workaround. However, take note of the following before trying this.

Warning

During brief testing of the gpt-4-1106-preview model, I've noticed higher API costs.
If trying this workaround, use the OpenAI API Dashboard to continuously track your API usage and cost.

Thank you,
Eli

@abhi245y

abhi245y commented Nov 27, 2023

Yeah, you're right. When I gave the script a few runs during testing, my usage shot up from $0.21 to $1.40 real quick. But if you're just using it once, it's no big deal.

I also noticed that you switched the model to gpt-4-1106-preview and then reverted it back to gpt-4. At first, I didn't understand why, but now it makes sense.

@alexiuscrow

Had issues with --model gpt-4-1106-preview and --model gpt-3.5-turbo, but it works well with --model gpt-4.


8 participants