
Connection time out with OpenAI API #447

Open

pecto2020 opened this issue Mar 1, 2024 · 3 comments


pecto2020 commented Mar 1, 2024

I really like spacy-llm, but it has been impossible for me to use it. I keep getting connection timeouts with a working OpenAI API key, and after spending so much time setting up the whole framework, running into this is very frustrating.
Here's the error and my config file. Thank you for any help.

Timeout error

```
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=30.0)

During handling of the above exception, another exception occurred:

raise TimeoutError(
TimeoutError: Request time out. Check your network connection and the API's availability.
```

Config.cfg

```ini
[paths]
examples = null

[nlp]
lang = "en"
pipeline = ["llm", "llm_rel"]

[components]

# Named Entity Recognition Component

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v2"
labels = [##]

[components.llm.task.template]
@misc = "spacy.FileReader.v1"
path = "templates//ner_v2.jinja"

[components.llm.task.label_definitions]

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v3"
name = "gpt-3.5-turbo-16k-0613"
config = {"temperature": 0.0, "top_p": 0.0}
max_request_time = 30.0

[components.llm.cache]
@llm_misc = "spacy.BatchCache.v1"
path = "local-cache"
batch_size = 20
max_batches_in_mem = 500

# Relationship Extraction Component

[components.llm_rel]
factory = "llm"

[components.llm_rel.task]
@llm_tasks = "spacy.REL.v1"
labels = [##]

[components.llm_rel.task.label_definitions]

[components.llm_rel.model]
@llm_models = "spacy.GPT-3-5.v3"
name = "gpt-3.5-turbo-16k-0613"
config = {"temperature": 0.0, "top_p": 0.0}
max_request_time = 30.0
```

rmitsch commented Mar 16, 2024

Hi @pecto2020, it seems like you're experiencing connection issues with the OpenAI API. Under which circumstances does this happen?

pecto2020 commented Mar 17, 2024

Hi @rmitsch, the issue arises when I process multiple texts, each around 4,000 tokens in size. I attempted to adjust the batch parameters in the configuration to address this, but I continued to encounter the same error. For now, I've adapted my approach to process the text files individually, sending them to OpenAI one at a time for named entity recognition and relationship extraction, and I also added a sleep(5) after each iteration. Yet the request timeout error persists. Additionally, I've noticed that sometimes the GPT's raw output presents labels as bullet points, as opposed to being comma-separated. It appears that when this formatting occurs, the likelihood of a request timeout error increases, though I'm not sure of a direct correlation (spacy-llm doesn't parse the bullet-point output correctly, so the information extracted by the model is lost: I either get an empty JSON or a request timeout error).
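
For reference, my per-file loop looks roughly like this (a minimal sketch; the directory and config paths are placeholders):

```python
import time
from pathlib import Path

from spacy_llm.util import assemble

# Build the pipeline from the config above (path is a placeholder).
nlp = assemble("config.cfg")

for txt_file in sorted(Path("texts").glob("*.txt")):  # placeholder directory
    text = txt_file.read_text(encoding="utf-8")
    doc = nlp(text)  # each llm component sends its own request to OpenAI
    print(txt_file.name, [(ent.text, ent.label_) for ent in doc.ents])
    time.sleep(5)  # pause between files to ease off the API
```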

Is there anything else you'd like to know about the circumstances?

rmitsch commented Mar 21, 2024

> the issue arises when I process multiple texts, each around 4,000 tokens in size. I attempted to adjust the batch parameters in the configuration to address this, but I continued to encounter the same error.

You're running into rate limits on OpenAI's side. Unfortunately there isn't anything we can do about that. The batch size config won't make a difference here, but you'll want to experiment with `interval` and `max_request_time`.
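
For example, in the model block (and likewise in `[components.llm_rel.model]`) you could raise the timeout and the wait between retries, along the lines of the sketch below. The values are only illustrative starting points, and `max_tries` is an additional retry setting that may or may not be needed for your case:

```ini
[components.llm.model]
@llm_models = "spacy.GPT-3-5.v3"
name = "gpt-3.5-turbo-16k-0613"
config = {"temperature": 0.0, "top_p": 0.0}
# Allow each request more time before timing out:
max_request_time = 120.0
# Wait longer between retries:
interval = 5.0
# Retry a few more times before giving up:
max_tries = 5
```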

> Additionally, I've noticed that sometimes the GPT's raw output presents labels as bullet points, as opposed to being comma-separated. It appears that when this formatting occurs, the likelihood of a request timeout error increases, though I'm not sure of a direct correlation

I don't see a correlation here either. LLM output is not guaranteed to be consistent. Setting temperature to 0 is usually a good start, if you haven't done that yet.
