
Empty responses when using AzureOpenAI #78

Open
BryceAmackerLE opened this issue Nov 17, 2023 · 2 comments

Comments


BryceAmackerLE commented Nov 17, 2023

When replacing OpenAI() with AzureOpenAI() in backend/app/chat/engine.py, chat responses are empty. The frontend will display "Sorry, I either wasn't able to understand your question or I don't have an answer for it.".

I have confirmed that the AzureOpenAI parameters are correct, as I have gotten valid embeddings and chat completion responses back while debugging with the same objects passed to the chat_engine.

This is how I am constructing the chat_llm:

    chat_llm = AzureOpenAI(
        temperature=0,
        streaming=True,
        model=settings.AZURE_OPENAI_CHAT_MODEL_NAME,
        deployment_name=settings.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME,
        api_key=settings.AZURE_OPENAI_API_KEY,
        azure_endpoint=settings.AZURE_OPENAI_API_BASE,
        api_version=settings.AZURE_OPENAI_API_VERSION,
        additional_kwargs={"api_key": settings.AZURE_OPENAI_API_KEY},
    )

I have added those Azure-specific values to config.py and confirmed the parameters are correct by getting valid completions and embeddings via the debugger. They are also exported via `set -a; source .env` before running.
I've also explicitly set AZURE_OPENAI_ENDPOINT.
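For context, the environment variables referenced above would look roughly like this in `.env`. The variable names are taken from the snippet; every value below is a placeholder, and the API version shown is just an example:

```
AZURE_OPENAI_API_KEY=<your-key>
AZURE_OPENAI_API_BASE=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_VERSION=2023-07-01-preview
AZURE_OPENAI_CHAT_MODEL_NAME=<chat-model-name>
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=<chat-deployment-name>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
```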

Has anyone else had success getting this project to work correctly with Azure Open AI?


fchenGT commented Nov 30, 2023

There is an issue with the openai_agent.astream_chat method when the llm is an AzureOpenAI instance rather than an instance of the parent OpenAI class. You can reproduce it as described here:
run-llama/llama_index#9219
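Since the linked issue points to the llm being an AzureOpenAI subclass rather than the parent OpenAI class, the general pitfall is worth illustrating. This is a generic Python sketch with toy classes, not llama-index internals: an exact-type check silently misses subclass instances, while `isinstance` accepts them.

```python
# Toy classes standing in for the real ones; not llama-index source.
class OpenAI:
    ...

class AzureOpenAI(OpenAI):
    ...

llm = AzureOpenAI()

# An exact-type check does NOT match a subclass instance.
exact = type(llm) is OpenAI        # False

# isinstance matches the subclass as well as the parent.
subtype = isinstance(llm, OpenAI)  # True

print(exact, subtype)  # False True
```

If dispatch logic anywhere uses the exact-type pattern, an AzureOpenAI llm would fall through to a non-streaming (or empty) code path even though it supports the same interface as OpenAI.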


fchenGT commented Feb 14, 2024

Confirmed that Bryce's merged fix resolves the issue.

Thank you Bryce!
