
Can I replace OpenAI with the Azure Cognitive Services API, and what modifications would be necessary for this migration? #64

DhananjayanOnline opened this issue Oct 25, 2023 · 5 comments


@DhananjayanOnline

No description provided.

@sourabhdesai
Contributor

@DhananjayanOnline Yes, I think it'd be pretty easy to replace, since the LlamaIndex framework has an implementation of its generic LLM interface for Azure's OpenAI service; see the LlamaIndex docs for how to set this up.

I think the main places where you'd need to make changes in the codebase are in backend/app/chat/engine.py. Specifically, here & here.
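Roughly, the swap might look like the sketch below. All deployment and endpoint values are placeholders, and the exact constructor arguments can vary between LlamaIndex versions, so double-check the docs for the release you're on:

```python
# Hypothetical replacement for the LLM construction in
# backend/app/chat/engine.py, using LlamaIndex's AzureOpenAI wrapper.
from llama_index.llms import AzureOpenAI

llm = AzureOpenAI(
    engine="my-gpt35-deployment",  # your Azure deployment name (placeholder)
    model="gpt-35-turbo",          # the model that deployment serves
    api_key="<azure-openai-api-key>",
    api_base="https://<your-resource>.openai.azure.com/",
    api_type="azure",
    api_version="2023-07-01-preview",
    temperature=0.0,
)
```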

@DhananjayanOnline
Author


@sourabhdesai I've made the changes in the code as per your previous suggestion, but I'm encountering a response that says, 'Sorry, I either couldn't comprehend your question or I don't have an answer for it.' It appears that the engine is returning an empty response.

@YAtidus

YAtidus commented Nov 1, 2023

I'm experiencing the same issue and the same behavior when switching to AzureOpenAI. When I look at the code, I see that the verification of function-call support is here, but I'm not sure why this is happening.
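To illustrate what I mean (a rough sketch, not the repo's actual code): LlamaIndex appears to gate function calling on the model name string, so if the Azure deployment name rather than the underlying model name reaches that check, it could come back False and trigger the empty-response fallback:

```python
# Sketch, assuming llama_index's openai_utils helper; exact behavior
# may vary by version.
from llama_index.llms.openai_utils import is_function_calling_model

print(is_function_calling_model("gpt-35-turbo"))        # expected True (known chat model)
print(is_function_calling_model("my-deployment-name"))  # expected False (unrecognized name)
```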

@DhananjayanOnline
Author

DhananjayanOnline commented Nov 2, 2023


@sourabhdesai I am currently facing an error when using the AzureOpenAI library. The error message I am receiving is as follows:

Traceback (most recent call last):
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai.py", line 166, in get_embeddings
    data = openai.Embedding.create(input=list_of_text, model=engine, **kwargs).data
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 151, in create
    ) = cls.__prepare_create_request(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 85, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>

I've made the necessary changes to the code as per your previous suggestion.
If you have any ideas on how to fix this issue, or potential workarounds, please mention them here.
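Reading the traceback, it looks like openai.Embedding.create() is being called without an Azure deployment id. I wonder if the embedding model needs the deployment name passed explicitly, something like this sketch (values are placeholders, and argument names may differ across llama_index versions):

```python
# Hypothetical embedding setup for Azure, assuming LlamaIndex's
# OpenAIEmbedding wrapper is used for the embedding_model.
from llama_index.embeddings.openai import OpenAIEmbedding

embedding_model = OpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="my-ada-deployment",  # Azure deployment serving the embedding model
    api_key="<azure-openai-api-key>",
    api_base="https://<your-resource>.openai.azure.com/",
    api_type="azure",
    api_version="2023-07-01-preview",
)
```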

@BryceAmackerLE

I am also facing the same issue with empty responses when using the AzureOpenAI class. I've replaced both the llm and embedding_model classes, and I get the same behavior that @DhananjayanOnline describes.

I've confirmed that my parameters are correct, as valid embeddings are being generated and I can get valid responses by calling chat_llm.complete(). I'm wondering if this behavior is specific to AzureOpenAI + async + streaming=True?
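If it helps anyone isolate this, here's a hypothetical minimal repro for the async + streaming path, using the generic LlamaIndex LLM interface (astream_chat); the constructor args are placeholders and should match whatever config already works for chat_llm.complete():

```python
import asyncio

from llama_index.llms import AzureOpenAI, ChatMessage

async def main() -> None:
    # Same AzureOpenAI config that works for synchronous complete() calls.
    llm = AzureOpenAI(engine="my-gpt35-deployment", model="gpt-35-turbo")
    # astream_chat returns an async generator of streamed chat deltas.
    stream = await llm.astream_chat([ChatMessage(role="user", content="Say hi")])
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)

asyncio.run(main())
```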

Has anyone had success with AzureOpenAI so far?
