
[Bug]: LLM Provider NOT provided #1382

Closed
hccnm opened this issue Apr 26, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@hccnm

hccnm commented Apr 26, 2024

Is there an existing issue for the same bug?

Describe the bug

I'm using version 0.4.0 and following the guidance in AzureLLMs.md.
I configured config.toml, but when I executed make run, I encountered the following problem. Is this a bug, or how should I modify the configuration?
[screenshots of the error output]

Current Version

0.4.0

Installation and Configuration

LLM_MODEL="azure/gpt4-1106"
LLM_API_KEY="xxxx"
LLM_BASE_URL="https://xxx.openai.azure.com/"
LLM_EMBEDDING_MODEL="azureopenai"
LLM_EMBEDDING_DEPLOYMENT_NAME="embedding2"
LLM_API_VERSION="2024-02-15-preview"
WORKSPACE_BASE="/opendevin/OpenDevin/workspace"
SANDBOX_TYPE="exec"
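For context, here is a hedged sketch (not OpenDevin's actual code) of how litellm typically consumes Azure settings like the ones in this config. The environment variable names below are litellm's documented Azure variables; the trailing comments map each one to the config value above.

```python
# Hedged sketch: litellm's documented Azure environment variables,
# mapped to the config values in this issue (placeholders kept as-is).
import os

os.environ["AZURE_API_KEY"] = "xxxx"                            # LLM_API_KEY
os.environ["AZURE_API_BASE"] = "https://xxx.openai.azure.com/"  # LLM_BASE_URL
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"          # LLM_API_VERSION

# A completion call would then reference the deployment with the "azure/" prefix, e.g.:
# litellm.completion(model="azure/gpt4-1106", messages=[{"role": "user", "content": "hi"}])
```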

Model and Agent

No response

Reproduction Steps

No response

Logs, Errors, Screenshots, and Additional Context

No response

@hccnm hccnm added the bug Something isn't working label Apr 26, 2024
@enyst
Collaborator

enyst commented Apr 26, 2024

Can you please check this value, "GPT4-1106"?
On the litellm list here https://litellm.vercel.app/docs/providers/azure I find "GPT4-1106-preview" and others, and the table indicates that the corresponding value should be "azure/<your chat model deployment name>". In other words, just as you have defined a deployment in your Azure account for embedding, you should have another one for the chat model you want to use. I'd suggest using that deployment name in LLM_MODEL. It might default to the same name as the chat model, though it probably doesn't have to.

You could quickly try 'GPT4-1106-preview', or check the deployments page in your Azure account.
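To make the naming convention concrete, here is a tiny hedged sketch. The helper function is hypothetical (not part of OpenDevin or litellm); it only illustrates that for Azure, litellm expects the model string to be "azure/" plus the deployment name from your Azure account, which may differ from the underlying model name.

```python
# Hypothetical helper illustrating the convention described above:
# litellm expects model="azure/<deployment name>" for Azure OpenAI.
# The deployment name comes from your Azure account's deployments page
# and may differ from the underlying model name (e.g. "gpt-4-1106-preview").

def azure_model_string(deployment_name: str) -> str:
    """Build the litellm model identifier for an Azure OpenAI deployment."""
    return f"azure/{deployment_name}"

# If your chat deployment is literally named "gpt4-1106":
print(azure_model_string("gpt4-1106"))  # azure/gpt4-1106
```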

@enyst
Collaborator

enyst commented Apr 26, 2024

In your Azure account, there's a "deployments" page/tab, I think, where you can see the names of your deployments. It's that name you need for the chat model. However, one detail: if it's different from the default model name, which it might be, then:

  • start opendevin (make run, if that's how you prefer)
  • open Settings in the UI and add your actual deployment name (for the chat model) in the box. It is specific to your account and may or may not appear in the list, but you can add your own value and save it.

@hccnm
Author

hccnm commented Apr 28, 2024

[screenshot]
Thank you first. Yes, I checked the name of the deployment, and I used the same parameters in litellm directly and it worked fine, so this confuses me.

@enyst
Collaborator

enyst commented Apr 28, 2024

Please make sure to open the web UI and, in Settings, enter the model and save it. Even if you passed it as a parameter, save it in the UI. Does that work?

@hccnm
Author

hccnm commented Apr 30, 2024

Please make sure to open the web UI, and in Settings, enter the model and save. Even if you sent it as parameter, save it in the UI. Does it work?

Now it works.

@hccnm hccnm closed this as completed Apr 30, 2024