
[Bug]: First task throws an error: opendevin:ERROR: agent_controller.py:102 - Error in loop #1422

Open · 2 tasks done
ideaXdao opened this issue Apr 28, 2024 · 7 comments

Labels: bug (Something isn't working) · installation (problems with installation and setup) · severity:low (Minor issues, code cleanup, etc.)

Comments

@ideaXdao

Is there an existing issue for the same bug?

Describe the bug

Using the script/env below, the OpenDevin UI opens fine, but it fails as soon as I give it a simple task. We tried both the Docker install and the base install with "make run"; both produced the same error.

It shows:

STEP 0

04:00:04 - PLAN
2+2
04:00:06 - opendevin:ERROR: agent_controller.py:102 - Error in loop

Current Version

0.4.0

Installation and Configuration

Run Script:
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_BASE=$(pwd)/workspace

echo $WORKSPACE_BASE
# export LLM_BASE_URL="http://host.docker.internal:11434" \
export LLM_BASE_URL="http://127.0.0.1:11434"
export LLM_EMBEDDING_MODEL="phi3"
export LLM_API_KEY="ollama"
export LLM_MODEL="phi3"

docker run \
--add-host=host.docker.internal:host-gateway \
-e LLM_API_KEY="ollama" \
-e LLM_EMBEDDING_MODEL="ollama/phi3:latest" \
-e LLM_MODEL="ollama/phi3:latest" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/root/opendevin/workspace \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
ghcr.io/opendevin/opendevin:0.4.0
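
Note that the exports above set LLM_BASE_URL to http://127.0.0.1:11434 while the docker run line passes http://host.docker.internal:11434 into the container; from inside the container, only the host-gateway alias can reach the host. A quick sanity check that Ollama is reachable that way (assuming Ollama's standard /api/tags endpoint and the public curlimages/curl image):

docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434/api/tags
# Should print JSON listing phi3:latest; a connection error here means the
# container cannot reach the Ollama server on the host.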

Model and Agent

Ollama / Phi3

Reproduction Steps

No response

Logs, Errors, Screenshots, and Additional Context

No response

@ideaXdao added the bug (Something isn't working) label Apr 28, 2024
@ideaXdao (Author)

Sorry, forgot to mention that I do have phi3:latest pulled locally via Ollama on the VPS:

ollama list
NAME           ID              SIZE     MODIFIED
gemma:2b       b50d6c999e59    1.7 GB   33 hours ago
llama2:13b     d475bf4c50bc    7.4 GB   28 hours ago
phi3:latest    a2c89ceaed85    2.3 GB   34 hours ago

@li-boxuan (Collaborator)

Please provide the full logs.

@enyst (Collaborator) commented Apr 28, 2024

Also, when you start the application, go to Settings in the web UI and set the model to "ollama/phi3:latest". You should be able to enter the value and save it even though it's not in the predefined list.
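
A quick way to double-check the exact tag before entering it (assuming Ollama's standard /api/tags endpoint):

curl -s http://127.0.0.1:11434/api/tags
# Each entry's "name" field is the tag Ollama serves; prefix it with
# "ollama/" for litellm, e.g. phi3:latest -> ollama/phi3:latest.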

@zhonggegege

> Sorry, forgot to mention that I do have phi3:latest pulled locally via Ollama on the VPS: [quotes the ollama list output above]

I have tested all the models you used, and none of them are usable; you can save yourself some time. ^^

@ideaXdao (Author)

> Please provide the full logs

(base) root@vmi1667462:~/OpenDevin# ./test.sh
/root/OpenDevin/workspace
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 5.254.112.120:35394 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO: ('138.199.31.27', 49262) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI1OWY1ZjZhZC1kZTNmLTRkZmQtOTE1Ni1jOGRmOGFmMDBiZmEifQ.7RMc9b3Lh-XlWyGSWPgeSdb46GuUg5UpgaUB6WaYMWc" [accepted]
INFO: connection open
03:59:51 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM gpt-3.5-turbo
03:59:51 - opendevin:INFO: llm.py:51 - Initializing LLM with model: gpt-3.5-turbo
03:59:52 - opendevin:INFO: ssh_box.py:353 - Container stopped
03:59:52 - opendevin:INFO: ssh_box.py:373 - Mounting workspace directory: /root/OpenDevin/workspace
03:59:52 - opendevin:INFO: ssh_box.py:396 - Container started
03:59:53 - opendevin:INFO: ssh_box.py:413 - waiting for container to start: 1, container status: running
03:59:54 - opendevin:INFO: ssh_box.py:178 - Connecting to root@host.docker.internal via ssh. If you encounter any issues, you can try ssh -v -p 35245 root@host.docker.internal with the password 'e7c9904f-a6fe-4076-a4e8-641d8a0308fe' and report the issue on GitHub.
Starting loop_recv for sid: 59f5f6ad-de3f-4dfd-9156-c8df8af00bfa
INFO: 5.254.112.120:48824 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO: 5.254.112.120:48832 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 5.254.112.120:48820 - "GET /api/refresh-files HTTP/1.1" 200 OK
INFO: 5.254.112.120:48820 - "GET /api/agents HTTP/1.1" 200 OK

==============
STEP 0

04:00:04 - PLAN
2+2
04:00:06 - opendevin:ERROR: agent_controller.py:102 - Error in loop
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 414, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 373, in completion
response = openai_client.chat.completions.create(**data, timeout=timeout) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 581, in create
return self._post(
^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: 404 page not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1010, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 983, in completion
response = openai_chat_completions.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 420, in completion
raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: 404 page not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/app/opendevin/controller/agent_controller.py", line 98, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 211, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agenthub/monologue_agent/agent.py", line 218, in step
resp = self.llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 379, in call
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 382, in call
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/llm/llm.py", line 78, in wrapper
resp = completion_unwrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2977, in wrapper
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2875, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2137, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8665, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 7461, in exception_type
raise NotFoundError(
litellm.exceptions.NotFoundError: OpenAIException - 404 page not found
^CINFO: Shutting down
04:00:32 - opendevin:INFO: session.py:39 - WebSocket disconnected, sid: 59f5f6ad-de3f-4dfd-9156-c8df8af00bfa
INFO: connection closed
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

Aborted!
04:00:32 - opendevin:INFO: manager.py:37 - Closing 1 agent(s)...
04:00:32 - opendevin:INFO: manager.py:37 - Saving sessions...
04:00:32 - opendevin:INFO: msg_stack.py:42 - Saving messages...

@li-boxuan (Collaborator)

> 03:59:51 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM gpt-3.5-turbo

Looks like your settings are not passed to OpenDevin correctly; it's still trying to use the default gpt-3.5-turbo.
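
That would also explain the 404 in the traceback: with a model named gpt-3.5-turbo, litellm routes through its OpenAI provider, which POSTs to <base_url>/chat/completions, a path Ollama does not serve, so Ollama answers with its plain-text "404 page not found". A hedged illustration, assuming Ollama's standard API:

curl -s  http://127.0.0.1:11434/api/tags           # Ollama's own endpoint: 200, lists models
curl -si http://127.0.0.1:11434/chat/completions   # OpenAI-style path: 404 page not found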

@enyst (Collaborator) commented Apr 28, 2024

@ideaXdao Please open the web UI and set the model there. If you don't find it in the predefined list, ignore that; just enter the value reported by ollama list. Save it, then start a task.

Also, please note that LLM_EMBEDDING_MODEL should be 'local', I believe; it's different from the chat model. But I don't think that's the main issue here: the error above is about the chat model setting. Can you please try that?
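
Putting the two suggestions together, a corrected set of exports for the run script above might look like this (a sketch, not tested here; the variable names are the ones the script already uses, and 'local' follows the suggestion above):

export LLM_MODEL="ollama/phi3:latest"                     # chat model, with litellm's ollama/ prefix
export LLM_EMBEDDING_MODEL="local"                        # embedding model, separate from the chat model
export LLM_BASE_URL="http://host.docker.internal:11434"   # must be reachable from inside the container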

@rbren added the severity:low (Minor issues, code cleanup, etc.) and installation (problems with installation and setup) labels May 2, 2024