
SyntaxError: invalid syntax #9

Open
tranvanhoa533 opened this issue Nov 14, 2023 · 3 comments

@tranvanhoa533
Describe the issue

Issue:
I tried to run your given detection example, "Detect the person and frisbee in the image." Sometimes I get the error: SyntaxError: invalid syntax

Log:

2023-11-14 14:25:00 | INFO | stdout | template_name:  llava_v0
2023-11-14 14:25:00 | INFO | stdout | Messages: [['Human', ('Detect the person and frisbee in the image.\n<image>', <PIL.Image.Image image mode=RGB size=640x428 at 0x7FDD7DFAC5B0>, 'Crop', <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x428 at 0x7FDD7DFAC880>)], ['Assistant', None]]
2023-11-14 14:25:00 | INFO | gradio_web_server | model_name: llava_plus_v0_7b, worker_addr: http://localhost:40000
2023-11-14 14:25:00 | INFO | gradio_web_server | ==== request ====
{'model': 'llava_plus_v0_7b', 'prompt': "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\nDetect the person and frisbee in the image.###Assistant:", 'temperature': 0.2, 'top_p': 0.7, 'max_new_tokens': 512, 'stop': '###', 'images': "List of 1 images: ['e75f292184a71c98df096cba7e880afa']"}
==== request ====
2023-11-14 14:25:05 | ERROR | stderr | Traceback (most recent call last):
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
2023-11-14 14:25:05 | ERROR | stderr |     output = await app.get_blocks().process_api(
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
2023-11-14 14:25:05 | ERROR | stderr |     result = await self.call_function(
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/blocks.py", line 1093, in call_function
2023-11-14 14:25:05 | ERROR | stderr |     prediction = await utils.async_iteration(iterator)
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/utils.py", line 341, in async_iteration
2023-11-14 14:25:05 | ERROR | stderr |     return await iterator.__anext__()
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/utils.py", line 334, in __anext__
2023-11-14 14:25:05 | ERROR | stderr |     return await anyio.to_thread.run_sync(
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
2023-11-14 14:25:05 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-11-14 14:25:05 | ERROR | stderr |     return await future
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-11-14 14:25:05 | ERROR | stderr |     result = context.run(func, *args)
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/miniconda3/envs/llava/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
2023-11-14 14:25:05 | ERROR | stderr |     return next(iterator)
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/hoatv2/codes/LLaVA-Plus/llava/serve/gradio_web_server_llava_plus.py", line 471, in http_bot
2023-11-14 14:25:05 | ERROR | stderr |     yield (state, state.to_gradio_chatbot(with_debug_parameter=with_debug_parameter_from_state)) + (disable_btn,) * 6
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/hoatv2/codes/LLaVA-Plus/llava/conversation.py", line 435, in to_gradio_chatbot
2023-11-14 14:25:05 | ERROR | stderr |     ret = self.merge_output(ret, with_debug_parameter=with_debug_parameter)
2023-11-14 14:25:05 | ERROR | stderr |   File "/data/hoatv2/codes/LLaVA-Plus/llava/conversation.py", line 296, in merge_output
2023-11-14 14:25:05 | ERROR | stderr |     action_json = eval(action)
2023-11-14 14:25:05 | ERROR | stderr |   File "<string>", line 2
2023-11-14 14:25:05 | ERROR | stderr |     "value👉" I will use grounding_dino to help to answer the question. Please wait for a moment.
2023-11-14 14:25:05 | ERROR | stderr |     ^^^^^^^^
2023-11-14 14:25:05 | ERROR | stderr | SyntaxError: invalid syntax
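For context, the traceback shows `merge_output` passing the model's raw reply to `eval()`, so any free-form sentence the model appends after the action dict is not valid Python and raises exactly this error. A minimal repro (the action string below is paraphrased from the log, not the exact model output):

```python
# Minimal repro of the crash: eval() compiles the whole string as a single
# expression, so the free-text sentence on line 2 is invalid Python syntax.
action = (
    '{"some_key": "some value"}\n'
    '"value👉" I will use grounding_dino to help to answer the question.'
)

try:
    action_json = eval(action)  # same call as llava/conversation.py line 296
except SyntaxError as err:
    print(err)  # -> invalid syntax (<string>, line 2), matching the traceback
```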
@davidlight2018

I've run into the same problem too.

@lllllllll-3154

Same problem here. It seems the model can't always produce output in the designed format.

@pedramaghazadeh

The model sometimes has trouble producing the correct output format (thoughts, actions, and values). In my experience, the issue arises whenever the model keeps generating random words after it has already answered the question successfully, which breaks the format that the pattern-matching regex relies on to extract the tool to be used.
Solution
In my case, adding "###\n", "\n###", and a couple of other strings to the stopping criteria cleans up token generation, and the output becomes usable again for the regex that extracts the correct tool.
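A minimal sketch of that idea, using hypothetical names (`EXTRA_STOP_STRINGS`, `parse_action` are not in the repo): it pairs the extra stop strings with a regex plus `ast.literal_eval` parse as a defensive stand-in for the bare `eval(action)` that crashes in `merge_output`:

```python
import ast
import re

# Assumption, not the repo's actual config: extra stop strings added to the
# worker request, so generation halts before the model rambles past the action.
EXTRA_STOP_STRINGS = ["###", "###\n", "\n###"]

def parse_action(action: str):
    """Defensive stand-in for `action_json = eval(action)` in
    llava/conversation.py: grab the outermost {...} span and parse it with
    ast.literal_eval, returning None when the model breaks the format."""
    match = re.search(r"\{.*\}", action, flags=re.DOTALL)  # greedy: first { to last }
    if match is None:
        return None
    try:
        # literal_eval accepts dict/list/str literals but never executes code.
        return ast.literal_eval(match.group(0))
    except (ValueError, SyntaxError):
        return None

# Trailing free text no longer crashes the parse:
print(parse_action('{"actions": []}\n"value👉" I will use grounding_dino.'))
# -> {'actions': []}
```

With something like this, a malformed generation degrades into a plain chat reply instead of a SyntaxError that kills the Gradio request.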
