[Bug/Assistance] - Reproducing Results on Alfworld (HH) (vs. ReAct paper) #127

Open
ai-nikolai opened this issue Mar 9, 2024 · 4 comments
Labels: bug, help wanted

Comments

ai-nikolai commented Mar 9, 2024

Bug / Assistance Description
The results reported in the HH column are very different from those in the ReAct paper. In particular, ReAct reports substantially higher success rates (see the screenshots below).

To Reproduce
See screenshots below. Your results in the HH column indicate 16% success for text-davinci-002 or gpt-3.5-turbo. However, the results using text-davinci-002 in ReAct indicate 78% (second screenshot). This is a significant difference.

Screenshots or Terminal Copy&Paste

AgentBench results table (screenshot)

ReAct paper results table (screenshot)
Concrete Questions / Actions:
Please tell us:

  1. How does your evaluation for Alfworld (HH) differ from ReAct's?
  2. Which exact model did you use?
  3. Which prompts did you use (1-shot, 2-shot), and are they the same as in the ReAct paper?
  4. Why are the results so different?
ai-nikolai added the bug and help wanted labels on Mar 9, 2024
zhc7 (Collaborator) commented Mar 11, 2024

Please read the paper carefully. You can find all the prompts in the appendix or in the code. The results are different because 1. we are not using the same prompt, and 2. we are not using exactly the same environment.

ai-nikolai (Author) commented:
Thanks for getting back to me, @zhc7.

  1. Thanks for clarifying; yes, in appendix G.2 a prompt example can be seen, which I guess corresponds to either of the following (a small inspection sketch is at the end of this comment):
    a. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json
    b. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json

  2. Can you elaborate on how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]

The reason for asking this question is to understand whether you were able to get close to the results reported in ReAct, and what the exact differences might be, as the ReAct results seem very hard to reproduce.
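
For context, this is roughly how I am comparing the AgentBench prompt files against the ReAct ones. It is a minimal sketch that assumes a local checkout of THUDM/AgentBench and that the files are plain JSON; I am not asserting anything about their internal structure, so the script only pretty-prints a truncated view.

```python
# Minimal sketch: dump the AgentBench ALFWorld prompt files so their few-shot
# examples can be compared by eye with the prompts from the ReAct repository.
# Paths assume a local checkout of THUDM/AgentBench; the internal structure
# of the JSON files is an assumption, so we only print a truncated preview.
import json

PROMPT_FILES = [
    "src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json",
    "src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json",
]

for path in PROMPT_FILES:
    with open(path) as f:
        data = json.load(f)
    print(f"===== {path} =====")
    print(json.dumps(data, indent=2)[:2000])  # truncated preview
```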

zhc7 (Collaborator) commented Mar 26, 2024

Hi @ai-nikolai, sorry for the late reply, we've been quite busy lately. To answer your question, I believe the main difference is the prompting technique. We weren't aiming to reproduce ReAct's results, but to design a prompt and an evaluation process that is relatively fair to all the models. The prompts we used are listed in Appendix G of the paper. The evaluation process is located in the ALFWorld task code, in:

async def start_sample(self, index, session: Session) -> TaskSampleExecutionResult:

Can you elaborate on how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]

The main differences are about adapting ALFWorld to the framework and setting some limitations and rules to avoid prolonged evaluation.

To sum up, you may have to do some more investigation on this problem.
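
For illustration only, a round cap around the standard ALFWorld text environment looks roughly like the sketch below. The config path, the eval split, the MAX_ROUNDS value, and the agent() stub are placeholder assumptions for the sketch, not our actual settings or code.

```python
# Illustration only: a round-capped episode on the ALFWorld text environment.
# MAX_ROUNDS, the config path, the eval split, and agent() are placeholder
# assumptions; they are not the actual AgentBench settings or code.
import yaml
import alfworld.agents.environment as environment

MAX_ROUNDS = 35  # assumed cap to keep episodes from running indefinitely

with open("configs/base_config.yaml") as f:
    config = yaml.safe_load(f)

env_cls = getattr(environment, config["env"]["type"])  # e.g. AlfredTWEnv
env = env_cls(config, train_eval="eval_out_of_distribution").init_env(batch_size=1)

def agent(observation: str) -> str:
    """Placeholder for the model under evaluation (prompt + LLM call)."""
    raise NotImplementedError

obs, info = env.reset()
success = False
for _ in range(MAX_ROUNDS):
    action = agent(obs[0])
    obs, scores, dones, infos = env.step([action])
    if dones[0]:
        success = bool(infos["won"][0])  # 'won' is the task-success flag
        break
# Episodes that exhaust MAX_ROUNDS without finishing count as failures.
print("success:", success)
```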
