
SwiftSage

  • We introduce SwiftSage, a novel agent framework inspired by the dual-process theory of human cognition, designed to excel in action planning for complex interactive reasoning tasks. SwiftSage integrates the strengths of behavior cloning and prompting large language models (LLMs) to enhance task completion performance.
  • The framework comprises two primary modules: the Swift module, representing fast and intuitive thinking, and the Sage module, emulating deliberate thought processes. The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories (i.e., imitation learning / behavior cloning), while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding. We develop a heuristic method to harmoniously integrate the two modules, resulting in a more efficient and robust problem-solving process (a rough sketch of this interplay follows this list).
  • On 30 tasks from the ScienceWorld benchmark, SwiftSage significantly outperforms other methods such as SayCan, ReAct, and Reflexion, demonstrating its effectiveness in solving complex interactive tasks.
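As a rough illustration of the interplay described above, here is a minimal sketch of a fast/slow control loop in Python. All names (run_episode, needs_deliberation, the swift/sage interfaces) are hypothetical placeholders, not the repository's API; the actual logic lives in eval_agent_fast_slow.py and is more involved.

# Hypothetical sketch of the SwiftSage fast/slow loop (not the repo's API).

def needs_deliberation(obs, history, action):
    # Placeholder for the integration heuristic, e.g. switch to Sage when the
    # last action was rejected by the environment (assumed condition).
    return "No known action matches" in obs

def run_episode(env, swift, sage, max_steps=100):
    """Roll out one episode with a fast (Swift) and a slow (Sage) policy."""
    obs, history, plan = env.reset(), [], []
    score = 0
    for _ in range(max_steps):
        if plan:                                   # keep executing a Sage plan
            action = plan.pop(0)
        else:
            action = swift.predict(obs, history)   # fast, intuitive proposal
            if needs_deliberation(obs, history, action):
                # Sage (an LLM such as GPT-4) plans subgoals and grounds them
                # into a short buffer of executable actions.
                plan = sage.plan(obs, history)
                action = plan.pop(0)
        obs, score, done = env.step(action)
        history.append((action, obs))
        if done:
            break
    return score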

Authors:

Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren. (AI2-Mosaic and USC-INK).

Comparisons

Framework

TODO:

  • [ ] add support for open-source LLMs (e.g., https://huggingface.co/Salesforce/xgen-7b-8k-inst)
  • [ ] add other tasks such as web tasks and math problems

Installation

conda create -n swiftsage python=3.8 pip
conda activate swiftsage
pip3 install scienceworld==1.1.3
pip3 install -r requirements.txt
pip3 install torch --extra-index-url https://download.pytorch.org/whl/cu116
conda install -c "nvidia/label/cuda-11.6.0" cuda-toolkit
conda install -c conda-forge openjdk # if needed 
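To quickly verify the environment, the following check (plain Python, nothing SwiftSage-specific) confirms that the simulator package imports and that the CUDA build of PyTorch sees a GPU:

# Sanity check for the conda environment created above.
import torch
import scienceworld  # the ScienceWorld simulator installed via pip

print("scienceworld installed at:", scienceworld.__file__)
print("CUDA available:", torch.cuda.is_available())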

Imitation learning

You can skip this step by simply using our checkpoint here: https://huggingface.co/yuchenlin/swift_sw. It is based on Flan-T5-large (770M).
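If you use the released checkpoint, it loads like any seq2seq model on the Hugging Face Hub. The snippet below is a minimal sketch; the input string is an invented placeholder, since the actual prompt format is produced by the preprocessing in data_convert.py.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the released Swift checkpoint (a fine-tuned Flan-T5-large).
tokenizer = AutoTokenizer.from_pretrained("yuchenlin/swift_sw")
model = AutoModelForSeq2SeqLM.from_pretrained("yuchenlin/swift_sw")

# Placeholder context; the real input format follows data_convert.py.
inputs = tokenizer("task: boil water. obs: You are in the kitchen.", return_tensors="pt")
action_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(action_ids[0], skip_special_tokens=True))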

Generating data for imitation learning (behavior cloning)

cd fast_slow_agent/data_utils/
# unzip goldpaths-all.zip 
python data_convert.py 

Train Swift Module

cd fast_agent
bash ds_train.sh  

The SwiftSage Agent

Note that SwiftSage is referred to as fast_slow_agent in the codebase.

bash run_eval_fast_slow.sh

The logs will be saved, and the scripts for showing results and running analysis are in the analysis folder.

Specifically, if you'd like to test the pipeline or debug a particular task and variation:

CUDA_VISIBLE_DEVICES=0 python eval_agent_fast_slow.py \
    --task_nums "28" \
    --set "test_mini" \
    --seed 42 \
    --debug_var "450" \
    --gpt_version "gpt-4" \
    --output_path "fast_slow_logs/tmp_gpt4/"
# you can then check `fast_slow_logs/tmp_gpt4/task28.log` for the progress.

Evaluation

SayCan, ReAct, Reflexion

Please check the baselines folder for the scripts and code.

Other baseline methods

Check out: https://github.com/allenai/ScienceWorld

Citation

@article{Lin2023SwiftSageAG,
    author = {Bill Yuchen Lin and Yicheng Fu and Karina Yang and Prithviraj Ammanabrolu and Faeze Brahman and Shiyu Huang and Chandra Bhagavatula and Yejin Choi and Xiang Ren},
    journal = {ArXiv preprint},
    title = {SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks},
    url = {https://arxiv.org/abs/2305.17390},
    volume = {abs/2305.17390},
    year = {2023}
}