data_set error #498

Open
wangyiyang opened this issue Aug 25, 2023 · 0 comments

wangyiyang commented Aug 25, 2023

Parameters:

torchrun --nproc_per_node 8 src/entry_point/sft_train.py \
    --ddp_timeout 36000 \
    --model_name_or_path ${model_name_or_path} \
    --llama \
    --use_lora \
    --deepspeed configs/deepspeed_config_stage3.json \
    --lora_config configs/lora_config_llama.json \
    --train_file ${train_file} \
    --validation_file ${validation_file} \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --num_train_epochs 10 \
    --model_max_length ${cutoff_len} \
    --save_strategy "steps" \
    --save_total_limit 3 \
    --learning_rate 3e-4 \
    --weight_decay 0.00001 \
    --warmup_ratio 0.01 \
    --lr_scheduler_type "cosine" \
    --logging_steps 10 \
    --evaluation_strategy "steps" \
    --torch_dtype "bfloat16" \
    --bf16 False \
    --fp16 True \
    --seed 1234 \
    --gradient_checkpointing \
    --cache_dir ${cache_dir} \
    --output_dir ${output_dir}
    # --use_flash_attention
    # --resume_from_checkpoint ...
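
For reference, the shell variables consumed by the command above would be defined along these lines before invoking torchrun; the concrete paths and the cutoff value below are placeholders I am assuming, not values taken from the report:

model_name_or_path=/data/Llama-2-7b-hf        # hypothetical: base model path mounted into the container
train_file=/data/BELLE/data/train.jsonl       # hypothetical training set path
validation_file=/data/BELLE/data/dev.jsonl    # hypothetical validation set path
cutoff_len=1024                               # assumed value for --model_max_length
cache_dir=/data/BELLE/cache                   # hypothetical cache directory
output_dir=/data/BELLE/output                 # hypothetical output directory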

GPU: V100s
Docker launch command: docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --network host -it --name dtm -v /data/BELLE:/data/BELLE -v data/Llama-2-7b-hf:/data/Llama-2-7b-hf tothemoon/belle:latest /bin/bash
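
A quick sanity check (my suggestion, not part of the original report) is to confirm the GPUs are visible inside the running container before launching training:

docker exec -it dtm nvidia-smi   # "dtm" is the container name given in the run command above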
