# Video Captioning

Human labeling of videos is expensive and time-consuming, so we adopt powerful image captioning models to generate captions for videos. Although GPT-4V achieves better performance, its ~20 s/sample speed is too slow for our pipeline. With batch inference, LLaVA reaches ~3 s/sample with comparable quality. LLaVA is the second-best open-source model on the MMMU benchmark and accepts images of any resolution.

*(Figure: caption examples.)*

## GPT-4V Captioning

Run the following command to generate captions for videos with GPT-4V:

```bash
python -m tools.caption.caption_gpt4 FOLDER_WITH_VIDEOS output.csv --key $OPENAI_API_KEY
```

The cost is approximately $0.01 per video (3 frames per video). The output is a CSV file with `path` and `caption` columns.
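For reference, here is a minimal sketch of what such a GPT-4V captioning step can look like with the `openai` Python client. The prompt, frame-sampling strategy, and function names are illustrative assumptions, not the actual implementation of `caption_gpt4`:

```python
# Illustrative sketch only; the real script's prompt, frame selection,
# and CSV handling may differ.
import base64

import cv2  # pip install opencv-python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_frames(video_path: str, n: int = 3) -> list[str]:
    """Grab n evenly spaced frames and return them as base64-encoded JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / n))
        ok, frame = cap.read()
        if ok:
            _, buf = cv2.imencode(".jpg", frame)
            frames.append(base64.b64encode(buf).decode())
    cap.release()
    return frames

def caption_video(video_path: str) -> str:
    # One text prompt plus the sampled frames, sent as data URLs.
    content = [{"type": "text", "text": "Describe this video in one sentence."}]
    for b64 in sample_frames(video_path):
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model name at the time of writing
        messages=[{"role": "user", "content": content}],
        max_tokens=120,
    )
    return resp.choices[0].message.content
```

Sampling only 3 frames keeps the per-video token count, and hence the cost, low.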

## LLaVA Captioning

First, install LLaVA following its official instructions. We use the liuhaotian/llava-v1.6-34b model for captioning, which can be downloaded from Hugging Face. Then run the following command to generate captions for videos with LLaVA:

```bash
CUDA_VISIBLE_DEVICES=0,1 python -m tools.caption.caption_llava samples output.csv
```

The Yi-34B-based model requires two 80 GB GPUs and runs at about 3 s/sample. The output is a CSV file with the same `path` and `caption` columns.
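The script above drives the official LLaVA codebase. As a rough illustration of the batch-inference idea, here is a sketch using the Hugging Face port of the same checkpoint (`llava-hf/llava-v1.6-34b-hf`); the prompt template and output trimming are assumptions, not the script's actual code:

```python
# Sketch of batched frame captioning via the HF port of LLaVA-NeXT.
# Requires transformers >= 4.39 and accelerate.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-34b-hf"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spreads the 34B weights across both GPUs
)

# Chat template used by the Yi-34B-based checkpoint; templates are
# checkpoint-specific, so adjust for other LLaVA variants.
PROMPT = (
    "<|im_start|>system\nAnswer the questions.<|im_end|>"
    "<|im_start|>user\n<image>\nDescribe this video frame in detail.<|im_end|>"
    "<|im_start|>assistant\n"
)

def caption_frames(frames: list[Image.Image]) -> list[str]:
    # Padding lets all frames of a batch run in a single forward pass,
    # which is where the ~3 s/sample throughput comes from.
    inputs = processor(
        text=[PROMPT] * len(frames), images=frames,
        padding=True, return_tensors="pt",
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    # Note: the decoded text still contains the prompt; in practice you
    # would trim everything up to the assistant turn.
    return processor.batch_decode(out, skip_special_tokens=True)
```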