
Batch size for prior training #90

Open
eladrich opened this issue Aug 7, 2023 · 0 comments
eladrich commented Aug 7, 2023

Hi,
Great work with the Kandinsky model, the latest improvements look really impressive 🎨

For prior training/tuning, I saw that the default batch size is 1. Is that actually the size used during training, or is a larger batch needed for stable training?
Would it be possible to share the configuration used for training the prior from scratch (the one that took 1M iterations)?
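For context on the question above: even when memory only permits a per-device batch size of 1, a larger effective batch can be simulated with gradient accumulation. This is a minimal generic PyTorch sketch, not the Kandinsky training code; the names (`training_step`, `accumulation_steps`) and the stand-in model are illustrative assumptions.

```python
import torch

# Stand-in for the prior network (illustrative, not the real architecture).
model = torch.nn.Linear(768, 768)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# With a per-step batch size of 1, this gives an effective batch of 32.
accumulation_steps = 32

def training_step(batch, step):
    x, target = batch
    loss = torch.nn.functional.mse_loss(model(x), target)
    # Scale the loss so accumulated gradients average over the effective batch.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
    return loss.item()
```

Whether accumulation reproduces the stability of a true large batch here depends on details (e.g. normalization layers and the optimizer schedule), which is why the original training configuration would still be useful.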
