You should be able to train models based on a workflow in Floneum using either burn or candle.
This can be useful to "flatten" a workflow into a model. For example, suppose you have a workflow that classifies reddit posts. Initially you build that workflow to use llama2 to recognize certain posts. Once the workflow is working, you can hook up a classifier to learn the task the language model is performing. After running it overnight, you should be able to replace the language model with a much faster, smaller classifier.
Initially I would like to support just classifiers, because they can be trained quickly without a GPU, but in the future we could add fine-tuning for language models or image models.
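To make the "flatten" idea concrete, here is a minimal dependency-free sketch of the distillation step: posts that the language model has already labeled become training pairs for a tiny logistic-regression classifier, which can then stand in for the LLM at inference time. The vocabulary, posts, and labels below are hypothetical placeholders for data a real workflow would collect, and a real implementation would use burn or candle rather than hand-rolled gradient descent.

```rust
// Illustrative only: distilling LLM-produced labels into a small classifier.
// In practice the (post, label) pairs would come from running the workflow
// overnight with llama2 as the labeler.

/// Bag-of-words presence features over a fixed (hypothetical) vocabulary.
fn featurize(text: &str, vocab: &[&str]) -> Vec<f64> {
    vocab
        .iter()
        .map(|w| if text.contains(w) { 1.0 } else { 0.0 })
        .collect()
}

fn sigmoid(z: f64) -> f64 {
    1.0 / (1.0 + (-z).exp())
}

/// Probability that `x` matches the task the LLM was performing.
fn predict(weights: &[f64], bias: f64, x: &[f64]) -> f64 {
    let z: f64 = weights.iter().zip(x).map(|(w, xi)| w * xi).sum::<f64>() + bias;
    sigmoid(z)
}

/// Plain stochastic gradient descent on the logistic log-loss.
fn train(data: &[(Vec<f64>, f64)], dim: usize, epochs: usize, lr: f64) -> (Vec<f64>, f64) {
    let mut weights = vec![0.0; dim];
    let mut bias = 0.0;
    for _ in 0..epochs {
        for (x, y) in data {
            // Gradient of the log-loss w.r.t. the logit is (prediction - label).
            let err = predict(&weights, bias, x) - y;
            for (w, xi) in weights.iter_mut().zip(x) {
                *w -= lr * err * xi;
            }
            bias -= lr * err;
        }
    }
    (weights, bias)
}

fn main() {
    // Hypothetical labels the LLM assigned: 1.0 = "post matches the task".
    let vocab = ["rust", "compiler", "recipe", "baking"];
    let posts = [
        ("new rust compiler release", 1.0),
        ("rust borrow checker question", 1.0),
        ("sourdough baking recipe", 0.0),
        ("easy weeknight recipe ideas", 0.0),
    ];
    let data: Vec<_> = posts
        .iter()
        .map(|(t, y)| (featurize(t, &vocab), *y))
        .collect();

    let (weights, bias) = train(&data, vocab.len(), 500, 0.5);

    // The distilled classifier now labels new posts without the LLM.
    let score = predict(&weights, bias, &featurize("rust compiler internals", &vocab));
    println!("{}", score > 0.5);
}
```

Because a linear model like this runs in microseconds on a CPU, swapping it in for the language model is where the speedup in the proposal comes from; the trade-off is that it only generalizes within the distribution of posts the LLM labeled.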