Description
I am proposing the addition of a feature to visualize training and evaluation metrics during or after AutoGluon training, similar to the functionality provided by TensorBoard in TensorFlow. This feature would be particularly beneficial for the tabular module, but could potentially be extended to the multimodal and timeseries modules as well.
The ideal implementation would allow users to access training and evaluation metrics in a structured format (e.g., a DataFrame) after training, and to visualize these metrics through dynamically updating graphs while training is in progress. This would not only improve users' understanding of model behavior, but also speed up debugging and tuning of model parameters.
Proposed API Changes
from autogluon.tabular import TabularDataset, TabularPredictor
from autogluon.visualization import TrainingMonitor  # proposed new module (does not exist yet)
train_data = TabularDataset('train.csv')  # placeholder path for the training data
predictor = TabularPredictor(label='target').fit(train_data)
monitor = TrainingMonitor(predictor)
monitor.plot_metrics()  # dynamically updating plots of training and validation metrics over epochs
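To cover the post-training, structured-access part of the proposal, the monitor could also expose the collected metrics directly. The sketch below builds on the monitor object above and is purely illustrative: the metrics() method and its column names ('model', 'epoch', 'val_score') are assumptions about what such an API might return, not existing AutoGluon functionality.
import matplotlib.pyplot as plt
history = monitor.metrics()  # hypothetical: tidy pandas DataFrame of per-epoch metrics
for model_name, group in history.groupby('model'):
    plt.plot(group['epoch'], group['val_score'], label=model_name)  # one curve per trained model
plt.xlabel('epoch')
plt.ylabel('validation score')
plt.legend()
plt.show()
Returning a plain DataFrame would keep the feature framework-agnostic: users who prefer their own plotting stack could skip plot_metrics() entirely.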
Open-Source Implementations
TensorBoard provides an excellent example of real-time plotting and logging capabilities integrated into TensorFlow training routines. It serves as a benchmark for what could be implemented in AutoGluon.
MLflow's Tracking API is another relevant tool that allows logging metrics, parameters, and artifacts to help visualize the machine learning lifecycle. https://mlflow.org/docs/latest/tracking.html
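As a concrete illustration of how the tooling above could be reused rather than re-implemented, AutoGluon's per-model training loops could emit scalars through TensorBoard's SummaryWriter (and, analogously, mlflow.log_metric). The callback hook shown here (on_epoch_end) is an assumption about how AutoGluon might wire this up, not a current API; the SummaryWriter calls themselves are standard PyTorch/TensorBoard usage.
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir='runs/autogluon_experiment')  # any log directory works
def on_epoch_end(model_name, epoch, train_loss, val_loss):
    # One scalar stream per model, so curves can be compared side by side in the TensorBoard UI
    writer.add_scalar(f'{model_name}/train_loss', train_loss, epoch)
    writer.add_scalar(f'{model_name}/val_loss', val_loss, epoch)
With logs written this way, running tensorboard --logdir runs would provide the dynamically updating graphs described above without AutoGluon having to ship its own plotting frontend.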
References
TensorBoard: Visualization toolkit for machine learning experimentation. https://www.tensorflow.org/tensorboard
Scikit-learn plotting API: Model visualization utilities for inspection. https://scikit-learn.org/stable/visualizations.html