
MLflow

Track ML experiments, register models, and compare runs — all from your Calliope workspace.

Overview

MLflow is the standard open-source platform for managing the machine learning lifecycle. Inside Calliope, MLflow runs as a shared service so your whole team can log experiments, track metrics, compare model versions, and manage artifacts — without any infrastructure setup. Whether you’re training in a Lab notebook or running scripts in the IDE, MLflow is ready to receive your run data.

Key Features

  • Experiment Tracking — Log parameters, metrics, and artifacts from every training run
  • Run Comparison — Side-by-side comparison of runs across metrics and parameters
  • Model Registry — Version, stage, and promote models (Staging → Production)
  • Artifact Storage — Save model files, plots, datasets, and any files alongside runs
  • Auto-logging — One-line integration with scikit-learn, PyTorch, TensorFlow, XGBoost, and more

Getting Started

  1. From the Hub, click MLflow to launch the tracking UI
  2. Browse existing experiments or create a new one
  3. In your notebook or script, log to MLflow:

```python
import mlflow

mlflow.set_experiment("my-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 50)

    # ... training code ...

    mlflow.log_metric("accuracy", 0.94)
    mlflow.log_metric("loss", 0.12)
    mlflow.log_artifact("model.pkl")
```

  4. Switch to the MLflow UI to see your run appear

Auto-logging

Enable one-line auto-logging for popular frameworks:

```python
import mlflow

# scikit-learn
mlflow.sklearn.autolog()

# PyTorch Lightning
mlflow.pytorch.autolog()

# TensorFlow/Keras
mlflow.tensorflow.autolog()

# XGBoost
mlflow.xgboost.autolog()
```

Auto-logging captures parameters, metrics, and model artifacts automatically without manual log calls.

Comparing Runs

In the MLflow UI:

  1. Go to your experiment
  2. Select multiple runs using the checkboxes
  3. Click Compare to open a side-by-side view
  4. View parameter diffs, metric charts, and artifact comparisons

Model Registry

Promote your best models through stages:

  1. After a successful run, click Register Model in the run view
  2. Create a new model or add a version to an existing one
  3. Transition versions through stages: None → Staging → Production → Archived
  4. Load registered models by name in downstream code:
```python
model = mlflow.sklearn.load_model("models:/my-model/Production")
```

Connecting from Your Workspace

The MLflow tracking server is pre-configured in your Calliope environment. No URI setup needed — just import mlflow and start logging:

```python
import mlflow
# Already pointed at your Calliope MLflow server
```

If you need to set the tracking URI explicitly:

```python
mlflow.set_tracking_uri("https://your-hub/mlflow")
```

When to Use MLflow

| Task                                 | Tool                      |
| ------------------------------------ | ------------------------- |
| Tracking model training runs         | MLflow                    |
| Comparing hyperparameter experiments | MLflow                    |
| Registering production models        | MLflow                    |
| Writing training code                | AI Notebook Lab or AI IDE |
| Data preparation and EDA             | AI Notebook Lab           |