How Can We Build Scalable and Reproducible Machine Learning Experiment Pipelines Using Meta Research Hydra?
In this tutorial, we explore Hydra, a powerful configuration management framework originally developed and open-sourced by Meta Research. We begin by defining structured configurations using Python dataclasses, which allows us to manage experiment parameters in a clean, modular, and reproducible way. As we move through the tutorial, we compose configurations, apply runtime overrides, and simulate multirun experiments for hyperparameter sweeps.
import subprocess
import sys
subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", "hydra-core"])
import hydra
from hydra import compose, initialize_config_dir
from omegaconf import OmegaConf, DictConfig
from dataclasses import dataclass, field
from typing import List, Optional
import os
from pathlib import Path
We begin by installing Hydra and importing all the essential modules required for structured configurations, dynamic composition, and file handling. This setup ensures our environment is ready to execute the full tutorial seamlessly on Google Colab.
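As a quick optional sanity check, we can print the installed package versions before moving on. This is a small sketch under the assumption that the installed hydra-core and omegaconf releases expose __version__ attributes, as recent versions do.

import hydra
import omegaconf

# Optional sanity check (assumes recent releases, which expose __version__).
print(f"hydra-core: {hydra.__version__}")
print(f"omegaconf: {omegaconf.__version__}")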
@dataclass
class OptimizerConfig:
    _target_: str = "torch.optim.SGD"
    lr: float = 0.01

@dataclass
class AdamConfig(OptimizerConfig):
    _target_: str = "torch.optim.Adam"
    lr: float = 0.001
    betas: tuple = (0.9, 0.999)
    weight_decay: float = 0.0

@dataclass
class SGDConfig(OptimizerConfig):
    _target_: str = "torch.optim.SGD"
    lr: float = 0.01
    momentum: float = 0.9
    nesterov: bool = True

@dataclass
class ModelConfig:
    name: str = "resnet"
    num_layers: int = 50
    hidden_dim: int = 512
    dropout: float = 0.1

@dataclass
class DataConfig:
    dataset: str = "cifar10"
    batch_size: int = 32
    num_workers: int = 4
    augmentation: bool = True

@dataclass
class TrainingConfig:
    model: ModelConfig = field(default_factory=ModelConfig)
    data: DataConfig = field(default_factory=DataConfig)
    optimizer: OptimizerConfig = field(default_factory=AdamConfig)
    epochs: int = 100
    seed: int = 42
    device: str = "cuda"
    experiment_name: str = "exp_001"
We define clean, type-safe configurations using Python dataclasses for the model, data, and optimizer settings. This structure allows us to manage complex experiment parameters in a modular and readable way while ensuring consistency across runs.
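To see that type safety in action before any YAML is involved, here is a minimal sketch (not part of the tutorial script) that validates one of the dataclasses directly with OmegaConf; the ValidationError behavior shown is standard OmegaConf structured-config handling.

from omegaconf import OmegaConf
from omegaconf.errors import ValidationError

# A minimal sketch: OmegaConf.structured enforces the declared field types.
cfg = OmegaConf.structured(ModelConfig())
print(cfg.name)  # resnet

try:
    cfg.num_layers = "fifty"  # a str that cannot be coerced to int
except ValidationError as e:
    print(f"Rejected invalid assignment: {e}")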
def setup_config_dir():
    config_dir = Path("./hydra_configs")
    config_dir.mkdir(exist_ok=True)
    main_config = """
defaults:
  - model: resnet
  - data: cifar10
  - optimizer: adam
  - _self_

epochs: 100
seed: 42
device: cuda
experiment_name: exp_001
"""
    (config_dir / "config.yaml").write_text(main_config)

    model_dir = config_dir / "model"
    model_dir.mkdir(exist_ok=True)
    (model_dir / "resnet.yaml").write_text("""
name: resnet
num_layers: 50
hidden_dim: 512
dropout: 0.1
""")
    (model_dir / "vit.yaml").write_text("""
name: vision_transformer
num_layers: 12
hidden_dim: 768
dropout: 0.1
patch_size: 16
""")

    data_dir = config_dir / "data"
    data_dir.mkdir(exist_ok=True)
    (data_dir / "cifar10.yaml").write_text("""
dataset: cifar10
batch_size: 32
num_workers: 4
augmentation: true
""")
    (data_dir / "imagenet.yaml").write_text("""
dataset: imagenet
batch_size: 128
num_workers: 8
augmentation: true
""")

    opt_dir = config_dir / "optimizer"
    opt_dir.mkdir(exist_ok=True)
    (opt_dir / "adam.yaml").write_text("""
_target_: torch.optim.Adam
lr: 0.001
betas: [0.9, 0.999]
weight_decay: 0.0
""")
    (opt_dir / "sgd.yaml").write_text("""
_target_: torch.optim.SGD
lr: 0.01
momentum: 0.9
nesterov: true
""")
    return str(config_dir.absolute())
We programmatically create a directory containing YAML configuration files for models, datasets, and optimizers. This approach lets us demonstrate how Hydra automatically composes configurations from separate files, maintaining flexibility and readability across experiments.
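Before composing anything, it helps to confirm the layout on disk. The following short sketch (an assumed helper, not part of the original script) walks the generated directory so we can check that it matches the config-group structure Hydra expects: config.yaml at the root and one subdirectory per group.

import os

# Walk the generated config directory and print its tree.
config_dir = setup_config_dir()
for root, _, files in sorted(os.walk(config_dir)):
    depth = root.replace(config_dir, "").count(os.sep)
    print(f"{'  ' * depth}{os.path.basename(root)}/")
    for f in sorted(files):
        print(f"{'  ' * (depth + 1)}{f}")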
@hydra.main(version_base=None, config_path="hydra_configs", config_name="config")
def train(cfg: DictConfig) -> float:
    print("=" * 80)
    print("CONFIGURATION")
    print("=" * 80)
    print(OmegaConf.to_yaml(cfg))
    print("\n" + "=" * 80)
    print("ACCESSING CONFIGURATION VALUES")
    print("=" * 80)
    print(f"Model: {cfg.model.name}")
    print(f"Dataset: {cfg.data.dataset}")
    print(f"Batch Size: {cfg.data.batch_size}")
    print(f"Optimizer LR: {cfg.optimizer.lr}")
    print(f"Epochs: {cfg.epochs}")
    best_acc = 0.0
    for epoch in range(min(cfg.epochs, 3)):
        acc = 0.5 + (epoch * 0.1) + (cfg.optimizer.lr * 10)
        best_acc = max(best_acc, acc)
        print(f"Epoch {epoch+1}/{cfg.epochs}: Accuracy = {acc:.4f}")
    return best_acc
We implement a training function that leverages Hydra's configuration system to print, access, and use nested config values. By simulating a simple training loop, we show how Hydra cleanly integrates experiment configuration into real workflows.
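In a standalone script, @hydra.main also wires this function to the command line. The sketch below shows standard Hydra CLI usage, assuming the code above were saved as train.py (a hypothetical file name); we cannot launch it this way inside Colab, which is why the demos that follow rely on the compose API instead.

# Standard Hydra CLI usage for a script exposing train() via @hydra.main
# (a sketch; hypothetical file name train.py, not executed in this notebook):
#
#   python train.py                        # defaults: resnet + cifar10 + adam
#   python train.py model=vit epochs=50    # swap a config group, override a key
#   python train.py --multirun optimizer=adam,sgd optimizer.lr=0.001,0.01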
def demo_basic_usage():
    print("\n" + "=" * 80 + "\nDEMO 1: Basic Configuration\n" + "=" * 80)
    config_dir = setup_config_dir()
    with initialize_config_dir(version_base=None, config_dir=config_dir):
        cfg = compose(config_name="config")
        print(OmegaConf.to_yaml(cfg))

def demo_config_override():
    print("\n" + "=" * 80 + "\nDEMO 2: Configuration Overrides\n" + "=" * 80)
    config_dir = setup_config_dir()
    with initialize_config_dir(version_base=None, config_dir=config_dir):
        cfg = compose(
            config_name="config",
            overrides=[
                "model=vit",
                "data=imagenet",
                "optimizer=sgd",
                "optimizer.lr=0.1",
                "epochs=50"
            ]
        )
        print(OmegaConf.to_yaml(cfg))

def demo_structured_config():
    print("\n" + "=" * 80 + "\nDEMO 3: Structured Config Validation\n" + "=" * 80)
    from hydra.core.config_store import ConfigStore
    cs = ConfigStore.instance()
    cs.store(name="training_config", node=TrainingConfig)
    with initialize_config_dir(version_base=None, config_dir=setup_config_dir()):
        cfg = compose(config_name="config")
        print(f"Config type: {type(cfg)}")
        print(f"Epochs (validated as int): {cfg.epochs}")

def demo_multirun_simulation():
    print("\n" + "=" * 80 + "\nDEMO 4: Multirun Simulation\n" + "=" * 80)
    config_dir = setup_config_dir()
    experiments = [
        ["model=resnet", "optimizer=adam", "optimizer.lr=0.001"],
        ["model=resnet", "optimizer=sgd", "optimizer.lr=0.01"],
        ["model=vit", "optimizer=adam", "optimizer.lr=0.0001"],
    ]
    results = {}
    for i, overrides in enumerate(experiments):
        print(f"\n--- Experiment {i+1} ---")
        with initialize_config_dir(version_base=None, config_dir=config_dir):
            cfg = compose(config_name="config", overrides=overrides)
            print(f"Model: {cfg.model.name}, Optimizer: {cfg.optimizer._target_}")
            print(f"Learning Rate: {cfg.optimizer.lr}")
            results[f"exp_{i+1}"] = cfg
    return results

def demo_interpolation():
    print("\n" + "=" * 80 + "\nDEMO 5: Variable Interpolation\n" + "=" * 80)
    cfg = OmegaConf.create({
        "model": {"name": "resnet", "layers": 50},
        "experiment": "${model.name}_${model.layers}",
        "output_dir": "/outputs/${experiment}",
        "checkpoint": "${output_dir}/best.ckpt"
    })
    print(OmegaConf.to_yaml(cfg))
    print(f"\nResolved experiment name: {cfg.experiment}")
    print(f"Resolved checkpoint path: {cfg.checkpoint}")
We demonstrate Hydra's advanced capabilities, including config overrides, structured config validation, multirun simulations, and variable interpolation. Each demo shows how Hydra accelerates experimentation, reduces manual setup, and fosters reproducibility in research.
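As a natural extension of the multirun simulation in DEMO 4, here is a minimal sketch that builds a full override grid with itertools.product, mirroring the Cartesian sweep a real --multirun launch would perform; the model and learning-rate values are illustrative.

from itertools import product

# A minimal sketch: compose every combination in a Cartesian override grid.
models = ["model=resnet", "model=vit"]
lrs = ["optimizer.lr=0.001", "optimizer.lr=0.01"]

config_dir = setup_config_dir()
for overrides in product(models, lrs):
    with initialize_config_dir(version_base=None, config_dir=config_dir):
        cfg = compose(config_name="config", overrides=list(overrides))
        print(f"{cfg.model.name}: lr={cfg.optimizer.lr}")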
if __name__ == "__main__":
    demo_basic_usage()
    demo_config_override()
    demo_structured_config()
    demo_multirun_simulation()
    demo_interpolation()
    print("\n" + "=" * 80)
    print("Tutorial complete! Key takeaways:")
    print("✓ Config composition with defaults")
    print("✓ Runtime overrides via command line")
    print("✓ Structured configs with type safety")
    print("✓ Multirun for hyperparameter sweeps")
    print("✓ Variable interpolation")
    print("=" * 80)
We execute all demonstrations in sequence to observe Hydra in action, from loading configs to performing multiruns. By the end, we summarize the key takeaways, reinforcing how Hydra enables scalable and elegant experiment management.
In conclusion, we have seen how Hydra, pioneered by Meta Research, simplifies and enhances experiment management through its powerful composition system. We explored structured configs, interpolation, and multirun capabilities that make large-scale machine learning workflows more flexible and maintainable. With this knowledge, you are now equipped to integrate Hydra into your own research or development pipelines, ensuring reproducibility, efficiency, and clarity in every experiment you run.