LazyTune
A fast and efficient hyperparameter optimization framework for scikit-learn models. Dramatically reduces training time with a smart screening → pruning → full-training pipeline.
Installation
Install LazyTune via pip. No extra configuration needed — all dependencies are pulled in automatically.
Requires Python 3.8+, numpy, pandas, and scikit-learn.
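The one-liner below assumes the package is published on PyPI under the name `lazytune` (matching the import name used throughout these docs):

```shell
# Install from PyPI (package name assumed from the import "lazytune")
pip install lazytune
```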
Quick Start
Get up and running with `RandomForestClassifier` on the breast cancer dataset in under a minute.
```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lazytune import SmartSearch

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100, 150, 200],
    "max_depth": [5, 10, 15, None],
    "min_samples_split": [2, 3, 4, 5]
}

search = SmartSearch(
    estimator=RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    metric="accuracy",
    cv_folds=3,
    prune_ratio=0.5,  # keep top 50% after screening
    n_jobs=-1         # use all available cores
)

search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV score: ", search.best_score_)
print("\nBest model:\n", search.best_estimator_)
```
SVM Classification
Use SmartSearch with a Support Vector Machine to tune `C`, `kernel`, and `gamma` together.
```python
from sklearn.svm import SVC
from lazytune import SmartSearch

search = SmartSearch(
    estimator=SVC(random_state=42),
    param_grid={
        "C": [0.1, 1, 10, 50, 100],
        "kernel": ["linear", "rbf"],
        "gamma": ["scale", "auto", 0.001, 0.0001]
    },
    metric="f1_macro",
    cv_folds=5,
    prune_ratio=0.6
)
```
Regression
Works identically for regression — just switch the estimator and use a regression metric like `r2`.
```python
from sklearn.ensemble import RandomForestRegressor
from lazytune import SmartSearch

search = SmartSearch(
    estimator=RandomForestRegressor(random_state=42),
    param_grid={
        "n_estimators": [100, 200, 300, 500],
        "max_depth": [8, 12, 16, None],
        "min_samples_split": [2, 4, 8]
    },
    metric="r2",
    cv_folds=4,
    n_jobs=-1
)
```
Supported Metrics
LazyTune supports all scikit-learn scoring strings. Pass any of them as the `metric` argument. For custom metrics, use `sklearn.metrics.make_scorer`.
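For example, a custom metric can be wrapped with the standard scikit-learn `make_scorer` pattern. The metric function below (`within_one`) is purely illustrative, not part of LazyTune:

```python
import numpy as np
from sklearn.metrics import make_scorer

# Hypothetical custom metric: fraction of predictions within
# one class label of the truth (illustrative only).
def within_one(y_true, y_pred):
    return float((np.abs(y_true - y_pred) <= 1).mean())

scorer = make_scorer(within_one, greater_is_better=True)

# Pass the scorer object wherever a scoring string is accepted, e.g.
# SmartSearch(estimator=..., param_grid=..., metric=scorer)
```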
How It Works
LazyTune's four-phase pipeline eliminates wasted compute compared to brute-force GridSearchCV — while typically reaching identical final performance.
Generate Combinations
All hyperparameter combinations are produced from the user-defined `param_grid`.
Screening Round
Every candidate is quickly evaluated with cross-validation using minimal resources — just enough to rank relative performance.
Rank & Prune
Candidates are sorted by screening score. The bottom `prune_ratio` fraction is eliminated before full training begins.
Full Training
Only the top-ranked survivors are trained fully. The best model, parameters, score, and a detailed trial summary are returned.
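The four phases above can be sketched with plain scikit-learn primitives. This is a minimal illustration of the screening → pruning → full-training idea, not LazyTune's actual implementation; the function name and fold counts here are hypothetical:

```python
from sklearn.base import clone
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterGrid, cross_val_score

def smart_search_sketch(estimator, param_grid, X, y, prune_ratio=0.5):
    # Phase 1: generate all hyperparameter combinations.
    candidates = list(ParameterGrid(param_grid))

    # Phase 2: cheap screening round — few folds, just enough to rank.
    screened = []
    for params in candidates:
        est = clone(estimator).set_params(**params)
        score = cross_val_score(est, X, y, cv=2).mean()
        screened.append((score, params))

    # Phase 3: rank by screening score, prune the bottom fraction.
    screened.sort(key=lambda t: t[0], reverse=True)
    keep = max(1, int(len(screened) * (1 - prune_ratio)))
    survivors = screened[:keep]

    # Phase 4: full cross-validation on the survivors only.
    best_score, best_params = -float("inf"), None
    for _, params in survivors:
        est = clone(estimator).set_params(**params)
        score = cross_val_score(est, X, y, cv=5).mean()
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

With `prune_ratio=0.5`, half of the grid never receives a full cross-validation run, which is where the compute savings come from.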
API Reference
All functionality is exposed through the `SmartSearch` class.
```python
SmartSearch(
    estimator,        # any scikit-learn style estimator
    param_grid,       # dict of param -> list of values
    metric,           # scoring string or make_scorer object
    cv_folds=3,       # number of CV folds for screening
    prune_ratio=0.5,  # fraction to prune (0.0 = keep all)
    n_jobs=1          # parallel workers (-1 = all cores)
)
```
Attributes
| Attribute | Type | Description |
|---|---|---|
| `best_params_` | dict | Best found hyperparameter dictionary. |
| `best_score_` | float | Best cross-validated score achieved. |
| `best_estimator_` | estimator | Fully fitted estimator with best parameters. |
| `summary_` | DataFrame | pandas DataFrame with all trial results and rankings. |
| `cv_results_` | dict | Detailed cross-validation results per candidate. |
Methods
| Method | Description |
|---|---|
| `.fit(X, y)` | Run the full optimization pipeline on training data. |
| `.predict(X)` | Predict using the best found estimator. |
| `.score(X, y)` | Score the best estimator on given data. |
| `.get_params()` | Get parameters for this estimator. |
| `.set_params(**params)` | Set parameters of this estimator. |
Requirements
- Python ≥ 3.8
- numpy
- pandas
- scikit-learn
All dependencies are installed automatically via pip.
License & Author
LazyTune is released under the MIT License — free to use, modify, and distribute.
Built by Anik Chand. Feedback, issues, stars, and contributions are very welcome!
GitHub
Check out the code in the GitHub repository.