Examples
We provide an extensive set of examples on our GitHub page. An overview is given below.
Torch backend
Example 1: How to train PGBM on CPU.
Example 1a: How to train PGBM on CPU (Jupyter Notebook).
Example 2: How to train PGBM on GPU.
Example 4: How to train PGBM using a validation loop.
Example 5: How PGBM compares to NGBoost.
Example 6: How PGBM training time compares to LightGBM.
Example 7: How the choice of output distribution can be optimized after training.
Example 8: How to use automatic differentiation for loss functions where no analytical gradient or hessian is provided.
Example 9: How to plot the feature importance of a learner after training using partial dependence plots.
Example 9a: How to plot the feature importance of a learner after training using Shapley values.
Example 10: How we employed PGBM to forecast Covid-19 daily hospital admissions in the Netherlands.
Example 11: How to save and load a PGBM model, and how to train and predict on different devices (CPU or GPU).
Example 12: How to continue training and use checkpoints to save model state during training.
Example 15: How to use monotone constraints to improve model performance.
Scikit-learn backend
Example 1: How to train PGBM.
Example 4: How to train PGBM using a validation loop.
Example 5: How PGBM compares to NGBoost.
Example 6: How PGBM compares to LightGBM.
Example 7: How parameters can be optimized using GridSearchCV.
Example 9: How to plot the feature importance of a learner after training using Shapley values.
Example 11: How to save and load a PGBM model.
Example 12: How to continue training after saving a model.
Example 13: How to use monotone constraints to improve model performance.
Example 14: How HistGradientBoostingRegressor with PGBM fares against quantile regression methods.
Torch-distributed backend
Example 13: How to train on the housing dataset using our distributed backend.
Example 14: How to train on the Higgs dataset using our distributed backend.