Train

Training functions in PyCaret

compare_models

This function trains and evaluates the performance of all estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the get_metrics function. Custom metrics can be added or removed using the add_metric and remove_metric functions.
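As a minimal sketch of how these metric helpers can be used once setup has been run (the diabetes dataset from the example below is reused here, and sklearn's log_loss is used purely for illustration):
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# list the metrics evaluated during CV
get_metrics()

# add a custom metric (log loss from sklearn, lower is better)
from sklearn.metrics import log_loss
add_metric('logloss', 'Log Loss', log_loss, greater_is_better = False)

# remove the custom metric again
remove_metric('logloss')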

Example

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models()
Output from compare_models
compare_models returns only the top-performing model based on the criterion defined in the sort parameter, which defaults to Accuracy for classification experiments and R2 for regression. You can change the sort order by passing the name of the metric on which you want to base model selection.

Change the sort order

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(sort = 'F1')
Output from compare_models(sort = 'F1')
Notice that the sort order of the scoring grid has changed and that the best model returned by the function is now selected based on F1.
print(best)
Output from print(best)

Compare only a few models

If you don't want to run a horse race on the entire model library, you can compare only a few models of your choice using the include parameter.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(include = ['lr', 'dt', 'lightgbm'])
Output from compare_models(include = ['lr', 'dt', 'lightgbm'])
Alternatively, you can use the exclude parameter. This will compare all models except the ones passed in exclude.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(exclude = ['lr', 'dt', 'lightgbm'])
Output from compare_models(exclude = ['lr', 'dt', 'lightgbm'])

Return more than one model

By default, compare_models returns only the top-performing model, but you can also get the top N models instead of just one by using the n_select parameter.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(n_select = 3)
Output from compare_models(n_select = 3)
Notice that there is no change in the displayed results; however, if you check the variable best, it now contains a list of the top 3 models instead of a single model as seen previously.
type(best)
# >>> list

print(best)
Output from print(best)
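Each element of that list is a fitted model and can be used directly with other PyCaret functions. A minimal sketch, generating hold-out predictions for the top-ranked model (predict_model uses the hold-out set when no data is passed):
# use the top-ranked model from the list
top_model = best[0]

# generate predictions on the hold-out set
holdout_preds = predict_model(top_model)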

Set the budget time

If you are running short on time and would like to set a fixed time budget for this function, you can do so with the budget_time parameter (expressed in minutes).
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(budget_time = 0.5)
Output from compare_models(budget_time = 0.5)

Set the probability threshold

When performing binary classification, you can change the probability threshold or cut-off value used to convert predicted probabilities into hard labels. By default, all classifiers use a threshold of 0.5.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(probability_threshold = 0.25)
Output from compare_models(probability_threshold = 0.25)
Notice that all metrics except AUC are now different. AUC doesn't change because it doesn't depend on the hard labels; everything else does, and the hard labels are now obtained using probability_threshold=0.25.
NOTE: This parameter is only available in the Classification module of PyCaret.

Disable cross-validation

If you would rather just train the models and see the metrics on the test/hold-out set instead of evaluating them with cross-validation, you can set cross_validation=False.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# compare models
best = compare_models(cross_validation=False)
Output from compare_models(cross_validation=False)
The output looks similar, but if you look closely the metrics are now different: instead of average cross-validated scores, these are the metrics on the test/hold-out set.
NOTE: This function is only available in Classification and Regression modules.

Distributed training on a cluster

To scale to large datasets, you can run the compare_models function on a cluster in distributed mode using the parallel parameter. It leverages the Fugue abstraction layer to run compare_models on Spark or Dask clusters.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable', n_jobs = 1)

# create pyspark session
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

# import parallel back-end
from pycaret.parallel import FugueBackend

# compare models
best = compare_models(parallel = FugueBackend(spark))
Output from compare_models(parallel = FugueBackend(spark))
Note that we need to set n_jobs = 1 in the setup for testing with local Spark because some models will already try to use all available cores, and running such models in parallel can cause deadlocks from resource contention.
For Dask, we can pass "dask" to FugueBackend and it will pull the available Dask client.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable', n_jobs = 1)

# import parallel back-end
from pycaret.parallel import FugueBackend

# compare models
best = compare_models(parallel = FugueBackend("dask"))
For the complete example and other features related to distributed execution, check this example. It also shows how to get the leaderboard in real time; in a distributed setting, this involves setting up an RPCClient, but Fugue simplifies that.
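Locally, a minimal sketch of retrieving the leaderboard of all models trained so far uses the get_leaderboard function (assuming setup and compare_models have already been run in the current session):
# retrieve the leaderboard of all trained models as a DataFrame
lb = get_leaderboard()
lb.head()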

create_model

This function trains and evaluates the performance of a given estimator using cross-validation. The output of this function is a scoring grid with CV scores by fold. Metrics evaluated during CV can be accessed using the get_metrics function. Custom metrics can be added or removed using the add_metric and remove_metric functions. All the available models can be accessed using the models function.

Example

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train logistic regression
lr = create_model('lr')
Output from create_model('lr')
This function displays the performance metrics by fold, along with the mean and standard deviation of each metric, and returns the trained model. By default, it uses 10-fold cross-validation; this can be changed either globally in the setup function or locally within create_model.

Changing the fold param

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train logistic regression
lr = create_model('lr', fold = 5)
Output from create_model('lr', fold = 5)
The model returned is the same as above; however, the performance evaluation is now done using 5-fold cross-validation.

Model library

To check the list of available models in any module, you can use the models function.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# check available models
models()
Output from models()

Models with custom param

When you just run create_model('dt'), it trains a Decision Tree with all default hyperparameter settings. If you would like to change them, simply pass the hyperparameters to the create_model function.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train decision tree
dt = create_model('dt', max_depth = 5)
Output from create_model('dt', max_depth = 5)
# see model params
print(dt)

Access the scoring grid

The performance metrics/scoring grid you see after create_model is only displayed and is not returned. If you want to access that grid as a pandas.DataFrame, you have to use the pull command after create_model.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train decision tree
dt = create_model('dt', max_depth = 5)

# access the scoring grid
dt_results = pull()
print(dt_results)
Output from print(dt_results)
# check type
type(dt_results)
# >>> pandas.core.frame.DataFrame

# select only Mean
dt_results.loc[['Mean']]
Output from dt_results.loc[['Mean']]

Disable cross-validation

If you would rather just train the model and see the metrics on the test/hold-out set instead of evaluating it with cross-validation, you can set cross_validation=False.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train model without cv
lr = create_model('lr', cross_validation = False)
Output from create_model('lr', cross_validation = False)
These are the metrics on the test/hold-out set. That's why you see only one row, as opposed to the 12 rows in the original output. When you disable cross_validation, the model is trained only once, on the entire training dataset, and scored using the test/hold-out set.
NOTE: This function is only available in Classification and Regression modules.

Return train score

The default scoring grid shows the performance metrics on the validation set by fold. If you also want to see the performance metrics on the training set by fold, to examine over-fitting/under-fitting, you can use the return_train_score parameter.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train model and return train score as well
lr = create_model('lr', return_train_score = True)
Output from create_model('lr', return_train_score = True)

Set the probability threshold

When performing binary classification, you can change the probability threshold or cut-off value used to convert predicted probabilities into hard labels. By default, all classifiers use a threshold of 0.5.
# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train model with 0.25 threshold
lr = create_model('lr', probability_threshold = 0.25)
Output from create_model('lr', probability_threshold = 0.25)
# see the model
print(lr)
Output from print(lr)

Train models in a loop

You can use the create_model function in a loop to train multiple models or even the same model with different configurations and compare their results.
import numpy as np
import pandas as pd

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# train models in a loop
lgbs = [create_model('lightgbm', learning_rate = i) for i in np.arange(0.1,1,0.1)]
type(lgbs)
# >>> list

len(lgbs)
# >>> 9
If you also want to keep track of the metrics, as you will in most cases, this is how you can do it.
import numpy as np
import pandas as pd

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# start a loop
models = []
results = []

for i in np.arange(0.1,1,0.1):
    model = create_model('lightgbm', learning_rate = i)
    model_results = pull().loc[['Mean']]
    models.append(model)
    results.append(model_results)

results = pd.concat(results, axis=0)
results.index = np.arange(0.1,1,0.1)
results.plot()
Output from results.plot()
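From here you can, for example, pick the configuration with the best mean score. A minimal sketch, assuming the default classification scoring grid (which contains an 'Accuracy' column):
# select the learning rate with the highest mean Accuracy
best_pos = int(np.argmax(results['Accuracy'].values))
best_lgb = models[best_pos]
print(results.index[best_pos])  # learning rate of the best run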

Train custom models

You can train your own custom models or models from other libraries that are not part of PyCaret. As long as their API is consistent with sklearn, it will work like a breeze.
# install gplearn library
# pip install gplearn

# load dataset
from pycaret.datasets import get_data
diabetes = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data = diabetes, target = 'Class variable')

# import custom model
from gplearn.genetic import SymbolicClassifier
sc = SymbolicClassifier()

# train custom model
sc_trained = create_model(sc)
Output from create_model(sc)
type(sc_trained)
# >>> gplearn.genetic.SymbolicClassifier

print(sc_trained)
Output from print(sc_trained)

Write your own models

You can also write your own class with fit and predict functions; PyCaret will be compatible with it. Here is a simple example:
# load dataset
from pycaret.datasets import get_data
insurance = get_data('insurance')

# init setup
from pycaret.regression import *
reg1 = setup(data = insurance, target = 'charges')

# create custom estimator
import numpy as np
from sklearn.base import BaseEstimator

class MyOwnModel(BaseEstimator):

    def __init__(self):
        self.mean = 0

    def fit(self, X, y):
        self.mean = y.mean()
        return self

    def predict(self, X):
        return np.array(X.shape[0]*[self.mean])

# create an instance
my_own_model = MyOwnModel()

# train model
my_model_trained = create_model(my_own_model)
Output from create_model(my_own_model)
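The trained custom estimator behaves like any other PyCaret model. As a minimal sketch, it can be scored on the hold-out set with predict_model (which uses the hold-out set when no data is passed):
# generate predictions on the hold-out set with the custom model
holdout_preds = predict_model(my_model_trained)
holdout_preds.head()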