Quickstart
Get Up and Running in No Time: A Beginner's Guide to PyCaret
Classification
PyCaret's Classification Module is a supervised machine learning module used for classifying elements into groups.
The goal is to predict categorical class labels, which are discrete and unordered. Common use cases include predicting customer default (yes or no), predicting customer churn (the customer will leave or stay), and disease diagnosis (positive or negative).
This module can be used for binary or multiclass problems.
Setup
This function initializes the training environment and creates the transformation pipeline. The `setup` function must be called before executing any other function. It takes two required parameters: `data` and `target`. All the other parameters are optional.
PyCaret 3.0 has two APIs. You can choose either one based on your preference; the functionalities and experiment results are consistent.
Functional API
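For example, a minimal sketch using PyCaret's built-in diabetes dataset (the dataset and `session_id` are illustrative choices):

```python
# Functional API: load a sample dataset and initialize the experiment
from pycaret.datasets import get_data
from pycaret.classification import setup

data = get_data('diabetes')                                # built-in sample dataset
s = setup(data, target='Class variable', session_id=123)   # session_id fixes the random seed
```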
OOP API
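The same experiment through the OOP API (a sketch; the experiment object replaces the module-level functions):

```python
# OOP API: the same experiment via the ClassificationExperiment class
from pycaret.datasets import get_data
from pycaret.classification import ClassificationExperiment

data = get_data('diabetes')
exp = ClassificationExperiment()
exp.setup(data, target='Class variable', session_id=123)
```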
Compare Models
This function trains and evaluates the performance of all the estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the `get_metrics` function. Custom metrics can be added or removed using the `add_metric` and `remove_metric` functions.
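Continuing the functional-API session above (a sketch; `best` holds the top-ranked model):

```python
# Rank every estimator in the model library by cross-validated performance
from pycaret.classification import compare_models

best = compare_models()   # displays the scoring grid and returns the best model
```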
Analyze Model
This function analyzes the performance of a trained model on the test set. It may require re-training the model in certain cases. `evaluate_model` can only be used in a Notebook since it uses `ipywidgets`. You can also use the `plot_model` function to generate plots individually.
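For example, a sketch of two common plots (plot names follow PyCaret's documented options):

```python
from pycaret.classification import evaluate_model, plot_model

evaluate_model(best)                       # interactive widget (Notebook only)
plot_model(best, plot='auc')               # ROC curve
plot_model(best, plot='confusion_matrix')  # confusion matrix
```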
Predictions
This function scores the data and returns `prediction_label` and `prediction_score` (the probability of the predicted class). When `data` is None, it predicts the label and score on the test set (created during the `setup` function).
The evaluation metrics are calculated on the test set. The second output is the `pd.DataFrame` with predictions on the test set (see the last two columns). To generate labels on an unseen (new) dataset, simply pass the dataset in the `data` parameter of the `predict_model` function.
`prediction_score` means the probability of the predicted class (NOT the positive class). If `prediction_label` is 0 and `prediction_score` is 0.90, this means a 90% probability of class 0. If you want to see the probabilities of both classes, simply pass `raw_score=True` in the `predict_model` function.
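A sketch of the three call patterns (`new_data` is a hypothetical unseen DataFrame, not part of the original):

```python
from pycaret.classification import predict_model

holdout_preds = predict_model(best)                  # test-set predictions + metrics
new_preds = predict_model(best, data=new_data)       # new_data: hypothetical unseen data
all_probs = predict_model(best, data=new_data, raw_score=True)  # probabilities for every class
```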
Save the model
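For example (a sketch; the filename is illustrative):

```python
from pycaret.classification import save_model

save_model(best, 'my_best_pipeline')  # writes my_best_pipeline.pkl; filename is illustrative
```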
To load the model back in the environment:
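```python
from pycaret.classification import load_model

loaded = load_model('my_best_pipeline')  # same illustrative filename as the save step
```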
Regression
PyCaret's Regression Module is a supervised machine learning module that is used for estimating the relationships between a dependent variable (often called the "outcome variable" or "target") and one or more independent variables (often called "features", "predictors", or "covariates").
The objective of regression is to predict continuous values such as sales amount, quantity, temperature, etc.
Setup
This function initializes the training environment and creates the transformation pipeline. The `setup` function must be called before executing any other function. It takes two required parameters: `data` and `target`. All the other parameters are optional.
PyCaret 3.0 has two APIs. You can choose either one based on your preference; the functionalities and experiment results are consistent.
Functional API
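For example, a minimal sketch using PyCaret's built-in insurance dataset (the dataset and `session_id` are illustrative):

```python
# Functional API: load a sample dataset and initialize the regression experiment
from pycaret.datasets import get_data
from pycaret.regression import setup

data = get_data('insurance')
s = setup(data, target='charges', session_id=123)
```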
OOP API
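The same experiment through the OOP API (a sketch):

```python
# OOP API: the same experiment via the RegressionExperiment class
from pycaret.datasets import get_data
from pycaret.regression import RegressionExperiment

data = get_data('insurance')
exp = RegressionExperiment()
exp.setup(data, target='charges', session_id=123)
```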
Compare Models
This function trains and evaluates the performance of all the estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the `get_metrics` function. Custom metrics can be added or removed using the `add_metric` and `remove_metric` functions.
Analyze Model
This function analyzes the performance of a trained model on the test set. It may require re-training the model in certain cases. `evaluate_model` can only be used in a Notebook since it uses `ipywidgets`. You can also use the `plot_model` function to generate plots individually.
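For example (a sketch; plot names follow PyCaret's documented options):

```python
from pycaret.regression import plot_model

plot_model(best, plot='residuals')  # residuals plot
plot_model(best, plot='feature')    # feature importance
```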
Predictions
This function predicts `prediction_label` using the trained model. When `data` is None, it generates predictions on the test set (created during the `setup` function).
The evaluation metrics are calculated on the test set. The second output is the `pd.DataFrame` with predictions on the test set (see the last two columns). To generate labels on an unseen (new) dataset, simply pass the dataset in the `predict_model` function.
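A sketch of both call patterns (`new_data` is a hypothetical unseen DataFrame):

```python
from pycaret.regression import predict_model

holdout_preds = predict_model(best)             # test-set predictions + metrics
new_preds = predict_model(best, data=new_data)  # predictions on hypothetical unseen data
```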
Save the model
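For example (a sketch; the filename is illustrative):

```python
from pycaret.regression import save_model

save_model(best, 'my_regression_pipeline')  # writes my_regression_pipeline.pkl
```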
To load the model back in the environment:
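```python
from pycaret.regression import load_model

loaded = load_model('my_regression_pipeline')  # same illustrative filename as above
```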
Clustering
PyCaret's Clustering Module is an unsupervised machine learning module that performs the task of grouping a set of objects in such a way that objects in the same group (also known as a cluster) are more similar to each other than to those in other groups.
Setup
This function initializes the training environment and creates the transformation pipeline. The `setup` function must be called before executing any other function. It takes only one required parameter: `data`. All the other parameters are optional.
PyCaret 3.0 has two APIs. You can choose either one based on your preference; the functionalities and experiment results are consistent.
Functional API
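For example, a minimal sketch using PyCaret's built-in jewellery dataset (the dataset and `session_id` are illustrative):

```python
# Functional API: no target is needed for unsupervised clustering
from pycaret.datasets import get_data
from pycaret.clustering import setup

data = get_data('jewellery')
s = setup(data, session_id=123)
```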
OOP API
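The same experiment through the OOP API (a sketch):

```python
# OOP API: the same experiment via the ClusteringExperiment class
from pycaret.datasets import get_data
from pycaret.clustering import ClusteringExperiment

data = get_data('jewellery')
exp = ClusteringExperiment()
exp.setup(data, session_id=123)
```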
Create Model
This function trains and evaluates the performance of a given model. Metrics evaluated can be accessed using the `get_metrics` function. Custom metrics can be added or removed using the `add_metric` and `remove_metric` functions. All the available models can be accessed using the `models` function.
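For example (a sketch; the model ID and cluster count are illustrative):

```python
from pycaret.clustering import create_model

kmeans = create_model('kmeans', num_clusters=4)  # train a K-Means model with 4 clusters
```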
Analyze Model
This function analyzes the performance of a trained model. `evaluate_model` can only be used in a Notebook since it uses `ipywidgets`. You can also use the `plot_model` function to generate plots individually.
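For example (a sketch; plot names follow PyCaret's documented options):

```python
from pycaret.clustering import plot_model

plot_model(kmeans, plot='elbow')       # elbow plot for choosing the cluster count
plot_model(kmeans, plot='silhouette')  # silhouette plot
```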
Assign Model
This function assigns cluster labels to the training data, given a trained model.
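For example (a sketch):

```python
from pycaret.clustering import assign_model

result = assign_model(kmeans)  # training data with an appended cluster-label column
```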
Predictions
This function generates cluster labels using a trained model on the new/unseen dataset.
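A sketch (`new_data` is a hypothetical unseen DataFrame):

```python
from pycaret.clustering import predict_model

preds = predict_model(kmeans, data=new_data)  # cluster labels for hypothetical unseen data
```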
Save the model
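For example (a sketch; the filename is illustrative):

```python
from pycaret.clustering import save_model

save_model(kmeans, 'my_kmeans_pipeline')
```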
To load the model back in the environment:
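```python
from pycaret.clustering import load_model

loaded = load_model('my_kmeans_pipeline')  # same illustrative filename as above
```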
Anomaly Detection
PyCaret's Anomaly Detection Module is an unsupervised machine learning module that is used for identifying rare items, events, or observations that raise suspicions by differing significantly from the majority of the data.
Typically, the anomalous items translate to some kind of problem, such as bank fraud, a structural defect, medical problems, or errors.
Setup
This function initializes the training environment and creates the transformation pipeline. The `setup` function must be called before executing any other function. It takes only one required parameter: `data`. All the other parameters are optional.
PyCaret 3.0 has two APIs. You can choose either one based on your preference; the functionalities and experiment results are consistent.
Functional API
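For example, a minimal sketch using PyCaret's built-in anomaly dataset (the dataset and `session_id` are illustrative):

```python
# Functional API: no target is needed for unsupervised anomaly detection
from pycaret.datasets import get_data
from pycaret.anomaly import setup

data = get_data('anomaly')
s = setup(data, session_id=123)
```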
OOP API
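The same experiment through the OOP API (a sketch):

```python
# OOP API: the same experiment via the AnomalyExperiment class
from pycaret.datasets import get_data
from pycaret.anomaly import AnomalyExperiment

data = get_data('anomaly')
exp = AnomalyExperiment()
exp.setup(data, session_id=123)
```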
Create Model
This function trains an unsupervised anomaly detection model. All the available models can be accessed using the `models` function.
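For example (a sketch; 'iforest' is the Isolation Forest model ID in PyCaret's model library):

```python
from pycaret.anomaly import create_model

iforest = create_model('iforest')  # train an Isolation Forest detector
```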
Analyze Model
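For example (a sketch; the plot name follows PyCaret's documented options):

```python
from pycaret.anomaly import plot_model

plot_model(iforest, plot='tsne')  # t-SNE plot separating inliers and outliers
```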
Assign Model
This function assigns anomaly labels to the dataset for a given model (1 = outlier, 0 = inlier).
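For example (a sketch):

```python
from pycaret.anomaly import assign_model

result = assign_model(iforest)  # data with appended anomaly label and score columns
```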
Predictions
This function generates anomaly labels using a trained model on the new/unseen dataset.
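A sketch (`new_data` is a hypothetical unseen DataFrame):

```python
from pycaret.anomaly import predict_model

preds = predict_model(iforest, data=new_data)  # anomaly labels for hypothetical unseen data
```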
Save the model
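For example (a sketch; the filename is illustrative):

```python
from pycaret.anomaly import save_model

save_model(iforest, 'my_iforest_pipeline')
```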
To load the model back in the environment:
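```python
from pycaret.anomaly import load_model

loaded = load_model('my_iforest_pipeline')  # same illustrative filename as above
```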
Time Series
PyCaret Time Series module is a powerful tool for analyzing and predicting time series data using machine learning and classical statistical techniques. This module enables users to easily perform complex time series forecasting tasks by automating the entire process from data preparation to model deployment.
PyCaret Time Series Forecasting module supports a wide range of forecasting methods such as ARIMA, Prophet, and LSTM. It also provides various features to handle missing values, time series decomposition, and data visualizations.
Setup
This function initializes the training environment and creates the transformation pipeline. The `setup` function must be called before executing any other function.
PyCaret 3.0 has two APIs. You can choose either one based on your preference; the functionalities and experiment results are consistent.
Functional API
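For example, a minimal sketch using PyCaret's built-in airline dataset (the dataset, horizon, and `session_id` are illustrative):

```python
# Functional API: univariate forecasting setup
from pycaret.datasets import get_data
from pycaret.time_series import setup

data = get_data('airline')
s = setup(data, fh=3, session_id=123)  # fh = forecast horizon (3 periods ahead)
```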
OOP API
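The same experiment through the OOP API (a sketch):

```python
# OOP API: the same experiment via the TSForecastingExperiment class
from pycaret.datasets import get_data
from pycaret.time_series import TSForecastingExperiment

data = get_data('airline')
exp = TSForecastingExperiment()
exp.setup(data, fh=3, session_id=123)
```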
Compare Models
This function trains and evaluates the performance of all the estimators available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores. Metrics evaluated during CV can be accessed using the `get_metrics` function. Custom metrics can be added or removed using the `add_metric` and `remove_metric` functions.
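Continuing the session above (a sketch):

```python
from pycaret.time_series import compare_models

best = compare_models()  # cross-validated ranking of forecasters
```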
Analyze Model
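For example (a sketch; 'forecast' is one of PyCaret's documented time series plot names):

```python
from pycaret.time_series import plot_model

plot_model(best, plot='forecast')  # out-of-sample forecast plot
```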
Predictions
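A sketch (for time series, `fh` can extend the horizon beyond the setup value; 24 is illustrative):

```python
from pycaret.time_series import predict_model

preds = predict_model(best, fh=24)  # forecast 24 periods ahead
```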
Save the model
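For example (a sketch; the filename is illustrative):

```python
from pycaret.time_series import save_model

save_model(best, 'my_forecast_pipeline')
```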
To load the model back in the environment:
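```python
from pycaret.time_series import load_model

loaded = load_model('my_forecast_pipeline')  # same illustrative filename as above
```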