💡Functions
All functions in PyCaret
This function initializes the experiment in PyCaret and prepares the transformation pipeline based on all the parameters passed to the function. The setup function must be called before executing any other function. It requires only two parameters: data and target. All the other parameters are optional.
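For example, a minimal sketch of the setup function using the classification module (the diabetes dataset, target column, and session_id are illustrative choices):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup

data = get_data('diabetes')  # example dataset bundled with PyCaret
s = setup(data, target='Class variable', session_id=123)  # only data and target are required
```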
This function trains and evaluates the performance of all the models available in the model library using cross-validation. The output of this function is a scoring grid with average cross-validated scores.
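This describes compare_models; a quick illustrative call after the same example setup:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models

setup(get_data('diabetes'), target='Class variable', session_id=123)
best = compare_models()              # cross-validates every model and returns the best one
# best_three = compare_models(n_select=3)  # optionally return the top 3 models as a list
```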
This function trains and evaluates the performance of a given model using cross-validation. The output of this function is a scoring grid with cross-validated scores along with mean and standard deviation.
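This describes create_model; for example, continuing the same illustrative setup:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')          # cross-validates a single model (logistic regression)
dt = create_model('dt', fold=5)  # the number of folds can be changed per call
```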
This function tunes the hyperparameters of a given model. The output of this function is a scoring grid with cross-validated scores of the best model. Search spaces are pre-defined, with the flexibility to provide your own. The search algorithm can be random, Bayesian, or one of a few others, with the ability to scale on large clusters.
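This describes tune_model; a minimal sketch (the optuna backend shown in the comment is optional and requires optuna to be installed):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, tune_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
dt = create_model('dt')
tuned_dt = tune_model(dt)                             # random search over a pre-defined space
# tuned_dt = tune_model(dt, search_library='optuna')  # optional: swap the search backend
```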
This function ensembles a given model. The output of this function is a scoring grid with cross-validated scores of the ensembled model. Two methods, Bagging or Boosting, can be used for ensembling.
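This describes ensemble_model; for example, with the same illustrative setup:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, ensemble_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
dt = create_model('dt')
bagged_dt = ensemble_model(dt, method='Bagging')
boosted_dt = ensemble_model(dt, method='Boosting')
```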
This function trains a Soft Voting / Majority Rule classifier for given models in a list. The output of this function is a scoring grid with cross-validated scores of a Voting Classifier or Regressor.
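This describes blend_models; a minimal sketch using three illustrative base models:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, blend_models

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
dt = create_model('dt')
knn = create_model('knn')
blender = blend_models([lr, dt, knn])  # soft voting / majority rule classifier
```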
This function trains a meta-model over given models in a list. The output of this function is a scoring grid with cross-validated scores of a Stacking Classifier or Regressor.
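This describes stack_models; for example:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, stack_models

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
dt = create_model('dt')
knn = create_model('knn')
stacker = stack_models([lr, dt, knn])  # trains a meta-model on top of the listed models
```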
This function optimizes the probability threshold for a given model. It iterates over performance metrics at different probability thresholds and returns a plot with performance metrics on the y-axis and threshold on the x-axis.
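This describes optimize_threshold; a quick illustrative call:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, optimize_threshold

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
optimize_threshold(lr)  # plots metric values across probability thresholds
```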
This function calibrates the probability of a given model using isotonic or logistic regression. The output of this function is a scoring grid with cross-validated scores of the calibrated classifier.
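This describes calibrate_model; for example:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, calibrate_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
dt = create_model('dt')
calibrated_dt = calibrate_model(dt, method='isotonic')  # 'sigmoid' (logistic) is the default
```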
This function analyzes the performance of a trained model on the hold-out set. It may require re-training the model in certain cases.
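This corresponds to plot_model; a minimal sketch with two common plot types:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, plot_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
plot_model(lr, plot='auc')               # ROC curve
plot_model(lr, plot='confusion_matrix')  # confusion matrix on the hold-out set
```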
This function uses ipywidgets to display a basic user interface for analyzing the performance of a trained model.
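This describes evaluate_model; for example:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, evaluate_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
evaluate_model(lr)  # renders an ipywidgets UI; only works inside a notebook
```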
This function analyzes the predictions generated from a trained model. Most plots in this function are implemented based on SHAP (SHapley Additive exPlanations).
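This describes interpret_model; a minimal sketch with a tree-based model (the shap library must be installed for these plots):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, interpret_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lightgbm = create_model('lightgbm')
interpret_model(lightgbm)                                 # SHAP summary plot
interpret_model(lightgbm, plot='reason', observation=1)   # force plot for a single row
```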
This function generates the interactive dashboard for a trained model. The dashboard is implemented using the ExplainerDashboard project.
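This describes the dashboard function; for example (the explainerdashboard package must be installed):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, dashboard

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
dashboard(lr)  # launches an interactive ExplainerDashboard in the browser
```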
This function provides fairness-related metrics between different groups in the dataset for a given model. There are many approaches to evaluating fairness, but this function uses the approach known as group fairness, which asks which groups of individuals are at risk of experiencing harm.
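This describes check_fairness; a sketch using the bundled income dataset, whose 'sex' column is used here as an illustrative sensitive feature:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, check_fairness

income = get_data('income')  # example dataset with demographic columns
setup(income, target='income >50K', session_id=123)
lr = create_model('lr')
check_fairness(lr, sensitive_features=['sex'])  # group fairness metrics split by 'sex'
```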
This function returns the leaderboard of all models trained in the current setup.
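This describes get_leaderboard; for example:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, get_leaderboard

setup(get_data('diabetes'), target='Class variable', session_id=123)
compare_models()
lb = get_leaderboard()  # DataFrame of every model trained in this setup
```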
This function assigns labels to the training dataset using the trained model. It is only available for unsupervised modules.
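This describes assign_model; a minimal sketch with the clustering module (the jewellery dataset is an illustrative choice):

```python
from pycaret.datasets import get_data
from pycaret.clustering import setup, create_model, assign_model

setup(get_data('jewellery'), session_id=123)  # unsupervised: no target column
kmeans = create_model('kmeans')
labeled = assign_model(kmeans)  # training data with an added cluster label column
```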
This function generates predictions (labels) using a trained model. When no unseen data is passed, it predicts the label and score on the holdout set.
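This describes predict_model; for example, scoring both the hold-out set and new data (here the training frame with the target dropped stands in for unseen data):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, predict_model

data = get_data('diabetes')
setup(data, target='Class variable', session_id=123)
lr = create_model('lr')
holdout_preds = predict_model(lr)                                        # scores the hold-out set
new_preds = predict_model(lr, data=data.drop('Class variable', axis=1))  # scores unseen data
```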
This function refits a given model on the entire dataset.
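This describes finalize_model; for example:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, finalize_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
final_lr = finalize_model(lr)  # refits on the entire dataset, including the hold-out set
```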
This function saves the ML pipeline as a pickle file for later use.
This function loads a previously saved pipeline.
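These two descriptions refer to save_model and load_model; a minimal sketch (the file name is an illustrative choice):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, save_model, load_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
save_model(lr, 'my_pipeline')       # writes my_pipeline.pkl containing the whole pipeline
loaded = load_model('my_pipeline')  # later, or in another session
```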
This function saves an experiment to a pickle file.
This function loads an experiment back into Python from a pickle file.
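These two descriptions refer to save_experiment and load_experiment; a sketch assuming the data is supplied again on load, since the dataset itself is not pickled with the experiment:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, save_experiment, load_experiment

data = get_data('diabetes')
setup(data, target='Class variable', session_id=123)
save_experiment('my_experiment')                   # pickles the experiment state
exp = load_experiment('my_experiment', data=data)  # restore it later with the same data
```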
This function generates a drift report file using the evidently library.
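A sketch, assuming this refers to check_drift in recent PyCaret releases and that it can be called without arguments after setup (using the train/test split as reference and current data); consult the API reference for the exact signature:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, check_drift

setup(get_data('diabetes'), target='Class variable', session_id=123)
check_drift()  # assumption: writes an HTML drift report built with evidently
```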
This function deploys the entire ML pipeline on the cloud.
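This describes deploy_model; a sketch for AWS, where the bucket name is a placeholder and cloud credentials are assumed to be configured in the environment:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, finalize_model, deploy_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = finalize_model(create_model('lr'))
deploy_model(lr, model_name='lr-deployment', platform='aws',
             authentication={'bucket': 'my-s3-bucket'})  # placeholder bucket name
```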
This function transpiles the trained machine learning model's decision function into different programming languages such as Python, C, Java, Go, C#, etc.
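This describes convert_model; for example (the m2cgen package must be installed):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, convert_model

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
lr_java = convert_model(lr, language='java')  # returns Java source for the decision function
print(lr_java)
```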
This function takes an input model and creates a POST API for inference. It only creates the API and doesn't run it automatically.
This function creates a Dockerfile and requirements.txt for deploying the API.
This function creates a basic Gradio app for inference.
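The last three functions above (create_api, create_docker, and create_app) might be used together like this (the API name is an illustrative choice):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, create_api, create_docker, create_app

setup(get_data('diabetes'), target='Class variable', session_id=123)
lr = create_model('lr')
create_api(lr, 'my_api')  # writes my_api.py with a POST inference endpoint (not started)
create_docker('my_api')   # writes a Dockerfile and requirements.txt for that API
create_app(lr)            # launches a basic Gradio app for interactive inference
```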
Returns the last printed scoring grid.
Returns a table containing all the models available in the imported module of the model library.
This function retrieves the global variables created by the setup function.
This function resets the global variables.
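The four descriptions above correspond to pull, models, get_config, and set_config; a combined sketch (the variable names passed to get_config and set_config are example choices):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, pull, models, get_config, set_config

setup(get_data('diabetes'), target='Class variable', session_id=123)
compare_models()
results = pull()                 # the last printed scoring grid, as a DataFrame
all_models = models()            # table of every estimator available in this module
X_train = get_config('X_train')  # fetch an internal variable created by setup
set_config('seed', 999)          # overwrite an internal variable ('seed' is an example name)
```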
Returns the table of all available metrics used for cross-validation.
Adds a custom metric to the metric container for cross-validation.
Removes a custom metric from the metric container.
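The three descriptions above correspond to get_metrics, add_metric, and remove_metric; for example, registering log loss as a custom metric:

```python
from sklearn.metrics import log_loss
from pycaret.datasets import get_data
from pycaret.classification import setup, get_metrics, add_metric, remove_metric

setup(get_data('diabetes'), target='Class variable', session_id=123)
get_metrics()                                                          # metrics used during CV
add_metric('logloss', 'Log Loss', log_loss, greater_is_better=False)  # add a custom metric
remove_metric('logloss')                                               # remove it again by id
```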
This function returns the best model from all the models in the current setup.
Returns a table of experiment logs. It only works when log_experiment = True is set when initializing the setup function.
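These two descriptions correspond to automl and get_logs; a combined sketch with experiment logging enabled (the experiment name is an illustrative choice, and logging assumes an MLflow backend is available):

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, automl, get_logs

setup(get_data('diabetes'), target='Class variable', session_id=123,
      log_experiment=True, experiment_name='diabetes_exp')  # logging needed for get_logs
compare_models()
best = automl(optimize='AUC')  # best model by AUC among everything trained so far
logs = get_logs()              # table of experiment run logs
```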
Obtain the current experiment object.
Set the current experiment to be used with the functional API.
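These two descriptions correspond to get_current_experiment and set_current_experiment; a sketch that bridges the object-oriented and functional APIs:

```python
from pycaret.datasets import get_data
from pycaret.classification import (ClassificationExperiment, create_model,
                                    set_current_experiment, get_current_experiment)

exp = ClassificationExperiment()  # object-oriented API
exp.setup(get_data('diabetes'), target='Class variable', session_id=123)
set_current_experiment(exp)       # functional API calls now operate on exp
lr = create_model('lr')
current = get_current_experiment()
```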