Others

Other functions in PyCaret

pull

Returns the last printed scoring grid. Use the pull function after any training function to store the scoring grid as a pandas.DataFrame.

Example

# loading dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable')

# compare models
best_model = compare_models()

# get the scoring grid
results = pull()
type(results)
# >>> pandas.core.frame.DataFrame
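
Since the pulled grid is a regular pandas.DataFrame, you can slice, sort, or export it like any other frame. A minimal sketch, assuming the classification grid contains the default Accuracy column:

# filter the scoring grid and save it to disk
good_models = results[results['Accuracy'] > 0.75]
good_models.to_csv('compare_models_results.csv')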

models

Returns a table containing all the models available in the model library of the imported module.

Example

# loading dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable')

# check model library
models()

If you want to see a little more information than this, you can pass internal=True.

# loading dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable')

# check model library
models(internal = True)
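
The returned table is itself a pandas.DataFrame indexed by model ID, so it can be filtered like any other frame. A minimal sketch, assuming the default Name column in the output:

# list the IDs of all boosting-based estimators in the model library
model_df = models()
print(model_df[model_df['Name'].str.contains('Boost')].index.tolist())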

get_config

This function retrieves the global variables created when initializing the setup function.

Example

# load dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable')

# get X_train
get_config('X_train')

To check all the variables accessible with get_config, call it without any argument:

# check all available param
get_config()

Variables accessible by the get_config function (a short retrieval sketch follows the list):

  • 'USI'

  • 'X'

  • 'X_test'

  • 'X_test_transformed'

  • 'X_train'

  • 'X_train_transformed'

  • 'X_transformed'

  • 'data'

  • 'dataset'

  • 'dataset_transformed'

  • 'exp_id'

  • 'exp_name_log'

  • 'fix_imbalance'

  • 'fold_generator'

  • 'fold_groups_param'

  • 'fold_shuffle_param'

  • 'gpu_n_jobs_param'

  • 'gpu_param'

  • 'html_param'

  • 'idx'

  • 'is_multiclass'

  • 'log_plots_param'

  • 'logging_param'

  • 'memory'

  • 'n_jobs_param'

  • 'pipeline'

  • 'seed'

  • 'target_param'

  • 'test'

  • 'test_transformed'

  • 'train'

  • 'train_transformed'

  • 'variable_and_property_keys'

  • 'variables'

  • 'y'

  • 'y_test'

  • 'y_test_transformed'

  • 'y_train'

  • 'y_train_transformed'

  • 'y_transformed'
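
Any of the keys listed above can be retrieved the same way. A minimal sketch comparing the raw and transformed training data and grabbing the fitted preprocessing pipeline:

# shape of raw vs. transformed training data
print(get_config('X_train').shape)
print(get_config('X_train_transformed').shape)

# preprocessing pipeline created by setup
pipeline = get_config('pipeline')
print(pipeline)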

set_config

This function updates the value of a global variable.

Example

# load dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable', session_id = 123)

# reset environment seed
set_config('seed', 999) 
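
You can confirm the change by reading the variable back:

# verify the updated value
get_config('seed')
# >>> 999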

get_metrics

Returns the table of all the available metrics in the metric container. All these metrics are used for cross-validation.

# load dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable', session_id = 123)

# get metrics
get_metrics()

add_metric

Adds a custom metric to the metric container.

# load dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable', session_id = 123)

# add metric
from sklearn.metrics import log_loss
add_metric('logloss', 'Log Loss', log_loss, greater_is_better = False)

Now if you check the metric container:

get_metrics()
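
By default, custom metrics receive hard class predictions. If your score function expects predicted probabilities, as log_loss does, add_metric also accepts a target parameter; a minimal sketch, assuming target = 'pred_proba' is supported in your PyCaret version:

# pass predicted probabilities to the custom metric
add_metric('logloss2', 'Log Loss (proba)', log_loss, target = 'pred_proba', greater_is_better = False)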

remove_metric

Removes a metric from the metric container.

# remove metric
remove_metric('logloss')

No Output. Let's check the metric container again.

get_metrics()

automl

This function returns the best model out of all trained models in the current setup based on the optimize parameter. Metrics evaluated can be accessed using the get_metrics function.

Example

# load dataset 
from pycaret.datasets import get_data 
data = get_data('diabetes') 

# init setup 
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable') 

# compare models
top5 = compare_models(n_select = 5) 

# tune models
tuned_top5 = [tune_model(i) for i in top5]

# ensemble models
bagged_top5 = [ensemble_model(i) for i in tuned_top5]

# blend models
blender = blend_models(estimator_list = top5) 

# stack models
stacker = stack_models(estimator_list = top5) 

# automl 
best = automl(optimize = 'Recall')
print(best)
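
The optimize parameter accepts any metric name shown by get_metrics. For example, a minimal sketch selecting the best model by AUC instead of Recall:

# return the best model based on AUC
best_auc = automl(optimize = 'AUC')
print(best_auc)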

get_logs

Returns a table of experiment logs. Only works when log_experiment = True when initializing the setup function.

Example

# load dataset
from pycaret.datasets import get_data
data = get_data('diabetes')

# init setup
from pycaret.classification import *
clf1 = setup(data, target = 'Class variable', log_experiment = True, experiment_name = 'diabetes1')

# compare models
top5 = compare_models()

# check ML logs
get_logs()
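
get_logs returns the logs as a pandas.DataFrame. If you also want a copy on disk, the function accepts a save parameter; a minimal sketch, assuming save = True writes a csv file to the current working directory:

# save the experiment logs as a csv file
logs = get_logs(save = True)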

get_current_experiment

Returns the current experiment object. This is useful when you have been using the functional API and want to switch to the OOP API.

# loading dataset
from pycaret.datasets import get_data
data = get_data('insurance')

# init setup using functional API
from pycaret.regression import *
s = setup(data, target = 'charges', session_id = 123)

# compare models
best = compare_models()

# return OOP class for current functional experiment
reg1 = get_current_experiment()
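
Once you hold the experiment object, subsequent calls can be made as methods on it instead of through the module-level functions. A minimal sketch:

# continue the same experiment through the OOP API
lr = reg1.create_model('lr')
holdout_preds = reg1.predict_model(lr)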

set_current_experiment

Sets the current experiment to one created using the OOP API, so that it can then be used with the functional API.

# loading dataset
from pycaret.datasets import get_data
data = get_data('insurance')

# init setup using OOP API
from pycaret.regression import RegressionExperiment
reg1 = RegressionExperiment()
reg1.setup(data, target = 'charges', session_id = 123)

# set OOP experiment as the current functional experiment
from pycaret.regression import set_current_experiment, compare_models
set_current_experiment(reg1)

# compare models using the functional API
best = compare_models()
