Write and train custom ML models using PyCaret
PyCaret is an open-source, low-code machine learning library and end-to-end model management tool built in Python for automating machine learning workflows. It is incredibly popular for its ease of use, simplicity, and ability to build and deploy end-to-end ML prototypes quickly and efficiently.
PyCaret is an alternative low-code library that can replace hundreds of lines of code with only a few. This makes the experiment cycle exponentially faster and more efficient.
PyCaret is simple and easy to use. All the operations performed in PyCaret are sequentially stored in a Pipeline that is fully automated for **deployment**. Whether it's imputing missing values, one-hot encoding, transforming categorical data, feature engineering, or even hyperparameter tuning, PyCaret automates all of it.
This tutorial assumes that you have some prior knowledge and experience with PyCaret. If you haven't used it before, no problem. You can get a quick head start through the tutorials in the official documentation.
Installing PyCaret is very easy and takes only a few minutes. We strongly recommend using a virtual environment to avoid potential conflicts with other libraries. PyCaret's default installation is a slim version that installs only the hard dependencies. When you install the full version of pycaret, all the optional dependencies are also installed.
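A typical installation looks like this (the environment name and Python version below are placeholders; adjust them to your setup):

```
# create and activate a virtual environment (name/version are placeholders)
conda create --name pycaret-env python=3.8
conda activate pycaret-env

# default (slim) installation
pip install pycaret

# full installation with all optional dependencies
pip install pycaret[full]
```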
Before we start talking about custom model training, let's see a quick demo of how PyCaret works with out-of-the-box models. I will be using the 'insurance' dataset available through PyCaret's dataset repository. The goal of this dataset is to predict patient charges based on some attributes.
Common to all modules in PyCaret, the setup is the first and the only mandatory step in any machine learning experiment performed in PyCaret. This function takes care of all the data preparation required before training models. Besides performing some basic default processing tasks, PyCaret also offers a wide array of pre-processing features; to learn more about them, see the preprocessing section of the documentation.
Whenever you initialize the setup function in PyCaret, it profiles the dataset and infers the data types for all input features. If all data types are correctly inferred, you can press enter to continue.
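A minimal setup for this experiment might look like the following sketch:

```python
# load the insurance dataset bundled with pycaret
from pycaret.datasets import get_data
data = get_data('insurance')

# initialize the experiment; 'charges' is the target column
from pycaret.regression import *
s = setup(data, target = 'charges')
```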
To check the list of all models available for training, you can use the `models` function. It displays a table with the model ID, name, and a reference to the actual estimator.
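For example:

```python
# display all estimators available in the regression module
models()
```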
The most used function for training any model in PyCaret is `create_model`. It takes the ID of the estimator you want to train.
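For example, to train a decision tree using its ID:

```python
# train a decision tree regressor by its model ID
dt = create_model('dt')
```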
The output shows the 10-fold cross-validated metrics with mean and standard deviation. The output from this function is a trained model object, which is essentially a scikit-learn object.
To train multiple models in a loop, you can write a simple list comprehension (the model IDs below are just examples):
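```python
# train several models in one pass by looping over their IDs
multiple_models = [create_model(i) for i in ['dt', 'lightgbm', 'xgboost']]
```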
If you want to train all the models available in the library instead of a selected few, you can use PyCaret's `compare_models` function instead of writing your own loop (the results will be the same).
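For example:

```python
# train and evaluate all available estimators; returns the best model
best_model = compare_models()
```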
`compare_models` displays the cross-validated metrics for all models and returns the best-performing one. According to this output, Gradient Boosting Regressor is the best model, with a Mean Absolute Error (MAE) of $2,702 using 10-fold cross-validation on the training set.
The metrics shown in the grid above are cross-validation scores. To check the score of `best_model` on the hold-out set, call `predict_model` without a `data` argument:
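```python
# evaluate best_model on the hold-out set
predict_model(best_model)
```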
To generate predictions on an unseen dataset, you can use the same `predict_model` function; just pass the extra `data` parameter (here `new_data` is a placeholder for any DataFrame with the same feature columns as the training data):
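```python
# 'new_data' is a placeholder for your unseen DataFrame
predictions = predict_model(best_model, data = new_data)
```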
So far, we have seen training and model selection for the models that ship with PyCaret. However, PyCaret works exactly the same way for custom models: as long as your estimator is compatible with the sklearn API style, it will work like any built-in model. Let's see a few examples.
Before I show you how to write your own custom class, I will first demonstrate how you can work with custom non-sklearn models (models that are not available in sklearn or PyCaret's base library).
Symbolic regression is a machine learning technique that aims to identify an underlying mathematical expression that best describes a relationship. It begins by building a population of naive random formulas to represent a relationship between known independent variables and their dependent variable targets in order to predict new data. Each successive generation of programs is then evolved from the one that came before it by selecting the fittest individuals from the population to undergo genetic operations. While Genetic Programming (GP) can be used to perform a very wide variety of tasks, gplearn is purposefully constrained to solving symbolic regression problems.
To use models from gplearn, you will have to first install it:
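```
pip install gplearn
```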
Now you can simply import the untrained model and pass it to the `create_model` function:
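```python
# import the untrained symbolic regressor from gplearn
from gplearn.genetic import SymbolicRegressor
sc = create_model(SymbolicRegressor())
```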
You can also check the hold-out score for this:
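```python
# hold-out score for the symbolic regressor
predict_model(sc)
```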
ngboost is a Python library that implements Natural Gradient Boosting, as described in the paper "NGBoost: Natural Gradient Boosting for Probabilistic Prediction". It is built on top of scikit-learn and is designed to be scalable and modular with respect to the choice of proper scoring rule, distribution, and base learner. A didactic introduction to the methodology underlying NGBoost is available in the authors' slide deck. To use models from ngboost, you will have to first install it:
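```
pip install ngboost
```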
Once installed, you can import the untrained estimator from the ngboost library and use create_model to train and evaluate the model:
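```python
# import the untrained NGBRegressor and train it with create_model
from ngboost import NGBRegressor
ngb = create_model(NGBRegressor())
```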
The two examples above, gplearn and ngboost, are custom models for PyCaret, as they are not available in its default library, but you can use them just like any other out-of-the-box model. However, there may be a use case that involves writing your own algorithm (i.e. the maths behind the algorithm), in which case you can inherit the base class from sklearn and write your own maths.
Let's create a naive estimator that learns the mean value of the target variable during the fit stage and predicts that same mean value for all new data points, irrespective of the X input (probably not useful in real life, but it demonstrates the functionality).
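A minimal sketch of such an estimator, assuming only the standard sklearn `fit`/`predict` interface, might look like this:

```python
import numpy as np
from sklearn.base import BaseEstimator

class MyOwnModel(BaseEstimator):
    """Naive estimator: learns the mean of y and predicts it for every row."""

    def __init__(self):
        self.mean = 0

    def fit(self, X, y):
        # learn the mean of the target on the training data
        self.mean = np.mean(y)
        return self

    def predict(self, X):
        # predict the stored mean for every row, ignoring X
        return np.full(shape=(len(X),), fill_value=self.mean)
```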
Now let’s use this estimator for training:
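```python
# pass an instance of the custom estimator to create_model
# (the variable name is just an example)
my_own_model = create_model(MyOwnModel())

# generate predictions on the hold-out set
predict_model(my_own_model)
```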
Notice that the Label column, which is essentially the prediction, shows the same number ($13,225) for all the rows. That's because we created the algorithm to learn the mean of the train set and predict that same value for every observation (just to keep things simple).
I hope you will appreciate the ease of use and simplicity of PyCaret. In just a few lines, you can perform end-to-end machine learning experiments and write your own algorithms without adjusting any native code.
There is no limit to what you can achieve using this lightweight workflow automation library in Python. If you find it useful, please do not forget to give us a ⭐️ on our GitHub repository.
Check out the documentation and working examples to learn more.
Next week I will write a follow-up tutorial that goes beyond a simple mean prediction and introduces some more complex concepts. Please follow me on , , and to get more updates.
To hear more about PyCaret, follow us on and .
Join us on our Slack channel. Invite link .