Scale and Transform
Normalization is a technique often applied as part of data preparation for machine learning. The goal of normalization is to rescale the values of numeric columns in the dataset without distorting differences in the ranges of values or losing information. There are several methods available for normalization; by default, PyCaret uses zscore.

- normalize: bool, default = False
When set to True, the feature space is transformed using the method defined under the normalize_method parameter.
- normalize_method: string, default = 'zscore'
Defines the method to be used for normalization. By default, the method is set to zscore. The available options are:
  - zscore: the standard z-score is calculated as z = (x – u) / s, where u is the mean and s is the standard deviation of the feature (a small worked example follows this list).
  - minmax: scales and translates each feature individually such that it is in the range of 0 – 1.
  - maxabs: scales and translates each feature individually such that the maximal absolute value of each feature is 1.0. It does not shift/center the data and thus does not destroy any sparsity.
  - robust: scales and translates each feature according to the interquartile range. When the dataset contains outliers, the robust scaler often gives better results.
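To make the z = (x – u) / s formula concrete, here is a minimal NumPy sketch with made-up values (purely illustrative; this is not PyCaret's internal code path):

import numpy as np

# hypothetical feature column
x = np.array([40.0, 60.0, 80.0, 100.0])
u, s = x.mean(), x.std()   # u = 70.0, s ≈ 22.36
z = (x - u) / s            # z-scored values: mean 0, unit variance
print(z)                   # ≈ [-1.342 -0.447  0.447  1.342]

PyCaret applies the selected scaler automatically when normalize = True is passed to setup, as in the example below.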
# load dataset
from pycaret.datasets import get_data
pokemon = get_data('pokemon')
# init setup
from pycaret.classification import *
clf1 = setup(data = pokemon, target = 'Legendary', normalize = True)
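The scaler can be changed through the normalize_method parameter described above. A sketch of the same setup using the minmax option (the variable name clf2 is arbitrary):

# init setup with a different normalization method
from pycaret.datasets import get_data
from pycaret.classification import *
pokemon = get_data('pokemon')
clf2 = setup(data = pokemon, target = 'Legendary', normalize = True, normalize_method = 'minmax')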



While normalization rescales the data within new limits to reduce the impact of magnitude in the variance, feature transformation is a more radical technique. Transformation changes the shape of the distribution so that the transformed data can be represented by a normal or approximately normal distribution. There are two methods available for transformation: yeo-johnson and quantile.

- transformation: bool, default = False
When set to True, a power transformer is applied to make the data more normal / Gaussian-like. This is useful for modeling issues related to heteroscedasticity or other situations where normality is desired. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood.
- transformation_method: string, default = 'yeo-johnson'
Defines the method for transformation. By default, the transformation method is set to yeo-johnson. The other available option is quantile transformation. Both transformations map the feature set to follow a Gaussian-like or normal distribution. Note that the quantile transformer is non-linear and may distort linear correlations between variables measured at the same scale.
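For intuition about what the two methods do, scikit-learn provides comparable transformers. The snippet below is an illustrative sketch on synthetic data, not necessarily the exact objects PyCaret constructs internally:

import numpy as np
from sklearn.preprocessing import PowerTransformer, QuantileTransformer

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=(1000, 1))   # skewed, synthetic feature

# power transform: lambda estimated by maximum likelihood
pt = PowerTransformer(method='yeo-johnson')
x_pt = pt.fit_transform(x)                       # approximately Gaussian output

# quantile transform: non-linear, rank-based mapping to a normal distribution
qt = QuantileTransformer(output_distribution='normal', n_quantiles=100)
x_qt = qt.fit_transform(x)                       # approximately Gaussian output

Within PyCaret itself, the transformation is enabled through setup as shown below.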
# load dataset
from pycaret.datasets import get_data
pokemon = get_data('pokemon')
# init setup
from pycaret.classification import *
clf1 = setup(data = pokemon, target = 'Legendary', transformation = True)

Dataframe view before transformation

Dataframe view after transformation
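The method can be switched through the transformation_method parameter described above. A sketch using the quantile option (the variable name clf2 is arbitrary):

# init setup with quantile transformation
from pycaret.datasets import get_data
from pycaret.classification import *
pokemon = get_data('pokemon')
clf2 = setup(data = pokemon, target = 'Legendary', transformation = True, transformation_method = 'quantile')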

Target transformation is similar to feature transformation, except that it changes the shape of the distribution of the target variable instead of the features. There are two methods supported for target transformation: box-cox and yeo-johnson.

- transform_target: bool, default = False
When set to True, the target variable is transformed using the method defined in the transform_target_method parameter. Target transformation is applied separately from feature transformations.
- transform_target_method: string, default = 'box-cox'
box-cox requires input data to be strictly positive, while yeo-johnson supports both positive and negative data. When transform_target_method is box-cox and the target variable contains negative values, the method is internally forced to yeo-johnson to avoid any exceptions.
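The positivity constraint is easy to see with SciPy's standalone implementations of the two transforms (an illustration only; this is not the code path PyCaret uses):

import numpy as np
from scipy import stats

y = np.array([120.0, 340.0, 95.0, 410.0, 60.0])   # strictly positive target values
y_bc, lmbda_bc = stats.boxcox(y)                  # box-cox works: all values > 0

y_mixed = np.array([-3.0, 0.5, 2.0, 7.0])         # contains a negative value
# stats.boxcox(y_mixed) would raise an error because the data is not strictly positive
y_yj, lmbda_yj = stats.yeojohnson(y_mixed)        # yeo-johnson handles any sign

In PyCaret, target transformation is enabled through setup as shown below.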
# load dataset
from pycaret.datasets import get_data
diamond = get_data('diamond')
# init setup
from pycaret.regression import *
reg1 = setup(data = diamond, target = 'Price', transform_target = True)

Dataframe view before target transformation

Dataframe view after target transformation
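The method for the target can also be set explicitly via the transform_target_method parameter described above. A sketch using the yeo-johnson option (the variable name reg2 is arbitrary):

# init setup with an explicit target transformation method
from pycaret.datasets import get_data
from pycaret.regression import *
diamond = get_data('diamond')
reg2 = setup(data = diamond, target = 'Price', transform_target = True, transform_target_method = 'yeo-johnson')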
NOTE: This functionality is only available in the pycaret.regression module.