autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor module

class XGBoostRegressorPlugin(reg_lambda: Optional[float] = None, reg_alpha: Optional[float] = None, colsample_bytree: Optional[float] = None, colsample_bynode: Optional[float] = None, colsample_bylevel: Optional[float] = None, n_estimators: int = 100, max_depth: Optional[int] = 3, lr: Optional[float] = None, subsample: Optional[float] = None, min_child_weight: Optional[int] = None, max_bin: int = 256, booster: int = 0, grow_policy: int = 0, eta: float = 0.3, model: Optional[Any] = None, random_state: int = 0, hyperparam_search_iterations: Optional[int] = None, **kwargs: Any)

Bases: autoprognosis.plugins.prediction.regression.base.RegressionPlugin

Regression plugin based on XGBoost.

Method:

Gradient boosting is a supervised learning algorithm that predicts a target variable by combining an ensemble of estimates from a set of simpler, weaker models. XGBoost handles a wide variety of data types, relationships, and distributions robustly, and exposes a large set of hyperparameters that can be fine-tuned.

Parameters
  • n_estimators – int The maximum number of estimators at which boosting is terminated.

  • max_depth – int Maximum depth of a tree.

  • reg_lambda – float L2 regularization term on weights (xgb’s lambda).

  • reg_alpha – float L1 regularization term on weights (xgb’s alpha).

  • colsample_bytree – float Subsample ratio of columns when constructing each tree.

  • colsample_bynode – float Subsample ratio of columns for each split.

  • colsample_bylevel – float Subsample ratio of columns for each level.

  • subsample – float Subsample ratio of the training instance.

  • learning_rate – float Boosting learning rate (passed to the constructor as lr).

  • booster – int Which booster to use, as an index into the booster list: gbtree, gblinear or dart.

  • min_child_weight – int Minimum sum of instance weight(hessian) needed in a child.

  • max_bin – int Number of bins for histogram construction.

  • random_state – int Random number seed.

Example

>>> from autoprognosis.plugins.prediction import Predictions
>>> plugin = Predictions(category="regressors").get("xgboost_regressor")
>>> from sklearn.datasets import load_diabetes
>>> X, y = load_diabetes(return_X_y=True)
>>> plugin.fit_predict(X, y)
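
The constructor also accepts the hyperparameters listed above directly. A minimal sketch based on the signature shown at the top of this page (the values below are illustrative, not recommended defaults):

>>> from autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor import XGBoostRegressorPlugin
>>> plugin = XGBoostRegressorPlugin(
...     n_estimators=200,  # number of boosting rounds
...     max_depth=4,       # maximum tree depth
...     lr=0.1,            # boosting learning rate
...     reg_lambda=1.0,    # L2 regularization on weights
...     subsample=0.8,     # row subsampling ratio
... )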
booster = ['gbtree', 'gblinear', 'dart']
change_output(output: str) None
explain(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) pandas.core.frame.DataFrame
fit(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) autoprognosis.plugins.prediction.regression.base.RegressionPlugin

Train the plugin

Parameters

X – pd.DataFrame

fit_predict(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) pandas.core.frame.DataFrame

Fit the model and predict the training data. Used by predictors.

fit_transform(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) pandas.core.frame.DataFrame

Fit the model and transform the training data. Used by imputers and preprocessors.

classmethod fqdn() str

The fully-qualified name of the plugin: type->subtype->name

get_args() dict
grow_policy = ['depthwise', 'lossguide']
static hyperparameter_space(*args: Any, **kwargs: Any) List[autoprognosis.plugins.core.params.Params]

The hyperparameter search domain, used for tuning.
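
The search domain can be inspected without instantiating the plugin. A minimal sketch, assuming each returned Params object exposes a name attribute:

>>> from autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor import XGBoostRegressorPlugin
>>> space = XGBoostRegressorPlugin.hyperparameter_space()
>>> [p.name for p in space]  # names of the tunable hyperparameters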

classmethod hyperparameter_space_fqdn(*args: Any, **kwargs: Any) List[autoprognosis.plugins.core.params.Params]

The hyperparameter domain, using the fully-qualified name.

is_fitted() bool

Check if the model was trained

classmethod load(buff: bytes) autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor.XGBoostRegressorPlugin

Load the plugin from bytes

static name() str

The name of the plugin, e.g.: xgboost

predict(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) pandas.core.frame.DataFrame

Run predictions for the input. Used by predictors.

Parameters

X – pd.DataFrame
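
A minimal fit/predict round trip, sketched with a numeric regression dataset from scikit-learn:

>>> from sklearn.datasets import load_diabetes
>>> from autoprognosis.plugins.prediction import Predictions
>>> X, y = load_diabetes(return_X_y=True, as_frame=True)
>>> plugin = Predictions(category="regressors").get("xgboost_regressor")
>>> plugin.fit(X, y)                 # train on the full frame
>>> predictions = plugin.predict(X)  # pd.DataFrame of predictions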

predict_proba(X: pandas.core.frame.DataFrame, *args: Any, **kwargs: Any) pandas.core.frame.DataFrame
classmethod sample_hyperparameters(trial: optuna.trial.Trial, *args: Any, **kwargs: Any) Dict[str, Any]

Sample hyperparameters for Optuna.
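
A sketch of drawing one configuration from an Optuna study via the ask interface (study.ask() requires Optuna >= 2.8; passing the sampled values back to the constructor is an assumption based on the signature above):

>>> import optuna
>>> from autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor import XGBoostRegressorPlugin
>>> study = optuna.create_study(direction="minimize")
>>> trial = study.ask()  # fresh trial to sample from
>>> params = XGBoostRegressorPlugin.sample_hyperparameters(trial)
>>> plugin = XGBoostRegressorPlugin(**params)  # plugin configured with the sampled values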

classmethod sample_hyperparameters_fqdn(trial: optuna.trial.Trial, *args: Any, **kwargs: Any) Dict[str, Any]

Sample hyperparameters using the fully-qualified name.

classmethod sample_hyperparameters_np(random_state: int = 0, *args: Any, **kwargs: Any) Dict[str, Any]

Sample hyperparameters as a dict.

save() bytes

Save the plugin to bytes
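
save() and load() form a round trip through bytes. A minimal sketch, assuming plugin is a fitted XGBoostRegressorPlugin (e.g. from the example above):

>>> buff = plugin.save()                          # serialize the plugin to bytes
>>> restored = XGBoostRegressorPlugin.load(buff)  # reconstruct it from those bytes
>>> restored.is_fitted()                          # expected to report the trained state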

score(X: pandas.core.frame.DataFrame, y: pandas.core.frame.DataFrame, metric: str = 'aucroc') float
static subtype() str

The subtype of the plugin, e.g.: classifier

transform(X: pandas.core.frame.DataFrame) pandas.core.frame.DataFrame

Transform the input. Used by imputers and preprocessors.

Parameters

X – pd.DataFrame

static type() str

The type of the plugin, e.g.: prediction

plugin

alias of autoprognosis.plugins.prediction.regression.plugin_xgboost_regressor.XGBoostRegressorPlugin