skelm.BatchCholeskySolver

class skelm.BatchCholeskySolver(alpha: float = 1e-07)[source]
__init__(alpha: float = 1e-07)[source]

Methods

__init__([alpha])

compute_output_weights()

fit(X, y)

Solves an L2-regularized linear system like Ridge regression, overwriting any previous solution.

get_params([deep])

Get parameters for this estimator.

partial_fit(X, y[, forget, ...])

Update model with a new batch of data.

predict(X)

score(X, y[, sample_weight])

Return the coefficient of determination of the prediction.

set_params(**params)

Set the parameters of this estimator.

Attributes

XtX_

XtY_

coef_

intercept_

fit(X, y)[source]

Solves an L2-regularized linear system like Ridge regression, overwriting any previous solution.
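
A minimal usage sketch, assuming skelm is installed and that the solver follows the scikit-learn fit/predict interface documented above; the data and coefficients are made up for illustration:

    import numpy as np
    from skelm import BatchCholeskySolver

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

    solver = BatchCholeskySolver(alpha=1e-7)
    solver.fit(X, y)                    # overwrites any previous solution
    y_pred = solver.predict(X)
    print(solver.coef_, solver.intercept_)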

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params – Parameter names mapped to their values.

Return type:

dict
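
For example, assuming alpha is the only constructor parameter (as documented above):

    solver = BatchCholeskySolver(alpha=1e-7)
    solver.get_params()   # -> {'alpha': 1e-07}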

partial_fit(X, y, forget=False, compute_output_weights=True) → BatchCholeskySolver[source]

Update model with a new batch of data.

Output weight computation can be temporarily turned off for faster processing; this marks the model as not fitted. Enable compute_output_weights in the final call to partial_fit (see the usage sketch after the parameter list below).

Parameters:
  • X ({array-like, sparse matrix}, shape=[n_samples, n_features]) – Training input samples

  • y (array-like, shape=[n_samples, n_targets]) – Training targets

  • forget (boolean, default False) – Performs a negative update, effectively removing the information contributed by the given training samples from the model. Output weights need to be re-computed after forgetting data.

  • compute_output_weights (boolean, optional, default True) –

    Whether to compute new output weights (coef_, intercept_). Disable this in intermediate partial_fit steps to run computations faster, then enable in the last call to compute the new solution.

    Note

    Solution can be updated without extra data by setting X=None and y=None.
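
A sketch of batched fitting under the interface documented above (data, batch sizes, and coefficients are illustrative only):

    import numpy as np
    from skelm import BatchCholeskySolver

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = X @ np.array([2.0, -1.0, 0.0, 0.5]) + 0.1 * rng.normal(size=300)

    X_batches = np.array_split(X, 3)
    y_batches = np.array_split(y, 3)

    solver = BatchCholeskySolver(alpha=1e-7)

    # intermediate batches: accumulate statistics only, skip the solve
    for Xb, yb in zip(X_batches[:-1], y_batches[:-1]):
        solver.partial_fit(Xb, yb, compute_output_weights=False)

    # final batch: compute coef_ and intercept_
    solver.partial_fit(X_batches[-1], y_batches[-1], compute_output_weights=True)

    # "forget" the first batch, then re-solve without passing new data
    solver.partial_fit(X_batches[0], y_batches[0], forget=True, compute_output_weights=False)
    solver.partial_fit(X=None, y=None, compute_output_weights=True)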

score(X, y, sample_weight=None)

Return the coefficient of determination of the prediction.

The coefficient of determination R^2 is defined as 1 - u/v, where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns:

score – R^2 of self.predict(X) w.r.t. y.

Return type:

float

Notes

The R^2 score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep it consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
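
A short illustration of the R^2 definition above, in plain NumPy (not specific to this class); solver.score(X, y) returns the same quantity with y_pred = solver.predict(X):

    import numpy as np

    y_true = np.array([1.0, 2.0, 3.0, 4.0])
    y_pred = np.array([1.1, 1.9, 3.2, 3.8])

    u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
    v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
    r2 = 1 - u / v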

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:

**params (dict) – Estimator parameters.

Returns:

self – Estimator instance.

Return type:

estimator instance
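
For example, with alpha being the only documented constructor parameter:

    solver = BatchCholeskySolver()
    solver.set_params(alpha=0.001)
    solver.get_params()   # -> {'alpha': 0.001}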