sktree.HonestForestClassifier
- class sktree.HonestForestClassifier(n_estimators=100, criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None, honest_prior='empirical', honest_fraction=0.5, tree_estimator=None, stratify=False, **tree_estimator_params)
A forest classifier with honest leaf estimates.
- Parameters:
- n_estimators : int, default=100
The number of trees in the forest.
- criterion : {"gini", "entropy"}, default="gini"
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. Note: this parameter is tree-specific.
- splitter : {"best", "random"}, default="best"
The strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.
- max_depth : int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
- min_samples_split : int or float, default=2
The minimum number of samples required to split an internal node:
- If int, then consider min_samples_split as the minimum number.
- If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split.
- min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
- If int, then consider min_samples_leaf as the minimum number.
- If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node.
- min_weight_fraction_leaf : float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
- max_features : {"sqrt", "log2", None}, int or float, default="sqrt"
The number of features to consider when looking for the best split:
- If int, then consider max_features features at each split.
- If float, then max_features is a fraction and round(max_features * n_features) features are considered at each split.
- If "sqrt", then max_features=sqrt(n_features).
- If "log2", then max_features=log2(n_features).
- If None, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
- max_leaf_nodes : int, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, then the number of leaf nodes is unlimited.
- min_impurity_decrease : float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed.
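For concreteness, a minimal sketch of the formula with made-up counts (the numbers below are hypothetical and purely illustrative):
>>> N, N_t, N_t_L, N_t_R = 100, 40, 30, 10  # hypothetical weighted sample counts
>>> impurity, left_impurity, right_impurity = 0.5, 0.4, 0.2  # hypothetical impurities
>>> round(N_t / N * (impurity - N_t_R / N_t * right_impurity
...                  - N_t_L / N_t * left_impurity), 6)
0.06
A split with this decrease would be accepted whenever min_impurity_decrease <= 0.06.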
- bootstrap : bool, default=True
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
When bootstrap is True, each tree bootstrap samples the dataset, and then the unique indices are split in half, where one half is used to train the structure of the tree and one half is used to train the leaves of the tree. The remaining sample indices are considered "out of bag".
- oob_score : bool, default=False
Whether to use out-of-bag samples to estimate the generalization score. Only available if bootstrap=True.
- n_jobs : int, default=None
The number of jobs to run in parallel. fit(), predict(), decision_path() and apply() are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
- random_state : int, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details.
- verbose : int, default=0
Controls the verbosity when fitting and predicting.
- warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. See the Glossary.
- class_weight : {"balanced", "balanced_subsample"}, dict or list of dicts, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.
Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}].
The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). The "balanced_subsample" mode is the same as "balanced" except that weights are computed based on the bootstrap sample for every tree grown.
For multi-output, the weights of each column of y will be multiplied.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
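As a sketch, the "balanced" heuristic can be reproduced by hand for integer-encoded labels:
>>> import numpy as np
>>> y_toy = np.array([0, 0, 0, 1])  # toy labels: class 0 is three times as frequent
>>> len(y_toy) / (2 * np.bincount(y_toy))  # n_samples / (n_classes * counts)
array([0.66666667, 2.        ])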
- ccp_alpha : non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. See Minimal Cost-Complexity Pruning for details.
- max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base tree estimator with replacement. If bootstrap is False, then this will subsample the dataset without replacement.
- If None (default), then draw X.shape[0] samples.
- If int, then draw max_samples samples.
- If float, then draw max_samples * X.shape[0] samples.
- honest_prior : {"ignore", "uniform", "empirical"}, default="empirical"
Method for dealing with empty leaves during evaluation of a test sample. If “ignore”, the tree is ignored. If “uniform”, the prior tree posterior is 1/(number of classes). If “empirical”, the prior tree posterior is the relative class frequency in the voting subsample. If all trees are ignored, the empirical estimate is returned.
- honest_fraction : float, default=0.5
Fraction of training samples used for estimates in the trees. The remaining samples will be used to learn the tree structure. A larger fraction creates shallower trees with lower variance estimates.
- tree_estimator : object, default=None
Instantiated tree of type BaseDecisionTree from sktree. If None, then sklearn's DecisionTreeClassifier with default parameters will be used. Note that none of the parameters in tree_estimator need to be set. The parameters of the tree_estimator can be set using the tree_estimator_params keyword argument.
- stratify : bool, default=False
Whether or not to stratify samples when considering structure and leaf indices. This will also stratify samples when bootstrap sampling is used. For more information, see sklearn.utils.resample().
- **tree_estimator_params : dict
Parameters to pass to the underlying base tree estimators. These must be parameters for tree_estimator.
Notes
The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values.
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data, max_features=n_features and bootstrap=False, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain deterministic behaviour during fitting, random_state has to be fixed.
Honesty is a feature of trees that enables unbiased estimates of confidence intervals. The default implementation here uses double sampling to implement honesty. The amount of samples used for learning split nodes vs. leaf nodes is controlled by the honest_fraction parameter. In order to enforce honesty, but also enable the tree to have access to all y labels, we set sample_weight to 0 for a random subset of samples. This results in inefficiency when building trees using a greedy splitter, as we still sort over all values of X. We recommend using propensity trees if you are computing causal effects.
This forest classifier is a "meta-estimator" because any tree model can be used in the classification process, while enabling honesty separates the data used for split and leaf nodes.
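A short sketch of how the honest split can be inspected after fitting, using the structure_indices_ and honest_indices_ attributes documented above (assuming they behave as documented):
>>> from sktree import HonestForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=200, n_features=4, random_state=0)
>>> est = HonestForestClassifier(n_estimators=10, honest_fraction=0.5,
...                              random_state=0).fit(X, y)
>>> # The structure and honest subsamples of a tree should be disjoint:
>>> set(est.structure_indices_[0]) & set(est.honest_indices_[0])
set()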
References
[1] Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.
[2] S. Athey, J. Tibshirani, and S. Wager. "Generalized Random Forests", Annals of Statistics, 2019.
Examples
>>> from sktree import HonestForestClassifier
>>> from sktree.tree import ObliqueDecisionTreeClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = HonestForestClassifier(
...     max_depth=2,
...     random_state=0,
...     tree_estimator=ObliqueDecisionTreeClassifier())
>>> clf.fit(X, y)
HonestForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
- Attributes:
- estimator : sktree.tree.HonestTreeClassifier
The child estimator template used to create the collection of fitted sub-estimators.
- estimators_ : list of sktree.tree.HonestTreeClassifier
The collection of fitted sub-estimators.
- classes_ : ndarray of shape (n_classes,) or a list of such arrays
The class labels (single output problem), or a list of arrays of class labels (multi-output problem).
- n_classes_ : int or list
The number of classes (single output problem), or a list containing the number of classes for each output (multi-output problem).
- n_features_in_ : int
Number of features seen during fit.
- feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings.
- n_outputs_ : int
The number of outputs when fit is performed.
- feature_importances_ : ndarray of shape (n_features,)
The impurity-based feature importances.
- oob_score_ : float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.
- oob_decision_function_ : ndarray of shape (n_samples, n_classes) or (n_samples, n_classes, n_outputs)
Decision function computed with out-of-bag estimate on the training set. If n_estimators is small it might be possible that a data point was never left out during the bootstrap. In this case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True.
- honest_decision_function_ : ndarray of shape (n_samples, n_classes) or (n_samples, n_classes, n_outputs)
Decision function computed on each sample, including only the trees for which it was in the honest subsample. It is possible that a sample is never in the honest subset, in which case honest_decision_function_ might contain NaN.
- structure_indices_ : list of lists, shape=(n_estimators, n_structure)
The indices used to learn the structure of the trees.
- honest_indices_ : list of lists, shape=(n_estimators, n_honest)
The indices used to fit the leaf nodes.
- oob_samples_ : list of lists, shape=(n_estimators, n_samples_bootstrap)
The sample indices that are out-of-bag.
Methods
- apply(X): Apply trees in the forest to X, return leaf indices.
- decision_path(X): Return the decision path in the forest.
- fit(X, y[, sample_weight, classes]): Build a forest of trees from the training set (X, y).
- get_leaf_node_samples(X): Get samples in each leaf node across the forest.
- get_metadata_routing(): Get metadata routing of this object.
- get_params([deep]): Get parameters for this estimator.
- partial_fit(X, y[, sample_weight, classes]): Update a decision tree classifier from the training set (X, y).
- predict(X): Predict class for X.
- predict_log_proba(X): Predict class log-probabilities for X.
- predict_proba(X): Predict class probabilities for X.
- predict_proba_per_tree(X[, indices]): Compute the probability estimates for each tree in the forest.
- predict_quantiles(X[, quantiles, method]): Predict class or regression value for X at given quantiles.
- score(X, y[, sample_weight]): Return the mean accuracy on the given test data and labels.
- set_fit_request(*[, classes, sample_weight]): Request metadata passed to the fit method.
- set_params(**params): Set the parameters of this estimator.
- set_partial_fit_request(*[, classes, ...]): Request metadata passed to the partial_fit method.
- set_score_request(*[, sample_weight]): Request metadata passed to the score method.
- apply(X)
Apply trees in the forest to X, return leaf indices.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
- Returns:
- X_leaves : ndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
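For example, continuing from the fitted clf of the Examples section (the shape assumes its 1000-sample X and the default 100 trees):
>>> leaves = clf.apply(X)
>>> leaves.shape  # (n_samples, n_estimators)
(1000, 100)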
- decision_path(X)
Return the decision path in the forest.
New in version 0.18.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
- Returns:
- indicator : sparse matrix of shape (n_samples, n_nodes)
Return a node indicator matrix where non-zero elements indicate that the sample goes through the nodes. The matrix is in CSR format.
- n_nodes_ptr : ndarray of shape (n_estimators + 1,)
The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator values for the i-th estimator.
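A sketch of how n_nodes_ptr delimits each tree's columns (clf and X as in the Examples section):
>>> indicator, n_nodes_ptr = clf.decision_path(X)
>>> tree0 = indicator[:, n_nodes_ptr[0]:n_nodes_ptr[1]]  # columns of the first tree
>>> tree0.shape[0] == X.shape[0]
True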
- fit(X, y, sample_weight=None, classes=None, **fit_params)
Build a forest of trees from the training set (X, y).
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
- y : array_like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
- sample_weight : array_like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.
- classes : array_like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector.
- **fit_params : dict
Parameters to pass to the underlying base tree estimators.
Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See the Metadata Routing User Guide for more details.
- Returns:
- self : object
Fitted estimator.
- get_leaf_node_samples(X)
Get samples in each leaf node across the forest.
- Parameters:
- X : array_like of shape (n_samples, n_features)
The data array.
- Returns:
- leaf_node_samples : array_like of shape (n_samples, n_estimators)
Samples within each leaf node.
- get_metadata_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- partial_fit(X, y, sample_weight=None, classes=None)
Update a decision tree classifier from the training set (X, y).
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csc_matrix.
- y : array_like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels) as integers or strings.
- sample_weight : array_like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. Splits are also ignored if they would result in any single class carrying a negative weight in either child node.
- classes : array_like of shape (n_classes,), default=None
List of all the classes that can possibly appear in the y vector. Must be provided at the first call to partial_fit; can be omitted in subsequent calls.
- Returns:
- self : object
Returns the instance itself.
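A sketch of incremental fitting (X and y as in the Examples section); classes must be supplied on the first call so that later batches may omit rare labels:
>>> import numpy as np
>>> inc = HonestForestClassifier(random_state=0)
>>> inc = inc.partial_fit(X[:500], y[:500], classes=np.unique(y))
>>> inc = inc.partial_fit(X[500:], y[500:])  # classes may be omitted now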
- predict(X)
Predict class for X.
The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with highest mean probability estimate across the trees.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
- Returns:
- y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted classes.
- predict_log_proba(X)
Predict class log-probabilities for X.
The predicted class log-probabilities of an input sample is computed as the log of the mean predicted class probabilities of the trees in the forest.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
- Returns:
- p : ndarray of shape (n_samples, n_classes), or a list of such arrays
The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
- predict_proba(X)
Predict class probabilities for X.
The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
- Returns:
- p : ndarray of shape (n_samples, n_classes), or a list of such arrays
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
- predict_proba_per_tree(X, indices=None)
Compute the probability estimates for each tree in the forest.
- Parameters:
- X : array_like of shape (n_samples, n_features)
The input data.
- indices : list of length n_estimators of array_like of shape (n_samples,), optional
The indices of the samples used to compute the probability estimates for each tree in the forest. If None, the indices are every sample in the input data.
- Returns:
- proba_per_tree : array_like of shape (n_estimators, n_samples, n_classes)
The probability estimates for each tree in the forest.
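For example (clf and X as in the Examples section; the shape assumes its 100 trees, 1000 samples and 2 classes):
>>> proba_per_tree = clf.predict_proba_per_tree(X)
>>> proba_per_tree.shape  # (n_estimators, n_samples, n_classes)
(100, 1000, 2)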
- predict_quantiles(X, quantiles=0.5, method='nearest')
Predict class or regression value for X at given quantiles.
- Parameters:
- X : {array_like, sparse matrix} of shape (n_samples, n_features)
Input data.
- quantiles : float, optional
The quantiles at which to evaluate, by default 0.5 (median).
- method : str, optional
The method to interpolate, by default 'nearest'. Can be any keyword argument accepted by numpy.quantile().
- Returns:
- y : ndarray of shape (n_samples, n_quantiles, [n_outputs])
The predicted values. The n_outputs dimension is present only for multi-output regressors.
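For example, a sketch of querying the median prediction (clf and X as in the Examples section):
>>> y_median = clf.predict_quantiles(X, quantiles=0.5, method='nearest')
>>> y_median.shape[0] == X.shape[0]
True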
- score(X, y, sample_weight=None)
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
- Parameters:
- X : array_like of shape (n_samples, n_features)
Test samples.
- y : array_like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
- sample_weight : array_like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
Mean accuracy of self.predict(X) w.r.t. y.
- set_fit_request(*, classes='$UNCHANGED$', sample_weight='$UNCHANGED$')
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- classes : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the classes parameter in fit.
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in fit.
- Returns:
- self : object
The updated object.
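A sketch of the typical use: enable routing globally, then mark sample_weight as requested so a meta-estimator will forward it to fit:
>>> import sklearn
>>> sklearn.set_config(enable_metadata_routing=True)
>>> est = HonestForestClassifier().set_fit_request(sample_weight=True)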
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
- set_partial_fit_request(*, classes='$UNCHANGED$', sample_weight='$UNCHANGED$')
Request metadata passed to the partial_fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to partial_fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to partial_fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- classes : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the classes parameter in partial_fit.
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in partial_fit.
- Returns:
- self : object
The updated object.
- set_score_request(*, sample_weight='$UNCHANGED$')
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the sample_weight parameter in score.
- Returns:
- self : object
The updated object.
- property estimators_samples_
The subset of drawn samples for each base estimator.
Returns a dynamically generated list of indices identifying the samples used for fitting each member of the ensemble, i.e., the in-bag samples.
Note: the list is re-created at each call to the property in order to reduce the object memory footprint by not storing the sampling data. Thus fetching the property may be slower than expected.
- property feature_importances_
The impurity-based feature importances.
The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high-cardinality features (many unique values). See sklearn.inspection.permutation_importance() as an alternative.
- Returns:
- feature_importances_ : ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single-node trees consisting of only the root node, in which case it will be an array of zeros.
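For example (clf as in the Examples section; with its 4 features the array has 4 entries and sums to 1):
>>> importances = clf.feature_importances_
>>> importances.shape
(4,)
>>> round(float(importances.sum()), 1)
1.0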
- property honest_indices_
The indices used to fit the leaf nodes.
- property oob_samples_
The sample indices that are out-of-bag.
Only utilized if bootstrap=True; otherwise, all samples are "in-bag".
- property structure_indices_
The indices used to learn the structure of the trees.