Mirror of https://github.com/Doctorado-ML/STree.git — synced 2025-08-18 08:56:00 +00:00

Compare commits: new_predic...entropy_fu (1 commit, 7a625eee09)

.github/workflows/main.yml (vendored) — 4 changes
```diff
@@ -12,8 +12,8 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [macos-latest, ubuntu-latest, windows-latest]
-        python: [3.8, "3.10"]
+        os: [macos-latest, ubuntu-latest]
+        python: [3.8]

     steps:
       - uses: actions/checkout@v2
```
CITATION.cff — 37 changes (file deleted)
```diff
@@ -1,37 +0,0 @@
-cff-version: 1.2.0
-message: "If you use this software, please cite it as below."
-authors:
-  - family-names: "Montañana"
-    given-names: "Ricardo"
-    orcid: "https://orcid.org/0000-0003-3242-5452"
-  - family-names: "Gámez"
-    given-names: "José A."
-    orcid: "https://orcid.org/0000-0003-1188-1117"
-  - family-names: "Puerta"
-    given-names: "José M."
-    orcid: "https://orcid.org/0000-0002-9164-5191"
-title: "STree"
-version: 1.2.3
-doi: 10.5281/zenodo.5504083
-date-released: 2021-11-02
-url: "https://github.com/Doctorado-ML/STree"
-preferred-citation:
-  type: article
-  authors:
-    - family-names: "Montañana"
-      given-names: "Ricardo"
-      orcid: "https://orcid.org/0000-0003-3242-5452"
-    - family-names: "Gámez"
-      given-names: "José A."
-      orcid: "https://orcid.org/0000-0003-1188-1117"
-    - family-names: "Puerta"
-      given-names: "José M."
-      orcid: "https://orcid.org/0000-0002-9164-5191"
-  doi: "10.1007/978-3-030-85713-4_6"
-  journal: "Lecture Notes in Computer Science"
-  month: 9
-  start: 54
-  end: 64
-  title: "STree: A Single Multi-class Oblique Decision Tree Based on Support Vector Machines"
-  volume: 12882
-  year: 2021
```
Makefile — 6 changes
```diff
@@ -10,9 +10,6 @@ coverage: ## Run tests with coverage
 deps: ## Install dependencies
 	pip install -r requirements.txt

-devdeps: ## Install development dependencies
-	pip install black pip-audit flake8 mypy coverage
-
 lint: ## Lint and static-check
 	black stree
 	flake8 stree
@@ -35,9 +32,6 @@ build: ## Build package
 doc-clean: ## Update documentation
 	make -C docs --makefile=Makefile clean

-audit: ## Audit pip
-	pip-audit
-
 help: ## Show help message
 	@IFS=$$'\n' ; \
 	help_lines=(`fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##/:/'`); \
```
README.md — 38 changes
```diff
@@ -36,23 +36,23 @@ Can be found in [stree.readthedocs.io](https://stree.readthedocs.io/en/stable/)

 ## Hyperparameters

 | | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
-| --- | ------------------- | -------------------------------------------------------------- | ----------- | ----------- |
+| --- | ------------------- | ------------------------------------------------------ | ----------- | ----------- |
 | \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
 | \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’ or ‘rbf’. liblinear uses [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest uses [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn library |
 | \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
 | \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
 | | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
 | \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
 | \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
 | \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if ‘auto’, uses 1 / n_features. |
 | | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
 | | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
 | | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
 | | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
-| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose the best (max. info. gain) of them. **“trandom”**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
+| | splitter | {"best", "random", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose the best (max. info. gain) of them. **“trandom”**: The algorithm generates a true random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
 | | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
 | \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |

 \* Hyperparameter used by the support vector classifier of every node

@@ -73,7 +73,3 @@ python -m unittest -v stree.tests
 ## License

 STree is [MIT](https://github.com/doctorado-ml/stree/blob/master/LICENSE) licensed
-
-## Reference
-
-R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class oblique decision tree based on support vector machines.", 2021 LNAI 12882, pg. 54-64
```
docs Sphinx configuration — file header not captured in the mirror; the `html_theme = "sphinx_rtd_theme"` hunk context points to docs/source/conf.py

```diff
@@ -54,4 +54,4 @@ html_theme = "sphinx_rtd_theme"
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = []
+html_static_path = ["_static"]
```
docs Hyperparameters page — file header not captured in the mirror; this hunk applies the same table edit as README.md above

```diff
@@ -1,22 +1,22 @@
 # Hyperparameters

 | | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
-| --- | ------------------- | -------------------------------------------------------------- | ----------- | ----------- |
+| --- | ------------------- | ------------------------------------------------------ | ----------- | ----------- |
 | \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
 | \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’ or ‘rbf’. liblinear uses [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest uses [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn library |
 | \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
 | \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
 | | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
 | \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
 | \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
 | \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if ‘auto’, uses 1 / n_features. |
 | | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
 | | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
 | | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
 | | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
-| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose the best (max. info. gain) of them. **“trandom”**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
+| | splitter | {"best", "random", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose the best (max. info. gain) of them. **“trandom”**: The algorithm generates a true random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
 | | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
 | \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |

 \* Hyperparameter used by the support vector classifier of every node

```
setup.py — 4 changes
```diff
@@ -1,5 +1,4 @@
 import setuptools
-import os


 def readme():
@@ -9,8 +8,7 @@ def readme():

 def get_data(field):
     item = ""
-    file_name = "_version.py" if field == "version" else "__init__.py"
-    with open(os.path.join("stree", file_name)) as f:
+    with open("stree/__init__.py") as f:
         for line in f.readlines():
             if line.startswith(f"__{field}__"):
                 delim = '"' if '"' in line else "'"
```
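Review note: with `_version.py` gone, `get_data` reads every dunder from `stree/__init__.py` again. A runnable sketch of the parsing idea the hunk shows; the final `split(delim)[1]` extraction step is not visible in the mirror and is assumed here:

```python
# Sketch of setup.py's get_data(): scan a module for __<field>__ = "value"
# lines and return the quoted value. The split(delim)[1] step is assumed.
def get_data(field: str, path: str = "stree/__init__.py") -> str:
    item = ""
    with open(path) as f:
        for line in f.readlines():
            if line.startswith(f"__{field}__"):
                delim = '"' if '"' in line else "'"  # value may use either quote
                item = line.split(delim)[1]
    return item

# e.g. get_data("version") -> "1.2.1" on the entropy_fu branch
```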
stree/Splitter.py — file header not captured in the mirror; the hunks below reference `class Snode:` and `class Splitter:`

```diff
@@ -68,7 +68,6 @@ class Snode:
         self._impurity = impurity
         self._partition_column: int = -1
         self._scaler = scaler
-        self._proba = None

     @classmethod
     def copy(cls, node: "Snode") -> "Snode":
```
```diff
@@ -128,44 +127,23 @@ class Snode:
     def get_up(self) -> "Snode":
         return self._up

-    def make_predictor(self, num_classes: int) -> None:
+    def make_predictor(self):
         """Compute the class of the predictor and its belief based on the
         subdataset of the node only if it is a leaf
         """
         if not self.is_leaf():
             return
         classes, card = np.unique(self._y, return_counts=True)
-        self._proba = np.zeros((num_classes,), dtype=np.int64)
-        for c, n in zip(classes, card):
-            self._proba[c] = n
-        try:
+        if len(classes) > 1:
             max_card = max(card)
             self._class = classes[card == max_card][0]
             self._belief = max_card / np.sum(card)
-        except ValueError:
-            self._class = None
-
-    def graph(self):
-        """
-        Return a string representing the node in graphviz format
-        """
-        output = ""
-        count_values = np.unique(self._y, return_counts=True)
-        if self.is_leaf():
-            output += (
-                f'N{id(self)} [shape=box style=filled label="'
-                f"class={self._class} impurity={self._impurity:.3f} "
-                f'counts={self._proba}"];\n'
-            )
         else:
-            output += (
-                f'N{id(self)} [label="#features={len(self._features)} '
-                f"classes={count_values[0]} samples={count_values[1]} "
-                f'({sum(count_values[1])})" fontcolor=black];\n'
-            )
-            output += f"N{id(self)} -> N{id(self.get_up())} [color=black];\n"
-            output += f"N{id(self)} -> N{id(self.get_down())} [color=black];\n"
-        return output
+            self._belief = 1
+            try:
+                self._class = classes[0]
+            except IndexError:
+                self._class = None

     def __str__(self) -> str:
         count_values = np.unique(self._y, return_counts=True)
```
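A standalone sketch of what the entropy_fu version of `make_predictor` computes (majority class plus its belief from the node's labels); the function name and layout here are illustrative, not the Snode API:

```python
import numpy as np

def majority_class_and_belief(y: np.ndarray):
    """Majority label and its fraction, as in the new make_predictor body."""
    classes, card = np.unique(y, return_counts=True)
    if len(classes) > 1:
        max_card = max(card)
        return classes[card == max_card][0], max_card / np.sum(card)
    # single class (or empty): belief defaults to 1, class to the only label
    try:
        return classes[0], 1
    except IndexError:
        return None, 1

print(majority_class_and_belief(np.array([1, 0, 1, 1])))  # (1, 0.75)
```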
```diff
@@ -224,8 +202,7 @@ class Splitter:
         max_features < num_features). Supported strategies are: “best”: sklearn
         SelectKBest algorithm is used in every node to choose the max_features
         best features. “random”: The algorithm generates 5 candidates and
-        choose the best (max. info. gain) of them. “trandom”: The algorithm
-        generates only one random combination. "mutual": Chooses the best
+        choose the best (max. info. gain) of them. "mutual": Chooses the best
         features w.r.t. their mutual info with the label. "cfs": Apply
         Correlation-based Feature Selection. "fcbf": Apply Fast Correlation-
         Based, by default None
```
```diff
@@ -389,8 +366,9 @@ class Splitter:
             .get_support(indices=True)
         )

+    @staticmethod
     def _fs_mutual(
-        self, dataset: np.array, labels: np.array, max_features: int
+        dataset: np.array, labels: np.array, max_features: int
     ) -> tuple:
         """Return the best features with mutual information with labels

```
```diff
@@ -410,9 +388,7 @@ class Splitter:
             indices of the features selected
         """
         # return best features with mutual info with the label
-        feature_list = mutual_info_classif(
-            dataset, labels, random_state=self._random_state
-        )
+        feature_list = mutual_info_classif(dataset, labels)
         return tuple(
             sorted(
                 range(len(feature_list)), key=lambda sub: feature_list[sub]
```
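For context, `_fs_mutual` follows the usual score-then-rank pattern: score each feature with sklearn's `mutual_info_classif`, then keep the `max_features` best indices. A minimal sketch (the ranking detail is illustrative); note that dropping `random_state` from the call, as this hunk does, makes the scores nondeterministic for continuous features:

```python
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif

X, y = load_wine(return_X_y=True)
max_features = 3
scores = mutual_info_classif(X, y, random_state=0)  # one score per feature
# keep the indices of the max_features highest-scoring features
best = tuple(sorted(range(len(scores)), key=lambda i: scores[i])[-max_features:])
print(best)
```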
```diff
@@ -502,18 +478,6 @@ class Splitter:

     @staticmethod
     def _entropy(y: np.array) -> float:
-        """Compute entropy of a labels set
-
-        Parameters
-        ----------
-        y : np.array
-            set of labels
-
-        Returns
-        -------
-        float
-            entropy
-        """
         n_labels = len(y)
         if n_labels <= 1:
             return 0
```
```diff
@@ -521,13 +485,10 @@ class Splitter:
         proportions = counts / n_labels
         n_classes = np.count_nonzero(proportions)
         if n_classes <= 1:
-            return 0
-        entropy = 0.0
-        # Compute standard entropy.
-        for prop in proportions:
-            if prop != 0.0:
-                entropy -= prop * log(prop, n_classes)
-        return entropy
+            return 0.0
+        from scipy.stats import entropy
+
+        return entropy(y, base=n_classes)

     def information_gain(
         self, labels: np.array, labels_up: np.array, labels_dn: np.array
```
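Review note on the `_entropy` swap: the removed loop computes Shannon entropy of the class proportions in base `n_classes`. `scipy.stats.entropy` matches it when fed the proportions (or the raw counts, which it normalizes); the new `entropy(y, base=n_classes)` call instead normalizes the label vector itself, which is not the same quantity in general. A small check, assuming integer labels:

```python
from math import log

import numpy as np
from scipy.stats import entropy

y = np.array([0, 0, 1, 1, 1, 2])
_, counts = np.unique(y, return_counts=True)
proportions = counts / len(y)
n_classes = np.count_nonzero(proportions)

# the removed hand-rolled loop
manual = -sum(p * log(p, n_classes) for p in proportions if p != 0.0)

assert np.isclose(manual, entropy(proportions, base=n_classes))  # equal
assert np.isclose(manual, entropy(counts, base=n_classes))       # scipy normalizes counts
print(entropy(y, base=n_classes))  # differs: y itself gets normalized to a distribution
```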
stree/Strees.py — 136 changes
```diff
@@ -17,7 +17,6 @@ from sklearn.utils.validation import (
     _check_sample_weight,
 )
 from .Splitter import Splitter, Snode, Siterator
-from ._version import __version__


 class Stree(BaseEstimator, ClassifierMixin):
```
```diff
@@ -83,8 +82,7 @@ class Stree(BaseEstimator, ClassifierMixin):
     max_features < num_features). Supported strategies are: “best”: sklearn
     SelectKBest algorithm is used in every node to choose the max_features
     best features. “random”: The algorithm generates 5 candidates and
-    choose the best (max. info. gain) of them. “trandom”: The algorithm
-    generates only one random combination. "mutual": Chooses the best
+    choose the best (max. info. gain) of them. "mutual": Chooses the best
     features w.r.t. their mutual info with the label. "cfs": Apply
     Correlation-based Feature Selection. "fcbf": Apply Fast Correlation-
     Based , by default "random"
```
```diff
@@ -130,7 +128,7 @@ class Stree(BaseEstimator, ClassifierMixin):
     References
     ----------
     R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class
-    oblique decision tree based on support vector machines.", 2021 LNAI 12882
+    oblique decision tree based on support vector machines.", 2021 LNAI...


     """
```
```diff
@@ -170,11 +168,6 @@ class Stree(BaseEstimator, ClassifierMixin):
         self.normalize = normalize
         self.multiclass_strategy = multiclass_strategy

-    @staticmethod
-    def version() -> str:
-        """Return the version of the package."""
-        return __version__
-
     def _more_tags(self) -> dict:
         """Required by sklearn to supply features of the classifier
         make mandatory the labels array
```
```diff
@@ -314,7 +307,7 @@ class Stree(BaseEstimator, ClassifierMixin):
         if np.unique(y).shape[0] == 1:
             # only 1 class => pure dataset
             node.set_title(title + ", <pure>")
-            node.make_predictor(self.n_classes_)
+            node.make_predictor()
             return node
         # Train the model
         clf = self._build_clf()
```
```diff
@@ -333,7 +326,7 @@ class Stree(BaseEstimator, ClassifierMixin):
         if X_U is None or X_D is None:
             # didn't part anything
             node.set_title(title + ", <cgaf>")
-            node.make_predictor(self.n_classes_)
+            node.make_predictor()
             return node
         node.set_up(
             self._train(X_U, y_u, sw_u, depth + 1, title + f" - Up({depth+1})")
```
```diff
@@ -367,66 +360,28 @@ class Stree(BaseEstimator, ClassifierMixin):
             )
         )

-    def __predict_class(self, X: np.array) -> np.array:
-        def compute_prediction(xp, indices, node):
-            if xp is None:
-                return
-            if node.is_leaf():
-                # set a class for indices
-                result[indices] = node._proba
-                return
-            self.splitter_.partition(xp, node, train=False)
-            x_u, x_d = self.splitter_.part(xp)
-            i_u, i_d = self.splitter_.part(indices)
-            compute_prediction(x_u, i_u, node.get_up())
-            compute_prediction(x_d, i_d, node.get_down())
-
-        # setup prediction & make it happen
-        result = np.zeros((X.shape[0], self.n_classes_))
-        indices = np.arange(X.shape[0])
-        compute_prediction(X, indices, self.tree_)
-        return result
-
-    def check_predict(self, X) -> np.array:
-        check_is_fitted(self, ["tree_"])
-        # Input validation
-        X = check_array(X)
-        if X.shape[1] != self.n_features_:
-            raise ValueError(
-                f"Expected {self.n_features_} features but got "
-                f"({X.shape[1]})"
-            )
-        return X
-
-    def predict_proba(self, X: np.array) -> np.array:
-        """Predict class probabilities of the input samples X.
-
-        The predicted class probability is the fraction of samples of the same
-        class in a leaf.
+    @staticmethod
+    def _reorder_results(y: np.array, indices: np.array) -> np.array:
+        """Reorder an array based on the array of indices passed

         Parameters
         ----------
-        X : dataset of samples.
+        y : np.array
+            data untidy
+        indices : np.array
+            indices used to set order

         Returns
         -------
-        proba : array of shape (n_samples, n_classes)
-            The class probabilities of the input samples.
-
-        Raises
-        ------
-        ValueError
-            if dataset with inconsistent number of features
-        NotFittedError
-            if model is not fitted
+        np.array
+            array y ordered
         """
-        X = self.check_predict(X)
-        # return # of samples of each class in leaf node
-        values = self.__predict_class(X)
-        normalizer = values.sum(axis=1)[:, np.newaxis]
-        normalizer[normalizer == 0.0] = 1.0
-        return values / normalizer
+        # return array of same type given in y
+        y_ordered = y.copy()
+        indices = indices.astype(int)
+        for i, index in enumerate(indices):
+            y_ordered[index] = y[i]
+        return y_ordered

     def predict(self, X: np.array) -> np.array:
         """Predict labels for each sample in dataset passed
```
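Context for the removed `predict_proba`: it converted the per-leaf class counts gathered by `__predict_class` into probabilities by row-normalizing, with a guard so empty rows divide by 1 instead of 0. The normalization in isolation, with illustrative data:

```python
import numpy as np

# counts[i, c] = samples of class c in the leaf that sample i reached
counts = np.array([[1.0, 3.0], [0.0, 0.0], [2.0, 2.0]])
normalizer = counts.sum(axis=1)[:, np.newaxis]
normalizer[normalizer == 0.0] = 1.0  # avoid dividing an empty row by zero
proba = counts / normalizer
print(proba)  # [[0.25 0.75], [0. 0.], [0.5 0.5]]
```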
```diff
@@ -448,8 +403,40 @@ class Stree(BaseEstimator, ClassifierMixin):
         NotFittedError
             if model is not fitted
         """
-        X = self.check_predict(X)
-        return self.classes_[np.argmax(self.__predict_class(X), axis=1)]
+
+        def predict_class(
+            xp: np.array, indices: np.array, node: Snode
+        ) -> np.array:
+            if xp is None:
+                return [], []
+            if node.is_leaf():
+                # set a class for every sample in dataset
+                prediction = np.full((xp.shape[0], 1), node._class)
+                return prediction, indices
+            self.splitter_.partition(xp, node, train=False)
+            x_u, x_d = self.splitter_.part(xp)
+            i_u, i_d = self.splitter_.part(indices)
+            prx_u, prin_u = predict_class(x_u, i_u, node.get_up())
+            prx_d, prin_d = predict_class(x_d, i_d, node.get_down())
+            return np.append(prx_u, prx_d), np.append(prin_u, prin_d)
+
+        # sklearn check
+        check_is_fitted(self, ["tree_"])
+        # Input validation
+        X = check_array(X)
+        if X.shape[1] != self.n_features_:
+            raise ValueError(
+                f"Expected {self.n_features_} features but got "
+                f"({X.shape[1]})"
+            )
+        # setup prediction & make it happen
+        indices = np.arange(X.shape[0])
+        result = (
+            self._reorder_results(*predict_class(X, indices, self.tree_))
+            .astype(int)
+            .ravel()
+        )
+        return self.classes_[result]

     def nodes_leaves(self) -> tuple:
         """Compute the number of nodes and leaves in the built tree
```
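Why the rewritten `predict` needs `_reorder_results`: the recursive `predict_class` returns predictions in tree-partition order together with each prediction's original row index, and the helper scatters them back into input order. A tiny illustration with made-up arrays:

```python
import numpy as np

preds = np.array([1, 1, 0, 2])     # predictions in partition order
indices = np.array([2, 0, 3, 1])   # original row of each prediction
ordered = preds.copy()
for i, index in enumerate(indices.astype(int)):
    ordered[index] = preds[i]      # scatter back to input order
print(ordered)                     # [1 2 1 0]
```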
```diff
@@ -482,23 +469,6 @@ class Stree(BaseEstimator, ClassifierMixin):
             tree = None
         return Siterator(tree)

-    def graph(self, title="") -> str:
-        """Graphviz code representing the tree
-
-        Returns
-        -------
-        str
-            graphviz code
-        """
-        output = (
-            "digraph STree {\nlabel=<STree "
-            f"{title}>\nfontsize=30\nfontcolor=blue\nlabelloc=t\n"
-        )
-        for node in self:
-            output += node.graph()
-        output += "}\n"
-        return output
-
     def __str__(self) -> str:
         """String representation of the tree

```
stree/__init__.py — file header not captured in the mirror; inferred from the `from .Strees import ...` hunk context

```diff
@@ -1,5 +1,7 @@
 from .Strees import Stree, Siterator

+__version__ = "1.2.1"
+
 __author__ = "Ricardo Montañana Gómez"
 __copyright__ = "Copyright 2020-2021, Ricardo Montañana Gómez"
 __license__ = "MIT License"
```
stree/_version.py — file deleted (file header not captured in the mirror; inferred from the setup.py and test import changes)

```diff
@@ -1 +0,0 @@
-__version__ = "1.2.4"
```
stree/tests/Snode_test.py — file header not captured in the mirror; the hunk context names `class Snode_test`

```diff
@@ -67,28 +67,10 @@ class Snode_test(unittest.TestCase):

     def test_make_predictor_on_leaf(self):
         test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
-        test.make_predictor(2)
+        test.make_predictor()
         self.assertEqual(1, test._class)
         self.assertEqual(0.75, test._belief)
         self.assertEqual(-1, test._partition_column)
-        self.assertListEqual([1, 3], test._proba.tolist())
-
-    def test_make_predictor_on_not_leaf(self):
-        test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
-        test.set_up(Snode(None, [1], [1], [], 0.0, "another_test"))
-        test.make_predictor(2)
-        self.assertIsNone(test._class)
-        self.assertEqual(0, test._belief)
-        self.assertEqual(-1, test._partition_column)
-        self.assertEqual(-1, test.get_up()._partition_column)
-        self.assertIsNone(test._proba)
-
-    def test_make_predictor_on_leaf_bogus_data(self):
-        test = Snode(None, [1, 2, 3, 4], [], [], 0.0, "test")
-        test.make_predictor(2)
-        self.assertIsNone(test._class)
-        self.assertEqual(-1, test._partition_column)
-        self.assertListEqual([0, 0], test._proba.tolist())

     def test_set_title(self):
         test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
```
```diff
@@ -115,6 +97,21 @@ class Snode_test(unittest.TestCase):
         test.set_features([1, 2])
         self.assertListEqual([1, 2], test.get_features())

+    def test_make_predictor_on_not_leaf(self):
+        test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
+        test.set_up(Snode(None, [1], [1], [], 0.0, "another_test"))
+        test.make_predictor()
+        self.assertIsNone(test._class)
+        self.assertEqual(0, test._belief)
+        self.assertEqual(-1, test._partition_column)
+        self.assertEqual(-1, test.get_up()._partition_column)
+
+    def test_make_predictor_on_leaf_bogus_data(self):
+        test = Snode(None, [1, 2, 3, 4], [], [], 0.0, "test")
+        test.make_predictor()
+        self.assertIsNone(test._class)
+        self.assertEqual(-1, test._partition_column)
+
     def test_copy_node(self):
         px = [1, 2, 3, 4]
         py = [1]
```
stree/tests/Stree_test.py — file header not captured in the mirror; the hunk context names `class Stree_test`

```diff
@@ -10,7 +10,6 @@ from sklearn.svm import LinearSVC
 from stree import Stree
 from stree.Splitter import Snode
 from .utils import load_dataset
-from .._version import __version__


 class Stree_test(unittest.TestCase):
```
```diff
@@ -115,38 +114,6 @@ class Stree_test(unittest.TestCase):
         yp = clf.fit(X, y).predict(X[:num, :])
         self.assertListEqual(y[:num].tolist(), yp.tolist())

-    def test_multiple_predict_proba(self):
-        expected = {
-            "liblinear": {
-                0: [0.02401129943502825, 0.9759887005649718],
-                17: [0.9282970550576184, 0.07170294494238157],
-            },
-            "linear": {
-                0: [0.029329608938547486, 0.9706703910614525],
-                17: [0.9298469387755102, 0.07015306122448979],
-            },
-            "rbf": {
-                0: [0.023448275862068966, 0.976551724137931],
-                17: [0.9458064516129032, 0.05419354838709677],
-            },
-            "poly": {
-                0: [0.01601164483260553, 0.9839883551673945],
-                17: [0.9089790897908979, 0.0910209102091021],
-            },
-        }
-        indices = [0, 17]
-        X, y = load_dataset(self._random_state)
-        for kernel in ["liblinear", "linear", "rbf", "poly"]:
-            clf = Stree(
-                kernel=kernel,
-                multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
-                random_state=self._random_state,
-            )
-            yp = clf.fit(X, y).predict_proba(X)
-            for index in indices:
-                for exp, comp in zip(expected[kernel][index], yp[index]):
-                    self.assertAlmostEqual(exp, comp)
-
     def test_single_vs_multiple_prediction(self):
         """Check if predicting sample by sample gives the same result as
         predicting all samples at once
```
```diff
@@ -390,7 +357,6 @@ class Stree_test(unittest.TestCase):

     # Tests of score
     def test_score_binary(self):
-        """Check score for binary classification."""
         X, y = load_dataset(self._random_state)
         accuracies = [
             0.9506666666666667,
@@ -413,7 +379,6 @@ class Stree_test(unittest.TestCase):
         self.assertAlmostEqual(accuracy_expected, accuracy_score)

     def test_score_max_features(self):
-        """Check score using max_features."""
         X, y = load_dataset(self._random_state)
         clf = Stree(
             kernel="liblinear",
@@ -425,7 +390,6 @@ class Stree_test(unittest.TestCase):
         self.assertAlmostEqual(0.9453333333333334, clf.score(X, y))

     def test_bogus_splitter_parameter(self):
-        """Check that bogus splitter parameter raises exception."""
         clf = Stree(splitter="duck")
         with self.assertRaises(ValueError):
             clf.fit(*load_dataset())
@@ -481,7 +445,6 @@ class Stree_test(unittest.TestCase):
         self.assertListEqual([47], resdn[1].tolist())

     def test_score_multiclass_rbf(self):
-        """Test score for multiclass classification with rbf kernel."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -499,7 +462,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

     def test_score_multiclass_poly(self):
-        """Test score for multiclass classification with poly kernel."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -521,7 +483,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

     def test_score_multiclass_liblinear(self):
-        """Test score for multiclass classification with liblinear kernel."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -547,7 +508,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

     def test_score_multiclass_sigmoid(self):
-        """Test score for multiclass classification with sigmoid kernel."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -568,7 +528,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(0.9662921348314607, clf2.fit(X, y).score(X, y))

     def test_score_multiclass_linear(self):
-        """Test score for multiclass classification with linear kernel."""
         warnings.filterwarnings("ignore", category=ConvergenceWarning)
         warnings.filterwarnings("ignore", category=RuntimeWarning)
         X, y = load_dataset(
@@ -596,13 +555,11 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

     def test_zero_all_sample_weights(self):
-        """Test exception raises when all sample weights are zero."""
         X, y = load_dataset(self._random_state)
         with self.assertRaises(ValueError):
             Stree().fit(X, y, np.zeros(len(y)))

     def test_mask_samples_weighted_zero(self):
-        """Check that the weighted zero samples are masked."""
         X = np.array(
             [
                 [1, 1],
@@ -630,7 +587,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(model2.score(X, y, w), 1)

     def test_depth(self):
-        """Check depth of the tree."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -646,7 +602,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(4, clf.depth_)

     def test_nodes_leaves(self):
-        """Check number of nodes and leaves."""
         X, y = load_dataset(
             random_state=self._random_state,
             n_classes=3,
@@ -666,7 +621,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(6, leaves)

     def test_nodes_leaves_artificial(self):
-        """Check leaves of artificial dataset."""
         n1 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test1")
         n2 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test2")
         n3 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test3")
@@ -685,14 +639,12 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(2, leaves)

     def test_bogus_multiclass_strategy(self):
-        """Check invalid multiclass strategy."""
         clf = Stree(multiclass_strategy="other")
         X, y = load_wine(return_X_y=True)
         with self.assertRaises(ValueError):
             clf.fit(X, y)

     def test_multiclass_strategy(self):
-        """Check multiclass strategy."""
         X, y = load_wine(return_X_y=True)
         clf_o = Stree(multiclass_strategy="ovo")
         clf_r = Stree(multiclass_strategy="ovr")
@@ -702,7 +654,6 @@ class Stree_test(unittest.TestCase):
         self.assertEqual(0.9269662921348315, score_r)

     def test_incompatible_hyperparameters(self):
-        """Check incompatible hyperparameters."""
         X, y = load_wine(return_X_y=True)
         clf = Stree(kernel="liblinear", multiclass_strategy="ovo")
         with self.assertRaises(ValueError):
```
```diff
@@ -710,50 +661,3 @@ class Stree_test(unittest.TestCase):
         clf = Stree(multiclass_strategy="ovo", split_criteria="max_samples")
         with self.assertRaises(ValueError):
             clf.fit(X, y)
-
-    def test_version(self):
-        """Check STree version."""
-        clf = Stree()
-        self.assertEqual(__version__, clf.version())
-
-    def test_graph(self):
-        """Check graphviz representation of the tree."""
-        X, y = load_wine(return_X_y=True)
-        clf = Stree(random_state=self._random_state)
-
-        expected_head = (
-            "digraph STree {\nlabel=<STree >\nfontsize=30\n"
-            "fontcolor=blue\nlabelloc=t\n"
-        )
-        expected_tail = (
-            ' [shape=box style=filled label="class=1 impurity=0.000 '
-            'counts=[0 1 0]"];\n}\n'
-        )
-        self.assertEqual(clf.graph(), expected_head + "}\n")
-        clf.fit(X, y)
-        computed = clf.graph()
-        computed_head = computed[: len(expected_head)]
-        num = -len(expected_tail)
-        computed_tail = computed[num:]
-        self.assertEqual(computed_head, expected_head)
-        self.assertEqual(computed_tail, expected_tail)
-
-    def test_graph_title(self):
-        X, y = load_wine(return_X_y=True)
-        clf = Stree(random_state=self._random_state)
-        expected_head = (
-            "digraph STree {\nlabel=<STree Sample title>\nfontsize=30\n"
-            "fontcolor=blue\nlabelloc=t\n"
-        )
-        expected_tail = (
-            ' [shape=box style=filled label="class=1 impurity=0.000 '
-            'counts=[0 1 0]"];\n}\n'
-        )
-        self.assertEqual(clf.graph("Sample title"), expected_head + "}\n")
-        clf.fit(X, y)
-        computed = clf.graph("Sample title")
-        computed_head = computed[: len(expected_head)]
-        num = -len(expected_tail)
-        computed_tail = computed[num:]
-        self.assertEqual(computed_head, expected_head)
-        self.assertEqual(computed_tail, expected_tail)
```
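The deleted `test_graph`/`test_graph_title` above were the only exercisers of the `graph()` API this branch drops. For reference, a hypothetical usage sketch, valid only on the new_predic side where `Stree.graph()` exists:

```python
from sklearn.datasets import load_wine
from stree import Stree  # new_predic branch: Stree.graph() is available

X, y = load_wine(return_X_y=True)
clf = Stree(random_state=0).fit(X, y)
dot_code = clf.graph("Sample title")  # "digraph STree {...}" DOT string
with open("stree.dot", "w") as f:
    f.write(dot_code)
# render with the Graphviz CLI: dot -Tpng stree.dot -o stree.png
```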