Mirror of https://github.com/Doctorado-ML/STree.git, synced 2025-08-17 16:36:01 +00:00
Compare commits
29 Commits
Commit SHAs (newest first): c5c94488f6, d678901930, ab2d96fe94, 08f8ac018b,
cf63863e64, d6c99e9e56, 82838fa3e0, f0b2ce3c7b, 00ed57c015, 08222f109e,
cc931d8547, b044a057df, fc48bc8ba4, 8251f07674, 0b15a5af11, 28d905368b,
e5d49132ec, 8daecc4726, bf678df159, 36b08b1bcf, 36ff3da26d, 6b281ebcc8,
3aaddd096f, 15a5a4c407, 0afe14a447, fc9b7b5c92, 3f79d2877f, ecc2800705,
0524d47d64
.github/workflows/main.yml (vendored, 2 changed lines)

@@ -12,7 +12,7 @@ jobs:
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [macos-latest, ubuntu-latest]
+        os: [macos-latest, ubuntu-latest, windows-latest]
         python: [3.8]
 
     steps:
CITATION.cff (new file, 37 lines)

@@ -0,0 +1,37 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Montañana"
    given-names: "Ricardo"
    orcid: "https://orcid.org/0000-0003-3242-5452"
  - family-names: "Gámez"
    given-names: "José A."
    orcid: "https://orcid.org/0000-0003-1188-1117"
  - family-names: "Puerta"
    given-names: "José M."
    orcid: "https://orcid.org/0000-0002-9164-5191"
title: "STree"
version: 1.2.3
doi: 10.5281/zenodo.5504083
date-released: 2021-11-02
url: "https://github.com/Doctorado-ML/STree"
preferred-citation:
  type: article
  authors:
    - family-names: "Montañana"
      given-names: "Ricardo"
      orcid: "https://orcid.org/0000-0003-3242-5452"
    - family-names: "Gámez"
      given-names: "José A."
      orcid: "https://orcid.org/0000-0003-1188-1117"
    - family-names: "Puerta"
      given-names: "José M."
      orcid: "https://orcid.org/0000-0002-9164-5191"
  doi: "10.1007/978-3-030-85713-4_6"
  journal: "Lecture Notes in Computer Science"
  month: 9
  start: 54
  end: 64
  title: "STree: A Single Multi-class Oblique Decision Tree Based on Support Vector Machines"
  volume: 12882
  year: 2021
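CITATION.cff is plain YAML, so the metadata above can be consumed programmatically; a minimal sketch, assuming PyYAML is available in the environment:

```python
# Minimal sketch: read the citation metadata defined above (assumes PyYAML).
import yaml

with open("CITATION.cff") as f:
    cff = yaml.safe_load(f)

print(cff["title"], cff["version"])        # STree 1.2.3
print(cff["preferred-citation"]["doi"])    # 10.1007/978-3-030-85713-4_6
```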
Makefile (7 changed lines)

@@ -10,6 +10,9 @@ coverage: ## Run tests with coverage
 deps: ## Install dependencies
 	pip install -r requirements.txt
 
+devdeps: ## Install development dependencies
+	pip install black pip-audit flake8 mypy coverage
+
 lint: ## Lint and static-check
 	black stree
 	flake8 stree
@@ -26,11 +29,15 @@ doc: ## Update documentation
 
 build: ## Build package
 	rm -fr dist/*
+	rm -fr build/*
 	python setup.py sdist bdist_wheel
 
 doc-clean: ## Update documentation
 	make -C docs --makefile=Makefile clean
 
+audit: ## Audit pip
+	pip-audit
+
 help: ## Show help message
 	@IFS=$$'\n' ; \
 	help_lines=(`fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##/:/'`); \
README.md (45 changed lines)

@@ -2,6 +2,9 @@
 [](https://codecov.io/gh/doctorado-ml/stree)
 [](https://www.codacy.com/gh/Doctorado-ML/STree?utm_source=github.com&utm_medium=referral&utm_content=Doctorado-ML/STree&utm_campaign=Badge_Grade)
 [](https://lgtm.com/projects/g/Doctorado-ML/STree/context:python)
+[](https://badge.fury.io/py/STree)
+
+[](https://zenodo.org/badge/latestdoi/262658230)
 
 # STree
 
@@ -17,14 +20,12 @@ pip install git+https://github.com/doctorado-ml/stree
 
 ## Documentation
 
-Can be found in
+Can be found in [stree.readthedocs.io](https://stree.readthedocs.io/en/stable/)
 
 ## Examples
 
 ### Jupyter notebooks
 
 - [](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
 
 - [](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
 
 - [](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Some features
 
@@ -35,23 +36,23 @@ Can be found in
 
 ## Hyperparameters
 
-| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
-| --- | --- | --- | --- | --- |
-| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
-| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’ or ‘rbf’. liblinear uses [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest uses [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn library |
-| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
-| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
-| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
-| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
-| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
-| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’ and ‘poly’.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if ‘auto’, uses 1 / n_features. |
-| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
-| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
-| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
-| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
-| | splitter | {"best", "random", "mutual"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose one randomly. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label |
-| | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
-| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |
+| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
+| --- | --- | --- | --- | --- |
+| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
+| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’, ‘rbf’ or ‘sigmoid’. liblinear uses the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest use the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn |
+| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
+| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
+| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
+| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
+| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
+| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.<br>If gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as the value of gamma;<br>if ‘auto’, uses 1 / n_features. |
+| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (only in multiclass classification) which column (class) to use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
+| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features).<br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
+| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
+| | max_features | \<int\>, \<float\><br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
+| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and chooses the best (max. info. gain) of them. **“trandom”**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
+| | normalize | \<bool\> | False | Whether standardization of features should be applied at each node with the samples that reach it |
+| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets: **"ovo"**: one versus one; **"ovr"**: one versus rest |
 
 \* Hyperparameter used by the support vector classifier of every node
 
@@ -72,3 +73,7 @@ python -m unittest -v stree.tests
 ## License
 
 STree is [MIT](https://github.com/doctorado-ml/stree/blob/master/LICENSE) licensed
+
+## Reference
+
+R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class oblique decision tree based on support vector machines.", 2021, LNAI 12882, pp. 54-64
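The hyperparameters in the new table map directly onto Stree's constructor; a minimal usage sketch based on the table (the dataset choice and exact values are illustrative, not part of this diff):

```python
# Sketch: fit an Stree using hyperparameters documented in the table above.
from sklearn.datasets import load_wine
from stree import Stree

X, y = load_wine(return_X_y=True)
clf = Stree(
    kernel="liblinear",         # LinearSVC nodes via liblinear
    multiclass_strategy="ovr",  # "max_samples" below is incompatible with "ovo"
    split_criteria="max_samples",
    splitter="iwss",            # one of the newly added feature-selection strategies
    max_features="sqrt",
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))
```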
@@ -1,4 +1,4 @@
 sphinx
 sphinx-rtd-theme
 myst-parser
-git+https://github.com/doctorado-ml/stree
+mufs
@@ -1,7 +1,7 @@
 Siterator
 =========
 
-.. automodule:: stree
+.. automodule:: Splitter
 .. autoclass:: Siterator
    :members:
    :undoc-members:
@@ -1,7 +1,7 @@
 Snode
 =====
 
-.. automodule:: stree
+.. automodule:: Splitter
 .. autoclass:: Snode
    :members:
    :undoc-members:
@@ -1,7 +1,7 @@
 Splitter
 ========
 
-.. automodule:: stree
+.. automodule:: Splitter
 .. autoclass:: Splitter
    :members:
    :undoc-members:
@@ -6,6 +6,6 @@ API index
    :caption: Contents:
 
    Stree
-   Splitter
-   Snode
    Siterator
+   Snode
+   Splitter
@@ -54,4 +54,4 @@ html_theme = "sphinx_rtd_theme"
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ["_static"]
+html_static_path = []
@@ -2,8 +2,6 @@
 
 ## Notebooks
 
 - [](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
-
 - [](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
-
 - [](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Some features
@@ -1,22 +1,22 @@
-## Hyperparameters
+# Hyperparameters
 
-| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
-| --- | --- | --- | --- | --- |
-| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
-| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’ or ‘rbf’. liblinear uses [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest uses [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn library |
-| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
-| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
-| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
-| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
-| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
-| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’ and ‘poly’.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if ‘auto’, uses 1 / n_features. |
-| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
-| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
-| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
-| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
-| | splitter | {"best", "random", "mutual"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and choose one randomly. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label |
-| | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
-| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |
+| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
+| --- | --- | --- | --- | --- |
+| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
+| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of ‘liblinear’, ‘linear’, ‘poly’, ‘rbf’ or ‘sigmoid’. liblinear uses the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest use the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn |
+| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
+| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
+| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
+| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
+| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. |
+| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.<br>If gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as the value of gamma;<br>if ‘auto’, uses 1 / n_features. |
+| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (only in multiclass classification) which column (class) to use to split the dataset in a node\*\*. max_samples is incompatible with 'ovo' multiclass_strategy |
+| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features).<br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
+| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
+| | max_features | \<int\>, \<float\><br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
+| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **“best”**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **“random”**: The algorithm generates 5 candidates and chooses the best (max. info. gain) of them. **“trandom”**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Apply Correlation-based Feature Selection. **"fcbf"**: Apply Fast Correlation-Based Filter. **"iwss"**: IWSS based algorithm |
+| | normalize | \<bool\> | False | Whether standardization of features should be applied at each node with the samples that reach it |
+| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets: **"ovo"**: one versus one; **"ovr"**: one versus rest |
 
 \* Hyperparameter used by the support vector classifier of every node
 
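For intuition on the criterion row above, a small worked example of the two impurity measures, computed the same way stree's Splitter does later in this diff (the label vector is illustrative):

```python
# Sketch: gini and entropy as stree's Splitter computes them (see Splitter.py below).
import numpy as np
from math import log

y = np.array([0, 0, 0, 1, 1, 2])  # illustrative labels, counts 3/2/1

_, count = np.unique(y, return_counts=True)
gini = 1 - np.sum(np.square(count / np.sum(count)))  # 1 - (9+4+1)/36

proportions = np.bincount(y) / len(y)
n_classes = np.count_nonzero(proportions)            # entropy uses base n_classes
entropy = -sum(p * log(p, n_classes) for p in proportions if p != 0.0)
print(round(gini, 3), round(entropy, 3))              # 0.611 0.921
```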
@@ -1,9 +1,12 @@
 # STree
 
-[](https://app.codeship.com/projects/399170)
+
 [](https://codecov.io/gh/doctorado-ml/stree)
 [](https://www.codacy.com/gh/Doctorado-ML/STree?utm_source=github.com&utm_medium=referral&utm_content=Doctorado-ML/STree&utm_campaign=Badge_Grade)
 [](https://lgtm.com/projects/g/Doctorado-ML/STree/context:python)
+[](https://badge.fury.io/py/STree)
+
+[](https://zenodo.org/badge/latestdoi/262658230)
 
 Oblique Tree classifier based on SVM nodes. The nodes are built and split with sklearn SVC models. Stree is a sklearn estimator and can be integrated in pipelines, grid searches, etc.
@@ -178,7 +178,7 @@
 "outputs": [],
 "source": [
 "# Stree\n",
-"stree = Stree(random_state=random_state, C=.01, max_iter=1e3)"
+"stree = Stree(random_state=random_state, C=.01, max_iter=1e3, kernel=\"liblinear\", multiclass_strategy=\"ovr\")"
 ]
 },
 {
@@ -368,4 +368,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
@@ -1 +1,2 @@
 scikit-learn>0.24
+mufs
setup.py (6 changed lines)

@@ -1,4 +1,5 @@
 import setuptools
+import os
 
 
 def readme():
@@ -8,7 +9,8 @@ def readme():
 
 def get_data(field):
     item = ""
-    with open("stree/__init__.py") as f:
+    file_name = "_version.py" if field == "version" else "__init__.py"
+    with open(os.path.join("stree", file_name)) as f:
         for line in f.readlines():
             if line.startswith(f"__{field}__"):
                 delim = '"' if '"' in line else "'"
@@ -44,7 +46,7 @@ setuptools.setup(
         "Topic :: Scientific/Engineering :: Artificial Intelligence",
         "Intended Audience :: Science/Research",
     ],
-    install_requires=["scikit-learn", "numpy"],
+    install_requires=["scikit-learn", "mufs"],
     test_suite="stree.tests",
     zip_safe=False,
 )
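The reworked get_data() expects the version to live in stree/_version.py as a quoted dunder assignment; a hypothetical example of that file, which is not itself part of this diff:

```python
# Hypothetical stree/_version.py: get_data("version") scans for a line that
# starts with __version__ and extracts the value between the quotes it finds.
__version__ = "1.2.3"  # version listed in CITATION.cff; the real file is not shown here
```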
stree/Splitter.py (new file, 809 lines)

@@ -0,0 +1,809 @@
"""
Oblique decision tree classifier based on SVM nodes
Splitter class
"""

import os
import warnings
import random
from math import log, factorial
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.exceptions import ConvergenceWarning
from mufs import MUFS


class Snode:
    """
    Nodes of the tree that keep the svm classifier and, if testing, the
    dataset assigned to them

    Parameters
    ----------
    clf : SVC
        Classifier used
    X : np.ndarray
        input dataset in train time (only in testing)
    y : np.ndarray
        input labels in train time
    features : np.array
        features used to compute hyperplane
    impurity : float
        impurity of the node
    title : str
        label describing the route to the node
    weight : np.ndarray, optional
        weights applied to input dataset in train time, by default None
    scaler : StandardScaler, optional
        scaler used if any, by default None
    """

    def __init__(
        self,
        clf: SVC,
        X: np.ndarray,
        y: np.ndarray,
        features: np.array,
        impurity: float,
        title: str,
        weight: np.ndarray = None,
        scaler: StandardScaler = None,
    ):
        self._clf = clf
        self._title = title
        self._belief = 0.0
        # Only store dataset in Testing
        self._X = X if os.environ.get("TESTING", "NS") != "NS" else None
        self._y = y
        self._down = None
        self._up = None
        self._class = None
        self._feature = None
        self._sample_weight = (
            weight if os.environ.get("TESTING", "NS") != "NS" else None
        )
        self._features = features
        self._impurity = impurity
        self._partition_column: int = -1
        self._scaler = scaler

    @classmethod
    def copy(cls, node: "Snode") -> "Snode":
        return cls(
            node._clf,
            node._X,
            node._y,
            node._features,
            node._impurity,
            node._title,
            node._sample_weight,
            node._scaler,
        )

    def set_partition_column(self, col: int):
        self._partition_column = col

    def get_partition_column(self) -> int:
        return self._partition_column

    def set_down(self, son):
        self._down = son

    def set_title(self, title):
        self._title = title

    def set_classifier(self, clf):
        self._clf = clf

    def set_features(self, features):
        self._features = features

    def set_impurity(self, impurity):
        self._impurity = impurity

    def get_title(self) -> str:
        return self._title

    def get_classifier(self) -> SVC:
        return self._clf

    def get_impurity(self) -> float:
        return self._impurity

    def get_features(self) -> np.array:
        return self._features

    def set_up(self, son):
        self._up = son

    def is_leaf(self) -> bool:
        return self._up is None and self._down is None

    def get_down(self) -> "Snode":
        return self._down

    def get_up(self) -> "Snode":
        return self._up

    def make_predictor(self):
        """Compute the class of the predictor and its belief based on the
        subdataset of the node only if it is a leaf
        """
        if not self.is_leaf():
            return
        classes, card = np.unique(self._y, return_counts=True)
        if len(classes) > 1:
            max_card = max(card)
            self._class = classes[card == max_card][0]
            self._belief = max_card / np.sum(card)
        else:
            self._belief = 1
            try:
                self._class = classes[0]
            except IndexError:
                self._class = None

    def graph(self):
        """
        Return a string representing the node in graphviz format
        """
        output = ""
        count_values = np.unique(self._y, return_counts=True)
        if self.is_leaf():
            output += (
                f'N{id(self)} [shape=box style=filled label="'
                f"class={self._class} impurity={self._impurity:.3f} "
                f'classes={count_values[0]} samples={count_values[1]}"];\n'
            )
        else:
            output += (
                f'N{id(self)} [label="#features={len(self._features)} '
                f"classes={count_values[0]} samples={count_values[1]} "
                f'({sum(count_values[1])})" fontcolor=black];\n'
            )
            output += f"N{id(self)} -> N{id(self.get_up())} [color=black];\n"
            output += f"N{id(self)} -> N{id(self.get_down())} [color=black];\n"
        return output

    def __str__(self) -> str:
        count_values = np.unique(self._y, return_counts=True)
        if self.is_leaf():
            return (
                f"{self._title} - Leaf class={self._class} belief="
                f"{self._belief: .6f} impurity={self._impurity:.4f} "
                f"counts={count_values}"
            )
        return (
            f"{self._title} features={self._features} impurity="
            f"{self._impurity:.4f} "
            f"counts={count_values}"
        )


class Siterator:
    """Stree preorder iterator"""

    def __init__(self, tree: Snode):
        self._stack = []
        self._push(tree)

    def __iter__(self):
        # To complete the iterator interface
        return self

    def _push(self, node: Snode):
        if node is not None:
            self._stack.append(node)

    def __next__(self) -> Snode:
        if len(self._stack) == 0:
            raise StopIteration()
        node = self._stack.pop()
        self._push(node.get_up())
        self._push(node.get_down())
        return node


class Splitter:
    """
    Splits a dataset in two based on different criteria

    Parameters
    ----------
    clf : SVC, optional
        classifier, by default None
    criterion : str, optional
        The function to measure the quality of a split (only used if
        max_features != num_features). Supported criteria are "gini" for the
        Gini impurity and "entropy" for the information gain, by default None
    feature_select : str, optional
        The strategy used to choose the feature set at each node (only used
        if max_features < num_features). Supported strategies are: "best":
        sklearn SelectKBest algorithm is used in every node to choose the
        max_features best features. "random": The algorithm generates 5
        candidates and chooses the best (max. info. gain) of them. "trandom":
        The algorithm generates only one random combination. "mutual":
        Chooses the best features w.r.t. their mutual info with the label.
        "cfs": Apply Correlation-based Feature Selection. "fcbf": Apply Fast
        Correlation-Based Filter. "iwss": IWSS based algorithm, by default
        None
    criteria : str, optional
        Decides (only in multiclass classification) which column (class) to
        use to split the dataset in a node. max_samples is incompatible with
        'ovo' multiclass_strategy, by default None
    min_samples_split : int, optional
        The minimum number of samples required to split an internal node. 0
        (default) for any, by default None
    random_state : optional
        Controls the pseudo random number generation for shuffling the data
        for probability estimates. Ignored when probability is False. Pass an
        int for reproducible output across multiple function calls, by
        default None
    normalize : bool, optional
        Whether standardization of features should be applied at each node
        with the samples that reach it, by default False

    Raises
    ------
    ValueError
        clf has to be a sklearn estimator
    ValueError
        criterion must be gini or entropy
    ValueError
        criteria has to be max_samples or impurity
    ValueError
        splitter must be in {random, trandom, best, mutual, cfs, fcbf, iwss}
    """

    def __init__(
        self,
        clf: SVC = None,
        criterion: str = None,
        feature_select: str = None,
        criteria: str = None,
        min_samples_split: int = None,
        random_state=None,
        normalize=False,
    ):
        self._clf = clf
        self._random_state = random_state
        if random_state is not None:
            random.seed(random_state)
        self._criterion = criterion
        self._min_samples_split = min_samples_split
        self._criteria = criteria
        self._feature_select = feature_select
        self._normalize = normalize

        if clf is None:
            raise ValueError(f"clf has to be a sklearn estimator, got({clf})")

        if criterion not in ["gini", "entropy"]:
            raise ValueError(
                f"criterion must be gini or entropy got({criterion})"
            )

        if criteria not in [
            "max_samples",
            "impurity",
        ]:
            raise ValueError(
                f"criteria has to be max_samples or impurity; got ({criteria})"
            )

        if feature_select not in [
            "random",
            "trandom",
            "best",
            "mutual",
            "cfs",
            "fcbf",
            "iwss",
        ]:
            raise ValueError(
                "splitter must be in {random, trandom, best, mutual, cfs, "
                "fcbf, iwss} "
                f"got ({feature_select})"
            )
        self.criterion_function = getattr(self, f"_{self._criterion}")
        self.decision_criteria = getattr(self, f"_{self._criteria}")
        self.fs_function = getattr(self, f"_fs_{self._feature_select}")

    def _fs_random(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return the best of five random feature set combinations

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        # Random feature reduction
        n_features = dataset.shape[1]
        features_sets = self._generate_spaces(n_features, max_features)
        return self._select_best_set(dataset, labels, features_sets)

    @staticmethod
    def _fs_trandom(
        dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return a random feature set combination

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        # Random feature reduction
        n_features = dataset.shape[1]
        return tuple(sorted(random.sample(range(n_features), max_features)))

    @staticmethod
    def _fs_best(
        dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return the variables with the highest f-score

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        return (
            SelectKBest(k=max_features)
            .fit(dataset, labels)
            .get_support(indices=True)
        )

    def _fs_mutual(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return the best features by mutual information with the labels

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        # return best features with mutual info with the label
        feature_list = mutual_info_classif(
            dataset, labels, random_state=self._random_state
        )
        return tuple(
            sorted(
                range(len(feature_list)), key=lambda sub: feature_list[sub]
            )[-max_features:]
        )

    @staticmethod
    def _fs_cfs(
        dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Correlation-based feature selection with max_features limit

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        mufs = MUFS(max_features=max_features, discrete=False)
        return mufs.cfs(dataset, labels).get_results()

    @staticmethod
    def _fs_fcbf(
        dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Fast Correlation-based Filter algorithm with max_features limit

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        mufs = MUFS(max_features=max_features, discrete=False)
        return mufs.fcbf(dataset, labels, 5e-4).get_results()

    @staticmethod
    def _fs_iwss(
        dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Correlation-based feature selection based on IWSS with
        max_features limit

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (< number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        mufs = MUFS(max_features=max_features, discrete=False)
        return mufs.iwss(dataset, labels, 0.25).get_results()

    def partition_impurity(self, y: np.array) -> np.array:
        return self.criterion_function(y)

    @staticmethod
    def _gini(y: np.array) -> float:
        _, count = np.unique(y, return_counts=True)
        return 1 - np.sum(np.square(count / np.sum(count)))

    @staticmethod
    def _entropy(y: np.array) -> float:
        """Compute entropy of a labels set

        Parameters
        ----------
        y : np.array
            set of labels

        Returns
        -------
        float
            entropy
        """
        n_labels = len(y)
        if n_labels <= 1:
            return 0
        counts = np.bincount(y)
        proportions = counts / n_labels
        n_classes = np.count_nonzero(proportions)
        if n_classes <= 1:
            return 0
        entropy = 0.0
        # Compute standard entropy.
        for prop in proportions:
            if prop != 0.0:
                entropy -= prop * log(prop, n_classes)
        return entropy

    def information_gain(
        self, labels: np.array, labels_up: np.array, labels_dn: np.array
    ) -> float:
        """Compute information gain of a split candidate

        Parameters
        ----------
        labels : np.array
            labels of the dataset
        labels_up : np.array
            labels of one side
        labels_dn : np.array
            labels on the other side

        Returns
        -------
        float
            information gain
        """
        imp_prev = self.criterion_function(labels)
        card_up = card_dn = imp_up = imp_dn = 0
        if labels_up is not None:
            card_up = labels_up.shape[0]
            imp_up = self.criterion_function(labels_up)
        if labels_dn is not None:
            card_dn = labels_dn.shape[0] if labels_dn is not None else 0
            imp_dn = self.criterion_function(labels_dn)
        samples = card_up + card_dn
        if samples == 0:
            return 0.0
        else:
            result = (
                imp_prev
                - (card_up / samples) * imp_up
                - (card_dn / samples) * imp_dn
            )
            return result

    def _select_best_set(
        self, dataset: np.array, labels: np.array, features_sets: list
    ) -> list:
        """Return the best set of features among feature_sets; the criterion
        is the information gain

        Parameters
        ----------
        dataset : np.array
            array of samples (# samples, # features)
        labels : np.array
            array of labels
        features_sets : list
            list of features sets to check

        Returns
        -------
        list
            best feature set
        """
        max_gain = 0
        selected = None
        warnings.filterwarnings("ignore", category=ConvergenceWarning)
        for feature_set in features_sets:
            self._clf.fit(dataset[:, feature_set], labels)
            node = Snode(
                self._clf, dataset, labels, feature_set, 0.0, "subset"
            )
            self.partition(dataset, node, train=True)
            y1, y2 = self.part(labels)
            gain = self.information_gain(labels, y1, y2)
            if gain > max_gain:
                max_gain = gain
                selected = feature_set
        return selected if selected is not None else feature_set

    @staticmethod
    def _generate_spaces(features: int, max_features: int) -> list:
        """Generate at most 5 random feature combinations

        Parameters
        ----------
        features : int
            number of features in the dataset
        max_features : int
            number of features in each combination

        Returns
        -------
        list
            list with up to 5 combinations of features randomly selected
        """
        comb = set()
        # Generate at most 5 combinations
        number = factorial(features) / (
            factorial(max_features) * factorial(features - max_features)
        )
        set_length = min(5, number)
        while len(comb) < set_length:
            comb.add(
                tuple(sorted(random.sample(range(features), max_features)))
            )
        return list(comb)

    def _get_subspaces_set(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Compute the indices of the features selected by the splitter
        depending on the self._feature_select hyperparameter

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (<= number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        # No feature reduction
        n_features = dataset.shape[1]
        if n_features == max_features:
            return tuple(range(n_features))
        # select features as selected in constructor
        return self.fs_function(dataset, labels, max_features)

    def get_subspace(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return a subspace of the selected dataset of max_features length,
        chosen according to the feature_select hyperparameter

        Parameters
        ----------
        dataset : np.array
            array of samples (# samples, # features)
        labels : np.array
            labels of the dataset
        max_features : int
            number of features to form the subspace

        Returns
        -------
        tuple
            tuple with the dataset with only the features selected and the
            indices of the features selected
        """
        indices = self._get_subspaces_set(dataset, labels, max_features)
        return dataset[:, indices], indices

    def _impurity(self, data: np.array, y: np.array) -> np.array:
        """Return column of dataset to be taken into account to split dataset

        Parameters
        ----------
        data : np.array
            distances to hyperplane of every class
        y : np.array
            vector of labels (classes)

        Returns
        -------
        np.array
            column of dataset to be taken into account to split dataset
        """
        max_gain = 0
        selected = -1
        for col in range(data.shape[1]):
            tup = y[data[:, col] > 0]
            tdn = y[data[:, col] <= 0]
            info_gain = self.information_gain(y, tup, tdn)
            if info_gain > max_gain:
                selected = col
                max_gain = info_gain
        return selected

    @staticmethod
    def _max_samples(data: np.array, y: np.array) -> np.array:
        """Return column of dataset to be taken into account to split dataset

        Parameters
        ----------
        data : np.array
            distances to hyperplane of every class
        y : np.array
            vector of labels (classes)

        Returns
        -------
        np.array
            column of dataset to be taken into account to split dataset
        """
        # select the class with max number of samples
        _, samples = np.unique(y, return_counts=True)
        return np.argmax(samples)

    def partition(self, samples: np.array, node: Snode, train: bool):
        """Set the criteria to split arrays. Compute the indices of the
        samples that should go to one side of the tree (up)

        Parameters
        ----------
        samples : np.array
            array of samples (# samples, # features)
        node : Snode
            Node of the tree where partition is going to be made
        train : bool
            Train time - True / Test time - False
        """
        # data contains the distances of every sample to every class
        # hyperplane, array of (m, nc) nc = # classes
        data = self._distances(node, samples)
        if data.shape[0] < self._min_samples_split:
            # there aren't enough samples to split
            self._up = np.ones((data.shape[0]), dtype=bool)
            return
        if data.ndim > 1:
            # split criteria for multiclass
            # Convert data to a (m, 1) array selecting values for samples
            if train:
                # in train time we have to compute the column to take into
                # account to split the dataset
                col = self.decision_criteria(data, node._y)
                node.set_partition_column(col)
            else:
                # in predict time just use the column computed in train time,
                # i.e. taking the classifier of class <col>
                col = node.get_partition_column()
            if col == -1:
                # No partition is producing information gain
                data = np.ones(data.shape)
            data = data[:, col]
        self._up = data > 0

    def part(self, origin: np.array) -> list:
        """Split an array in two based on indices (self._up) and its
        complement; partition has to be called first to establish up indices

        Parameters
        ----------
        origin : np.array
            dataset to split

        Returns
        -------
        list
            list with two splits of the array
        """
        down = ~self._up
        return [
            origin[self._up] if any(self._up) else None,
            origin[down] if any(down) else None,
        ]

    def _distances(self, node: Snode, data: np.ndarray) -> np.array:
        """Compute distances of the samples to the hyperplane of the node

        Parameters
        ----------
        node : Snode
            node containing the svm classifier
        data : np.ndarray
            samples to compute distance to hyperplane

        Returns
        -------
        np.array
            array of shape (m, nc) with the distances of every sample to
            the hyperplane of every class. nc = # of classes
        """
        X_transformed = data[:, node._features]
        if self._normalize:
            X_transformed = node._scaler.transform(X_transformed)
        return node._clf.decision_function(X_transformed)
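A short sketch of how the pieces above cooperate: a Splitter configured the way Stree configures one, asked for a feature subspace (random data and parameter values are illustrative; assumes the class is importable as stree.Splitter.Splitter):

```python
# Sketch: exercise Splitter.get_subspace() on random data (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC
from stree.Splitter import Splitter

X = np.random.rand(100, 8)
y = np.random.randint(0, 2, 100)

splitter = Splitter(
    clf=LinearSVC(random_state=0),
    criterion="entropy",      # impurity function used by information_gain
    feature_select="mutual",  # rank features by mutual information with y
    criteria="impurity",      # column choice for the multiclass partition
    min_samples_split=0,
    random_state=0,
)
X_sub, features = splitter.get_subspace(X, y, max_features=3)
print(features, X_sub.shape)  # e.g. (2, 5, 7) (100, 3)
```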
667
stree/Strees.py
667
stree/Strees.py
@@ -2,561 +2,137 @@
|
||||
Oblique decision tree classifier based on SVM nodes
|
||||
"""
|
||||
|
||||
import os
|
||||
import numbers
|
||||
import random
|
||||
import warnings
|
||||
from math import log, factorial
|
||||
from typing import Optional
|
||||
import numpy as np
|
||||
from sklearn.base import BaseEstimator, ClassifierMixin
|
||||
from sklearn.svm import SVC, LinearSVC
|
||||
from sklearn.feature_selection import SelectKBest, mutual_info_classif
|
||||
from sklearn.preprocessing import StandardScaler
|
||||
from sklearn.utils.multiclass import check_classification_targets
|
||||
from sklearn.exceptions import ConvergenceWarning
|
||||
from sklearn.utils.validation import (
|
||||
check_X_y,
|
||||
check_array,
|
||||
check_is_fitted,
|
||||
_check_sample_weight,
|
||||
)
|
||||
|
||||
|
||||
class Snode:
|
||||
"""Nodes of the tree that keeps the svm classifier and if testing the
|
||||
dataset assigned to it
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
clf: SVC,
|
||||
X: np.ndarray,
|
||||
y: np.ndarray,
|
||||
features: np.array,
|
||||
impurity: float,
|
||||
title: str,
|
||||
weight: np.ndarray = None,
|
||||
scaler: StandardScaler = None,
|
||||
):
|
||||
self._clf = clf
|
||||
self._title = title
|
||||
self._belief = 0.0
|
||||
# Only store dataset in Testing
|
||||
self._X = X if os.environ.get("TESTING", "NS") != "NS" else None
|
||||
self._y = y
|
||||
self._down = None
|
||||
self._up = None
|
||||
self._class = None
|
||||
self._feature = None
|
||||
self._sample_weight = (
|
||||
weight if os.environ.get("TESTING", "NS") != "NS" else None
|
||||
)
|
||||
self._features = features
|
||||
self._impurity = impurity
|
||||
self._partition_column: int = -1
|
||||
self._scaler = scaler
|
||||
|
||||
@classmethod
|
||||
def copy(cls, node: "Snode") -> "Snode":
|
||||
return cls(
|
||||
node._clf,
|
||||
node._X,
|
||||
node._y,
|
||||
node._features,
|
||||
node._impurity,
|
||||
node._title,
|
||||
node._sample_weight,
|
||||
node._scaler,
|
||||
)
|
||||
|
||||
def set_partition_column(self, col: int):
|
||||
self._partition_column = col
|
||||
|
||||
def get_partition_column(self) -> int:
|
||||
return self._partition_column
|
||||
|
||||
def set_down(self, son):
|
||||
self._down = son
|
||||
|
||||
def set_title(self, title):
|
||||
self._title = title
|
||||
|
||||
def set_classifier(self, clf):
|
||||
self._clf = clf
|
||||
|
||||
def set_features(self, features):
|
||||
self._features = features
|
||||
|
||||
def set_impurity(self, impurity):
|
||||
self._impurity = impurity
|
||||
|
||||
def get_title(self) -> str:
|
||||
return self._title
|
||||
|
||||
def get_classifier(self) -> SVC:
|
||||
return self._clf
|
||||
|
||||
def get_impurity(self) -> float:
|
||||
return self._impurity
|
||||
|
||||
def get_features(self) -> np.array:
|
||||
return self._features
|
||||
|
||||
def set_up(self, son):
|
||||
self._up = son
|
||||
|
||||
def is_leaf(self) -> bool:
|
||||
return self._up is None and self._down is None
|
||||
|
||||
def get_down(self) -> "Snode":
|
||||
return self._down
|
||||
|
||||
def get_up(self) -> "Snode":
|
||||
return self._up
|
||||
|
||||
def make_predictor(self):
|
||||
"""Compute the class of the predictor and its belief based on the
|
||||
subdataset of the node only if it is a leaf
|
||||
"""
|
||||
if not self.is_leaf():
|
||||
return
|
||||
classes, card = np.unique(self._y, return_counts=True)
|
||||
if len(classes) > 1:
|
||||
max_card = max(card)
|
||||
self._class = classes[card == max_card][0]
|
||||
self._belief = max_card / np.sum(card)
|
||||
else:
|
||||
self._belief = 1
|
||||
try:
|
||||
self._class = classes[0]
|
||||
except IndexError:
|
||||
self._class = None
|
||||
|
||||
def __str__(self) -> str:
|
||||
count_values = np.unique(self._y, return_counts=True)
|
||||
if self.is_leaf():
|
||||
return (
|
||||
f"{self._title} - Leaf class={self._class} belief="
|
||||
f"{self._belief: .6f} impurity={self._impurity:.4f} "
|
||||
f"counts={count_values}"
|
||||
)
|
||||
return (
|
||||
f"{self._title} feaures={self._features} impurity="
|
||||
f"{self._impurity:.4f} "
|
||||
f"counts={count_values}"
|
||||
)
|
||||
|
||||
|
||||
class Siterator:
|
||||
"""Stree preorder iterator"""
|
||||
|
||||
def __init__(self, tree: Snode):
|
||||
self._stack = []
|
||||
self._push(tree)
|
||||
|
||||
def __iter__(self):
|
||||
# To complete the iterator interface
|
||||
return self
|
||||
|
||||
def _push(self, node: Snode):
|
||||
if node is not None:
|
||||
self._stack.append(node)
|
||||
|
||||
def __next__(self) -> Snode:
|
||||
if len(self._stack) == 0:
|
||||
raise StopIteration()
|
||||
node = self._stack.pop()
|
||||
self._push(node.get_up())
|
||||
self._push(node.get_down())
|
||||
return node
|
||||
|
||||
|
||||
class Splitter:
|
||||
def __init__(
|
||||
self,
|
||||
clf: SVC = None,
|
||||
criterion: str = None,
|
||||
feature_select: str = None,
|
||||
criteria: str = None,
|
||||
min_samples_split: int = None,
|
||||
random_state=None,
|
||||
normalize=False,
|
||||
):
|
||||
self._clf = clf
|
||||
self._random_state = random_state
|
||||
if random_state is not None:
|
||||
random.seed(random_state)
|
||||
self._criterion = criterion
|
||||
self._min_samples_split = min_samples_split
|
||||
self._criteria = criteria
|
||||
self._feature_select = feature_select
|
||||
self._normalize = normalize
|
||||
|
||||
if clf is None:
|
||||
raise ValueError(f"clf has to be a sklearn estimator, got({clf})")
|
||||
|
||||
if criterion not in ["gini", "entropy"]:
|
||||
raise ValueError(
|
||||
f"criterion must be gini or entropy got({criterion})"
|
||||
)
|
||||
|
||||
if criteria not in [
|
||||
"max_samples",
|
||||
"impurity",
|
||||
]:
|
||||
raise ValueError(
|
||||
f"criteria has to be max_samples or impurity; got ({criteria})"
|
||||
)
|
||||
|
||||
if feature_select not in ["random", "best", "mutual"]:
|
||||
raise ValueError(
|
||||
"splitter must be in {random, best, mutual} got "
|
||||
f"({feature_select})"
|
||||
)
|
||||
self.criterion_function = getattr(self, f"_{self._criterion}")
|
||||
self.decision_criteria = getattr(self, f"_{self._criteria}")
|
||||
|
||||
def partition_impurity(self, y: np.array) -> np.array:
|
||||
return self.criterion_function(y)
|
||||
|
||||
@staticmethod
|
||||
def _gini(y: np.array) -> float:
|
||||
_, count = np.unique(y, return_counts=True)
|
||||
return 1 - np.sum(np.square(count / np.sum(count)))
|
||||
|
||||
    @staticmethod
    def _entropy(y: np.array) -> float:
        """Compute entropy of a labels set

        Parameters
        ----------
        y : np.array
            set of labels

        Returns
        -------
        float
            entropy
        """
        n_labels = len(y)
        if n_labels <= 1:
            return 0
        counts = np.bincount(y)
        proportions = counts / n_labels
        n_classes = np.count_nonzero(proportions)
        if n_classes <= 1:
            return 0
        entropy = 0.0
        # Compute standard entropy.
        for prop in proportions:
            if prop != 0.0:
                entropy -= prop * log(prop, n_classes)
        return entropy

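    # Worked example (illustrative, not part of the source): for
    # y = [0, 0, 1, 1, 1, 1] the proportions are [1/3, 2/3] and, with the
    # log base equal to the number of classes (2 here), _entropy returns
    #     -(1/3) * log2(1/3) - (2/3) * log2(2/3) ~= 0.9183
    # while _gini returns 1 - ((1/3)**2 + (2/3)**2) ~= 0.4444.
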
    def information_gain(
        self, labels: np.array, labels_up: np.array, labels_dn: np.array
    ) -> float:
        """Compute information gain of a split candidate

        Parameters
        ----------
        labels : np.array
            labels of the dataset
        labels_up : np.array
            labels of one side
        labels_dn : np.array
            labels of the other side

        Returns
        -------
        float
            information gain
        """
        imp_prev = self.criterion_function(labels)
        card_up = card_dn = imp_up = imp_dn = 0
        if labels_up is not None:
            card_up = labels_up.shape[0]
            imp_up = self.criterion_function(labels_up)
        if labels_dn is not None:
            card_dn = labels_dn.shape[0]
            imp_dn = self.criterion_function(labels_dn)
        samples = card_up + card_dn
        if samples == 0:
            return 0.0
        else:
            result = (
                imp_prev
                - (card_up / samples) * imp_up
                - (card_dn / samples) * imp_dn
            )
            return result

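    # Worked example (illustrative, not part of the source): splitting
    # labels = [0, 0, 1, 1, 1, 1] into labels_up = [1, 1, 1, 1] and
    # labels_dn = [0, 0] with the entropy criterion gives
    #     0.9183 - (4/6) * 0.0 - (2/6) * 0.0 = 0.9183
    # i.e. a perfect split recovers all of the parent impurity.
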
    def _select_best_set(
        self, dataset: np.array, labels: np.array, features_sets: list
    ) -> list:
        """Return the best set of features among features_sets; the
        criterion is the information gain

        Parameters
        ----------
        dataset : np.array
            array of samples (# samples, # features)
        labels : np.array
            array of labels
        features_sets : list
            list of features sets to check

        Returns
        -------
        list
            best feature set
        """
        max_gain = 0
        selected = None
        warnings.filterwarnings("ignore", category=ConvergenceWarning)
        for feature_set in features_sets:
            self._clf.fit(dataset[:, feature_set], labels)
            node = Snode(
                self._clf, dataset, labels, feature_set, 0.0, "subset"
            )
            self.partition(dataset, node, train=True)
            y1, y2 = self.part(labels)
            gain = self.information_gain(labels, y1, y2)
            if gain > max_gain:
                max_gain = gain
                selected = feature_set
        return selected if selected is not None else feature_set

    @staticmethod
    def _generate_spaces(features: int, max_features: int) -> list:
        """Generate at most 5 random feature combinations

        Parameters
        ----------
        features : int
            number of features in the dataset
        max_features : int
            number of features in each combination

        Returns
        -------
        list
            list with up to 5 combinations of features randomly selected
        """
        comb = set()
        # Generate at most 5 combinations
        number = factorial(features) / (
            factorial(max_features) * factorial(features - max_features)
        )
        set_length = min(5, number)
        while len(comb) < set_length:
            comb.add(
                tuple(sorted(random.sample(range(features), max_features)))
            )
        return list(comb)

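    # Illustrative note (not part of the source): `number` above is the
    # binomial coefficient C(features, max_features), so for features=4 and
    # max_features=2 there are 6 possible subsets and the loop collects
    # min(5, 6) = 5 distinct ones, e.g.
    #     Splitter._generate_spaces(4, 2)  # -> e.g. [(0, 1), (1, 3), ...]
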
    def _get_subspaces_set(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Compute the indices of the features selected by the splitter
        depending on the self._feature_select hyperparameter

        Parameters
        ----------
        dataset : np.array
            array of samples
        labels : np.array
            labels of the dataset
        max_features : int
            number of features of the subspace
            (<= number of features in dataset)

        Returns
        -------
        tuple
            indices of the features selected
        """
        # No feature reduction
        if dataset.shape[1] == max_features:
            return tuple(range(dataset.shape[1]))
        # Random feature reduction
        if self._feature_select == "random":
            features_sets = self._generate_spaces(
                dataset.shape[1], max_features
            )
            return self._select_best_set(dataset, labels, features_sets)
        # return the KBest features
        if self._feature_select == "best":
            return (
                SelectKBest(k=max_features)
                .fit(dataset, labels)
                .get_support(indices=True)
            )
        # return the best features w.r.t. their mutual info with the label
        feature_list = mutual_info_classif(dataset, labels)
        return tuple(
            sorted(
                range(len(feature_list)), key=lambda sub: feature_list[sub]
            )[-max_features:]
        )

    def get_subspace(
        self, dataset: np.array, labels: np.array, max_features: int
    ) -> tuple:
        """Return a subspace of the selected dataset of max_features length,
        depending on the hyperparameter

        Parameters
        ----------
        dataset : np.array
            array of samples (# samples, # features)
        labels : np.array
            labels of the dataset
        max_features : int
            number of features to form the subspace

        Returns
        -------
        tuple
            tuple with the dataset with only the features selected and the
            indices of the features selected
        """
        indices = self._get_subspaces_set(dataset, labels, max_features)
        return dataset[:, indices], indices

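    # Illustrative sketch (not part of the source): with a (150, 4) dataset
    # and max_features=2, get_subspace returns the reduced (150, 2) dataset
    # together with the chosen column indices:
    #
    #     Xs, idx = splitter.get_subspace(X, y, max_features=2)
    #     assert Xs.shape == (X.shape[0], 2)
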
    def _impurity(self, data: np.array, y: np.array) -> np.array:
        """Return the column of the distances matrix to be taken into
        account to split the dataset, i.e. the one with the highest
        information gain (-1 if no column produces any gain)

        Parameters
        ----------
        data : np.array
            distances to the hyperplane of every class
        y : np.array
            vector of labels (classes)

        Returns
        -------
        np.array
            column of data to be taken into account to split the dataset
        """
        max_gain = 0
        selected = -1
        for col in range(data.shape[1]):
            tup = y[data[:, col] > 0]
            tdn = y[data[:, col] <= 0]
            info_gain = self.information_gain(y, tup, tdn)
            if info_gain > max_gain:
                selected = col
                max_gain = info_gain
        return selected

    @staticmethod
    def _max_samples(data: np.array, y: np.array) -> np.array:
        """Return the column of the distances matrix to be taken into
        account to split the dataset

        Parameters
        ----------
        data : np.array
            distances to the hyperplane of every class
        y : np.array
            vector of labels (classes)

        Returns
        -------
        np.array
            column of data to be taken into account to split the dataset
        """
        # select the class with max number of samples
        _, samples = np.unique(y, return_counts=True)
        return np.argmax(samples)

    def partition(self, samples: np.array, node: Snode, train: bool):
        """Set the criteria to split arrays. Compute the indices of the
        samples that should go to one side of the tree (up)

        Parameters
        ----------
        samples : np.array
            array of samples (# samples, # features)
        node : Snode
            node of the tree where the partition is going to be made
        train : bool
            Train time - True / Test time - False
        """
        # data contains the distances of every sample to every class
        # hyperplane, an array of (m, nc), nc = # of classes
        data = self._distances(node, samples)
        if data.shape[0] < self._min_samples_split:
            # there aren't enough samples to split
            self._up = np.ones((data.shape[0]), dtype=bool)
            return
        if data.ndim > 1:
            # split criteria for multiclass
            # Convert data to a (m, 1) array selecting values for samples
            if train:
                # at train time we have to compute the column to take into
                # account to split the dataset
                col = self.decision_criteria(data, node._y)
                node.set_partition_column(col)
            else:
                # at predict time just use the column computed at train
                # time, i.e. take the classifier of class <col>
                col = node.get_partition_column()
                if col == -1:
                    # No partition is producing information gain
                    data = np.ones(data.shape)
            data = data[:, col]
        self._up = data > 0

    def part(self, origin: np.array) -> list:
        """Split an array in two based on indices (self._up) and its
        complement; partition has to be called first to establish the up
        indices

        Parameters
        ----------
        origin : np.array
            dataset to split

        Returns
        -------
        list
            list with the two splits of the array
        """
        down = ~self._up
        return [
            origin[self._up] if any(self._up) else None,
            origin[down] if any(down) else None,
        ]

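    # Illustrative sketch (not part of the source): partition() must be
    # called before part(), because it computes the boolean mask self._up
    # from the distances of the samples to the node hyperplane:
    #
    #     splitter.partition(X, node, train=True)
    #     X_up, X_dn = splitter.part(X)
    #     y_up, y_dn = splitter.part(y)
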
    def _distances(self, node: Snode, data: np.ndarray) -> np.array:
        """Compute distances of the samples to the hyperplane of the node

        Parameters
        ----------
        node : Snode
            node containing the svm classifier
        data : np.ndarray
            samples to compute distance to hyperplane

        Returns
        -------
        np.array
            array of shape (m, nc) with the distances of every sample to
            the hyperplane of every class. nc = # of classes
        """
        X_transformed = data[:, node._features]
        if self._normalize:
            X_transformed = node._scaler.transform(X_transformed)
        return node._clf.decision_function(X_transformed)
from .Splitter import Splitter, Snode, Siterator
from ._version import __version__


class Stree(BaseEstimator, ClassifierMixin):
    """
    Estimator that is based on binary trees of svm nodes
    can deal with sample_weights in predict, used in boosting sklearn methods
    inheriting from BaseEstimator implements get_params and set_params methods
    inheriting from ClassifierMixin implement the attribute _estimator_type
    with "classifier" as value

    Parameters
    ----------
    C : float, optional
        Regularization parameter. The strength of the regularization is
        inversely proportional to C. Must be strictly positive, by default
        1.0
    kernel : str, optional
        Specifies the kernel type to be used in the algorithm. It must be
        one of ‘liblinear’, ‘linear’, ‘poly’ or ‘rbf’. liblinear uses the
        [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library
        and the rest use the
        [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through
        the scikit-learn library, by default "linear"
    max_iter : int, optional
        Hard limit on iterations within solver, or -1 for no limit, by
        default 1e5
    random_state : int, optional
        Controls the pseudo random number generation for shuffling the data
        for probability estimates. Ignored when probability is False. Pass
        an int for reproducible output across multiple function calls, by
        default None
    max_depth : int, optional
        Specifies the maximum depth of the tree, by default None
    tol : float, optional
        Tolerance for stopping, by default 1e-4
    degree : int, optional
        Degree of the polynomial kernel function (‘poly’). Ignored by all
        other kernels, by default 3
    gamma : str, optional
        Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. If
        gamma='scale' (default) is passed then it uses
        1 / (n_features * X.var()) as value of gamma; if ‘auto’, uses
        1 / n_features, by default "scale"
    split_criteria : str, optional
        Decides (just in case of a multi class classification) which column
        (class) to use to split the dataset in a node. max_samples is
        incompatible with the 'ovo' multiclass_strategy, by default
        "impurity"
    criterion : str, optional
        The function to measure the quality of a split (only used if
        max_features != num_features). Supported criteria are “gini” for
        the Gini impurity and “entropy” for the information gain, by
        default "entropy"
    min_samples_split : int, optional
        The minimum number of samples required to split an internal node.
        0 (default) for any, by default 0
    max_features : optional
        The number of features to consider when looking for the split: If
        int, then consider max_features features at each split. If float,
        then max_features is a fraction and int(max_features * n_features)
        features are considered at each split. If “auto”, then
        max_features=sqrt(n_features). If “sqrt”, then
        max_features=sqrt(n_features). If “log2”, then
        max_features=log2(n_features). If None, then
        max_features=n_features, by default None
    splitter : str, optional
        The strategy used to choose the feature set at each node (only used
        if max_features < num_features). Supported strategies are: “best”:
        the sklearn SelectKBest algorithm is used in every node to choose
        the max_features best features. “random”: The algorithm generates 5
        candidates and chooses the best (max. info. gain) of them.
        “trandom”: The algorithm generates only one random combination.
        "mutual": Chooses the best features w.r.t. their mutual info with
        the label. "cfs": Apply Correlation-based Feature Selection.
        "fcbf": Apply Fast Correlation-Based Filter, by default "random"
    multiclass_strategy : str, optional
        Strategy to use with multiclass datasets, "ovo": one versus one,
        "ovr": one versus rest, by default "ovo"
    normalize : bool, optional
        If standardization of features should be applied on each node with
        the samples that reach it, by default False

    Attributes
    ----------
    classes_ : ndarray of shape (n_classes,)
        The classes labels.

    n_classes_ : int
        The number of classes

    n_iter_ : int
        Max number of iterations in classifier

    depth_ : int
        Max depth of the tree

    n_features_ : int
        The number of features when ``fit`` is performed.

    n_features_in_ : int
        Number of features seen during :term:`fit`.

    max_features_ : int
        Number of features to use in hyperplane computation

    tree_ : Node
        root of the tree

    X_ : ndarray
        points to the input dataset

    y_ : ndarray
        points to the input labels

    References
    ----------
    R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class
    oblique decision tree based on support vector machines.", 2021 LNAI
    12882
    """

    def __init__(
@@ -577,6 +153,7 @@ class Stree(BaseEstimator, ClassifierMixin):
        multiclass_strategy: str = "ovo",
        normalize: bool = False,
    ):
        self.max_iter = max_iter
        self.C = C
        self.kernel = kernel
@@ -593,6 +170,11 @@ class Stree(BaseEstimator, ClassifierMixin):
        self.normalize = normalize
        self.multiclass_strategy = multiclass_strategy

    @staticmethod
    def version() -> str:
        """Return the version of the package."""
        return __version__

    def _more_tags(self) -> dict:
        """Required by sklearn to supply features of the classifier
        make mandatory the labels array
@@ -894,6 +476,23 @@ class Stree(BaseEstimator, ClassifierMixin):
            tree = None
        return Siterator(tree)

    def graph(self, title="") -> str:
        """Graphviz code representing the tree

        Returns
        -------
        str
            graphviz code
        """
        output = (
            "digraph STree {\nlabel=<STree "
            f"{title}>\nfontsize=30\nfontcolor=blue\nlabelloc=t\n"
        )
        for node in self:
            output += node.graph()
        output += "}\n"
        return output

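    # Illustrative sketch (not part of the source): the DOT string returned
    # by graph() can be rendered with, for instance, the `graphviz` package
    # (an external dependency, not required by STree):
    #
    #     import graphviz
    #     graphviz.Source(clf.graph("wine")).render("stree", format="png")
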
    def __str__(self) -> str:
        """String representation of the tree

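A minimal end-to-end sketch (illustrative, not part of the diff) tying the
pieces above together; it only uses the public API shown in this patch:
fit(), score() (via ClassifierMixin), version(), graph() and the preorder
iterator:

    from sklearn.datasets import load_wine
    from stree import Stree

    X, y = load_wine(return_X_y=True)
    clf = Stree(random_state=0).fit(X, y)
    print(clf.version())      # e.g. "1.2.4"
    print(clf.score(X, y))    # training accuracy
    for node in clf:          # preorder traversal through Siterator
        print(node)
    print(clf.graph("wine"))  # graphviz (DOT) source of the tree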
@@ -1,10 +1,8 @@
from .Strees import Stree, Snode, Siterator, Splitter

__version__ = "1.1"
from .Strees import Stree, Siterator

__author__ = "Ricardo Montañana Gómez"
__copyright__ = "Copyright 2020-2021, Ricardo Montañana Gómez"
__license__ = "MIT License"
__author_email__ = "ricardo.montanana@alu.uclm.es"

__all__ = ["Stree", "Snode", "Siterator", "Splitter"]
__all__ = ["Stree", "Siterator"]
1
stree/_version.py
Normal file
@@ -0,0 +1 @@
__version__ = "1.2.4"
@@ -1,7 +1,8 @@
import os
import unittest
import numpy as np

from stree import Stree, Snode
from stree import Stree
from stree.Splitter import Snode
from .utils import load_dataset

@@ -5,8 +5,8 @@ import random
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_wine, load_iris
from stree import Splitter
from .utils import load_dataset
from stree.Splitter import Splitter
from .utils import load_dataset, load_disc_dataset


class Splitter_test(unittest.TestCase):
@@ -244,3 +244,69 @@ class Splitter_test(unittest.TestCase):
            Xs, computed = tcl.get_subspace(X, y, k)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())

    def test_get_best_subspaces_discrete(self):
        results = [
            (4, [0, 3, 16, 18]),
            (7, [0, 3, 13, 14, 16, 18, 19]),
            (9, [0, 3, 7, 13, 14, 15, 16, 18, 19]),
        ]
        X, y = load_disc_dataset(n_features=20)
        for k, expected in results:
            tcl = self.build(
                feature_select="best",
            )
            Xs, computed = tcl.get_subspace(X, y, k)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())

    def test_get_cfs_subspaces(self):
        results = [
            (4, [1, 5, 9, 12]),
            (6, [1, 5, 9, 12, 4, 2]),
            (7, [1, 5, 9, 12, 4, 2, 3]),
        ]
        X, y = load_dataset(n_features=20, n_informative=7)
        for k, expected in results:
            tcl = self.build(feature_select="cfs")
            Xs, computed = tcl.get_subspace(X, y, k)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())

    def test_get_fcbf_subspaces(self):
        results = [
            (4, [1, 5, 9, 12]),
            (6, [1, 5, 9, 12, 4, 2]),
            (7, [1, 5, 9, 12, 4, 2, 16]),
        ]
        for rs, expected in results:
            X, y = load_dataset(n_features=20, n_informative=7)
            tcl = self.build(feature_select="fcbf", random_state=rs)
            Xs, computed = tcl.get_subspace(X, y, rs)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())

    def test_get_iwss_subspaces(self):
        results = [
            (4, [1, 5, 9, 12]),
            (6, [1, 5, 9, 12, 4, 15]),
        ]
        for rs, expected in results:
            X, y = load_dataset(n_features=20, n_informative=7)
            tcl = self.build(feature_select="iwss", random_state=rs)
            Xs, computed = tcl.get_subspace(X, y, rs)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())

    def test_get_trandom_subspaces(self):
        results = [
            (4, [3, 7, 9, 12]),
            (6, [0, 1, 2, 8, 15, 18]),
            (7, [1, 2, 4, 8, 10, 12, 13]),
        ]
        for rs, expected in results:
            X, y = load_dataset(n_features=20, n_informative=7)
            tcl = self.build(feature_select="trandom", random_state=rs)
            Xs, computed = tcl.get_subspace(X, y, rs)
            self.assertListEqual(expected, list(computed))
            self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
@@ -7,8 +7,10 @@ from sklearn.datasets import load_iris, load_wine
from sklearn.exceptions import ConvergenceWarning
from sklearn.svm import LinearSVC

from stree import Stree, Snode
from stree import Stree
from stree.Splitter import Snode
from .utils import load_dataset
from .._version import __version__


class Stree_test(unittest.TestCase):
@@ -356,6 +358,7 @@ class Stree_test(unittest.TestCase):

    # Tests of score
    def test_score_binary(self):
        """Check score for binary classification."""
        X, y = load_dataset(self._random_state)
        accuracies = [
            0.9506666666666667,
@@ -378,6 +381,7 @@ class Stree_test(unittest.TestCase):
        self.assertAlmostEqual(accuracy_expected, accuracy_score)

    def test_score_max_features(self):
        """Check score using max_features."""
        X, y = load_dataset(self._random_state)
        clf = Stree(
            kernel="liblinear",
@@ -389,6 +393,7 @@ class Stree_test(unittest.TestCase):
        self.assertAlmostEqual(0.9453333333333334, clf.score(X, y))

    def test_bogus_splitter_parameter(self):
        """Check that a bogus splitter parameter raises an exception."""
        clf = Stree(splitter="duck")
        with self.assertRaises(ValueError):
            clf.fit(*load_dataset())
@@ -444,6 +449,7 @@ class Stree_test(unittest.TestCase):
        self.assertListEqual([47], resdn[1].tolist())

    def test_score_multiclass_rbf(self):
        """Test score for multiclass classification with rbf kernel."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -461,6 +467,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

    def test_score_multiclass_poly(self):
        """Test score for multiclass classification with poly kernel."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -482,6 +489,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

    def test_score_multiclass_liblinear(self):
        """Test score for multiclass classification with liblinear kernel."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -507,6 +515,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

    def test_score_multiclass_sigmoid(self):
        """Test score for multiclass classification with sigmoid kernel."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -527,6 +536,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(0.9662921348314607, clf2.fit(X, y).score(X, y))

    def test_score_multiclass_linear(self):
        """Test score for multiclass classification with linear kernel."""
        warnings.filterwarnings("ignore", category=ConvergenceWarning)
        warnings.filterwarnings("ignore", category=RuntimeWarning)
        X, y = load_dataset(
@@ -554,11 +564,13 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(1.0, clf2.fit(X, y).score(X, y))

    def test_zero_all_sample_weights(self):
        """Check that an exception is raised when all sample weights are
        zero."""
        X, y = load_dataset(self._random_state)
        with self.assertRaises(ValueError):
            Stree().fit(X, y, np.zeros(len(y)))

    def test_mask_samples_weighted_zero(self):
        """Check that the zero-weighted samples are masked."""
        X = np.array(
            [
                [1, 1],
@@ -586,6 +598,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(model2.score(X, y, w), 1)

    def test_depth(self):
        """Check the depth of the tree."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -601,6 +614,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(4, clf.depth_)

    def test_nodes_leaves(self):
        """Check the number of nodes and leaves."""
        X, y = load_dataset(
            random_state=self._random_state,
            n_classes=3,
@@ -620,6 +634,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(6, leaves)

    def test_nodes_leaves_artificial(self):
        """Check the leaves of an artificial dataset."""
        n1 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test1")
        n2 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test2")
        n3 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test3")
@@ -638,12 +653,14 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(2, leaves)

    def test_bogus_multiclass_strategy(self):
        """Check that an invalid multiclass strategy raises an exception."""
        clf = Stree(multiclass_strategy="other")
        X, y = load_wine(return_X_y=True)
        with self.assertRaises(ValueError):
            clf.fit(X, y)

    def test_multiclass_strategy(self):
        """Check the multiclass strategy."""
        X, y = load_wine(return_X_y=True)
        clf_o = Stree(multiclass_strategy="ovo")
        clf_r = Stree(multiclass_strategy="ovr")
@@ -653,6 +670,7 @@ class Stree_test(unittest.TestCase):
        self.assertEqual(0.9269662921348315, score_r)

    def test_incompatible_hyperparameters(self):
        """Check incompatible hyperparameters."""
        X, y = load_wine(return_X_y=True)
        clf = Stree(kernel="liblinear", multiclass_strategy="ovo")
        with self.assertRaises(ValueError):
@@ -660,3 +678,50 @@ class Stree_test(unittest.TestCase):
        clf = Stree(multiclass_strategy="ovo", split_criteria="max_samples")
        with self.assertRaises(ValueError):
            clf.fit(X, y)

    def test_version(self):
        """Check the STree version."""
        clf = Stree()
        self.assertEqual(__version__, clf.version())

    def test_graph(self):
        """Check the graphviz representation of the tree."""
        X, y = load_wine(return_X_y=True)
        clf = Stree(random_state=self._random_state)

        expected_head = (
            "digraph STree {\nlabel=<STree >\nfontsize=30\n"
            "fontcolor=blue\nlabelloc=t\n"
        )
        expected_tail = (
            ' [shape=box style=filled label="class=1 impurity=0.000 '
            'classes=[1] samples=[1]"];\n}\n'
        )
        self.assertEqual(clf.graph(), expected_head + "}\n")
        clf.fit(X, y)
        computed = clf.graph()
        computed_head = computed[: len(expected_head)]
        num = -len(expected_tail)
        computed_tail = computed[num:]
        self.assertEqual(computed_head, expected_head)
        self.assertEqual(computed_tail, expected_tail)

    def test_graph_title(self):
        """Check the graphviz representation of the tree with a title."""
        X, y = load_wine(return_X_y=True)
        clf = Stree(random_state=self._random_state)
        expected_head = (
            "digraph STree {\nlabel=<STree Sample title>\nfontsize=30\n"
            "fontcolor=blue\nlabelloc=t\n"
        )
        expected_tail = (
            ' [shape=box style=filled label="class=1 impurity=0.000 '
            'classes=[1] samples=[1]"];\n}\n'
        )
        self.assertEqual(clf.graph("Sample title"), expected_head + "}\n")
        clf.fit(X, y)
        computed = clf.graph("Sample title")
        computed_head = computed[: len(expected_head)]
        num = -len(expected_tail)
        computed_tail = computed[num:]
        self.assertEqual(computed_head, expected_head)
        self.assertEqual(computed_tail, expected_tail)
@@ -1,11 +1,14 @@
from sklearn.datasets import make_classification
import numpy as np


def load_dataset(random_state=0, n_classes=2, n_features=3, n_samples=1500):
def load_dataset(
    random_state=0, n_classes=2, n_features=3, n_samples=1500, n_informative=3
):
    X, y = make_classification(
        n_samples=n_samples,
        n_features=n_features,
        n_informative=3,
        n_informative=n_informative,
        n_redundant=0,
        n_repeated=0,
        n_classes=n_classes,
@@ -15,3 +18,12 @@ def load_dataset(random_state=0, n_classes=2, n_features=3, n_samples=1500):
        random_state=random_state,
    )
    return X, y


def load_disc_dataset(
    random_state=0, n_classes=2, n_features=3, n_samples=1500
):
    np.random.seed(random_state)
    X = np.random.randint(1, 17, size=(n_samples, n_features)).astype(float)
    y = np.random.randint(low=0, high=n_classes, size=(n_samples), dtype=int)
    return X, y
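A short sketch (illustrative, not part of the diff) of how the two helpers
differ; the names are exactly the ones defined above:

    X, y = load_dataset(n_features=20, n_informative=7)  # continuous features
    Xd, yd = load_disc_dataset(n_features=20)            # integers in [1, 16]
    print(X.shape, Xd.shape)                             # (1500, 20) (1500, 20)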