Compare commits


30 Commits

Author SHA1 Message Date
Ricardo Montañana Gómez
93be8a89a8 Graphviz (#52)
* Add graphviz representation of the tree

* Complete graphviz test
Add comments to some tests

* Add optional title to tree graph

* Add fontcolor keyword to nodes of the tree

* Add color keyword to arrows of graph

* Update version file to 1.2.4
2022-04-17 19:47:58 +02:00
82838fa3e0 Add audit and devdeps to Makefile 2022-01-11 11:02:09 +01:00
f0b2ce3c7b Fix github actions lint mistake 2022-01-11 10:44:45 +01:00
00ed57c015 Add version of the model method 2021-12-17 11:01:09 +01:00
Ricardo Montañana Gómez
08222f109e Update CITATION.cff 2021-11-04 11:06:13 +01:00
cc931d8547 Fix random seed not used in fs_mutual 2021-11-04 10:04:30 +01:00
b044a057df Update comments and README.md 2021-11-02 14:04:10 +01:00
fc48bc8ba4 Update docs and version number 2021-11-02 12:17:46 +01:00
Ricardo Montañana Gómez
8251f07674 Fix Citation (#49) 2021-11-02 10:58:30 +01:00
Ricardo Montañana Gómez
0b15a5af11 Fix space in CITATION.cff 2021-11-02 00:25:21 +01:00
Ricardo Montañana Gómez
28d905368b Create CITATION.cff 2021-11-02 00:20:49 +01:00
e5d49132ec Update benchmark hyperparams of STree 2021-10-31 12:41:30 +01:00
8daecc4726 Remove obsolete binder links 2021-10-31 11:51:31 +01:00
Ricardo Montañana Gómez
bf678df159 (#46) Implement true random feature selection (#48)
* (#46) Implement true random feature selection
2021-10-29 12:59:03 +02:00
Ricardo Montañana Gómez
36b08b1bcf Implement iwss feature selection (#45) (#47) 2021-10-29 11:49:46 +02:00
36ff3da26d Update Docs 2021-09-13 18:32:59 +02:00
Ricardo Montañana Gómez
6b281ebcc8 Add DOI to README 2021-09-13 18:23:11 +02:00
Ricardo Montañana Gómez
3aaddd096f Add package version badge in README 2021-08-17 12:00:36 +02:00
Ricardo Montañana Gómez
15a5a4c407 Add python 3.8 badge to README
Add badge from shields.io
2021-08-12 11:05:07 +02:00
Ricardo Montañana Gómez
0afe14a447 Mfstomufs #43 (#44)
* Implement module mfs changed name to mufs

* Update github CI file
2021-08-02 18:03:59 +02:00
Ricardo Montañana Gómez
fc9b7b5c92 Update version info (#42)
* Update version info and update docs (#41)
2021-07-31 01:45:16 +02:00
Ricardo Montañana Gómez
3f79d2877f Add cfs fcbf #39 (#40)
* Implement CFS/FCBF in splitter

* Split Splitter class to its own file
Update hyperparams table in docs
Implement CFS/FCBF with max_features and variable type

* Set mfs to continuous variables

* Fix some tests and style issues in Splitter

* Update requirements in github CI
2021-07-30 20:01:08 +02:00
ecc2800705 Fix mistakes in README and in docs 2021-07-21 11:24:37 +02:00
0524d47d64 Complete splitter description in hyperparameters 2021-07-14 18:10:46 +02:00
d46f544466 Add docs config
Update setup remove ipympl dependency
Update Project Name
add build to Makefile
2021-05-11 19:11:03 +02:00
79190ef2e1 Add doc-clean and lgtm badge 2021-05-11 09:03:26 +02:00
Ricardo Montañana Gómez
4f04e72670 Implement ovo strategy (#37)
* Implement ovo strategy
* Set ovo strategy as default
* Add kernel liblinear with LinearSVC classifier
* Fix weak test
2021-05-10 12:16:53 +02:00
5cef0f4875 Implement splitter type mutual info 2021-05-01 23:38:34 +02:00
28c7558f01 Update Readme
Add max_features > n_features test
Add make doc
2021-04-27 23:15:21 +02:00
Ricardo Montañana Gómez
e19d10f6a7 Package doc #7 (#34)
* Add first doc info to sources

* Update doc to separate classes in api

* Refactor build_predictor

* Fix random_state issue in non-linear kernels

* Refactor score method using base class implementation

* Some quality refactoring

* Fix codecov config.

* Add sigmoid kernel

* Refactor setup and add Makefile
2021-04-26 09:10:01 +02:00
25 changed files with 1482 additions and 655 deletions

GitHub Actions CI workflow

@@ -12,7 +12,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-latest, ubuntu-latest]
os: [macos-latest, ubuntu-latest, windows-latest]
python: [3.8]
steps:

CITATION.cff (new file, +37 lines)

@@ -0,0 +1,37 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Montañana"
given-names: "Ricardo"
orcid: "https://orcid.org/0000-0003-3242-5452"
- family-names: "Gámez"
given-names: "José A."
orcid: "https://orcid.org/0000-0003-1188-1117"
- family-names: "Puerta"
given-names: "José M."
orcid: "https://orcid.org/0000-0002-9164-5191"
title: "STree"
version: 1.2.3
doi: 10.5281/zenodo.5504083
date-released: 2021-11-02
url: "https://github.com/Doctorado-ML/STree"
preferred-citation:
type: article
authors:
- family-names: "Montañana"
given-names: "Ricardo"
orcid: "https://orcid.org/0000-0003-3242-5452"
- family-names: "Gámez"
given-names: "José A."
orcid: "https://orcid.org/0000-0003-1188-1117"
- family-names: "Puerta"
given-names: "José M."
orcid: "https://orcid.org/0000-0002-9164-5191"
doi: "10.1007/978-3-030-85713-4_6"
journal: "Lecture Notes in Computer Science"
month: 9
start: 54
end: 64
title: "STree: A Single Multi-class Oblique Decision Tree Based on Support Vector Machines"
volume: 12882
year: 2021

Makefile

@@ -1,6 +1,6 @@
SHELL := /bin/bash
.DEFAULT_GOAL := help
.PHONY: coverage deps help lint push test
.PHONY: coverage deps help lint push test doc build
coverage: ## Run tests with coverage
coverage erase
@@ -10,6 +10,9 @@ coverage: ## Run tests with coverage
deps: ## Install dependencies
pip install -r requirements.txt
devdeps: ## Install development dependencies
pip install black pip-audit flake8 mypy coverage
lint: ## Lint and static-check
black stree
flake8 stree
@@ -21,6 +24,20 @@ push: ## Push code with tags
test: ## Run tests
python -m unittest -v stree.tests
doc: ## Update documentation
make -C docs --makefile=Makefile html
build: ## Build package
rm -fr dist/*
rm -fr build/*
python setup.py sdist bdist_wheel
doc-clean: ## Clean documentation
make -C docs --makefile=Makefile clean
audit: ## Audit pip
pip-audit
help: ## Show help message
@IFS=$$'\n' ; \
help_lines=(`fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##/:/'`); \

README.md

@@ -1,8 +1,12 @@
![CI](https://github.com/Doctorado-ML/STree/workflows/CI/badge.svg)
[![codecov](https://codecov.io/gh/doctorado-ml/stree/branch/master/graph/badge.svg)](https://codecov.io/gh/doctorado-ml/stree)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/35fa3dfd53a24a339344b33d9f9f2f3d)](https://www.codacy.com/gh/Doctorado-ML/STree?utm_source=github.com&utm_medium=referral&utm_content=Doctorado-ML/STree&utm_campaign=Badge_Grade)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Doctorado-ML/STree.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Doctorado-ML/STree/context:python)
[![PyPI version](https://badge.fury.io/py/STree.svg)](https://badge.fury.io/py/STree)
![https://img.shields.io/badge/python-3.8%2B-blue](https://img.shields.io/badge/python-3.8%2B-brightgreen)
[![DOI](https://zenodo.org/badge/262658230.svg)](https://zenodo.org/badge/latestdoi/262658230)
# Stree
# STree
Oblique Tree classifier based on SVM nodes. The nodes are built and split with sklearn SVC models. Stree is a sklearn estimator and can be integrated in pipelines, grid searches, etc.
@@ -16,14 +20,12 @@ pip install git+https://github.com/doctorado-ml/stree
## Documentation
Can be found in
Can be found in [stree.readthedocs.io](https://stree.readthedocs.io/en/stable/)
## Examples
### Jupyter notebooks
- [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
- [![benchmark](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
- [![features](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Some features
@@ -34,22 +36,23 @@ Can be found in
## Hyperparameters
| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
| --- | ------------------ | ------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* | kernel | {"linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of linear, poly or rbf. |
| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for rbf and poly.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\* |
| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| | splitter | {"best", "random"} | random | The strategy used to choose the feature set at each node (only used if max_features != num_features). <br>Supported strategies are “best” to choose the best feature set and “random” to choose a random combination. <br>The algorithm generates 5 candidates at most to choose from in both strategies. |
| | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
| --- | ------------------- | -------------------------------------------------------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of liblinear, linear, poly, rbf or sigmoid. liblinear uses the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest use the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn |
| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for rbf, poly and sigmoid.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multiclass classification) which column (class) to use to split the dataset in a node\*\*. max_samples is incompatible with the 'ovo' multiclass_strategy |
| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **"best"**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **"random"**: The algorithm generates 5 candidates and chooses the best (max. info. gain) of them. **"trandom"**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Applies Correlation-based Feature Selection. **"fcbf"**: Applies the Fast Correlation-Based Filter. **"iwss"**: Applies the IWSS-based algorithm |
| | normalize | \<bool\> | False | Whether standardization of features should be applied on each node with the samples that reach it |
| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |
\* Hyperparameter used by the support vector classifier of every node
@@ -70,3 +73,7 @@ python -m unittest -v stree.tests
## License
STree is [MIT](https://github.com/doctorado-ml/stree/blob/master/LICENSE) licensed
## Reference
R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class oblique decision tree based on support vector machines.", 2021 LNAI 12882, pp. 54-64
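
To make the new options concrete, here is a minimal usage sketch (not part of this compare; it assumes STree is installed from PyPI together with scikit-learn, and the dataset choice is arbitrary) exercising the liblinear kernel, the ovr multiclass strategy, and one of the new splitter strategies from the table above:

```python
# Hedged sketch: exercises hyperparameters added in this compare.
# Assumes `pip install stree scikit-learn`.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from stree import Stree

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Stree(
    kernel="liblinear",         # new kernel backed by the liblinear library
    multiclass_strategy="ovr",  # pairing used in the updated benchmark notebook
    splitter="cfs",             # Correlation-based Feature Selection (via mufs)
    max_features="sqrt",        # splitter only applies if max_features < n_features
    random_state=0,
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```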

docs/requirements.txt

@@ -1,3 +1,4 @@
sphinx
sphinx-rtd-theme
myst-parser
mufs

docs: Siterator.rst

@@ -1,7 +1,7 @@
Siterator
=========
.. automodule:: stree
.. automodule:: Splitter
.. autoclass:: Siterator
:members:
:undoc-members:

docs: Snode.rst

@@ -1,7 +1,7 @@
Snode
=====
.. automodule:: stree
.. automodule:: Splitter
.. autoclass:: Snode
:members:
:undoc-members:

docs: Splitter.rst

@@ -1,7 +1,7 @@
Splitter
========
.. automodule:: stree
.. automodule:: Splitter
.. autoclass:: Splitter
:members:
:undoc-members:

docs: API index

@@ -6,6 +6,6 @@ API index
:caption: Contents:
Stree
Splitter
Snode
Siterator
Snode
Splitter

docs/source/conf.py

@@ -12,6 +12,7 @@
#
import os
import sys
import stree
sys.path.insert(0, os.path.abspath("../../stree/"))
@@ -23,7 +24,8 @@ copyright = "2020 - 2021, Ricardo Montañana Gómez"
author = "Ricardo Montañana Gómez"
# The full version, including alpha/beta/rc tags
release = "1.0"
version = stree.__version__
release = version
# -- General configuration ---------------------------------------------------
@@ -52,4 +54,4 @@ html_theme = "sphinx_rtd_theme"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
html_static_path = []

docs: notebooks page

@@ -2,8 +2,6 @@
## Notebooks
- [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
- [![benchmark](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
- [![features](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Some features

docs: hyperparameters page

@@ -1,21 +1,22 @@
# Hyperparameters
| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
| --- | ------------------ | ------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* | kernel | {"linear", "poly", "rbf"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of linear, poly or rbf. |
| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for rbf and poly.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multi class classification) which column (class) use to split the dataset in a node\*\* |
| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| | splitter | {"best", "random"} | random | The strategy used to choose the feature set at each node (only used if max_features != num_features). <br>Supported strategies are “best” to choose the best feature set and “random” to choose a random combination. <br>The algorithm generates 5 candidates at most to choose from in both strategies. |
| | normalize | \<bool\> | False | If standardization of features should be applied on each node with the samples that reach it |
| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
| --- | ------------------- | -------------------------------------------------------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* | kernel | {"liblinear", "linear", "poly", "rbf", "sigmoid"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of liblinear, linear, poly, rbf or sigmoid. liblinear uses the [liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and the rest use the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/) library through scikit-learn |
| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for rbf, poly and sigmoid.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (just in case of a multiclass classification) which column (class) to use to split the dataset in a node\*\*. max_samples is incompatible with the 'ovo' multiclass_strategy |
| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) for any |
| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| | splitter | {"best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"} | "random" | The strategy used to choose the feature set at each node (only used if max_features < num_features). Supported strategies are: **"best"**: sklearn SelectKBest algorithm is used in every node to choose the max_features best features. **"random"**: The algorithm generates 5 candidates and chooses the best (max. info. gain) of them. **"trandom"**: The algorithm generates only one random combination. **"mutual"**: Chooses the best features w.r.t. their mutual info with the label. **"cfs"**: Applies Correlation-based Feature Selection. **"fcbf"**: Applies the Fast Correlation-Based Filter. **"iwss"**: Applies the IWSS-based algorithm |
| | normalize | \<bool\> | False | Whether standardization of features should be applied on each node with the samples that reach it |
| \* | multiclass_strategy | {"ovo", "ovr"} | "ovo" | Strategy to use with multiclass datasets, **"ovo"**: one versus one. **"ovr"**: one versus rest |
\* Hyperparameter used by the support vector classifier of every node
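
Since Stree is a scikit-learn estimator, the splitter strategies listed above can be compared directly with a grid search. A hedged sketch (not part of this compare; assumes scikit-learn and the mufs dependency are installed):

```python
# Hedged sketch: compare the splitter strategies via GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from stree import Stree

X, y = load_iris(return_X_y=True)
param_grid = {
    "splitter": ["best", "random", "trandom", "mutual", "cfs", "fcbf", "iwss"],
    "max_features": ["sqrt"],  # keep max_features < n_features so splitter applies
}
search = GridSearchCV(Stree(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```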


@@ -1,8 +1,12 @@
# Stree
# STree
[![Codeship Status for Doctorado-ML/STree](https://app.codeship.com/projects/8b2bd350-8a1b-0138-5f2c-3ad36f3eb318/status?branch=master)](https://app.codeship.com/projects/399170)
![CI](https://github.com/Doctorado-ML/STree/workflows/CI/badge.svg)
[![codecov](https://codecov.io/gh/doctorado-ml/stree/branch/master/graph/badge.svg)](https://codecov.io/gh/doctorado-ml/stree)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/35fa3dfd53a24a339344b33d9f9f2f3d)](https://www.codacy.com/gh/Doctorado-ML/STree?utm_source=github.com&utm_medium=referral&utm_content=Doctorado-ML/STree&utm_campaign=Badge_Grade)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Doctorado-ML/STree.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Doctorado-ML/STree/context:python)
[![PyPI version](https://badge.fury.io/py/STree.svg)](https://badge.fury.io/py/STree)
![https://img.shields.io/badge/python-3.8%2B-blue](https://img.shields.io/badge/python-3.8%2B-brightgreen)
[![DOI](https://zenodo.org/badge/262658230.svg)](https://zenodo.org/badge/latestdoi/262658230)
Oblique Tree classifier based on SVM nodes. The nodes are built and split with sklearn SVC models. Stree is a sklearn estimator and can be integrated in pipelines, grid searches, etc.

notebooks: benchmark notebook

@@ -178,7 +178,7 @@
"outputs": [],
"source": [
"# Stree\n",
"stree = Stree(random_state=random_state, C=.01, max_iter=1e3)"
"stree = Stree(random_state=random_state, C=.01, max_iter=1e3, kernel=\"liblinear\", multiclass_strategy=\"ovr\")"
]
},
{
@@ -368,4 +368,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}

requirements.txt

@@ -1 +1,2 @@
scikit-learn>0.24
scikit-learn>0.24
mufs

setup.py

@@ -1,5 +1,5 @@
import setuptools
import stree
import os
def readme():
@@ -7,29 +7,46 @@ def readme():
return f.read()
VERSION = stree.__version__
def get_data(field):
item = ""
file_name = "_version.py" if field == "version" else "__init__.py"
with open(os.path.join("stree", file_name)) as f:
for line in f.readlines():
if line.startswith(f"__{field}__"):
delim = '"' if '"' in line else "'"
item = line.split(delim)[1]
break
else:
raise RuntimeError(f"Unable to find {field} string.")
return item
setuptools.setup(
name="STree",
version=stree.__version__,
license=stree.__license__,
version=get_data("version"),
license=get_data("license"),
description="Oblique decision tree with svm nodes",
long_description=readme(),
long_description_content_type="text/markdown",
packages=setuptools.find_packages(),
url=stree.__url__,
author=stree.__author__,
author_email=stree.__author_email__,
url="https://github.com/Doctorado-ML/STree#stree",
project_urls={
"Code": "https://github.com/Doctorado-ML/STree",
"Documentation": "https://stree.readthedocs.io/en/latest/index.html",
},
author=get_data("author"),
author_email=get_data("author_email"),
keywords="scikit-learn oblique-classifier oblique-decision-tree decision-\
tree svm svc",
classifiers=[
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: " + stree.__license__,
"License :: OSI Approved :: " + get_data("license"),
"Programming Language :: Python :: 3.8",
"Natural Language :: English",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Intended Audience :: Science/Research",
],
install_requires=["scikit-learn", "numpy", "ipympl"],
install_requires=["scikit-learn", "mufs"],
test_suite="stree.tests",
zip_safe=False,
)

stree/.readthedocs.yaml (new file, +10 lines)

@@ -0,0 +1,10 @@
version: 2
sphinx:
configuration: docs/source/conf.py
python:
version: 3.8
install:
- requirements: requirements.txt
- requirements: docs/requirements.txt

stree/Splitter.py (new file, +809 lines)

@@ -0,0 +1,809 @@
"""
Oblique decision tree classifier based on SVM nodes
Splitter class
"""
import os
import warnings
import random
from math import log, factorial
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.exceptions import ConvergenceWarning
from mufs import MUFS
class Snode:
"""
Node of the tree that keeps the svm classifier and, if testing, the
dataset assigned to it
Parameters
----------
clf : SVC
Classifier used
X : np.ndarray
input dataset in train time (only in testing)
y : np.ndarray
input labels in train time
features : np.array
features used to compute hyperplane
impurity : float
impurity of the node
title : str
label describing the route to the node
weight : np.ndarray, optional
weights applied to input dataset in train time, by default None
scaler : StandardScaler, optional
scaler used if any, by default None
"""
def __init__(
self,
clf: SVC,
X: np.ndarray,
y: np.ndarray,
features: np.array,
impurity: float,
title: str,
weight: np.ndarray = None,
scaler: StandardScaler = None,
):
self._clf = clf
self._title = title
self._belief = 0.0
# Only store dataset in Testing
self._X = X if os.environ.get("TESTING", "NS") != "NS" else None
self._y = y
self._down = None
self._up = None
self._class = None
self._feature = None
self._sample_weight = (
weight if os.environ.get("TESTING", "NS") != "NS" else None
)
self._features = features
self._impurity = impurity
self._partition_column: int = -1
self._scaler = scaler
@classmethod
def copy(cls, node: "Snode") -> "Snode":
return cls(
node._clf,
node._X,
node._y,
node._features,
node._impurity,
node._title,
node._sample_weight,
node._scaler,
)
def set_partition_column(self, col: int):
self._partition_column = col
def get_partition_column(self) -> int:
return self._partition_column
def set_down(self, son):
self._down = son
def set_title(self, title):
self._title = title
def set_classifier(self, clf):
self._clf = clf
def set_features(self, features):
self._features = features
def set_impurity(self, impurity):
self._impurity = impurity
def get_title(self) -> str:
return self._title
def get_classifier(self) -> SVC:
return self._clf
def get_impurity(self) -> float:
return self._impurity
def get_features(self) -> np.array:
return self._features
def set_up(self, son):
self._up = son
def is_leaf(self) -> bool:
return self._up is None and self._down is None
def get_down(self) -> "Snode":
return self._down
def get_up(self) -> "Snode":
return self._up
def make_predictor(self):
"""Compute the class of the predictor and its belief based on the
subdataset of the node only if it is a leaf
"""
if not self.is_leaf():
return
classes, card = np.unique(self._y, return_counts=True)
if len(classes) > 1:
max_card = max(card)
self._class = classes[card == max_card][0]
self._belief = max_card / np.sum(card)
else:
self._belief = 1
try:
self._class = classes[0]
except IndexError:
self._class = None
def graph(self):
"""
Return a string representing the node in graphviz format
"""
output = ""
count_values = np.unique(self._y, return_counts=True)
if self.is_leaf():
output += (
f'N{id(self)} [shape=box style=filled label="'
f"class={self._class} impurity={self._impurity:.3f} "
f'classes={count_values[0]} samples={count_values[1]}"];\n'
)
else:
output += (
f'N{id(self)} [label="#features={len(self._features)} '
f"classes={count_values[0]} samples={count_values[1]} "
f'({sum(count_values[1])})" fontcolor=black];\n'
)
output += f"N{id(self)} -> N{id(self.get_up())} [color=black];\n"
output += f"N{id(self)} -> N{id(self.get_down())} [color=black];\n"
return output
def __str__(self) -> str:
count_values = np.unique(self._y, return_counts=True)
if self.is_leaf():
return (
f"{self._title} - Leaf class={self._class} belief="
f"{self._belief: .6f} impurity={self._impurity:.4f} "
f"counts={count_values}"
)
return (
f"{self._title} feaures={self._features} impurity="
f"{self._impurity:.4f} "
f"counts={count_values}"
)
class Siterator:
"""Stree preorder iterator"""
def __init__(self, tree: Snode):
self._stack = []
self._push(tree)
def __iter__(self):
# To complete the iterator interface
return self
def _push(self, node: Snode):
if node is not None:
self._stack.append(node)
def __next__(self) -> Snode:
if len(self._stack) == 0:
raise StopIteration()
node = self._stack.pop()
self._push(node.get_up())
self._push(node.get_down())
return node
class Splitter:
"""
Splits a dataset in two based on different criteria
Parameters
----------
clf : SVC, optional
classifier, by default None
criterion : str, optional
The function to measure the quality of a split (only used if
max_features != num_features). Supported criteria are “gini” for the
Gini impurity and “entropy” for the information gain, by default
"entropy"
feature_select : str, optional
The strategy used to choose the feature set at each node (only used if
max_features < num_features). Supported strategies are: “best”: sklearn
SelectKBest algorithm is used in every node to choose the max_features
best features. “random”: The algorithm generates 5 candidates and
choose the best (max. info. gain) of them. “trandom”: The algorithm
generates only one random combination. "mutual": Chooses the best
features w.r.t. their mutual info with the label. "cfs": Apply
Correlation-based Feature Selection. "fcbf": Apply Fast Correlation-
Based Filter. "iwss": Apply the IWSS-based algorithm, by default None
criteria : str, optional
Decides (just in case of a multiclass classification) which column
(class) to use to split the dataset in a node. max_samples is
incompatible with 'ovo' multiclass_strategy, by default None
min_samples_split : int, optional
The minimum number of samples required to split an internal node. 0
(default) for any, by default None
random_state : optional
Controls the pseudo random number generation for shuffling the data for
probability estimates. Ignored when probability is False. Pass an int
for reproducible output across multiple function calls, by
default None
normalize : bool, optional
If standardization of features should be applied on each node with the
samples that reach it, by default False
Raises
------
ValueError
clf has to be a sklearn estimator
ValueError
criterion must be gini or entropy
ValueError
criteria has to be max_samples or impurity
ValueError
splitter must be in {random, trandom, best, mutual, cfs, fcbf, iwss}
"""
def __init__(
self,
clf: SVC = None,
criterion: str = None,
feature_select: str = None,
criteria: str = None,
min_samples_split: int = None,
random_state=None,
normalize=False,
):
self._clf = clf
self._random_state = random_state
if random_state is not None:
random.seed(random_state)
self._criterion = criterion
self._min_samples_split = min_samples_split
self._criteria = criteria
self._feature_select = feature_select
self._normalize = normalize
if clf is None:
raise ValueError(f"clf has to be a sklearn estimator, got({clf})")
if criterion not in ["gini", "entropy"]:
raise ValueError(
f"criterion must be gini or entropy got({criterion})"
)
if criteria not in [
"max_samples",
"impurity",
]:
raise ValueError(
f"criteria has to be max_samples or impurity; got ({criteria})"
)
if feature_select not in [
"random",
"trandom",
"best",
"mutual",
"cfs",
"fcbf",
"iwss",
]:
raise ValueError(
"splitter must be in {random, trandom, best, mutual, cfs, "
"fcbf, iwss} "
f"got ({feature_select})"
)
self.criterion_function = getattr(self, f"_{self._criterion}")
self.decision_criteria = getattr(self, f"_{self._criteria}")
self.fs_function = getattr(self, f"_fs_{self._feature_select}")
def _fs_random(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Return the best of five random feature set combinations
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
# Random feature reduction
n_features = dataset.shape[1]
features_sets = self._generate_spaces(n_features, max_features)
return self._select_best_set(dataset, labels, features_sets)
@staticmethod
def _fs_trandom(
dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Return the a random feature set combination
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
# Random feature reduction
n_features = dataset.shape[1]
return tuple(sorted(random.sample(range(n_features), max_features)))
@staticmethod
def _fs_best(
dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Return the variabes with higher f-score
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
return (
SelectKBest(k=max_features)
.fit(dataset, labels)
.get_support(indices=True)
)
def _fs_mutual(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Return the best features with mutual information with labels
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
# return best features with mutual info with the label
feature_list = mutual_info_classif(
dataset, labels, random_state=self._random_state
)
return tuple(
sorted(
range(len(feature_list)), key=lambda sub: feature_list[sub]
)[-max_features:]
)
@staticmethod
def _fs_cfs(
dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Correlattion-based feature selection with max_features limit
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
mufs = MUFS(max_features=max_features, discrete=False)
return mufs.cfs(dataset, labels).get_results()
@staticmethod
def _fs_fcbf(
dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Fast Correlation-based Filter algorithm with max_features limit
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
mufs = MUFS(max_features=max_features, discrete=False)
return mufs.fcbf(dataset, labels, 5e-4).get_results()
@staticmethod
def _fs_iwss(
dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Correlattion-based feature selection based on iwss with max_features
limit
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(< number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
mufs = MUFS(max_features=max_features, discrete=False)
return mufs.iwss(dataset, labels, 0.25).get_results()
def partition_impurity(self, y: np.array) -> np.array:
return self.criterion_function(y)
@staticmethod
def _gini(y: np.array) -> float:
_, count = np.unique(y, return_counts=True)
return 1 - np.sum(np.square(count / np.sum(count)))
@staticmethod
def _entropy(y: np.array) -> float:
"""Compute entropy of a labels set
Parameters
----------
y : np.array
set of labels
Returns
-------
float
entropy
"""
n_labels = len(y)
if n_labels <= 1:
return 0
counts = np.bincount(y)
proportions = counts / n_labels
n_classes = np.count_nonzero(proportions)
if n_classes <= 1:
return 0
entropy = 0.0
# Compute standard entropy.
for prop in proportions:
if prop != 0.0:
entropy -= prop * log(prop, n_classes)
return entropy
def information_gain(
self, labels: np.array, labels_up: np.array, labels_dn: np.array
) -> float:
"""Compute information gain of a split candidate
Parameters
----------
labels : np.array
labels of the dataset
labels_up : np.array
labels of one side
labels_dn : np.array
labels on the other side
Returns
-------
float
information gain
"""
imp_prev = self.criterion_function(labels)
card_up = card_dn = imp_up = imp_dn = 0
if labels_up is not None:
card_up = labels_up.shape[0]
imp_up = self.criterion_function(labels_up)
if labels_dn is not None:
card_dn = labels_dn.shape[0] if labels_dn is not None else 0
imp_dn = self.criterion_function(labels_dn)
samples = card_up + card_dn
if samples == 0:
return 0.0
else:
result = (
imp_prev
- (card_up / samples) * imp_up
- (card_dn / samples) * imp_dn
)
return result
def _select_best_set(
self, dataset: np.array, labels: np.array, features_sets: list
) -> list:
"""Return the best set of features among feature_sets, the criterion is
the information gain
Parameters
----------
dataset : np.array
array of samples (# samples, # features)
labels : np.array
array of labels
features_sets : list
list of features sets to check
Returns
-------
list
best feature set
"""
max_gain = 0
selected = None
warnings.filterwarnings("ignore", category=ConvergenceWarning)
for feature_set in features_sets:
self._clf.fit(dataset[:, feature_set], labels)
node = Snode(
self._clf, dataset, labels, feature_set, 0.0, "subset"
)
self.partition(dataset, node, train=True)
y1, y2 = self.part(labels)
gain = self.information_gain(labels, y1, y2)
if gain > max_gain:
max_gain = gain
selected = feature_set
return selected if selected is not None else feature_set
@staticmethod
def _generate_spaces(features: int, max_features: int) -> list:
"""Generate at most 5 feature random combinations
Parameters
----------
features : int
number of features in the dataset
max_features : int
number of features in each combination
Returns
-------
list
list with up to 5 combination of features randomly selected
"""
comb = set()
# Generate at most 5 combinations
number = factorial(features) / (
factorial(max_features) * factorial(features - max_features)
)
set_length = min(5, number)
while len(comb) < set_length:
comb.add(
tuple(sorted(random.sample(range(features), max_features)))
)
return list(comb)
def _get_subspaces_set(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Compute the indices of the features selected by splitter depending
on the self._feature_select hyper parameter
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(<= number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
# No feature reduction
n_features = dataset.shape[1]
if n_features == max_features:
return tuple(range(n_features))
# select features as selected in constructor
return self.fs_function(dataset, labels, max_features)
def get_subspace(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Re3turn a subspace of the selected dataset of max_features length.
Depending on hyperparameter
Parameters
----------
dataset : np.array
array of samples (# samples, # features)
labels : np.array
labels of the dataset
max_features : int
number of features to form the subspace
Returns
-------
tuple
tuple with the dataset with only the features selected and the
indices of the features selected
"""
indices = self._get_subspaces_set(dataset, labels, max_features)
return dataset[:, indices], indices
def _impurity(self, data: np.array, y: np.array) -> np.array:
"""return column of dataset to be taken into account to split dataset
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
np.array
column of dataset to be taken into account to split dataset
"""
max_gain = 0
selected = -1
for col in range(data.shape[1]):
tup = y[data[:, col] > 0]
tdn = y[data[:, col] <= 0]
info_gain = self.information_gain(y, tup, tdn)
if info_gain > max_gain:
selected = col
max_gain = info_gain
return selected
@staticmethod
def _max_samples(data: np.array, y: np.array) -> np.array:
"""return column of dataset to be taken into account to split dataset
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
np.array
column of dataset to be taken into account to split dataset
"""
# select the class with max number of samples
_, samples = np.unique(y, return_counts=True)
return np.argmax(samples)
def partition(self, samples: np.array, node: Snode, train: bool):
"""Set the criteria to split arrays. Compute the indices of the samples
that should go to one side of the tree (up)
Parameters
----------
samples : np.array
array of samples (# samples, # features)
node : Snode
Node of the tree where partition is going to be made
train : bool
Train time - True / Test time - False
"""
# data contains the distances of every sample to every class hyperplane
# array of (m, nc) nc = # classes
data = self._distances(node, samples)
if data.shape[0] < self._min_samples_split:
# there aren't enough samples to split
self._up = np.ones((data.shape[0]), dtype=bool)
return
if data.ndim > 1:
# split criteria for multiclass
# Convert data to a (m, 1) array selecting values for samples
if train:
# in train time we have to compute the column to take into
# account to split the dataset
col = self.decision_criteria(data, node._y)
node.set_partition_column(col)
else:
# in predict time just use the column computed in train time
# i.e. taking the classifier of class <col>
col = node.get_partition_column()
if col == -1:
# No partition is producing information gain
data = np.ones(data.shape)
data = data[:, col]
self._up = data > 0
def part(self, origin: np.array) -> list:
"""Split an array in two based on indices (self._up) and its complement
partition has to be called first to establish up indices
Parameters
----------
origin : np.array
dataset to split
Returns
-------
list
list with two splits of the array
"""
down = ~self._up
return [
origin[self._up] if any(self._up) else None,
origin[down] if any(down) else None,
]
def _distances(self, node: Snode, data: np.ndarray) -> np.array:
"""Compute distances of the samples to the hyperplane of the node
Parameters
----------
node : Snode
node containing the svm classifier
data : np.ndarray
samples to compute distance to hyperplane
Returns
-------
np.array
array of shape (m, nc) with the distances of every sample to
the hyperplane of every class. nc = # of classes
"""
X_transformed = data[:, node._features]
if self._normalize:
X_transformed = node._scaler.transform(X_transformed)
return node._clf.decision_function(X_transformed)
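
The new Splitter class can also be exercised on its own. A minimal sketch (my assumption, not part of the diff: the import path stree.Splitter follows the new file's location, and numpy, scikit-learn and mufs must be installed):

```python
# Hedged sketch: drive Splitter directly to pick a feature subspace
# with the new "mutual" strategy (mutual information with the labels).
import numpy as np
from sklearn.svm import SVC
from stree.Splitter import Splitter

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # toy data, 8 features
y = rng.integers(0, 2, size=100)       # binary labels

splitter = Splitter(
    clf=SVC(kernel="linear", random_state=0),
    criterion="entropy",
    feature_select="mutual",   # rank features by mutual info with y
    criteria="impurity",
    min_samples_split=0,
    random_state=0,
)
X_sub, indices = splitter.get_subspace(X, y, max_features=3)
print(indices, X_sub.shape)   # indices of the 3 selected features
```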

stree/Strees.py

@@ -2,548 +2,137 @@
Oblique decision tree classifier based on SVM nodes
"""
import os
import numbers
import random
import warnings
from math import log, factorial
from typing import Optional
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.svm import SVC, LinearSVC
from sklearn.feature_selection import SelectKBest
from sklearn.preprocessing import StandardScaler
from sklearn.utils.multiclass import check_classification_targets
from sklearn.exceptions import ConvergenceWarning
from sklearn.utils.validation import (
check_X_y,
check_array,
check_is_fitted,
_check_sample_weight,
)
class Snode:
"""Nodes of the tree that keeps the svm classifier and if testing the
dataset assigned to it
"""
def __init__(
self,
clf: SVC,
X: np.ndarray,
y: np.ndarray,
features: np.array,
impurity: float,
title: str,
weight: np.ndarray = None,
scaler: StandardScaler = None,
):
self._clf = clf
self._title = title
self._belief = 0.0
# Only store dataset in Testing
self._X = X if os.environ.get("TESTING", "NS") != "NS" else None
self._y = y
self._down = None
self._up = None
self._class = None
self._feature = None
self._sample_weight = (
weight if os.environ.get("TESTING", "NS") != "NS" else None
)
self._features = features
self._impurity = impurity
self._partition_column: int = -1
self._scaler = scaler
@classmethod
def copy(cls, node: "Snode") -> "Snode":
return cls(
node._clf,
node._X,
node._y,
node._features,
node._impurity,
node._title,
node._sample_weight,
node._scaler,
)
def set_partition_column(self, col: int):
self._partition_column = col
def get_partition_column(self) -> int:
return self._partition_column
def set_down(self, son):
self._down = son
def set_title(self, title):
self._title = title
def set_classifier(self, clf):
self._clf = clf
def set_features(self, features):
self._features = features
def set_impurity(self, impurity):
self._impurity = impurity
def get_title(self) -> str:
return self._title
def get_classifier(self) -> SVC:
return self._clf
def get_impurity(self) -> float:
return self._impurity
def get_features(self) -> np.array:
return self._features
def set_up(self, son):
self._up = son
def is_leaf(self) -> bool:
return self._up is None and self._down is None
def get_down(self) -> "Snode":
return self._down
def get_up(self) -> "Snode":
return self._up
def make_predictor(self):
"""Compute the class of the predictor and its belief based on the
subdataset of the node only if it is a leaf
"""
if not self.is_leaf():
return
classes, card = np.unique(self._y, return_counts=True)
if len(classes) > 1:
max_card = max(card)
self._class = classes[card == max_card][0]
self._belief = max_card / np.sum(card)
else:
self._belief = 1
try:
self._class = classes[0]
except IndexError:
self._class = None
def __str__(self) -> str:
count_values = np.unique(self._y, return_counts=True)
if self.is_leaf():
return (
f"{self._title} - Leaf class={self._class} belief="
f"{self._belief: .6f} impurity={self._impurity:.4f} "
f"counts={count_values}"
)
return (
f"{self._title} feaures={self._features} impurity="
f"{self._impurity:.4f} "
f"counts={count_values}"
)
class Siterator:
"""Stree preorder iterator"""
def __init__(self, tree: Snode):
self._stack = []
self._push(tree)
def _push(self, node: Snode):
if node is not None:
self._stack.append(node)
def __next__(self) -> Snode:
if len(self._stack) == 0:
raise StopIteration()
node = self._stack.pop()
self._push(node.get_up())
self._push(node.get_down())
return node
class Splitter:
def __init__(
self,
clf: SVC = None,
criterion: str = None,
feature_select: str = None,
criteria: str = None,
min_samples_split: int = None,
random_state=None,
normalize=False,
):
self._clf = clf
self._random_state = random_state
if random_state is not None:
random.seed(random_state)
self._criterion = criterion
self._min_samples_split = min_samples_split
self._criteria = criteria
self._feature_select = feature_select
self._normalize = normalize
if clf is None:
raise ValueError(f"clf has to be a sklearn estimator, got({clf})")
if criterion not in ["gini", "entropy"]:
raise ValueError(
f"criterion must be gini or entropy got({criterion})"
)
if criteria not in [
"max_samples",
"impurity",
]:
raise ValueError(
f"criteria has to be max_samples or impurity; got ({criteria})"
)
if feature_select not in ["random", "best"]:
raise ValueError(
"splitter must be either random or best, got "
f"({feature_select})"
)
self.criterion_function = getattr(self, f"_{self._criterion}")
self.decision_criteria = getattr(self, f"_{self._criteria}")
def partition_impurity(self, y: np.array) -> np.array:
return self.criterion_function(y)
@staticmethod
def _gini(y: np.array) -> float:
_, count = np.unique(y, return_counts=True)
return 1 - np.sum(np.square(count / np.sum(count)))
@staticmethod
def _entropy(y: np.array) -> float:
"""Compute entropy of a labels set
Parameters
----------
y : np.array
set of labels
Returns
-------
float
entropy
"""
n_labels = len(y)
if n_labels <= 1:
return 0
counts = np.bincount(y)
proportions = counts / n_labels
n_classes = np.count_nonzero(proportions)
if n_classes <= 1:
return 0
entropy = 0.0
# Compute standard entropy.
for prop in proportions:
if prop != 0.0:
entropy -= prop * log(prop, n_classes)
return entropy
def information_gain(
self, labels: np.array, labels_up: np.array, labels_dn: np.array
) -> float:
"""Compute information gain of a split candidate
Parameters
----------
labels : np.array
labels of the dataset
labels_up : np.array
labels of one side
labels_dn : np.array
labels on the other side
Returns
-------
float
information gain
"""
imp_prev = self.criterion_function(labels)
card_up = card_dn = imp_up = imp_dn = 0
if labels_up is not None:
card_up = labels_up.shape[0]
imp_up = self.criterion_function(labels_up)
if labels_dn is not None:
card_dn = labels_dn.shape[0] if labels_dn is not None else 0
imp_dn = self.criterion_function(labels_dn)
samples = card_up + card_dn
if samples == 0:
return 0.0
else:
result = (
imp_prev
- (card_up / samples) * imp_up
- (card_dn / samples) * imp_dn
)
return result
def _select_best_set(
self, dataset: np.array, labels: np.array, features_sets: list
) -> list:
"""Return the best set of features among feature_sets, the criterion is
the information gain
Parameters
----------
dataset : np.array
array of samples (# samples, # features)
labels : np.array
array of labels
features_sets : list
list of features sets to check
Returns
-------
list
best feature set
"""
max_gain = 0
selected = None
warnings.filterwarnings("ignore", category=ConvergenceWarning)
for feature_set in features_sets:
self._clf.fit(dataset[:, feature_set], labels)
node = Snode(
self._clf, dataset, labels, feature_set, 0.0, "subset"
)
self.partition(dataset, node, train=True)
y1, y2 = self.part(labels)
gain = self.information_gain(labels, y1, y2)
if gain > max_gain:
max_gain = gain
selected = feature_set
return selected if selected is not None else feature_set
@staticmethod
def _generate_spaces(features: int, max_features: int) -> list:
"""Generate at most 5 feature random combinations
Parameters
----------
features : int
number of features in each combination
max_features : int
number of features in dataset
Returns
-------
list
list with up to 5 combination of features randomly selected
"""
comb = set()
# Generate at most 5 combinations
number = factorial(features) / (
factorial(max_features) * factorial(features - max_features)
)
set_length = min(5, number)
while len(comb) < set_length:
comb.add(
tuple(sorted(random.sample(range(features), max_features)))
)
return list(comb)
def _get_subspaces_set(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Compute the indices of the features selected by splitter depending
on the self._feature_select hyper parameter
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(<= number of features in dataset)
Returns
-------
tuple
indices of the features selected
"""
if dataset.shape[1] == max_features:
# No feature reduction applies
return tuple(range(dataset.shape[1]))
if self._feature_select == "random":
features_sets = self._generate_spaces(
dataset.shape[1], max_features
)
return self._select_best_set(dataset, labels, features_sets)
# Take KBest features
return (
SelectKBest(k=max_features)
.fit(dataset, labels)
.get_support(indices=True)
)
def get_subspace(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Re3turn a subspace of the selected dataset of max_features length.
Depending on hyperparmeter
Parameters
----------
dataset : np.array
array of samples (# samples, # features)
labels : np.array
labels of the dataset
max_features : int
number of features to form the subspace
Returns
-------
tuple
tuple with the dataset with only the features selected and the
indices of the features selected
"""
indices = self._get_subspaces_set(dataset, labels, max_features)
return dataset[:, indices], indices
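# Usage sketch (hypothetical splitter and data):
#   >>> Xs, idx = splitter.get_subspace(X, y, max_features=4)  # doctest: +SKIP
#   >>> Xs.shape[1] == len(idx) == 4
#   True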
def _impurity(self, data: np.array, y: np.array) -> np.array:
"""return column of dataset to be taken into account to split dataset
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
int
index of the column that yields the highest information gain,
or -1 if no split improves it
"""
max_gain = 0
selected = -1
for col in range(data.shape[1]):
tup = y[data[:, col] > 0]
tdn = y[data[:, col] <= 0]
info_gain = self.information_gain(y, tup, tdn)
if info_gain > max_gain:
selected = col
max_gain = info_gain
return selected
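# Editor's sketch (made-up distances, works with either criterion): the
# column whose sign split best separates the labels is returned:
#   >>> data = np.array([[1.0, -1.0], [1.0, 1.0], [-1.0, -1.0], [-1.0, 1.0]])
#   >>> y = np.array([0, 1, 0, 1])
#   >>> splitter._impurity(data, y)  # doctest: +SKIP
#   1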
@staticmethod
def _max_samples(data: np.array, y: np.array) -> np.array:
"""return column of dataset to be taken into account to split dataset
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
int
index of the class with the largest number of samples
"""
# select the class with max number of samples
_, samples = np.unique(y, return_counts=True)
return np.argmax(samples)
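# Sketch: np.unique returns counts in sorted-label order, so the result is
# the positional index of the majority class (data is unused here):
#   >>> Splitter._max_samples(np.zeros((5, 2)), np.array([0, 1, 1, 1, 2]))
#   1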
def partition(self, samples: np.array, node: Snode, train: bool):
"""Set the criteria to split arrays. Compute the indices of the samples
that should go to one side of the tree (up)
Parameters
----------
samples : np.array
array of samples (# samples, # features)
node : Snode
Node of the tree where partition is going to be made
train : bool
Train time - True / Test time - False
"""
# data contains the distances of every sample to every class hyperplane
# array of (m, nc) nc = # classes
data = self._distances(node, samples)
if data.shape[0] < self._min_samples_split:
# there aren't enough samples to split
self._up = np.ones((data.shape[0]), dtype=bool)
return
if data.ndim > 1:
# split criteria for multiclass
# Convert data to a (m, 1) array selecting values for samples
if train:
# in train time we have to compute the column to take into
# account to split the dataset
col = self.decision_criteria(data, node._y)
node.set_partition_column(col)
else:
# at predict time just use the column computed at train time,
# i.e. the classifier of class <col>
col = node.get_partition_column()
if col == -1:
# No partition is producing information gain
data = np.ones(data.shape)
data = data[:, col]
self._up = data > 0
def part(self, origin: np.array) -> list:
"""Split an array in two based on indices (self._up) and its complement
partition has to be called first to establish up indices
Parameters
----------
origin : np.array
dataset to split
Returns
-------
list
list with two splits of the array
"""
down = ~self._up
return [
origin[self._up] if any(self._up) else None,
origin[down] if any(down) else None,
]
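# Flow sketch (hypothetical node): partition computes self._up at the node,
# then part splits any array aligned with the samples:
#   >>> splitter.partition(X, node, train=True)  # doctest: +SKIP
#   >>> y_up, y_dn = splitter.part(y)            # doctest: +SKIP
#   # each side is None when empty; together they cover every sample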
def _distances(self, node: Snode, data: np.ndarray) -> np.array:
"""Compute distances of the samples to the hyperplane of the node
Parameters
----------
node : Snode
node containing the svm classifier
data : np.ndarray
samples to compute distance to hyperplane
Returns
-------
np.array
array of shape (m, nc) with the distances of every sample to
the hyperplane of every class. nc = # of classes
"""
X_transformed = data[:, node._features]
if self._normalize:
X_transformed = node._scaler.transform(X_transformed)
return node._clf.decision_function(X_transformed)
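# Note: for binary problems decision_function returns shape (m,), so the
# data.ndim > 1 branch in partition is reached only with 3+ classes, where
# ovr yields one column per class and ovo one per pair of classes.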
from .Splitter import Splitter, Snode, Siterator
from ._version import __version__
class Stree(BaseEstimator, ClassifierMixin):
"""Estimator that is based on binary trees of svm nodes
"""
Estimator that is based on binary trees of svm nodes
can deal with sample_weights in predict, used in boosting sklearn methods
inheriting from BaseEstimator implements get_params and set_params methods
inheriting from ClassifierMixin implement the attribute _estimator_type
with "classifier" as value
Parameters
----------
C : float, optional
Regularization parameter. The strength of the regularization is
inversely proportional to C. Must be strictly positive., by default 1.0
kernel : str, optional
Specifies the kernel type to be used in the algorithm. It must be one
of liblinear, linear, poly, rbf or sigmoid. liblinear uses the
[liblinear](https://www.csie.ntu.edu.tw/~cjlin/liblinear/) library and
the rest use the [libsvm](https://www.csie.ntu.edu.tw/~cjlin/libsvm/)
library through scikit-learn, by default "linear"
max_iter : int, optional
Hard limit on iterations within solver, or -1 for no limit., by default
1e5
random_state : int, optional
Controls the pseudo random number generation for shuffling the data for
probability estimates. Ignored when probability is False. Pass an int
for reproducible output across multiple function calls, by
default None
max_depth : int, optional
Specifies the maximum depth of the tree, by default None
tol : float, optional
Tolerance for stopping, by default 1e-4
degree : int, optional
Degree of the polynomial kernel function (poly). Ignored by all other
kernels., by default 3
gamma : str, optional
Kernel coefficient for rbf, poly and sigmoid. If gamma='scale'
(default) is passed then it uses 1 / (n_features * X.var()) as value
of gamma; if 'auto', uses 1 / n_features., by default "scale"
split_criteria : str, optional
Decides (just in case of a multi class classification) which column
(class) to use to split the dataset in a node. max_samples is
incompatible with 'ovo' multiclass_strategy, by default "impurity"
criterion : str, optional
The function to measure the quality of a split (only used if
max_features != num_features). Supported criteria are “gini” for the
Gini impurity and “entropy” for the information gain., by default
"entropy"
min_samples_split : int, optional
The minimum number of samples required to split an internal node;
0 means no minimum, by default 0
max_features : optional
The number of features to consider when looking for the split: If int,
then consider max_features features at each split. If float, then
max_features is a fraction and int(max_features * n_features) features
are considered at each split. If “auto”, then max_features=
sqrt(n_features). If “sqrt”, then max_features=sqrt(n_features). If
“log2”, then max_features=log2(n_features). If None, then max_features=
n_features., by default None
splitter : str, optional
The strategy used to choose the feature set at each node (only used if
max_features < num_features). Supported strategies are: “best”: sklearn
SelectKBest algorithm is used in every node to choose the max_features
best features. “random”: The algorithm generates 5 candidates and
chooses the best (max. info. gain) of them. “trandom”: The algorithm
generates only one random combination. "mutual": Chooses the best
features w.r.t. their mutual info with the label. "cfs": Apply
Correlation-based Feature Selection. "fcbf": Apply Fast
Correlation-Based Filter, by default "random"
multiclass_strategy : str, optional
Strategy to use with multiclass datasets, "ovo": one versus one. "ovr":
one versus rest, by default "ovo"
normalize : bool, optional
If standardization of features should be applied on each node with the
samples that reach it, by default False
Attributes
----------
classes_ : ndarray of shape (n_classes,)
The class labels.
n_classes_ : int
The number of classes
n_iter_ : int
Max number of iterations in classifier
depth_ : int
Max depth of the tree
n_features_ : int
The number of features when ``fit`` is performed.
n_features_in_ : int
Number of features seen during :term:`fit`.
max_features_ : int
Number of features to use in hyperplane computation
tree_ : Node
root of the tree
X_ : ndarray
points to the input dataset
y_ : ndarray
points to the input labels
References
----------
R. Montañana, J. A. Gámez, J. M. Puerta, "STree: a single multi-class
oblique decision tree based on support vector machines.", 2021 LNAI 12882
"""
def __init__(
@@ -561,8 +150,10 @@ class Stree(BaseEstimator, ClassifierMixin):
min_samples_split: int = 0,
max_features=None,
splitter: str = "random",
multiclass_strategy: str = "ovo",
normalize: bool = False,
):
self.max_iter = max_iter
self.C = C
self.kernel = kernel
@@ -577,6 +168,12 @@ class Stree(BaseEstimator, ClassifierMixin):
self.criterion = criterion
self.splitter = splitter
self.normalize = normalize
self.multiclass_strategy = multiclass_strategy
@staticmethod
def version() -> str:
"""Return the version of the package."""
return __version__
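# e.g. Stree().version() returns the string kept in stree/_version.py
# ("1.2.4" at this commit)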
def _more_tags(self) -> dict:
"""Required by sklearn to supply features of the classifier
@@ -621,7 +218,23 @@ class Stree(BaseEstimator, ClassifierMixin):
f"Maximum depth has to be greater than 1... got (max_depth=\
{self.max_depth})"
)
kernels = ["linear", "rbf", "poly", "sigmoid"]
if self.multiclass_strategy not in ["ovr", "ovo"]:
raise ValueError(
"mutliclass_strategy has to be either ovr or ovo"
f" but got {self.multiclass_strategy}"
)
if self.multiclass_strategy == "ovo":
if self.kernel == "liblinear":
raise ValueError(
"The kernel liblinear is incompatible with ovo "
"multiclass_strategy"
)
if self.split_criteria == "max_samples":
raise ValueError(
"The multiclass_strategy 'ovo' is incompatible with "
"split_criteria 'max_samples'"
)
kernels = ["liblinear", "linear", "rbf", "poly", "sigmoid"]
if self.kernel not in kernels:
raise ValueError(f"Kernel {self.kernel} not in {kernels}")
check_classification_targets(y)
@@ -653,12 +266,12 @@ class Stree(BaseEstimator, ClassifierMixin):
self.n_features_ = X.shape[1]
self.n_features_in_ = X.shape[1]
self.max_features_ = self._initialize_max_features()
self.tree_ = self.train(X, y, sample_weight, 1, "root")
self.tree_ = self._train(X, y, sample_weight, 1, "root")
self.X_ = X
self.y_ = y
return self
def train(
def _train(
self,
X: np.ndarray,
y: np.ndarray,
@@ -723,10 +336,10 @@ class Stree(BaseEstimator, ClassifierMixin):
node.make_predictor()
return node
node.set_up(
self.train(X_U, y_u, sw_u, depth + 1, title + f" - Up({depth+1})")
self._train(X_U, y_u, sw_u, depth + 1, title + f" - Up({depth+1})")
)
node.set_down(
self.train(
self._train(
X_D, y_d, sw_d, depth + 1, title + f" - Down({depth+1})"
)
)
@@ -741,7 +354,7 @@ class Stree(BaseEstimator, ClassifierMixin):
C=self.C,
tol=self.tol,
)
if self.kernel == "linear"
if self.kernel == "liblinear"
else SVC(
kernel=self.kernel,
max_iter=self.max_iter,
@@ -750,6 +363,7 @@ class Stree(BaseEstimator, ClassifierMixin):
gamma=self.gamma,
degree=self.degree,
random_state=self.random_state,
decision_function_shape=self.multiclass_strategy,
)
)
@@ -862,6 +476,23 @@ class Stree(BaseEstimator, ClassifierMixin):
tree = None
return Siterator(tree)
def graph(self, title="") -> str:
"""Graphviz code representing the tree
Returns
-------
str
graphviz code
"""
output = (
"digraph STree {\nlabel=<STree "
f"{title}>\nfontsize=30\nfontcolor=blue\nlabelloc=t\n"
)
for node in self:
output += node.graph()
output += "}\n"
return output
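# Shape of the emitted dot code (node and edge lines come from each
# Snode.graph()):
#   digraph STree {
#   label=<STree {title}>
#   fontsize=30
#   fontcolor=blue
#   labelloc=t
#   ...node and edge statements...
#   }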
def __str__(self) -> str:
"""String representation of the tree
@@ -892,6 +523,12 @@ class Stree(BaseEstimator, ClassifierMixin):
elif self.max_features is None:
max_features = self.n_features_
elif isinstance(self.max_features, numbers.Integral):
if self.max_features > self.n_features_:
raise ValueError(
"Invalid value for max_features. "
"It can not be greater than number of features "
f"({self.n_features_})"
)
max_features = self.max_features
else: # float
if self.max_features > 0.0:

View File

@@ -1,11 +1,8 @@
from .Strees import Stree, Snode, Siterator, Splitter
__version__ = "1.0"
from .Strees import Stree, Siterator
__author__ = "Ricardo Montañana Gómez"
__copyright__ = "Copyright 2020-2021, Ricardo Montañana Gómez"
__license__ = "MIT License"
__author_email__ = "ricardo.montanana@alu.uclm.es"
__url__ = "https://github.com/doctorado-ml/stree"
__all__ = ["Stree", "Snode", "Siterator", "Splitter"]
__all__ = ["Stree", "Siterator"]

stree/_version.py Normal file

@@ -0,0 +1 @@
__version__ = "1.2.4"


@@ -1,14 +1,19 @@
import os
import unittest
import numpy as np
from stree import Stree, Snode
from stree import Stree
from stree.Splitter import Snode
from .utils import load_dataset
class Snode_test(unittest.TestCase):
def __init__(self, *args, **kwargs):
self._random_state = 1
self._clf = Stree(random_state=self._random_state)
self._clf = Stree(
random_state=self._random_state,
kernel="liblinear",
multiclass_strategy="ovr",
)
self._clf.fit(*load_dataset(self._random_state))
super().__init__(*args, **kwargs)


@@ -5,8 +5,8 @@ import random
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_wine, load_iris
from stree import Splitter
from .utils import load_dataset
from stree.Splitter import Splitter
from .utils import load_dataset, load_disc_dataset
class Splitter_test(unittest.TestCase):
@@ -195,10 +195,14 @@ class Splitter_test(unittest.TestCase):
[0, 3, 7, 12], # random entropy impurity
[1, 7, 9, 12], # random gini max_samples
[1, 5, 8, 12], # random gini impurity
[6, 9, 11, 12], # mutual entropy max_samples
[6, 9, 11, 12], # mutual entropy impurity
[6, 9, 11, 12], # mutual gini max_samples
[6, 9, 11, 12], # mutual gini impurity
]
X, y = load_wine(return_X_y=True)
rn = 0
for feature_select in ["best", "random"]:
for feature_select in ["best", "random", "mutual"]:
for criterion in ["entropy", "gini"]:
for criteria in [
"max_samples",
@@ -221,7 +225,7 @@ class Splitter_test(unittest.TestCase):
# criteria,
# )
# )
self.assertListEqual(expected, list(computed))
self.assertListEqual(expected, sorted(list(computed)))
self.assertListEqual(
X[:, computed].tolist(), dataset.tolist()
)
@@ -240,3 +244,69 @@ class Splitter_test(unittest.TestCase):
Xs, computed = tcl.get_subspace(X, y, k)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
def test_get_best_subspaces_discrete(self):
results = [
(4, [0, 3, 16, 18]),
(7, [0, 3, 13, 14, 16, 18, 19]),
(9, [0, 3, 7, 13, 14, 15, 16, 18, 19]),
]
X, y = load_disc_dataset(n_features=20)
for k, expected in results:
tcl = self.build(
feature_select="best",
)
Xs, computed = tcl.get_subspace(X, y, k)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
def test_get_cfs_subspaces(self):
results = [
(4, [1, 5, 9, 12]),
(6, [1, 5, 9, 12, 4, 2]),
(7, [1, 5, 9, 12, 4, 2, 3]),
]
X, y = load_dataset(n_features=20, n_informative=7)
for k, expected in results:
tcl = self.build(feature_select="cfs")
Xs, computed = tcl.get_subspace(X, y, k)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
def test_get_fcbf_subspaces(self):
results = [
(4, [1, 5, 9, 12]),
(6, [1, 5, 9, 12, 4, 2]),
(7, [1, 5, 9, 12, 4, 2, 16]),
]
for rs, expected in results:
X, y = load_dataset(n_features=20, n_informative=7)
tcl = self.build(feature_select="fcbf", random_state=rs)
Xs, computed = tcl.get_subspace(X, y, rs)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
def test_get_iwss_subspaces(self):
results = [
(4, [1, 5, 9, 12]),
(6, [1, 5, 9, 12, 4, 15]),
]
for rs, expected in results:
X, y = load_dataset(n_features=20, n_informative=7)
tcl = self.build(feature_select="iwss", random_state=rs)
Xs, computed = tcl.get_subspace(X, y, rs)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())
def test_get_trandom_subspaces(self):
results = [
(4, [3, 7, 9, 12]),
(6, [0, 1, 2, 8, 15, 18]),
(7, [1, 2, 4, 8, 10, 12, 13]),
]
for rs, expected in results:
X, y = load_dataset(n_features=20, n_informative=7)
tcl = self.build(feature_select="trandom", random_state=rs)
Xs, computed = tcl.get_subspace(X, y, rs)
self.assertListEqual(expected, list(computed))
self.assertListEqual(X[:, expected].tolist(), Xs.tolist())


@@ -7,14 +7,16 @@ from sklearn.datasets import load_iris, load_wine
from sklearn.exceptions import ConvergenceWarning
from sklearn.svm import LinearSVC
from stree import Stree, Snode
from stree import Stree
from stree.Splitter import Snode
from .utils import load_dataset
from .._version import __version__
class Stree_test(unittest.TestCase):
def __init__(self, *args, **kwargs):
self._random_state = 1
self._kernels = ["linear", "rbf", "poly"]
self._kernels = ["liblinear", "linear", "rbf", "poly", "sigmoid"]
super().__init__(*args, **kwargs)
@classmethod
@@ -22,10 +24,9 @@ class Stree_test(unittest.TestCase):
os.environ["TESTING"] = "1"
def test_valid_kernels(self):
valid_kernels = ["linear", "rbf", "poly", "sigmoid"]
X, y = load_dataset()
for kernel in valid_kernels:
clf = Stree(kernel=kernel)
for kernel in self._kernels:
clf = Stree(kernel=kernel, multiclass_strategy="ovr")
clf.fit(X, y)
self.assertIsNotNone(clf.tree_)
@@ -55,14 +56,19 @@ class Stree_test(unittest.TestCase):
# i.e. The partition algorithm didn't forget any sample
self.assertEqual(node._y.shape[0], y_down.shape[0] + y_up.shape[0])
unique_y, count_y = np.unique(node._y, return_counts=True)
_, count_d = np.unique(y_down, return_counts=True)
_, count_u = np.unique(y_up, return_counts=True)
labels_d, count_d = np.unique(y_down, return_counts=True)
labels_u, count_u = np.unique(y_up, return_counts=True)
dict_d = {label: count_d[i] for i, label in enumerate(labels_d)}
dict_u = {label: count_u[i] for i, label in enumerate(labels_u)}
#
for i in unique_y:
number_up = count_u[i]
try:
number_down = count_d[i]
except IndexError:
number_up = dict_u[i]
except KeyError:
number_up = 0
try:
number_down = dict_d[i]
except KeyError:
number_down = 0
self.assertEqual(count_y[i], number_down + number_up)
# Is the partition made the same as the prediction?
@@ -77,14 +83,22 @@ class Stree_test(unittest.TestCase):
"""Check if the tree is built the same way as predictions of models"""
warnings.filterwarnings("ignore")
for kernel in self._kernels:
clf = Stree(kernel=kernel, random_state=self._random_state)
clf = Stree(
kernel="sigmoid",
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
random_state=self._random_state,
)
clf.fit(*load_dataset(self._random_state))
self._check_tree(clf.tree_)
def test_single_prediction(self):
X, y = load_dataset(self._random_state)
for kernel in self._kernels:
clf = Stree(kernel=kernel, random_state=self._random_state)
clf = Stree(
kernel=kernel,
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
random_state=self._random_state,
)
yp = clf.fit(X, y).predict((X[0, :].reshape(-1, X.shape[1])))
self.assertEqual(yp[0], y[0])
@@ -92,8 +106,12 @@ class Stree_test(unittest.TestCase):
# For the first 27 elements the predictions are the same as the truth
num = 27
X, y = load_dataset(self._random_state)
for kernel in self._kernels:
clf = Stree(kernel=kernel, random_state=self._random_state)
for kernel in ["liblinear", "linear", "rbf", "poly"]:
clf = Stree(
kernel=kernel,
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
random_state=self._random_state,
)
yp = clf.fit(X, y).predict(X[:num, :])
self.assertListEqual(y[:num].tolist(), yp.tolist())
@@ -103,7 +121,11 @@ class Stree_test(unittest.TestCase):
"""
X, y = load_dataset(self._random_state)
for kernel in self._kernels:
clf = Stree(kernel=kernel, random_state=self._random_state)
clf = Stree(
kernel=kernel,
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
random_state=self._random_state,
)
clf.fit(X, y)
# Compute prediction line by line
yp_line = np.array([], dtype=int)
@@ -135,9 +157,13 @@ class Stree_test(unittest.TestCase):
]
computed = []
expected_string = ""
clf = Stree(kernel="linear", random_state=self._random_state)
clf = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
)
clf.fit(*load_dataset(self._random_state))
for node in clf:
for node in iter(clf):
computed.append(str(node))
expected_string += str(node) + "\n"
self.assertListEqual(expected, computed)
@@ -173,7 +199,12 @@ class Stree_test(unittest.TestCase):
def test_check_max_depth(self):
depths = (3, 4)
for depth in depths:
tcl = Stree(random_state=self._random_state, max_depth=depth)
tcl = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
max_depth=depth,
)
tcl.fit(*load_dataset(self._random_state))
self.assertEqual(depth, tcl.depth_)
@@ -194,7 +225,7 @@ class Stree_test(unittest.TestCase):
for kernel in self._kernels:
clf = Stree(
kernel=kernel,
split_criteria="max_samples",
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
random_state=self._random_state,
)
px = [[1, 2], [5, 6], [9, 10]]
@@ -205,26 +236,36 @@ class Stree_test(unittest.TestCase):
self.assertListEqual(py, clf.classes_.tolist())
def test_muticlass_dataset(self):
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=RuntimeWarning)
datasets = {
"Synt": load_dataset(random_state=self._random_state, n_classes=3),
"Iris": load_wine(return_X_y=True),
}
outcomes = {
"Synt": {
"max_samples linear": 0.9606666666666667,
"max_samples rbf": 0.7133333333333334,
"max_samples poly": 0.618,
"impurity linear": 0.9606666666666667,
"impurity rbf": 0.7133333333333334,
"impurity poly": 0.618,
"max_samples liblinear": 0.9493333333333334,
"max_samples linear": 0.9426666666666667,
"max_samples rbf": 0.9606666666666667,
"max_samples poly": 0.9373333333333334,
"max_samples sigmoid": 0.824,
"impurity liblinear": 0.9493333333333334,
"impurity linear": 0.9426666666666667,
"impurity rbf": 0.9606666666666667,
"impurity poly": 0.9373333333333334,
"impurity sigmoid": 0.824,
},
"Iris": {
"max_samples liblinear": 0.9550561797752809,
"max_samples linear": 1.0,
"max_samples rbf": 0.6910112359550562,
"max_samples poly": 0.6966292134831461,
"impurity linear": 1,
"impurity rbf": 0.6910112359550562,
"impurity poly": 0.6966292134831461,
"max_samples rbf": 0.6685393258426966,
"max_samples poly": 0.6853932584269663,
"max_samples sigmoid": 0.6404494382022472,
"impurity liblinear": 0.9550561797752809,
"impurity linear": 1.0,
"impurity rbf": 0.6685393258426966,
"impurity poly": 0.6853932584269663,
"impurity sigmoid": 0.6404494382022472,
},
}
@@ -233,18 +274,22 @@ class Stree_test(unittest.TestCase):
for criteria in ["max_samples", "impurity"]:
for kernel in self._kernels:
clf = Stree(
C=55,
max_iter=1e5,
max_iter=1e4,
multiclass_strategy="ovr"
if kernel == "liblinear"
else "ovo",
kernel=kernel,
random_state=self._random_state,
)
clf.fit(px, py)
outcome = outcomes[name][f"{criteria} {kernel}"]
# print(
# f"{name} {criteria} {kernel} {outcome} {clf.score(px"
# ", py)}"
# )
self.assertAlmostEqual(outcome, clf.score(px, py))
# print(f'"{criteria} {kernel}": {clf.score(px, py)},')
self.assertAlmostEqual(
outcome,
clf.score(px, py),
5,
f"{name} - {criteria} - {kernel}",
)
def test_max_features(self):
n_features = 16
@@ -269,6 +314,12 @@ class Stree_test(unittest.TestCase):
with self.assertRaises(ValueError):
_ = clf._initialize_max_features()
def test_wrong_max_features(self):
X, y = load_dataset(n_features=15)
clf = Stree(max_features=16)
with self.assertRaises(ValueError):
clf.fit(X, y)
def test_get_subspaces(self):
dataset = np.random.random((10, 16))
y = np.random.randint(0, 2, 10)
@@ -306,17 +357,20 @@ class Stree_test(unittest.TestCase):
clf.predict(X[:, :3])
# Tests of score
def test_score_binary(self):
"""Check score for binary classification."""
X, y = load_dataset(self._random_state)
accuracies = [
0.9506666666666667,
0.9493333333333334,
0.9606666666666667,
0.9433333333333334,
0.9153333333333333,
]
for kernel, accuracy_expected in zip(self._kernels, accuracies):
clf = Stree(
random_state=self._random_state,
multiclass_strategy="ovr" if kernel == "liblinear" else "ovo",
kernel=kernel,
)
clf.fit(X, y)
@@ -327,12 +381,19 @@ class Stree_test(unittest.TestCase):
self.assertAlmostEqual(accuracy_expected, accuracy_score)
def test_score_max_features(self):
"""Check score using max_features."""
X, y = load_dataset(self._random_state)
clf = Stree(random_state=self._random_state, max_features=2)
clf = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
max_features=2,
)
clf.fit(X, y)
self.assertAlmostEqual(0.9453333333333334, clf.score(X, y))
def test_bogus_splitter_parameter(self):
"""Check that bogus splitter parameter raises exception."""
clf = Stree(splitter="duck")
with self.assertRaises(ValueError):
clf.fit(*load_dataset())
@@ -340,7 +401,9 @@ class Stree_test(unittest.TestCase):
def test_multiclass_classifier_integrity(self):
"""Checks if the multiclass operation is done right"""
X, y = load_iris(return_X_y=True)
clf = Stree(random_state=0)
clf = Stree(
kernel="liblinear", multiclass_strategy="ovr", random_state=0
)
clf.fit(X, y)
score = clf.score(X, y)
# Check accuracy of the whole model
@@ -386,6 +449,7 @@ class Stree_test(unittest.TestCase):
self.assertListEqual([47], resdn[1].tolist())
def test_score_multiclass_rbf(self):
"""Test score for multiclass classification with rbf kernel."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
@@ -396,13 +460,14 @@ class Stree_test(unittest.TestCase):
clf2 = Stree(
kernel="rbf", random_state=self._random_state, normalize=True
)
self.assertEqual(0.768, clf.fit(X, y).score(X, y))
self.assertEqual(0.814, clf2.fit(X, y).score(X, y))
self.assertEqual(0.966, clf.fit(X, y).score(X, y))
self.assertEqual(0.964, clf2.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.6741573033707865, clf.fit(X, y).score(X, y))
self.assertEqual(0.6685393258426966, clf.fit(X, y).score(X, y))
self.assertEqual(1.0, clf2.fit(X, y).score(X, y))
def test_score_multiclass_poly(self):
"""Test score for multiclass classification with poly kernel."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
@@ -417,24 +482,81 @@ class Stree_test(unittest.TestCase):
random_state=self._random_state,
normalize=True,
)
self.assertEqual(0.786, clf.fit(X, y).score(X, y))
self.assertEqual(0.818, clf2.fit(X, y).score(X, y))
self.assertEqual(0.946, clf.fit(X, y).score(X, y))
self.assertEqual(0.972, clf2.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.702247191011236, clf.fit(X, y).score(X, y))
self.assertEqual(0.6067415730337079, clf2.fit(X, y).score(X, y))
self.assertEqual(0.7808988764044944, clf.fit(X, y).score(X, y))
self.assertEqual(1.0, clf2.fit(X, y).score(X, y))
def test_score_multiclass_liblinear(self):
"""Test score for multiclass classification with liblinear kernel."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=500,
)
clf = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
C=10,
)
clf2 = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
normalize=True,
)
self.assertEqual(0.968, clf.fit(X, y).score(X, y))
self.assertEqual(0.97, clf2.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(1.0, clf.fit(X, y).score(X, y))
self.assertEqual(1.0, clf2.fit(X, y).score(X, y))
def test_score_multiclass_sigmoid(self):
"""Test score for multiclass classification with sigmoid kernel."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=500,
)
clf = Stree(kernel="sigmoid", random_state=self._random_state, C=10)
clf2 = Stree(
kernel="sigmoid",
random_state=self._random_state,
normalize=True,
C=10,
)
self.assertEqual(0.796, clf.fit(X, y).score(X, y))
self.assertEqual(0.952, clf2.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.6910112359550562, clf.fit(X, y).score(X, y))
self.assertEqual(0.9662921348314607, clf2.fit(X, y).score(X, y))
def test_score_multiclass_linear(self):
"""Test score for multiclass classification with linear kernel."""
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=RuntimeWarning)
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=1500,
)
clf = Stree(kernel="linear", random_state=self._random_state)
clf = Stree(
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
)
self.assertEqual(0.9533333333333334, clf.fit(X, y).score(X, y))
# Check with context based standardization
clf2 = Stree(
kernel="linear", random_state=self._random_state, normalize=True
kernel="liblinear",
multiclass_strategy="ovr",
random_state=self._random_state,
normalize=True,
)
self.assertEqual(0.9526666666666667, clf2.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
@@ -442,11 +564,13 @@ class Stree_test(unittest.TestCase):
self.assertEqual(1.0, clf2.fit(X, y).score(X, y))
def test_zero_all_sample_weights(self):
"""Test exception raises when all sample weights are zero."""
X, y = load_dataset(self._random_state)
with self.assertRaises(ValueError):
Stree().fit(X, y, np.zeros(len(y)))
def test_mask_samples_weighted_zero(self):
"""Check that the weighted zero samples are masked."""
X = np.array(
[
[1, 1],
@@ -461,7 +585,7 @@ class Stree_test(unittest.TestCase):
]
)
y = np.array([1, 1, 1, 2, 2, 2, 5, 5, 5])
yw = np.array([1, 1, 1, 5, 5, 5, 5, 5, 5])
yw = np.array([1, 1, 1, 1, 1, 1, 5, 5, 5])
w = [1, 1, 1, 0, 0, 0, 1, 1, 1]
model1 = Stree().fit(X, y)
model2 = Stree().fit(X, y, w)
@@ -474,6 +598,7 @@ class Stree_test(unittest.TestCase):
self.assertEqual(model2.score(X, y, w), 1)
def test_depth(self):
"""Check depth of the tree."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
@@ -489,6 +614,7 @@ class Stree_test(unittest.TestCase):
self.assertEqual(4, clf.depth_)
def test_nodes_leaves(self):
"""Check number of nodes and leaves."""
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
@@ -498,16 +624,17 @@ class Stree_test(unittest.TestCase):
clf = Stree(random_state=self._random_state)
clf.fit(X, y)
nodes, leaves = clf.nodes_leaves()
self.assertEqual(25, nodes)
self.assertEqual(13, leaves)
self.assertEqual(31, nodes)
self.assertEqual(16, leaves)
X, y = load_wine(return_X_y=True)
clf = Stree(random_state=self._random_state)
clf.fit(X, y)
nodes, leaves = clf.nodes_leaves()
self.assertEqual(9, nodes)
self.assertEqual(5, leaves)
self.assertEqual(11, nodes)
self.assertEqual(6, leaves)
def test_nodes_leaves_artificial(self):
"""Check leaves of artificial dataset."""
n1 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test1")
n2 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test2")
n3 = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test3")
@@ -524,3 +651,77 @@ class Stree_test(unittest.TestCase):
nodes, leaves = clf.nodes_leaves()
self.assertEqual(6, nodes)
self.assertEqual(2, leaves)
def test_bogus_multiclass_strategy(self):
"""Check invalid multiclass strategy."""
clf = Stree(multiclass_strategy="other")
X, y = load_wine(return_X_y=True)
with self.assertRaises(ValueError):
clf.fit(X, y)
def test_multiclass_strategy(self):
"""Check multiclass strategy."""
X, y = load_wine(return_X_y=True)
clf_o = Stree(multiclass_strategy="ovo")
clf_r = Stree(multiclass_strategy="ovr")
score_o = clf_o.fit(X, y).score(X, y)
score_r = clf_r.fit(X, y).score(X, y)
self.assertEqual(1.0, score_o)
self.assertEqual(0.9269662921348315, score_r)
def test_incompatible_hyperparameters(self):
"""Check incompatible hyperparameters."""
X, y = load_wine(return_X_y=True)
clf = Stree(kernel="liblinear", multiclass_strategy="ovo")
with self.assertRaises(ValueError):
clf.fit(X, y)
clf = Stree(multiclass_strategy="ovo", split_criteria="max_samples")
with self.assertRaises(ValueError):
clf.fit(X, y)
def test_version(self):
"""Check STree version."""
clf = Stree()
self.assertEqual(__version__, clf.version())
def test_graph(self):
"""Check graphviz representation of the tree."""
X, y = load_wine(return_X_y=True)
clf = Stree(random_state=self._random_state)
expected_head = (
"digraph STree {\nlabel=<STree >\nfontsize=30\n"
"fontcolor=blue\nlabelloc=t\n"
)
expected_tail = (
' [shape=box style=filled label="class=1 impurity=0.000 '
'classes=[1] samples=[1]"];\n}\n'
)
self.assertEqual(clf.graph(), expected_head + "}\n")
clf.fit(X, y)
computed = clf.graph()
computed_head = computed[: len(expected_head)]
num = -len(expected_tail)
computed_tail = computed[num:]
self.assertEqual(computed_head, expected_head)
self.assertEqual(computed_tail, expected_tail)
def test_graph_title(self):
X, y = load_wine(return_X_y=True)
clf = Stree(random_state=self._random_state)
expected_head = (
"digraph STree {\nlabel=<STree Sample title>\nfontsize=30\n"
"fontcolor=blue\nlabelloc=t\n"
)
expected_tail = (
' [shape=box style=filled label="class=1 impurity=0.000 '
'classes=[1] samples=[1]"];\n}\n'
)
self.assertEqual(clf.graph("Sample title"), expected_head + "}\n")
clf.fit(X, y)
computed = clf.graph("Sample title")
computed_head = computed[: len(expected_head)]
num = -len(expected_tail)
computed_tail = computed[num:]
self.assertEqual(computed_head, expected_head)
self.assertEqual(computed_tail, expected_tail)


@@ -1,11 +1,14 @@
from sklearn.datasets import make_classification
import numpy as np
def load_dataset(random_state=0, n_classes=2, n_features=3, n_samples=1500):
def load_dataset(
random_state=0, n_classes=2, n_features=3, n_samples=1500, n_informative=3
):
X, y = make_classification(
n_samples=n_samples,
n_features=n_features,
n_informative=3,
n_informative=n_informative,
n_redundant=0,
n_repeated=0,
n_classes=n_classes,
@@ -15,3 +18,12 @@ def load_dataset(random_state=0, n_classes=2, n_features=3, n_samples=1500):
random_state=random_state,
)
return X, y
def load_disc_dataset(
random_state=0, n_classes=2, n_features=3, n_samples=1500
):
np.random.seed(random_state)
X = np.random.randint(1, 17, size=(n_samples, n_features)).astype(float)
y = np.random.randint(low=0, high=n_classes, size=(n_samples), dtype=int)
return X, y
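# Usage sketch: load_disc_dataset draws integer-valued features in [1, 17)
# and labels independent of X (useful only to exercise discrete selectors):
#   >>> X, y = load_disc_dataset(n_features=20)
#   >>> X.shape, y.shape
#   ((1500, 20), (1500,))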