Compare commits

...

6 Commits

Author SHA1 Message Date
0340584c52 Complete source comments
Change docstring type to numpy
Update hyperparameters table and explanation
2021-01-18 14:07:43 +01:00
9b3c7ccdfa Add Hyperparameters description to README
Comment get_subspace method
Add environment info for binder (runtime.txt)
2021-01-13 11:39:47 +01:00
Ricardo Montañana Gómez
e4ac5075e5 Add main workflow action (#20)
* Add main workflow action

* lock scikit-learn version to 0.23.2

* exchange Codeship badge with GitHub's
2021-01-11 13:46:30 +01:00
Ricardo Montañana Gómez
36816074ff Combinatorial explosion (#19)
* Remove itertools combinations from subspaces

* Generates 5 random subspaces at most
2021-01-10 13:32:22 +01:00
475ad7e752 Fix mistakes in function comments 2020-11-11 19:14:36 +01:00
Ricardo Montañana Gómez
1c869e154e Enhance partition (#16)
#15 Create impurity function in Stree (consistent name, same criteria as other splitter parameter)
Create test for the new function
Update init test
Update test splitter parameters
Rename old impurity function to partition_impurity
close #15
* Complete implementation of splitter_type = impurity with tests
Remove max_distance & min_distance splitter types

* Fix mistake in computing multiclass node belief
Set default criterion for split to entropy instead of gini
Set default max_iter to 1e5 instead of 1e3
Change up-down criterion to match SVC multiclass
Fix impurity method of splitting nodes
Update jupyter Notebooks
2020-11-03 11:36:05 +01:00
16 changed files with 1433 additions and 693 deletions

47
.github/workflows/main.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
name: CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
  workflow_dispatch:
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest, ubuntu-latest]
        python: [3.8]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python }}
      - name: Install dependencies
        run: |
          pip install -q --upgrade pip
          pip install -q -r requirements.txt
          pip install -q --upgrade codecov coverage black flake8 codacy-coverage
      - name: Lint
        run: |
          black --check --diff stree
          flake8 --count stree
      - name: Tests
        run: |
          coverage run -m unittest -v stree.tests
          coverage xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: ./coverage.xml
      - name: Run codacy-coverage-reporter
        if: runner.os == 'Linux'
        uses: codacy/codacy-coverage-reporter-action@master
        with:
          project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
          coverage-reports: coverage.xml

1
.gitignore vendored
View File

@@ -133,3 +133,4 @@ dmypy.json
.pre-commit-config.yaml
**.csv
.virtual_documents

View File

@@ -1,6 +1,6 @@
[![Codeship Status for Doctorado-ML/STree](https://app.codeship.com/projects/8b2bd350-8a1b-0138-5f2c-3ad36f3eb318/status?branch=master)](https://app.codeship.com/projects/399170)
![CI](https://github.com/Doctorado-ML/STree/workflows/CI/badge.svg)
[![codecov](https://codecov.io/gh/doctorado-ml/stree/branch/master/graph/badge.svg)](https://codecov.io/gh/doctorado-ml/stree)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/35fa3dfd53a24a339344b33d9f9f2f3d)](https://www.codacy.com/gh/Doctorado-ML/STree?utm_source=github.com&utm_medium=referral&utm_content=Doctorado-ML/STree&utm_campaign=Badge_Grade)
# Stree
@@ -18,23 +18,45 @@ pip install git+https://github.com/doctorado-ml/stree
### Jupyter notebooks
* [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
- [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/Doctorado-ML/STree/master?urlpath=lab/tree/notebooks/benchmark.ipynb) Benchmark
* [![Test](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
- [![Test](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/benchmark.ipynb) Benchmark
* [![Test2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Test features
- [![Test2](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/features.ipynb) Test features
* [![Adaboost](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/adaboost.ipynb) Adaboost
- [![Adaboost](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/adaboost.ipynb) Adaboost
* [![Gridsearch](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/gridsearch.ipynb) Gridsearch
- [![Gridsearch](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/gridsearch.ipynb) Gridsearch
* [![Test Graphics](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/test_graphs.ipynb) Test Graphics
- [![Test Graphics](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/test_graphs.ipynb) Test Graphics
### Command line
## Hyperparameters
```bash
python main.py
```
| | **Hyperparameter** | **Type/Values** | **Default** | **Meaning** |
| --- | ------------------ | ------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \* | C | \<float\> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| \* | kernel | {"linear", "poly", "rbf"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of linear, poly or rbf. |
| \* | max_iter | \<int\> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| \* | random_state | \<int\> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| | max_depth | \<int\> | None | Specifies the maximum depth of the tree |
| \* | tol | \<float\> | 1e-4 | Tolerance for stopping criterion. |
| \* | degree | \<int\> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| \* | gamma | {"scale", "auto"} or \<float\> | scale | Kernel coefficient for rbf and poly.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| | split_criteria | {"impurity", "max_samples"} | impurity | Decides (only in case of multiclass classification) which column (class) to use to split the dataset in a node\*\* |
| | criterion | {“gini”, “entropy”} | entropy | The function to measure the quality of a split (only used if max_features != num_features). <br>Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| | min_samples_split | \<int\> | 0 | The minimum number of samples required to split an internal node. 0 (default) means no minimum |
| | max_features | \<int\>, \<float\> <br><br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| | splitter | {"best", "random"} | random | The strategy used to choose the feature set at each node (only used if max_features != num_features). <br>Supported strategies are “best” to choose the best feature set and “random” to choose a random combination. <br>The algorithm generates 5 candidates at most to choose from in both strategies. |
\* Hyperparameter used by the support vector classifier of every node
\*\* **Splitting in a STree node**
The decision function is applied to the dataset and the distances from the samples to the hyperplanes are computed in a matrix. This matrix has as many columns as classes in the dataset (if more than two, i.e. multiclass classification) or one column if it is a binary classification dataset. In binary classification only one hyperplane is computed, so only one column is needed to store the distances of the samples to it. If three or more classes are present, we need as many hyperplanes as there are classes, and therefore one column per hyperplane.
In multiclass classification we have to decide which column to take into account to make the split, which depends on the _split_criteria_ hyperparameter: if "impurity" is chosen, STree computes the information gain of every split candidate using each column and chooses the one that maximizes it; otherwise ("max_samples") STree chooses the column of the class with the most samples (the column with the most positive numbers in it).
Once the split column is chosen, the algorithm separates the samples with positive distances to the hyperplane from the rest.
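Below is a minimal sketch of this column selection, assuming `distances` is the (m, n_classes) matrix produced by the decision function; `select_split_column` and `entropy` are illustrative names, not part of STree's public API:

```python
import numpy as np

def entropy(labels: np.ndarray) -> float:
    # Shannon entropy of a label vector
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def select_split_column(distances: np.ndarray, y: np.ndarray, split_criteria: str) -> int:
    if split_criteria == "max_samples":
        # column (class) with the most samples
        _, counts = np.unique(y, return_counts=True)
        return int(np.argmax(counts))
    # "impurity": column whose sign split yields the largest information gain
    best_col, best_gain = -1, 0.0
    for col in range(distances.shape[1]):
        up, down = y[distances[:, col] > 0], y[distances[:, col] <= 0]
        gain = entropy(y) - (up.size * entropy(up) + down.size * entropy(down)) / y.size
        if gain > best_gain:
            best_col, best_gain = col, gain
    return best_col  # -1 means no column produces information gain
```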
## Tests

View File

@@ -8,7 +8,7 @@ random_state = 1
X, y = load_iris(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(
X, y, test_size=0.2, random_state=random_state
X, y, test_size=0.3, random_state=random_state
)
now = time.time()

File diff suppressed because one or more lines are too long

View File

@@ -34,9 +34,12 @@
"outputs": [],
"source": [
"import time\n",
"import warnings\n",
"from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier\n",
"from sklearn.model_selection import train_test_split\n",
"from stree import Stree"
"from sklearn.exceptions import ConvergenceWarning\n",
"from stree import Stree\n",
"warnings.filterwarnings(\"ignore\", category=ConvergenceWarning)"
]
},
{
@@ -59,9 +62,15 @@
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Fraud: 0.173% 492\nValid: 99.827% 284315\nX.shape (100492, 28) y.shape (100492,)\nFraud: 0.644% 647\nValid: 99.356% 99845\n"
"output_type": "stream",
"text": [
"Fraud: 0.173% 492\n",
"Valid: 99.827% 284315\n",
"X.shape (100492, 28) y.shape (100492,)\n",
"Fraud: 0.651% 654\n",
"Valid: 99.349% 99838\n"
]
}
],
"source": [
@@ -127,14 +136,18 @@
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Score Train: 0.9985784146480154\nScore Test: 0.9981093273185617\nTook 73.27 seconds\n"
"output_type": "stream",
"text": [
"Score Train: 0.9984504719663368\n",
"Score Test: 0.9983415151917209\n",
"Took 26.09 seconds\n"
]
}
],
"source": [
"now = time.time()\n",
"clf = Stree(max_depth=3, random_state=random_state)\n",
"clf = Stree(max_depth=3, random_state=random_state, max_iter=1e3)\n",
"clf.fit(Xtrain, ytrain)\n",
"print(\"Score Train: \", clf.score(Xtrain, ytrain))\n",
"print(\"Score Test: \", clf.score(Xtest, ytest))\n",
@@ -167,15 +180,19 @@
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Kernel: linear\tTime: 93.78 seconds\tScore Train: 0.9983083\tScore Test: 0.9983083\nKernel: rbf\tTime: 18.32 seconds\tScore Train: 0.9935602\tScore Test: 0.9935651\nKernel: poly\tTime: 69.68 seconds\tScore Train: 0.9973132\tScore Test: 0.9972801\n"
"output_type": "stream",
"text": [
"Kernel: linear\tTime: 43.49 seconds\tScore Train: 0.9980098\tScore Test: 0.9980762\n",
"Kernel: rbf\tTime: 8.86 seconds\tScore Train: 0.9934891\tScore Test: 0.9934987\n",
"Kernel: poly\tTime: 41.14 seconds\tScore Train: 0.9972279\tScore Test: 0.9973133\n"
]
}
],
"source": [
"for kernel in ['linear', 'rbf', 'poly']:\n",
" now = time.time()\n",
" clf = AdaBoostClassifier(base_estimator=Stree(C=C, kernel=kernel, max_depth=max_depth, random_state=random_state), algorithm=\"SAMME\", n_estimators=n_estimators, random_state=random_state)\n",
" clf = AdaBoostClassifier(base_estimator=Stree(C=C, kernel=kernel, max_depth=max_depth, random_state=random_state, max_iter=1e3), algorithm=\"SAMME\", n_estimators=n_estimators, random_state=random_state)\n",
" clf.fit(Xtrain, ytrain)\n",
" score_train = clf.score(Xtrain, ytrain)\n",
" score_test = clf.score(Xtest, ytest)\n",
@@ -208,15 +225,19 @@
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Kernel: linear\tTime: 387.06 seconds\tScore Train: 0.9985784\tScore Test: 0.9981093\nKernel: rbf\tTime: 144.00 seconds\tScore Train: 0.9992750\tScore Test: 0.9983415\nKernel: poly\tTime: 101.78 seconds\tScore Train: 0.9992466\tScore Test: 0.9981757\n"
"output_type": "stream",
"text": [
"Kernel: linear\tTime: 187.51 seconds\tScore Train: 0.9984505\tScore Test: 0.9983083\n",
"Kernel: rbf\tTime: 73.65 seconds\tScore Train: 0.9993461\tScore Test: 0.9985074\n",
"Kernel: poly\tTime: 52.19 seconds\tScore Train: 0.9993461\tScore Test: 0.9987727\n"
]
}
],
"source": [
"for kernel in ['linear', 'rbf', 'poly']:\n",
" now = time.time()\n",
" clf = BaggingClassifier(base_estimator=Stree(C=C, kernel=kernel, max_depth=max_depth, random_state=random_state), n_estimators=n_estimators, random_state=random_state)\n",
" clf = BaggingClassifier(base_estimator=Stree(C=C, kernel=kernel, max_depth=max_depth, random_state=random_state, max_iter=1e3), n_estimators=n_estimators, random_state=random_state)\n",
" clf.fit(Xtrain, ytrain)\n",
" score_train = clf.score(Xtrain, ytrain)\n",
" score_test = clf.score(Xtest, ytest)\n",
@@ -225,6 +246,11 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
@@ -235,14 +261,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6-final"
},
"orig_nbformat": 2,
"kernelspec": {
"name": "python37664bitgeneralvenve3128601eb614c5da59c5055670b6040",
"display_name": "Python 3.7.6 64-bit ('general': venv)"
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

File diff suppressed because one or more lines are too long

View File

@@ -30,45 +30,59 @@
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "zIHKVxthDZEa",
"colab": {},
"colab_type": "code",
"colab": {}
"id": "zIHKVxthDZEa"
},
"outputs": [],
"source": [
"from sklearn.ensemble import AdaBoostClassifier\n",
"from sklearn.svm import LinearSVC\n",
"from sklearn.model_selection import GridSearchCV, train_test_split\n",
"from stree import Stree"
],
"execution_count": 2,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "IEmq50QgDZEi",
"colab": {},
"colab_type": "code",
"colab": {}
"id": "IEmq50QgDZEi"
},
"outputs": [],
"source": [
"import os\n",
"if not os.path.isfile('data/creditcard.csv'):\n",
" !wget --no-check-certificate --content-disposition http://nube.jccm.es/index.php/s/Zs7SYtZQJ3RQ2H2/download\n",
" !tar xzf creditcard.tgz"
],
"execution_count": 3,
"outputs": []
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "z9Q-YUfBDZEq",
"colab_type": "code",
"colab": {},
"colab_type": "code",
"id": "z9Q-YUfBDZEq",
"outputId": "afc822fb-f16a-4302-8a67-2b9e2880159b",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fraud: 0.173% 492\n",
"Valid: 99.827% 284315\n",
"X.shape (1492, 28) y.shape (1492,)\n",
"Fraud: 33.177% 495\n",
"Valid: 66.823% 997\n"
]
}
],
"source": [
"random_state=1\n",
"\n",
@@ -107,14 +121,6 @@
"Xtest = data[1]\n",
"ytrain = data[2]\n",
"ytest = data[3]"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Fraud: 0.173% 492\nValid: 99.827% 284315\nX.shape (1492, 28) y.shape (1492,)\nFraud: 32.976% 492\nValid: 67.024% 1000\n"
}
]
},
{
@@ -126,24 +132,47 @@
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "HmX3kR4PDZEw",
"colab": {},
"colab_type": "code",
"colab": {}
"id": "HmX3kR4PDZEw"
},
"outputs": [],
"source": [
"parameters = {\n",
" 'base_estimator': [Stree()],\n",
"parameters = [{\n",
" 'base_estimator': [Stree(random_state=random_state)],\n",
" 'n_estimators': [10, 25],\n",
" 'learning_rate': [.5, 1],\n",
" 'base_estimator__split_criteria': ['max_samples', 'impurity'],\n",
" 'base_estimator__tol': [.1, 1e-02],\n",
" 'base_estimator__max_depth': [3, 5],\n",
" 'base_estimator__C': [7, 55],\n",
" 'base_estimator__kernel': ['linear', 'poly', 'rbf']\n",
"}"
],
"execution_count": 5,
"outputs": []
" 'base_estimator__max_depth': [3, 5, 7],\n",
" 'base_estimator__C': [1, 7, 55],\n",
" 'base_estimator__kernel': ['linear']\n",
"},\n",
"{\n",
" 'base_estimator': [Stree(random_state=random_state)],\n",
" 'n_estimators': [10, 25],\n",
" 'learning_rate': [.5, 1],\n",
" 'base_estimator__split_criteria': ['max_samples', 'impurity'],\n",
" 'base_estimator__tol': [.1, 1e-02],\n",
" 'base_estimator__max_depth': [3, 5, 7],\n",
" 'base_estimator__C': [1, 7, 55],\n",
" 'base_estimator__degree': [3, 5, 7],\n",
" 'base_estimator__kernel': ['poly']\n",
"},\n",
"{\n",
" 'base_estimator': [Stree(random_state=random_state)],\n",
" 'n_estimators': [10, 25],\n",
" 'learning_rate': [.5, 1],\n",
" 'base_estimator__split_criteria': ['max_samples', 'impurity'],\n",
" 'base_estimator__tol': [.1, 1e-02],\n",
" 'base_estimator__max_depth': [3, 5, 7],\n",
" 'base_estimator__C': [1, 7, 55],\n",
" 'base_estimator__gamma': [.1, 1, 10],\n",
" 'base_estimator__kernel': ['rbf']\n",
"}]"
]
},
{
"cell_type": "code",
@@ -151,12 +180,26 @@
"metadata": {},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": "{'C': 1.0,\n 'criterion': 'gini',\n 'degree': 3,\n 'gamma': 'scale',\n 'kernel': 'linear',\n 'max_depth': None,\n 'max_features': None,\n 'max_iter': 1000,\n 'min_samples_split': 0,\n 'random_state': None,\n 'split_criteria': 'max_samples',\n 'splitter': 'random',\n 'tol': 0.0001}"
"text/plain": [
"{'C': 1.0,\n",
" 'criterion': 'entropy',\n",
" 'degree': 3,\n",
" 'gamma': 'scale',\n",
" 'kernel': 'linear',\n",
" 'max_depth': None,\n",
" 'max_features': None,\n",
" 'max_iter': 100000.0,\n",
" 'min_samples_split': 0,\n",
" 'random_state': None,\n",
" 'split_criteria': 'impurity',\n",
" 'splitter': 'random',\n",
" 'tol': 0.0001}"
]
},
"execution_count": 6,
"metadata": {},
"execution_count": 6
"output_type": "execute_result"
}
],
"source": [
@@ -165,61 +208,142 @@
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "CrcB8o6EDZE5",
"colab_type": "code",
"colab": {},
"colab_type": "code",
"id": "CrcB8o6EDZE5",
"outputId": "7703413a-d563-4289-a13b-532f38f82762",
"tags": []
},
"source": [
"random_state=2020\n",
"clf = AdaBoostClassifier(random_state=random_state, algorithm=\"SAMME\")\n",
"grid = GridSearchCV(clf, parameters, verbose=10, n_jobs=-1, return_train_score=True)\n",
"grid.fit(Xtrain, ytrain)"
],
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Fitting 5 folds for each of 96 candidates, totalling 480 fits\n[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 2 tasks | elapsed: 2.0s\n[Parallel(n_jobs=-1)]: Done 9 tasks | elapsed: 2.4s\n[Parallel(n_jobs=-1)]: Done 16 tasks | elapsed: 2.7s\n[Parallel(n_jobs=-1)]: Done 25 tasks | elapsed: 3.3s\n[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 4.3s\n[Parallel(n_jobs=-1)]: Done 45 tasks | elapsed: 5.3s\n[Parallel(n_jobs=-1)]: Done 56 tasks | elapsed: 6.6s\n[Parallel(n_jobs=-1)]: Done 69 tasks | elapsed: 8.1s\n[Parallel(n_jobs=-1)]: Done 82 tasks | elapsed: 9.4s\n[Parallel(n_jobs=-1)]: Done 97 tasks | elapsed: 10.1s\n[Parallel(n_jobs=-1)]: Done 112 tasks | elapsed: 11.1s\n[Parallel(n_jobs=-1)]: Done 129 tasks | elapsed: 12.3s\n[Parallel(n_jobs=-1)]: Done 146 tasks | elapsed: 13.6s\n[Parallel(n_jobs=-1)]: Done 165 tasks | elapsed: 14.9s\n[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 16.2s\n[Parallel(n_jobs=-1)]: Done 205 tasks | elapsed: 17.6s\n[Parallel(n_jobs=-1)]: Done 226 tasks | elapsed: 19.1s\n[Parallel(n_jobs=-1)]: Done 249 tasks | elapsed: 21.6s\n[Parallel(n_jobs=-1)]: Done 272 tasks | elapsed: 25.9s\n[Parallel(n_jobs=-1)]: Done 297 tasks | elapsed: 30.4s\n[Parallel(n_jobs=-1)]: Done 322 tasks | elapsed: 36.7s\n[Parallel(n_jobs=-1)]: Done 349 tasks | elapsed: 38.1s\n[Parallel(n_jobs=-1)]: Done 376 tasks | elapsed: 39.6s\n[Parallel(n_jobs=-1)]: Done 405 tasks | elapsed: 41.9s\n[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 44.9s\n[Parallel(n_jobs=-1)]: Done 465 tasks | elapsed: 48.2s\n[Parallel(n_jobs=-1)]: Done 480 out of 480 | elapsed: 49.2s finished\n"
"output_type": "stream",
"text": [
"Fitting 5 folds for each of 1008 candidates, totalling 5040 fits\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": "GridSearchCV(estimator=AdaBoostClassifier(algorithm='SAMME', random_state=2020),\n n_jobs=-1,\n param_grid={'base_estimator': [Stree(C=55, max_depth=3, tol=0.01)],\n 'base_estimator__C': [7, 55],\n 'base_estimator__kernel': ['linear', 'poly', 'rbf'],\n 'base_estimator__max_depth': [3, 5],\n 'base_estimator__tol': [0.1, 0.01],\n 'learning_rate': [0.5, 1], 'n_estimators': [10, 25]},\n return_train_score=True, verbose=10)"
"name": "stderr",
"output_type": "stream",
"text": [
"[Parallel(n_jobs=-1)]: Using backend LokyBackend with 16 concurrent workers.\n",
"[Parallel(n_jobs=-1)]: Done 40 tasks | elapsed: 1.6s\n",
"[Parallel(n_jobs=-1)]: Done 130 tasks | elapsed: 3.1s\n",
"[Parallel(n_jobs=-1)]: Done 256 tasks | elapsed: 5.5s\n",
"[Parallel(n_jobs=-1)]: Done 418 tasks | elapsed: 9.3s\n",
"[Parallel(n_jobs=-1)]: Done 616 tasks | elapsed: 18.6s\n",
"[Parallel(n_jobs=-1)]: Done 850 tasks | elapsed: 28.2s\n",
"[Parallel(n_jobs=-1)]: Done 1120 tasks | elapsed: 35.4s\n",
"[Parallel(n_jobs=-1)]: Done 1426 tasks | elapsed: 43.5s\n",
"[Parallel(n_jobs=-1)]: Done 1768 tasks | elapsed: 51.3s\n",
"[Parallel(n_jobs=-1)]: Done 2146 tasks | elapsed: 1.0min\n",
"[Parallel(n_jobs=-1)]: Done 2560 tasks | elapsed: 1.2min\n",
"[Parallel(n_jobs=-1)]: Done 3010 tasks | elapsed: 1.4min\n",
"[Parallel(n_jobs=-1)]: Done 3496 tasks | elapsed: 1.7min\n",
"[Parallel(n_jobs=-1)]: Done 4018 tasks | elapsed: 2.1min\n",
"[Parallel(n_jobs=-1)]: Done 4576 tasks | elapsed: 2.6min\n",
"[Parallel(n_jobs=-1)]: Done 5040 out of 5040 | elapsed: 2.9min finished\n"
]
},
{
"data": {
"text/plain": [
"GridSearchCV(estimator=AdaBoostClassifier(algorithm='SAMME', random_state=1),\n",
" n_jobs=-1,\n",
" param_grid=[{'base_estimator': [Stree(C=55, max_depth=7,\n",
" random_state=1,\n",
" split_criteria='max_samples',\n",
" tol=0.1)],\n",
" 'base_estimator__C': [1, 7, 55],\n",
" 'base_estimator__kernel': ['linear'],\n",
" 'base_estimator__max_depth': [3, 5, 7],\n",
" 'base_estimator__split_criteria': ['max_samples',\n",
" 'impuri...\n",
" {'base_estimator': [Stree(random_state=1)],\n",
" 'base_estimator__C': [1, 7, 55],\n",
" 'base_estimator__gamma': [0.1, 1, 10],\n",
" 'base_estimator__kernel': ['rbf'],\n",
" 'base_estimator__max_depth': [3, 5, 7],\n",
" 'base_estimator__split_criteria': ['max_samples',\n",
" 'impurity'],\n",
" 'base_estimator__tol': [0.1, 0.01],\n",
" 'learning_rate': [0.5, 1],\n",
" 'n_estimators': [10, 25]}],\n",
" return_train_score=True, verbose=5)"
]
},
"execution_count": 7,
"metadata": {},
"execution_count": 7
"output_type": "execute_result"
}
],
"source": [
"clf = AdaBoostClassifier(random_state=random_state, algorithm=\"SAMME\")\n",
"grid = GridSearchCV(clf, parameters, verbose=5, n_jobs=-1, return_train_score=True)\n",
"grid.fit(Xtrain, ytrain)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "ZjX88NoYDZE8",
"colab_type": "code",
"colab": {},
"colab_type": "code",
"id": "ZjX88NoYDZE8",
"outputId": "285163c8-fa33-4915-8ae7-61c4f7844344",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Best estimator: AdaBoostClassifier(algorithm='SAMME',\n",
" base_estimator=Stree(C=55, max_depth=7, random_state=1,\n",
" split_criteria='max_samples', tol=0.1),\n",
" learning_rate=0.5, n_estimators=25, random_state=1)\n",
"Best hyperparameters: {'base_estimator': Stree(C=55, max_depth=7, random_state=1, split_criteria='max_samples', tol=0.1), 'base_estimator__C': 55, 'base_estimator__kernel': 'linear', 'base_estimator__max_depth': 7, 'base_estimator__split_criteria': 'max_samples', 'base_estimator__tol': 0.1, 'learning_rate': 0.5, 'n_estimators': 25}\n",
"Best accuracy: 0.9511777695988222\n"
]
}
],
"source": [
"print(\"Best estimator: \", grid.best_estimator_)\n",
"print(\"Best hyperparameters: \", grid.best_params_)\n",
"print(\"Best accuracy: \", grid.best_score_)"
],
"execution_count": 8,
"outputs": [
]
},
{
"output_type": "stream",
"name": "stdout",
"text": "Best estimator: AdaBoostClassifier(algorithm='SAMME',\n base_estimator=Stree(C=55, max_depth=3, tol=0.01),\n learning_rate=0.5, n_estimators=25, random_state=2020)\nBest hyperparameters: {'base_estimator': Stree(C=55, max_depth=3, tol=0.01), 'base_estimator__C': 55, 'base_estimator__kernel': 'linear', 'base_estimator__max_depth': 3, 'base_estimator__tol': 0.01, 'learning_rate': 0.5, 'n_estimators': 25}\nBest accuracy: 0.9559440559440558\n"
}
"cell_type": "markdown",
"metadata": {},
"source": [
"Best estimator: AdaBoostClassifier(algorithm='SAMME',\n",
" base_estimator=Stree(C=55, max_depth=7, random_state=1,\n",
" split_criteria='max_samples', tol=0.1),\n",
" learning_rate=0.5, n_estimators=25, random_state=1)\n",
"Best hyperparameters: {'base_estimator': Stree(C=55, max_depth=7, random_state=1, split_criteria='max_samples', tol=0.1), 'base_estimator__C': 55, 'base_estimator__kernel': 'linear', 'base_estimator__max_depth': 7, 'base_estimator__split_criteria': 'max_samples', 'base_estimator__tol': 0.1, 'learning_rate': 0.5, 'n_estimators': 25}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Best accuracy: 0.9511777695988222"
]
}
],
"metadata": {
"colab": {
"name": "gridsearch.ipynb",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
@@ -230,18 +354,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6-final"
},
"orig_nbformat": 2,
"kernelspec": {
"name": "python37664bitgeneralvenvfbd0a23e74cf4e778460f5ffc6761f39",
"display_name": "Python 3.7.6 64-bit ('general': venv)"
},
"colab": {
"name": "gridsearch.ipynb",
"provenance": []
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 4
}

View File

@@ -1,4 +1,4 @@
numpy
scikit-learn
scikit-learn==0.23.2
pandas
ipympl

1
runtime.txt Normal file
View File

@@ -0,0 +1 @@
python-3.8

View File

@@ -1,6 +1,6 @@
import setuptools
__version__ = "0.9rc5"
__version__ = "0.9rc6"
__author__ = "Ricardo Montañana Gómez"
@@ -25,12 +25,12 @@ setuptools.setup(
classifiers=[
"Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Natural Language :: English",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Intended Audience :: Science/Research",
],
install_requires=["scikit-learn>=0.23.0", "numpy", "ipympl"],
install_requires=["scikit-learn==0.23.2", "numpy", "ipympl"],
test_suite="stree.tests",
zip_safe=False,
)

View File

@@ -3,15 +3,15 @@ __author__ = "Ricardo Montañana Gómez"
__copyright__ = "Copyright 2020, Ricardo Montañana Gómez"
__license__ = "MIT"
__version__ = "0.9"
Build an oblique tree classifier based on SVM Trees
Build an oblique tree classifier based on SVM nodes
"""
import os
import numbers
import random
import warnings
from math import log
from itertools import combinations
from math import log, factorial
from typing import Optional
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.svm import SVC, LinearSVC
@@ -57,6 +57,7 @@ class Snode:
)
self._features = features
self._impurity = impurity
self._partition_column: int = -1
@classmethod
def copy(cls, node: "Snode") -> "Snode":
@@ -69,6 +70,12 @@ class Snode:
node._title,
)
def set_partition_column(self, col: int):
self._partition_column = col
def get_partition_column(self) -> int:
return self._partition_column
def set_down(self, son):
self._down = son
@@ -93,9 +100,8 @@ class Snode:
classes, card = np.unique(self._y, return_counts=True)
if len(classes) > 1:
max_card = max(card)
min_card = min(card)
self._class = classes[card == max_card][0]
self._belief = max_card / (max_card + min_card)
self._belief = max_card / np.sum(card)
else:
self._belief = 1
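# Note: with class counts card = [5, 3, 2] the belief of the majority class
# is now 5 / 10 = 0.5; the previous max_card / (max_card + min_card) formula
# gave 5 / 7, overstating the belief in multiclass nodes.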
try:
@@ -104,24 +110,23 @@ class Snode:
self._class = None
def __str__(self) -> str:
if self.is_leaf():
count_values = np.unique(self._y, return_counts=True)
result = (
if self.is_leaf():
return (
f"{self._title} - Leaf class={self._class} belief="
f"{self._belief: .6f} impurity={self._impurity:.4f} "
f"counts={count_values}"
)
return result
else:
return (
f"{self._title} feaures={self._features} impurity="
f"{self._impurity:.4f} "
f"counts={count_values}"
)
class Siterator:
"""Stree preorder iterator
"""
"""Stree preorder iterator"""
def __init__(self, tree: Snode):
self._stack = []
@@ -167,20 +172,22 @@ class Splitter:
f"criterion must be gini or entropy got({criterion})"
)
if criteria not in ["min_distance", "max_samples", "max_distance"]:
if criteria not in [
"max_samples",
"impurity",
]:
raise ValueError(
"split_criteria has to be min_distance "
f"max_distance or max_samples got ({criteria})"
f"criteria has to be max_samples or impurity; got ({criteria})"
)
if splitter_type not in ["random", "best"]:
raise ValueError(
f"splitter must be either random or best got({splitter_type})"
f"splitter must be either random or best, got({splitter_type})"
)
self.criterion_function = getattr(self, f"_{self._criterion}")
self.decision_criteria = getattr(self, f"_{self._criteria}")
def impurity(self, y: np.array) -> np.array:
def partition_impurity(self, y: np.array) -> np.array:
return self.criterion_function(y)
@staticmethod
@@ -190,6 +197,18 @@ class Splitter:
@staticmethod
def _entropy(y: np.array) -> float:
"""Compute entropy of a labels set
Parameters
----------
y : np.array
set of labels
Returns
-------
float
entropy
"""
n_labels = len(y)
if n_labels <= 1:
return 0
@@ -208,6 +227,22 @@ class Splitter:
def information_gain(
self, labels: np.array, labels_up: np.array, labels_dn: np.array
) -> float:
"""Compute information gain of a split candidate
Parameters
----------
labels : np.array
labels of the dataset
labels_up : np.array
labels of one side
labels_dn : np.array
labels on the other side
Returns
-------
float
information gain
"""
imp_prev = self.criterion_function(labels)
card_up = card_dn = imp_up = imp_dn = 0
if labels_up is not None:
@@ -238,7 +273,7 @@ class Splitter:
node = Snode(
self._clf, dataset, labels, feature_set, 0.0, "subset"
)
self.partition(dataset, node)
self.partition(dataset, node, train=True)
y1, y2 = self.part(labels)
gain = self.information_gain(labels, y1, y2)
if gain > max_gain:
@@ -246,124 +281,206 @@ class Splitter:
selected = feature_set
return selected if selected is not None else feature_set
@staticmethod
def _generate_spaces(features: int, max_features: int) -> list:
"""Generate at most 5 feature random combinations
Parameters
----------
features : int
number of features in the dataset
max_features : int
number of features in each combination
Returns
-------
list
list with up to 5 combinations of features randomly selected
"""
comb = set()
# Generate at most 5 combinations
if max_features == features:
set_length = 1
else:
number = factorial(features) / (
factorial(max_features) * factorial(features - max_features)
)
set_length = min(5, number)
while len(comb) < set_length:
comb.add(
tuple(sorted(random.sample(range(features), max_features)))
)
return list(comb)
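# Example: the number of candidate subspaces is capped at 5. With 4 features
# and max_features=3 there are only C(4,3) = 4 combinations, so 4 are
# returned; with 250 features, any max_features in between yields exactly 5
# (see test_generate_subspaces).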
def _get_subspaces_set(
self, dataset: np.array, labels: np.array, max_features: int
) -> np.array:
features = range(dataset.shape[1])
features_sets = list(combinations(features, max_features))
"""Compute the indices of the features selected by splitter depending
on the self._splitter_type hyper parameter
Parameters
----------
dataset : np.array
array of samples
labels : np.array
labels of the dataset
max_features : int
number of features of the subspace
(<= number of features in dataset)
Returns
-------
np.array
indices of the features selected
"""
features_sets = self._generate_spaces(dataset.shape[1], max_features)
if len(features_sets) > 1:
if self._splitter_type == "random":
index = random.randint(0, len(features_sets) - 1)
return features_sets[index]
else:
# get only 3 sets at most
if len(features_sets) > 3:
features_sets = random.sample(features_sets, 3)
return self._select_best_set(dataset, labels, features_sets)
else:
return features_sets[0]
def get_subspace(
self, dataset: np.array, labels: np.array, max_features: int
) -> list:
"""Return the best subspace to make a split
) -> tuple:
"""Return a subspace of the selected dataset of max_features length.
Depending on hyperparmeter
Parameters
----------
dataset : np.array
array of samples (# samples, # features)
labels : np.array
labels of the dataset
max_features : int
number of features to form the subspace
Returns
-------
tuple
tuple with the dataset with only the features selected and the
indices of the features selected
"""
indices = self._get_subspaces_set(dataset, labels, max_features)
return dataset[:, indices], indices
@staticmethod
def _min_distance(data: np.array, _) -> np.array:
"""Assign class to min distances
def _impurity(self, data: np.array, y: np.array) -> np.array:
"""return column of dataset to be taken into account to split dataset
return a vector of classes so partition can separate class 0 from
the rest of classes, ie. class 0 goes to one splitted node and the
rest of classes go to the other
:param data: distances to hyper plane of every class
:type data: np.array (m, n_classes)
:param _: enable call compat with other measures
:type _: None
:return: vector with the class assigned to each sample
:rtype: np.array shape (m,)
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
np.array
column of dataset to be taken into account to split dataset
"""
return np.argmin(data, axis=1)
@staticmethod
def _max_distance(data: np.array, _) -> np.array:
"""Assign class to max distances
return a vector of classes so partition can separate class 0 from
the rest of classes, ie. class 0 goes to one splitted node and the
rest of classes go to the other
:param data: distances to hyper plane of every class
:type data: np.array (m, n_classes)
:param _: enable call compat with other measures
:type _: None
:return: vector with the class assigned to each sample values
(can be 0, 1, ...)
:rtype: np.array shape (m,)
"""
return np.argmax(data, axis=1)
max_gain = 0
selected = -1
for col in range(data.shape[1]):
tup = y[data[:, col] > 0]
tdn = y[data[:, col] <= 0]
info_gain = self.information_gain(y, tup, tdn)
if info_gain > max_gain:
selected = col
max_gain = info_gain
return selected
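# Example: with the 6x3 distance matrix and y = [1, 2, 1, 0, 0, 0] from
# test_impurity, the sign split of column 2 gives the largest information
# gain, so _impurity returns 2.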
@staticmethod
def _max_samples(data: np.array, y: np.array) -> np.array:
"""return distances of the class with more samples
"""return column of dataset to be taken into account to split dataset
:param data: distances to hyper plane of every class
:type data: np.array (m, n_classes)
:param y: vector of labels (classes)
:type y: np.array (m,)
:return: vector with distances to hyperplane (can be positive or neg.)
:rtype: np.array shape (m,)
Parameters
----------
data : np.array
distances to hyper plane of every class
y : np.array
vector of labels (classes)
Returns
-------
np.array
column of dataset to be taken into account to split dataset
"""
# select the class with max number of samples
_, samples = np.unique(y, return_counts=True)
selected = np.argmax(samples)
return data[:, selected]
return np.argmax(samples)
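# Example: with y = [1, 2, 1, 0, 0, 0] the class counts are [3, 2, 1], so
# class 0 has the most samples and _max_samples returns column 0
# (see test_max_samples).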
def partition(self, samples: np.array, node: Snode):
def partition(self, samples: np.array, node: Snode, train: bool):
"""Set the criteria to split arrays. Compute the indices of the samples
that should go to one side of the tree (down)
that should go to one side of the tree (up)
"""
# data contains the distances of every sample to every class hyperplane
# array of (m, nc) nc = # classes
data = self._distances(node, samples)
if data.shape[0] < self._min_samples_split:
self._down = np.ones((data.shape[0]), dtype=bool)
# there aren't enough samples to split
self._up = np.ones((data.shape[0]), dtype=bool)
return
if data.ndim > 1:
# split criteria for multiclass
data = self.decision_criteria(data, node._y)
self._down = data > 0
# Convert data to a (m, 1) array selecting values for samples
if train:
# in train time we have to compute the column to take into
# account to split the dataset
col = self.decision_criteria(data, node._y)
node.set_partition_column(col)
else:
# at predict time just use the column computed at train time,
# i.e. take the classifier of class <col>
col = node.get_partition_column()
if col == -1:
# No partition is producing information gain
data = np.ones(data.shape)
data = data[:, col]
self._up = data > 0
def part(self, origin: np.array) -> list:
"""Split an array in two based on indices (self._up) and its complement
partition has to be called first to establish up indices
Parameters
----------
origin : np.array
dataset to split
Returns
-------
list
list with two splits of the array
"""
down = ~self._up
return [
origin[self._up] if any(self._up) else None,
origin[down] if any(down) else None,
]
@staticmethod
def _distances(node: Snode, data: np.ndarray) -> np.array:
"""Compute distances of the samples to the hyperplane of the node
:param node: node containing the svm classifier
:type node: Snode
:param data: samples to find out distance to hyperplane
:type data: np.ndarray
:return: array of shape (m, 1) with the distances of every sample to
the hyperplane of the node
:rtype: np.array
Parameters
----------
node : Snode
node containing the svm classifier
data : np.ndarray
samples to compute distance to hyperplane
Returns
-------
np.array
array of shape (m, nc) with the distances of every sample to
the hyperplane of every class. nc = # of classes
"""
return node._clf.decision_function(data[:, node._features])
def part(self, origin: np.array) -> list:
"""Split an array in two based on indices (down) and its complement
:param origin: dataset to split
:type origin: np.array
:param down: indices to use to split array
:type down: np.array
:return: list with two splits of the array
:rtype: list
"""
up = ~self._down
return [
origin[up] if any(up) else None,
origin[self._down] if any(self._down) else None,
]
class Stree(BaseEstimator, ClassifierMixin):
"""Estimator that is based on binary trees of svm nodes
@@ -377,14 +494,14 @@ class Stree(BaseEstimator, ClassifierMixin):
self,
C: float = 1.0,
kernel: str = "linear",
max_iter: int = 1000,
max_iter: int = 1e5,
random_state: int = None,
max_depth: int = None,
tol: float = 1e-4,
degree: int = 3,
gamma="scale",
split_criteria: str = "max_samples",
criterion: str = "gini",
split_criteria: str = "impurity",
criterion: str = "entropy",
min_samples_split: int = 0,
max_features=None,
splitter: str = "random",
@@ -405,6 +522,7 @@ class Stree(BaseEstimator, ClassifierMixin):
def _more_tags(self) -> dict:
"""Required by sklearn to supply features of the classifier
make the labels array mandatory
:return: the tag required
:rtype: dict
@@ -416,16 +534,19 @@ class Stree(BaseEstimator, ClassifierMixin):
) -> "Stree":
"""Build the tree based on the dataset of samples and its labels
:param X: dataset of samples to make predictions
:type X: np.array
:param y: samples labels
:type y: np.array
:param sample_weight: weights of the samples. Rescale C per sample.
Hi' weights force the classifier to put more emphasis on these points
:type sample_weight: np.array optional
:raises ValueError: if parameters C or max_depth are out of bounds
:return: itself to be able to chain actions: fit().predict() ...
:rtype: Stree
Returns
-------
Stree
itself to be able to chain actions: fit().predict() ...
Raises
------
ValueError
if C < 0
ValueError
if max_depth < 1
ValueError
if all samples have 0 or negative weights
"""
# Check parameters are Ok.
if self.C < 0:
@@ -448,6 +569,10 @@ class Stree(BaseEstimator, ClassifierMixin):
sample_weight = _check_sample_weight(
sample_weight, X, dtype=np.float64
)
if not any(sample_weight):
raise ValueError(
"Invalid input - all samples have zero or negative weights."
)
check_classification_targets(y)
# Initialize computed parameters
self.splitter_ = Splitter(
@@ -469,6 +594,8 @@ class Stree(BaseEstimator, ClassifierMixin):
self.max_features_ = self._initialize_max_features()
self.tree_ = self.train(X, y, sample_weight, 1, "root")
self._build_predictor()
self.X_ = X
self.y_ = y
return self
def train(
@@ -478,23 +605,27 @@ class Stree(BaseEstimator, ClassifierMixin):
sample_weight: np.ndarray,
depth: int,
title: str,
) -> Snode:
) -> Optional[Snode]:
"""Recursive function to split the original dataset into predictor
nodes (leaves)
:param X: samples dataset
:type X: np.ndarray
:param y: samples labels
:type y: np.ndarray
:param sample_weight: weight of samples. Rescale C per sample.
Hi weights force the classifier to put more emphasis on these points.
:type sample_weight: np.ndarray
:param depth: actual depth in the tree
:type depth: int
:param title: description of the node
:type title: str
:return: binary tree
:rtype: Snode
Parameters
----------
X : np.ndarray
samples dataset
y : np.ndarray
samples labels
sample_weight : np.ndarray
weight of samples. Rescale C per sample.
depth : int
actual depth in the tree
title : str
description of the node
Returns
-------
Optional[Snode]
binary tree
"""
if depth > self.__max_depth:
return None
@@ -521,10 +652,10 @@ class Stree(BaseEstimator, ClassifierMixin):
if np.unique(y_next).shape[0] != self.n_classes_:
sample_weight += 1e-5
clf.fit(Xs, y, sample_weight=sample_weight)
impurity = self.splitter_.impurity(y)
impurity = self.splitter_.partition_impurity(y)
node = Snode(clf, X, y, features, impurity, title, sample_weight)
self.depth_ = max(depth, self.depth_)
self.splitter_.partition(X, node)
self.splitter_.partition(X, node, True)
X_U, X_D = self.splitter_.part(X)
y_u, y_d = self.splitter_.part(y)
sw_u, sw_d = self.splitter_.part(sample_weight)
@@ -544,8 +675,7 @@ class Stree(BaseEstimator, ClassifierMixin):
return node
def _build_predictor(self):
"""Process the leaves to make them predictors
"""
"""Process the leaves to make them predictors"""
def run_tree(node: Snode):
if node.is_leaf():
@@ -557,8 +687,7 @@ class Stree(BaseEstimator, ClassifierMixin):
run_tree(self.tree_)
def _build_clf(self):
""" Build the correct classifier for the node
"""
"""Build the correct classifier for the node"""
return (
LinearSVC(
max_iter=self.max_iter,
@@ -581,12 +710,17 @@ class Stree(BaseEstimator, ClassifierMixin):
def _reorder_results(y: np.array, indices: np.array) -> np.array:
"""Reorder an array based on the array of indices passed
:param y: data untidy
:type y: np.array
:param indices: indices used to set order
:type indices: np.array
:return: array y ordered
:rtype: np.array
Parameters
----------
y : np.array
data untidy
indices : np.array
indices used to set order
Returns
-------
np.array
array y ordered
"""
# return array of same type given in y
y_ordered = y.copy()
@@ -598,10 +732,22 @@ class Stree(BaseEstimator, ClassifierMixin):
def predict(self, X: np.array) -> np.array:
"""Predict labels for each sample in dataset passed
:param X: dataset of samples
:type X: np.array
:return: array of labels
:rtype: np.array
Parameters
----------
X : np.array
dataset of samples
Returns
-------
np.array
array of labels
Raises
------
ValueError
if dataset with inconsistent number of features
NotFittedError
if model is not fitted
"""
def predict_class(
@@ -613,7 +759,7 @@ class Stree(BaseEstimator, ClassifierMixin):
# set a class for every sample in dataset
prediction = np.full((xp.shape[0], 1), node._class)
return prediction, indices
self.splitter_.partition(xp, node)
self.splitter_.partition(xp, node, train=False)
x_u, x_d = self.splitter_.part(xp)
i_u, i_d = self.splitter_.part(indices)
prx_u, prin_u = predict_class(x_u, i_u, node.get_up())
@@ -643,15 +789,19 @@ class Stree(BaseEstimator, ClassifierMixin):
) -> float:
"""Compute accuracy of the prediction
:param X: dataset of samples to make predictions
:type X: np.array
:param y_true: samples labels
:type y_true: np.array
:param sample_weight: weights of the samples. Rescale C per sample.
Hi' weights force the classifier to put more emphasis on these points
:type sample_weight: np.array optional
:return: accuracy of the prediction
:rtype: float
Parameters
----------
X : np.array
dataset of samples to make predictions
y : np.array
samples labels
sample_weight : np.array, optional
weights of the samples. Rescale C per sample, by default None
Returns
-------
float
accuracy of the prediction
"""
# sklearn check
check_is_fitted(self)
@@ -668,8 +818,10 @@ class Stree(BaseEstimator, ClassifierMixin):
"""Create an iterator to be able to visit the nodes of the tree in
preorder, can make a list with all the nodes in preorder
:return: an iterator, can for i in... and list(...)
:rtype: Siterator
Returns
-------
Siterator
an iterator, can for i in... and list(...)
"""
try:
tree = self.tree_
@@ -680,8 +832,10 @@ class Stree(BaseEstimator, ClassifierMixin):
def __str__(self) -> str:
"""String representation of the tree
:return: description of nodes in the tree in preorder
:rtype: str
Returns
-------
str
description of nodes in the tree in preorder
"""
output = ""
for i in self:

View File

@@ -40,12 +40,13 @@ class Snode_test(unittest.TestCase):
# Check Class
class_computed = classes[card == max_card]
self.assertEqual(class_computed, node._class)
# Check Partition column
self.assertEqual(node._partition_column, -1)
check_leave(self._clf.tree_)
def test_nodes_coefs(self):
"""Check if the nodes of the tree have the right attributes filled
"""
"""Check if the nodes of the tree have the right attributes filled"""
def run_tree(node: Snode):
if node._belief < 1:
@@ -54,16 +55,19 @@ class Snode_test(unittest.TestCase):
self.assertIsNotNone(node._clf.coef_)
if node.is_leaf():
return
run_tree(node.get_down())
run_tree(node.get_up())
run_tree(node.get_down())
run_tree(self._clf.tree_)
model = Stree(self._random_state)
model.fit(*load_dataset(self._random_state, 3, 4))
run_tree(model.tree_)
def test_make_predictor_on_leaf(self):
test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
test.make_predictor()
self.assertEqual(1, test._class)
self.assertEqual(0.75, test._belief)
self.assertEqual(-1, test._partition_column)
def test_make_predictor_on_not_leaf(self):
test = Snode(None, [1, 2, 3, 4], [1, 0, 1, 1], [], 0.0, "test")
@@ -71,11 +75,14 @@ class Snode_test(unittest.TestCase):
test.make_predictor()
self.assertIsNone(test._class)
self.assertEqual(0, test._belief)
self.assertEqual(-1, test._partition_column)
self.assertEqual(-1, test.get_up()._partition_column)
def test_make_predictor_on_leaf_bogus_data(self):
test = Snode(None, [1, 2, 3, 4], [], [], 0.0, "test")
test.make_predictor()
self.assertIsNone(test._class)
self.assertEqual(-1, test._partition_column)
def test_copy_node(self):
px = [1, 2, 3, 4]
@@ -86,3 +93,4 @@ class Snode_test(unittest.TestCase):
self.assertListEqual(computed._y, py)
self.assertEqual("test", computed._title)
self.assertIsInstance(computed._clf, Stree)
self.assertEqual(test._partition_column, computed._partition_column)

View File

@@ -19,7 +19,7 @@ class Splitter_test(unittest.TestCase):
min_samples_split=0,
splitter_type="random",
criterion="gini",
criteria="min_distance",
criteria="max_samples",
random_state=None,
):
return Splitter(
@@ -46,11 +46,7 @@ class Splitter_test(unittest.TestCase):
_ = Splitter(clf=None)
for splitter_type in ["best", "random"]:
for criterion in ["gini", "entropy"]:
for criteria in [
"min_distance",
"max_samples",
"max_distance",
]:
for criteria in ["max_samples", "impurity"]:
tcl = self.build(
splitter_type=splitter_type,
criterion=criterion,
@@ -138,43 +134,45 @@ class Splitter_test(unittest.TestCase):
[0.7, 0.01, -0.1],
[0.7, -0.9, 0.5],
[0.1, 0.2, 0.3],
[-0.1, 0.2, 0.3],
[-0.1, 0.2, 0.3],
]
)
expected = np.array([0.2, 0.01, -0.9, 0.2])
y = [1, 2, 1, 0]
expected = data[:, 0]
y = [1, 2, 1, 0, 0, 0]
computed = tcl._max_samples(data, y)
self.assertEqual((4,), computed.shape)
self.assertListEqual(expected.tolist(), computed.tolist())
self.assertEqual(0, computed)
computed_data = data[:, computed]
self.assertEqual((6,), computed_data.shape)
self.assertListEqual(expected.tolist(), computed_data.tolist())
def test_min_distance(self):
tcl = self.build()
def test_impurity(self):
tcl = self.build(criteria="impurity")
data = np.array(
[
[-0.1, 0.2, -0.3],
[0.7, 0.01, -0.1],
[0.7, -0.9, 0.5],
[0.1, 0.2, 0.3],
[-0.1, 0.2, 0.3],
[-0.1, 0.2, 0.3],
]
)
expected = np.array([2, 2, 1, 0])
computed = tcl._min_distance(data, None)
self.assertEqual((4,), computed.shape)
self.assertListEqual(expected.tolist(), computed.tolist())
expected = data[:, 2]
y = np.array([1, 2, 1, 0, 0, 0])
computed = tcl._impurity(data, y)
self.assertEqual(2, computed)
computed_data = data[:, computed]
self.assertEqual((6,), computed_data.shape)
self.assertListEqual(expected.tolist(), computed_data.tolist())
def test_max_distance(self):
tcl = self.build(criteria="max_distance")
data = np.array(
[
[-0.1, 0.2, -0.3],
[0.7, 0.01, -0.1],
[0.7, -0.9, 0.5],
[0.1, 0.2, 0.3],
]
)
expected = np.array([1, 0, 0, 2])
computed = tcl._max_distance(data, None)
self.assertEqual((4,), computed.shape)
self.assertListEqual(expected.tolist(), computed.tolist())
def test_generate_subspaces(self):
features = 250
for max_features in range(2, features):
num = len(Splitter._generate_spaces(features, max_features))
self.assertEqual(5, num)
self.assertEqual(3, len(Splitter._generate_spaces(3, 2)))
self.assertEqual(4, len(Splitter._generate_spaces(4, 3)))
def test_best_splitter_few_sets(self):
X, y = load_iris(return_X_y=True)
@@ -186,27 +184,22 @@ class Splitter_test(unittest.TestCase):
def test_splitter_parameter(self):
expected_values = [
[2, 3, 5, 7], # best entropy min_distance
[0, 2, 4, 5], # best entropy max_samples
[0, 2, 8, 12], # best entropy max_distance
[1, 2, 5, 12], # best gini min_distance
[0, 3, 4, 10], # best gini max_samples
[1, 2, 9, 12], # best gini max_distance
[3, 9, 11, 12], # random entropy min_distance
[1, 5, 6, 9], # random entropy max_samples
[1, 2, 4, 8], # random entropy max_distance
[2, 6, 7, 12], # random gini min_distance
[3, 9, 10, 11], # random gini max_samples
[2, 5, 8, 12], # random gini max_distance
[1, 4, 9, 12], # best entropy max_samples
[1, 3, 6, 10], # best entropy impurity
[6, 8, 10, 12], # best gini max_samples
[7, 8, 10, 11], # best gini impurity
[0, 3, 8, 12], # random entropy max_samples
[0, 3, 9, 11], # random entropy impurity
[0, 4, 7, 12], # random gini max_samples
[0, 2, 5, 6], # random gini impurity
]
X, y = load_wine(return_X_y=True)
rn = 0
for splitter_type in ["best", "random"]:
for criterion in ["entropy", "gini"]:
for criteria in [
"min_distance",
"max_samples",
"max_distance",
"impurity",
]:
tcl = self.build(
splitter_type=splitter_type,
@@ -219,7 +212,9 @@ class Splitter_test(unittest.TestCase):
dataset, computed = tcl.get_subspace(X, y, max_features=4)
# print(
# "{}, # {:7s}{:8s}{:15s}".format(
# list(computed), splitter_type, criterion,
# list(computed),
# splitter_type,
# criterion,
# criteria,
# )
# )

View File

@@ -5,6 +5,7 @@ import warnings
import numpy as np
from sklearn.datasets import load_iris, load_wine
from sklearn.exceptions import ConvergenceWarning
from sklearn.svm import LinearSVC
from stree import Stree, Snode
from .utils import load_dataset
@@ -25,8 +26,10 @@ class Stree_test(unittest.TestCase):
correct number of labels and its sons have the right number of elements
in their dataset
Arguments:
node {Snode} -- node to check
Parameters
----------
node : Snode
node to check
"""
if node.is_leaf():
return
@@ -41,23 +44,22 @@ class Stree_test(unittest.TestCase):
_, count_u = np.unique(y_up, return_counts=True)
#
for i in unique_y:
number_down = count_d[i]
try:
number_up = count_u[i]
try:
number_down = count_d[i]
except IndexError:
number_up = 0
number_down = 0
self.assertEqual(count_y[i], number_down + number_up)
# Is the partition made the same as the prediction?
# as the node is not a leaf...
_, count_yp = np.unique(y_prediction, return_counts=True)
self.assertEqual(count_yp[0], y_up.shape[0])
self.assertEqual(count_yp[1], y_down.shape[0])
self.assertEqual(count_yp[1], y_up.shape[0])
self.assertEqual(count_yp[0], y_down.shape[0])
self._check_tree(node.get_down())
self._check_tree(node.get_up())
def test_build_tree(self):
"""Check if the tree is built the same way as predictions of models
"""
"""Check if the tree is built the same way as predictions of models"""
warnings.filterwarnings("ignore")
for kernel in self._kernels:
clf = Stree(kernel=kernel, random_state=self._random_state)
@@ -99,20 +101,22 @@ class Stree_test(unittest.TestCase):
self.assertListEqual(yp_line.tolist(), yp_once.tolist())
def test_iterator_and_str(self):
"""Check preorder iterator
"""
"""Check preorder iterator"""
expected = [
"root feaures=(0, 1, 2) impurity=0.5000",
"root - Down feaures=(0, 1, 2) impurity=0.0671",
"root - Down - Down, <cgaf> - Leaf class=1 belief= 0.975989 "
"impurity=0.0469 counts=(array([0, 1]), array([ 17, 691]))",
"root - Down - Up feaures=(0, 1, 2) impurity=0.3967",
"root - Down - Up - Down, <cgaf> - Leaf class=1 belief= 0.750000 "
"impurity=0.3750 counts=(array([0, 1]), array([1, 3]))",
"root - Down - Up - Up, <pure> - Leaf class=0 belief= 1.000000 "
"impurity=0.0000 counts=(array([0]), array([7]))",
"root - Up, <cgaf> - Leaf class=0 belief= 0.928297 impurity=0.1331"
" counts=(array([0, 1]), array([725, 56]))",
"root feaures=(0, 1, 2) impurity=1.0000 counts=(array([0, 1]), arr"
"ay([750, 750]))",
"root - Down, <cgaf> - Leaf class=0 belief= 0.928297 impurity=0.37"
"22 counts=(array([0, 1]), array([725, 56]))",
"root - Up feaures=(0, 1, 2) impurity=0.2178 counts=(array([0, 1])"
", array([ 25, 694]))",
"root - Up - Down feaures=(0, 1, 2) impurity=0.8454 counts=(array("
"[0, 1]), array([8, 3]))",
"root - Up - Down - Down, <pure> - Leaf class=0 belief= 1.000000 i"
"mpurity=0.0000 counts=(array([0]), array([7]))",
"root - Up - Down - Up, <cgaf> - Leaf class=1 belief= 0.750000 imp"
"urity=0.8113 counts=(array([0, 1]), array([1, 3]))",
"root - Up - Up, <cgaf> - Leaf class=1 belief= 0.975989 impurity=0"
".1634 counts=(array([0, 1]), array([ 17, 691]))",
]
computed = []
expected_string = ""
@@ -188,44 +192,43 @@ class Stree_test(unittest.TestCase):
def test_muticlass_dataset(self):
datasets = {
"Synt": load_dataset(random_state=self._random_state, n_classes=3),
"Iris": load_iris(return_X_y=True),
"Iris": load_wine(return_X_y=True),
}
outcomes = {
"Synt": {
"max_samples linear": 0.9533333333333334,
"max_samples rbf": 0.836,
"max_samples poly": 0.9473333333333334,
"min_distance linear": 0.9533333333333334,
"min_distance rbf": 0.836,
"min_distance poly": 0.9473333333333334,
"max_distance linear": 0.9533333333333334,
"max_distance rbf": 0.836,
"max_distance poly": 0.9473333333333334,
"max_samples linear": 0.9606666666666667,
"max_samples rbf": 0.7133333333333334,
"max_samples poly": 0.49066666666666664,
"impurity linear": 0.9606666666666667,
"impurity rbf": 0.7133333333333334,
"impurity poly": 0.49066666666666664,
},
"Iris": {
"max_samples linear": 0.98,
"max_samples rbf": 1.0,
"max_samples poly": 1.0,
"min_distance linear": 0.98,
"min_distance rbf": 1.0,
"min_distance poly": 1.0,
"max_distance linear": 0.98,
"max_distance rbf": 1.0,
"max_distance poly": 1.0,
"max_samples linear": 1.0,
"max_samples rbf": 0.6910112359550562,
"max_samples poly": 0.6966292134831461,
"impurity linear": 1,
"impurity rbf": 0.6910112359550562,
"impurity poly": 0.6966292134831461,
},
}
for name, dataset in datasets.items():
px, py = dataset
for criteria in ["max_samples", "min_distance", "max_distance"]:
for criteria in ["max_samples", "impurity"]:
for kernel in self._kernels:
clf = Stree(
C=1e4,
max_iter=1e4,
C=55,
max_iter=1e5,
kernel=kernel,
random_state=self._random_state,
)
clf.fit(px, py)
outcome = outcomes[name][f"{criteria} {kernel}"]
# print(
# f"{name} {criteria} {kernel} {outcome} {clf.score(px"
# ", py)}"
# )
self.assertAlmostEqual(outcome, clf.score(px, py))
def test_max_features(self):
@@ -297,7 +300,10 @@ class Stree_test(unittest.TestCase):
0.9433333333333334,
]
for kernel, accuracy_expected in zip(self._kernels, accuracies):
clf = Stree(random_state=self._random_state, kernel=kernel,)
clf = Stree(
random_state=self._random_state,
kernel=kernel,
)
clf.fit(X, y)
accuracy_score = clf.score(X, y)
yp = clf.predict(X)
@@ -309,81 +315,104 @@ class Stree_test(unittest.TestCase):
X, y = load_dataset(self._random_state)
clf = Stree(random_state=self._random_state, max_features=2)
clf.fit(X, y)
self.assertAlmostEqual(0.9426666666666667, clf.score(X, y))
def test_score_multi_class(self):
warnings.filterwarnings("ignore")
accuracies = [
0.8258427, # Wine linear min_distance
0.6741573, # Wine linear max_distance
0.8314607, # Wine linear max_samples
0.6629213, # Wine rbf min_distance
1.0000000, # Wine rbf max_distance
0.4044944, # Wine rbf max_samples
0.9157303, # Wine poly min_distance
1.0000000, # Wine poly max_distance
0.7640449, # Wine poly max_samples
0.9933333, # Iris linear min_distance
0.9666667, # Iris linear max_distance
0.9666667, # Iris linear max_samples
0.9800000, # Iris rbf min_distance
0.9800000, # Iris rbf max_distance
0.9800000, # Iris rbf max_samples
1.0000000, # Iris poly min_distance
1.0000000, # Iris poly max_distance
1.0000000, # Iris poly max_samples
0.8993333, # Synthetic linear min_distance
0.6533333, # Synthetic linear max_distance
0.9313333, # Synthetic linear max_samples
0.8320000, # Synthetic rbf min_distance
0.6660000, # Synthetic rbf max_distance
0.8320000, # Synthetic rbf max_samples
0.6066667, # Synthetic poly min_distance
0.6840000, # Synthetic poly max_distance
0.6340000, # Synthetic poly max_samples
]
datasets = [
("Wine", load_wine(return_X_y=True)),
("Iris", load_iris(return_X_y=True)),
(
"Synthetic",
load_dataset(self._random_state, n_classes=3, n_features=5),
),
]
for dataset_name, dataset in datasets:
X, y = dataset
for kernel in self._kernels:
for criteria in [
"min_distance",
"max_distance",
"max_samples",
]:
clf = Stree(
C=17,
random_state=self._random_state,
kernel=kernel,
split_criteria=criteria,
degree=5,
gamma="auto",
)
clf.fit(X, y)
accuracy_score = clf.score(X, y)
yp = clf.predict(X)
accuracy_computed = np.mean(yp == y)
# print(
# "{:.7f}, # {:7} {:5} {}".format(
# accuracy_score, dataset_name, kernel, criteria
# )
# )
accuracy_expected = accuracies.pop(0)
self.assertEqual(accuracy_score, accuracy_computed)
self.assertAlmostEqual(accuracy_expected, accuracy_score)
self.assertAlmostEqual(0.9246666666666666, clf.score(X, y))
def test_bogus_splitter_parameter(self):
clf = Stree(splitter="duck")
with self.assertRaises(ValueError):
clf.fit(*load_dataset())
def test_multiclass_classifier_integrity(self):
"""Checks if the multiclass operation is done right"""
X, y = load_iris(return_X_y=True)
clf = Stree(random_state=0)
clf.fit(X, y)
score = clf.score(X, y)
# Check accuracy of the whole model
self.assertAlmostEquals(0.98, score, 5)
svm = LinearSVC(random_state=0)
svm.fit(X, y)
self.assertAlmostEquals(0.9666666666666667, svm.score(X, y), 5)
data = svm.decision_function(X)
expected = [
0.4444444444444444,
0.35777777777777775,
0.4569777777777778,
]
ty = data.copy()
ty[data <= 0] = 0
ty[data > 0] = 1
ty = ty.astype(int)
for i in range(3):
self.assertAlmostEquals(
expected[i],
clf.splitter_._gini(ty[:, i]),
)
# 1st Branch
# up has to have 50 samples of class 0
# down should have 100 [50, 50]
up = data[:, 2] > 0
resup = np.unique(y[up], return_counts=True)
resdn = np.unique(y[~up], return_counts=True)
self.assertListEqual([1, 2], resup[0].tolist())
self.assertListEqual([3, 50], resup[1].tolist())
self.assertListEqual([0, 1], resdn[0].tolist())
self.assertListEqual([50, 47], resdn[1].tolist())
# 2nd Branch
# up should have 53 samples of classes [1, 2] [3, 50]
# down should have 47 samples of class 1
node_up = clf.tree_.get_down().get_up()
node_dn = clf.tree_.get_down().get_down()
resup = np.unique(node_up._y, return_counts=True)
resdn = np.unique(node_dn._y, return_counts=True)
self.assertListEqual([1, 2], resup[0].tolist())
self.assertListEqual([3, 50], resup[1].tolist())
self.assertListEqual([1], resdn[0].tolist())
self.assertListEqual([47], resdn[1].tolist())
def test_score_multiclass_rbf(self):
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=500,
)
clf = Stree(kernel="rbf", random_state=self._random_state)
self.assertEqual(0.824, clf.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.6741573033707865, clf.fit(X, y).score(X, y))
def test_score_multiclass_poly(self):
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=500,
)
clf = Stree(
kernel="poly", random_state=self._random_state, C=10, degree=5
)
self.assertEqual(0.786, clf.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.702247191011236, clf.fit(X, y).score(X, y))
def test_score_multiclass_linear(self):
X, y = load_dataset(
random_state=self._random_state,
n_classes=3,
n_features=5,
n_samples=1500,
)
clf = Stree(kernel="linear", random_state=self._random_state)
self.assertEqual(0.9533333333333334, clf.fit(X, y).score(X, y))
X, y = load_wine(return_X_y=True)
self.assertEqual(0.9550561797752809, clf.fit(X, y).score(X, y))
def test_zero_all_sample_weights(self):
X, y = load_dataset(self._random_state)
with self.assertRaises(ValueError):
Stree().fit(X, y, np.zeros(len(y)))
def test_weights_removing_class(self):
# This patch solves a stderr message from the sklearn svm lib
# "WARNING: class label x specified in weight is not found"
@@ -407,7 +436,13 @@ class Stree_test(unittest.TestCase):
original = weights_no_zero.copy()
clf = Stree()
clf.fit(X, y)
node = clf.train(X, y, weights, 1, "test",)
node = clf.train(
X,
y,
weights,
1,
"test",
)
# if a class is lost with zero weights the patch adds epsilon
self.assertListEqual(weights.tolist(), weights_epsilon)
self.assertListEqual(node._sample_weight.tolist(), weights_epsilon)

View File

@@ -1,9 +1,9 @@
from sklearn.datasets import make_classification
def load_dataset(random_state=0, n_classes=2, n_features=3):
def load_dataset(random_state=0, n_classes=2, n_features=3, n_samples=1500):
X, y = make_classification(
n_samples=1500,
n_samples=n_samples,
n_features=n_features,
n_informative=3,
n_redundant=0,