Add Hyperparameters description to README

Comment get_subspace method
Add environment info for binder (runtime.txt)
This commit is contained in:
2021-01-13 11:39:47 +01:00
parent e4ac5075e5
commit 9b3c7ccdfa
3 changed files with 39 additions and 6 deletions

View File

@@ -30,11 +30,31 @@ pip install git+https://github.com/doctorado-ml/stree
- [![Test Graphics](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Doctorado-ML/STree/blob/master/notebooks/test_graphs.ipynb) Test Graphics
### Command line

```bash
python main.py
```

## Hyperparameters
| **Hyperparameter** | **used<br>in<br>scikit** | **Values** | **Default** | **Meaning** |
| ------------------ | ------------------------ | ---------------------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| C | Yes | <float> | 1.0 | Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. |
| kernel | Yes | {"linear", "poly", "rbf"} | linear | Specifies the kernel type to be used in the algorithm. It must be one of linear, poly or rbf. |
| max_iter | Yes | <int> | 1e5 | Hard limit on iterations within solver, or -1 for no limit. |
| random_state | Yes | <int> | None | Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False.<br>Pass an int for reproducible output across multiple function calls |
| max_depth | No | <int> | None | Specifies the maximum depth of the tree |
| tol | Yes | <float> | 1e-4 | Tolerance for stopping criterion. |
| degree | Yes | <int> | 3 | Degree of the polynomial kernel function (poly). Ignored by all other kernels. |
| gamma | Yes | {"scale", "auto"} or <float> | scale | Kernel coefficient for rbf and poly.<br>if gamma='scale' (default) is passed then it uses 1 / (n_features \* X.var()) as value of gamma,<br>if auto, uses 1 / n_features. |
| split_criteria | No | {"impurity", "max_samples"} | impurity | Decides (only in case of multiclass classification) which column (class) to use to split the dataset in a node\*\* |
| criterion | No | {“gini”, “entropy”} | entropy | The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. |
| min_samples_split | No | <int> | 0 | The minimum number of samples required to split an internal node. 0 (default) means any number of samples |
| max_features | No | <int>, <float> <br>or {“auto”, “sqrt”, “log2”} | None | The number of features to consider when looking for the split:<br>If int, then consider max_features features at each split.<br>If float, then max_features is a fraction and int(max_features \* n_features) features are considered at each split.<br>If “auto”, then max_features=sqrt(n_features).<br>If “sqrt”, then max_features=sqrt(n_features).<br>If “log2”, then max_features=log2(n_features).<br>If None, then max_features=n_features. |
| splitter | No | {"best", "random"} | | The strategy used to choose the feature set at each node (only used if max_features != num_features). Supported strategies are "best" to choose the best feature set and "random" to choose a random combination. The algorithm generates 5 candidates at most to choose from. |
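The max_features rules listed in the table can be sketched as a small helper (a hypothetical function written for illustration, mirroring the table's rules rather than STree's actual code):

```python
import math

def resolve_max_features(max_features, n_features: int) -> int:
    """Map a max_features value to a concrete feature count,
    following the rules in the hyperparameter table above."""
    if max_features is None:
        return n_features                       # use all features
    if isinstance(max_features, int):
        return max_features                     # explicit count
    if isinstance(max_features, float):
        return int(max_features * n_features)   # fraction of features
    if max_features in ("auto", "sqrt"):
        return int(math.sqrt(n_features))
    if max_features == "log2":
        return int(math.log2(n_features))
    raise ValueError(f"invalid max_features: {max_features!r}")
```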
\*\* **Splitting in a STree node**
The decision function is applied to the dataset and the distances from the samples to the hyperplanes are computed in a matrix. This matrix has as many columns as the classes the samples belong to (if more than two, i.e. multiclass classification) or 1 column if it is a binary class dataset. In binary classification only one hyperplane is computed, and therefore only one column is needed to store the distances of the samples to it. If three or more classes are present in the dataset we need as many hyperplanes as there are classes, and therefore one column per hyperplane is needed.
In case of multiclass classification we have to decide which column to take into account to make the split. That depends on the hyperparameter _split_criteria_: if "impurity" is chosen, STree computes the information gain of every split candidate using each column and chooses the one that maximizes the information gain; otherwise STree chooses the column with the most samples with a predicted class (the column with the most positive numbers in it).
Once we have the column to take into account for the split, the algorithm separates the samples with a positive distance to the hyperplane from the rest.
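The "max_samples" column choice described above can be sketched in a few lines (a minimal illustration with a hypothetical helper, not STree's actual implementation):

```python
import numpy as np

def split_by_max_samples(distances: np.ndarray) -> np.ndarray:
    """Pick the column of the distance matrix with the most positive
    entries and send the samples on the positive side of that
    hyperplane to one side of the split."""
    # distances: (n_samples, n_classes) signed distances to each hyperplane
    col = np.argmax((distances > 0).sum(axis=0))  # column with most positives
    return distances[:, col] > 0                  # True -> one side of the tree

# Example: 4 samples, 3 class hyperplanes
d = np.array([[0.5, -1.0, 0.2],
              [-0.3, 0.8, -0.1],
              [1.2, -0.4, 0.6],
              [0.9, 0.1, -0.7]])
mask = split_by_max_samples(d)  # column 0 has the most positive distances
```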
## Tests

runtime.txt Normal file
View File

@@ -0,0 +1 @@
python-3.8

View File

@@ -286,7 +286,18 @@ class Splitter:
def get_subspace(
self, dataset: np.array, labels: np.array, max_features: int
) -> tuple:
"""Return the best/random subspace to make a split"""
"""Return a subspace of the selected dataset of max_features length.
Depending on hyperparmeter
:param dataset: [description]
:type dataset: np.array
:param labels: [description]
:type labels: np.array
:param max_features: [description]
:type max_features: int
:return: [description]
:rtype: tuple
"""
indices = self._get_subspaces_set(dataset, labels, max_features)
return dataset[:, indices], indices
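The tuple returned by the method above can be illustrated with plain NumPy indexing (a sketch only; it does not use the Splitter class or its index-selection logic):

```python
import numpy as np

# 4 samples with 3 features each
X = np.arange(12).reshape(4, 3)
# suppose the splitter selected a subspace of max_features=2, columns 0 and 2
indices = (0, 2)
# same projection as `dataset[:, indices]` in get_subspace
subspace = X[:, indices]
# subspace has shape (4, 2): the dataset restricted to the chosen features
```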
@@ -328,7 +339,7 @@ class Splitter:
def partition(self, samples: np.array, node: Snode, train: bool):
"""Set the criteria to split arrays. Compute the indices of the samples
that should go to one side of the tree (down)
that should go to one side of the tree (up)
"""
# data contains the distances of every sample to every class hyperplane
@@ -428,6 +439,7 @@ class Stree(BaseEstimator, ClassifierMixin):
def _more_tags(self) -> dict:
"""Required by sklearn to supply features of the classifier
make mandatory the labels array
:return: the tag required
:rtype: dict