90 Commits

Author SHA1 Message Date
f264c84209 Add first select table with data 2023-06-05 02:09:09 +02:00
fba919de8c Login required 2023-06-05 00:52:14 +02:00
ddd1ae7c5b Working with bootstrap-flask 2023-06-04 22:51:02 +02:00
848ecbd5be init refactor 2023-06-04 19:09:29 +02:00
c10bf27a16 Fix tests 2023-05-22 11:15:19 +02:00
b6fc9096a1 Fix tests 2023-05-22 10:07:28 +02:00
83bd321dd6 Fix some excel issues 2023-05-21 22:22:15 +02:00
9041c412d5 Begin refactor Results 2023-05-21 21:05:58 +02:00
b55553847b Refactor folders structure (add excel) 2023-05-19 01:44:27 +02:00
0d02b690bb Allow commenting out datasets in all.txt 2023-05-17 23:05:50 +02:00
1046c2e74b Update badges 2023-05-15 11:46:00 +02:00
e654aa9735 Update readme 2023-05-15 11:12:17 +02:00
e3d969c5d7 Add number of samples in report datasets balance 2023-05-09 10:25:54 +02:00
5c8b7062cc Fix max_value in manage list results 2023-04-07 22:48:09 +02:00
2ef30dfb80 Add AODENew model 2023-03-29 16:47:15 +02:00
d60df0cdf9 Update version number 2023-02-21 17:09:26 +01:00
e2504c7ae9 Add new models and repair tests 2023-02-21 17:08:50 +01:00
27bf414db9 Add TanNew model 2023-02-06 20:17:32 +01:00
d5cc2b2dcf Add discretize to reports and experiments 2023-02-05 20:18:27 +01:00
7df037b6f4 Add class name to fit_params 2023-02-05 11:29:34 +01:00
75ed3e8f6e Add KDBNew model and fit_feature hyperparameter 2023-02-04 18:29:10 +01:00
Ricardo Montañana Gómez
d454a318fc feat: Make nodes, leaves, depth labels customizable in .env 2023-01-22 11:37:03 +01:00
Ricardo Montañana Gómez
5ff6265a08 feat: Add discretize and fix stratified hyperparameters in be_main 2023-01-21 22:17:25 +01:00
Ricardo Montañana Gómez
520f8807e5 test: 🧪 Update a flaky test due to different console width in diff envs 2023-01-15 19:32:01 +01:00
Ricardo Montañana Gómez
149584be3d Update test results file 2023-01-15 11:28:21 +01:00
Ricardo Montañana Gómez
d327050b7c Merge pull request #9 from Doctorado-ML/continuous_features
Continuous features
2023-01-15 10:55:49 +01:00
Ricardo Montañana Gómez
d21e6cac0c ci: ⬆️ Update github actions 2023-01-15 10:29:06 +01:00
Ricardo Montañana Gómez
d84e0ffc6a Update print_strees test 2023-01-14 23:50:34 +01:00
Ricardo Montañana Gómez
6dc3a59df8 fix: 🧪 Fix tests with new scikit-learn version 2023-01-14 21:31:34 +01:00
Ricardo Montañana Gómez
7ef88bd5c7 Update Models_tests 2023-01-14 13:05:44 +01:00
Ricardo Montañana Gómez
acfbafbdce Update requirements 2023-01-08 12:41:11 +01:00
ae52148021 Remove ignore-nan from .env files
leave it only as a be_main hyperparameter
2023-01-08 12:25:59 +01:00
132d7827c3 Fix tests 100% coverage 2023-01-06 22:53:23 +01:00
d854d9ddf1 Fix tests 2023-01-06 14:29:52 +01:00
9ba6c55d49 Set k=2 in KDB to address memory problems 2023-01-06 14:29:22 +01:00
c21fd4849c Add ignore_nan and fit_params to experiments 2022-12-28 19:13:58 +01:00
671e5af45c Change discretizer algorithm 2022-12-25 12:11:00 +01:00
8e035ef196 feat: Add continuous features for datasets in Arff Files
Makes it possible to leave some already-discrete variables untouched when discretize is on in the .env file
2022-12-17 19:24:37 +01:00
Ricardo Montañana Gómez
9bff48832b Merge pull request #8 from Doctorado-ML/refactor_args
Refactor args and add be_init_project
2022-11-24 00:23:14 +01:00
fea46834c8 Update bayesclass models 2022-11-24 00:20:29 +01:00
a94a33e028 Update actions 2022-11-23 22:33:22 +01:00
b05a62b2e8 Update requirements and github actions 2022-11-23 22:21:34 +01:00
2baaf753ef Add terminal support to debug github action 2022-11-23 12:58:00 +01:00
b01ee40df2 Update main.yml 2022-11-23 09:43:51 +01:00
ed308773ee Update main.yml 2022-11-23 09:34:43 +01:00
0782736338 Update tests be_init_project_tests 2022-11-23 01:31:01 +01:00
71a11110bd Update tests 2022-11-22 23:32:28 +01:00
3a2ec38671 Update be_list to new formats 2022-11-22 17:38:11 +01:00
f60d9365dd Refactor be_report and fix error in datasets 2022-11-22 16:47:03 +01:00
5d7ed6f1ed Fix be_list Results error 2022-11-22 16:26:24 +01:00
8aa76c27c3 Refactor Datasets 2022-11-22 16:26:04 +01:00
93f0db36fa Fix stratified default value from .env 2022-11-22 01:47:12 +01:00
4e0be95a00 Refactor be_list 2022-11-21 20:22:59 +01:00
e76366561c Add be_init_project to scripts 2022-11-21 00:07:29 +01:00
7e9bd7ae4a Begin refactor be_list arguments 2022-11-20 20:17:58 +01:00
3ade3f4022 Add incompatible hyperparams to be_main 2022-11-20 19:10:28 +01:00
1b8a424ad3 Add subparser to be_report & tests 2022-11-20 18:23:26 +01:00
146304f4b5 Refactor Arguments to be child of ArgumentParser 2022-11-19 21:25:50 +01:00
07172b91c5 Add overrides to args parse for dataset/title in be_main 2022-11-19 21:16:29 +01:00
Ricardo Montañana Gómez
68d9cb776e Merge pull request #7 from Doctorado-ML:add_excel_belist
Add Excel output for be_list reports
2022-11-18 23:37:17 +01:00
c8124be119 Update version info 2022-11-18 23:36:43 +01:00
58c52849d8 Add AODE to models 2022-11-18 23:33:41 +01:00
d68fb47688 Remove extra space in report header 2022-11-17 13:42:27 +01:00
38667d61f7 Refactor be_list 2022-11-17 12:09:02 +01:00
dfd4f8179b Complete tests adding excel to be_list 2022-11-17 12:00:30 +01:00
8a9342c97b Add space to time column in report 2022-11-17 09:41:17 +01:00
974227166c Add excel to be_list 2022-11-17 01:36:19 +01:00
feea9c542a Add KDB model 2022-11-15 22:06:04 +01:00
a53e957c00 Fix stochastic error in discretization 2022-11-14 21:51:53 +01:00
a2db4f1f6d Fix lint error in test 2022-11-14 17:27:18 +01:00
5a3ae6f440 Update version info and tests 2022-11-14 00:54:18 +01:00
Ricardo Montañana Gómez
8d06a2c5f6 Merge pull request #6 from Doctorado-ML/language_version
Add Discretizer to Datasets
Add excel to report datasets
Add report datasets sheet to benchmark excel
2022-11-13 22:51:50 +01:00
9039a634cf Exclude macos-latest with python 3.11 (no torch) 2022-11-13 22:14:01 +01:00
5b5d385b4c Fix uppercase mistake in filename 2022-11-13 20:04:26 +01:00
6ebcc31c36 Add bayesclass to requirements 2022-11-13 18:34:54 +01:00
cd2d803ff5 Update requirements 2022-11-13 18:10:42 +01:00
6aec5b2a97 Add tests to excel in report datasets 2022-11-13 17:44:45 +01:00
f1b9dc1fef Add excel to report dataset 2022-11-13 14:46:41 +01:00
2e6f49de8e Add discretize key to .env.dist 2022-11-12 19:38:14 +01:00
2d61cd11c2 refactor Discretization in datasets 2022-11-12 19:37:46 +01:00
4b442a46f2 Add Discretizer to Datasets 2022-11-10 11:47:01 +01:00
feaf85d0b8 Make Dataset load return a pandas DataFrame 2022-11-04 18:40:50 +01:00
c62b06f263 Update Readme 2022-11-01 22:30:42 +01:00
Ricardo Montañana Gómez
b9eaa534bc Merge pull request #5 from Doctorado-ML/language_version
Disable sonar quality gate in CI
2022-11-01 21:24:12 +01:00
0d87e670f7 Disable sonar quality gate in CI
Update base score for Arff STree
2022-11-01 16:53:22 +01:00
Ricardo Montañana Gómez
c77feff54b Merge pull request #4 from Doctorado-ML/language_version
Add Language and language version to reports
Add custom seeds to .env
2022-11-01 14:07:59 +01:00
1e83db7956 Fix lint errors and update version info 2022-11-01 13:22:53 +01:00
8cf823e843 Add custom seeds to .env 2022-11-01 12:24:50 +01:00
97718e6e82 Add Language and language version to reports 2022-11-01 02:07:24 +01:00
Ricardo Montañana Gómez
5532beb88a Merge pull request #3 from Doctorado-ML/discretiz
Add Arff data source for experiments
Add consistent comparative results to reports
2022-10-25 16:55:04 +02:00
132 changed files with 4645 additions and 2105 deletions

View File

@@ -4,3 +4,5 @@ n_folds=5
model=ODTE
stratified=0
source_data=Tanveer
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
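A minimal sketch of how these two new keys are consumed downstream, judging from the Experiments and Datasets diffs further below: `seeds` is parsed with `json.loads` and `discretize` is a `"0"`/`"1"` string compared against `"1"`. The hand-rolled parser here is only illustrative; the project itself reads the file through `EnvData.load()`:

```python
import json

# Illustrative .env reader; the project uses EnvData.load() instead.
env = {}
with open(".env") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            env[key] = value

seeds = json.loads(env["seeds"])       # -> [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize = env["discretize"] == "1"  # "0"/"1" string turned into a bool
print(seeds, discretize)
```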

View File

@@ -1,2 +1,3 @@
[flake8]
exclude = .git,__init__.py
ignore = E203, W503

View File

@@ -8,7 +8,7 @@ jobs:
name: Build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- run: echo "project_version=$(git describe --tags --abbrev=0)" >> $GITHUB_ENV
@@ -22,7 +22,8 @@ jobs:
-Dsonar.python.version=3.10
# If you wish to fail your job when the Quality Gate is red, uncomment the
# following lines. This would typically be used to fail a deployment.
- uses: sonarsource/sonarqube-quality-gate-action@master
timeout-minutes: 5
env:
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
#- uses: sonarsource/sonarqube-quality-gate-action@master
# timeout-minutes: 5
# env:
# SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
# SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

View File

@@ -12,13 +12,13 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-latest, ubuntu-latest]
os: [ubuntu-latest]
python: ["3.10"]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python }}
# Make dot command available in the environment
@@ -43,6 +43,7 @@ jobs:
pip install -q --upgrade pip
pip install -q -r requirements.txt
pip install -q --upgrade codecov coverage black flake8
git clone https://github.com/Doctorado-ML/bayesclass.git
- name: Lint
run: |
black --check --diff benchmark
@@ -52,7 +53,7 @@ jobs:
coverage run -m unittest -v benchmark.tests
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v3
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.xml

View File

@@ -1,7 +1,7 @@
[![CI](https://github.com/Doctorado-ML/benchmark/actions/workflows/main.yml/badge.svg)](https://github.com/Doctorado-ML/benchmark/actions/workflows/main.yml)
[![codecov](https://codecov.io/gh/Doctorado-ML/benchmark/branch/main/graph/badge.svg?token=ZRP937NDSG)](https://codecov.io/gh/Doctorado-ML/benchmark)
[![Quality Gate Status](https://haystack.rmontanana.es:25000/api/project_badges/measure?project=benchmark&metric=alert_status&token=336a6e501988888543c3153baa91bad4b9914dd2)](http://haystack.local:25000/dashboard?id=benchmark)
[![Technical Debt](https://haystack.rmontanana.es:25000/api/project_badges/measure?project=benchmark&metric=sqale_index&token=336a6e501988888543c3153baa91bad4b9914dd2)](http://haystack.local:25000/dashboard?id=benchmark)
[![Quality Gate Status](https://sonar.rmontanana.es/api/project_badges/measure?project=benchmark&metric=alert_status&token=336a6e501988888543c3153baa91bad4b9914dd2)](https://sonar.rmontanana.es/dashboard?id=benchmark)
[![Technical Debt](https://sonar.rmontanana.es/api/project_badges/measure?project=benchmark&metric=sqale_index&token=336a6e501988888543c3153baa91bad4b9914dd2)](https://sonar.rmontanana.es/dashboard?id=benchmark)
![https://img.shields.io/badge/python-3.8%2B-blue](https://img.shields.io/badge/python-3.8%2B-brightgreen)
# benchmark
@@ -34,7 +34,7 @@ be_report -b STree
```python
# Datasets list
be_report
be_report datasets
# Report of given experiment
be_report -f results/results_STree_iMac27_2021-09-22_17:13:02.json
# Report of given experiment building excel file and compare with best results

View File

@@ -36,6 +36,7 @@ class EnvDefault(argparse.Action):
self, envvar, required=True, default=None, mandatory=False, **kwargs
):
self._args = EnvData.load()
self._overrides = {}
if required and not mandatory:
default = self._args[envvar]
required = False
@@ -47,24 +48,27 @@ class EnvDefault(argparse.Action):
setattr(namespace, self.dest, values)
class Arguments:
def __init__(self):
self.ap = argparse.ArgumentParser()
class Arguments(argparse.ArgumentParser):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
models_data = Models.define_models(random_state=0)
self._overrides = {}
self._subparser = None
self.parameters = {
"best": [
("-b", "--best"),
"best_paramfile": [
("-b", "--best_paramfile"),
{
"type": str,
"action": "store_true",
"required": False,
"help": "best results of models",
"default": False,
"help": "Use best hyperparams file?",
},
],
"color": [
("-c", "--color"),
{
"type": bool,
"required": False,
"action": "store_true",
"default": False,
"help": "use colors for the tree",
},
@@ -72,8 +76,9 @@ class Arguments:
"compare": [
("-c", "--compare"),
{
"type": bool,
"action": "store_true",
"required": False,
"default": False,
"help": "Compare accuracy with best results",
},
],
@@ -81,45 +86,57 @@ class Arguments:
("-d", "--dataset"),
{
"type": str,
"envvar": "dataset", # for compatiblity with EnvDefault
"action": EnvDefault,
"required": False,
"help": "dataset to work with",
},
],
"discretize": [
("--discretize",),
{
"action": EnvDefault,
"envvar": "discretize",
"required": True,
"help": "Discretize dataset",
"const": "1",
"nargs": "?",
},
],
"excel": [
("-x", "--excel"),
{
"type": bool,
"required": False,
"action": "store_true",
"default": False,
"help": "Generate Excel File",
},
],
"file": [
("-f", "--file"),
{"type": str, "required": False, "help": "Result file"},
],
"grid": [
("-g", "--grid"),
"fit_features": [
("--fit_features",),
{
"type": str,
"required": False,
"help": "grid results of model",
"action": EnvDefault,
"envvar": "fit_features",
"required": True,
"help": "Include features in fit call",
"const": "1",
"nargs": "?",
},
],
"grid_paramfile": [
("-g", "--grid_paramfile"),
{
"type": bool,
"required": False,
"action": "store_true",
"default": False,
"help": "Use best hyperparams file?",
"help": "Use grid output hyperparams file?",
},
],
"hidden": [
("--hidden",),
{
"type": str,
"required": False,
"action": "store_true",
"default": False,
"help": "Show hidden results",
},
@@ -128,6 +145,15 @@ class Arguments:
("-p", "--hyperparameters"),
{"type": str, "required": False, "default": "{}"},
],
"ignore_nan": [
("--ignore-nan",),
{
"default": False,
"action": "store_true",
"required": False,
"help": "Ignore nan results",
},
],
"key": [
("-k", "--key"),
{
@@ -140,8 +166,8 @@ class Arguments:
"lose": [
("-l", "--lose"),
{
"type": bool,
"default": False,
"action": "store_true",
"required": False,
"help": "show lose results",
},
@@ -178,9 +204,10 @@ class Arguments:
"nan": [
("--nan",),
{
"type": bool,
"action": "store_true",
"required": False,
"help": "Move nan results to hidden folder",
"default": False,
"help": "List nan results to hidden folder",
},
],
"number": [
@@ -202,15 +229,6 @@ class Arguments:
"help": "number of folds",
},
],
"paramfile": [
("-f", "--paramfile"),
{
"type": bool,
"required": False,
"default": False,
"help": "Use best hyperparams file?",
},
],
"platform": [
("-P", "--platform"),
{
@@ -224,7 +242,7 @@ class Arguments:
"quiet": [
("-q", "--quiet"),
{
"type": bool,
"action": "store_true",
"required": False,
"default": False,
},
@@ -232,7 +250,7 @@ class Arguments:
"report": [
("-r", "--report"),
{
"type": bool,
"action": "store_true",
"default": False,
"required": False,
"help": "Report results",
@@ -250,23 +268,29 @@ class Arguments:
],
"sql": [
("-q", "--sql"),
{"type": bool, "required": False, "help": "Generate SQL File"},
{
"required": False,
"action": "store_true",
"default": False,
"help": "Generate SQL File",
},
],
"stratified": [
("-t", "--stratified"),
{
"action": EnvDefault,
"envvar": "stratified",
"type": str,
"required": True,
"help": "Stratified",
"const": "1",
"nargs": "?",
},
],
"tex_output": [
("-t", "--tex-output"),
{
"type": bool,
"required": False,
"action": "store_true",
"default": False,
"help": "Generate Tex file with the table",
},
@@ -278,8 +302,8 @@ class Arguments:
"win": [
("-w", "--win"),
{
"type": bool,
"default": False,
"action": "store_true",
"required": False,
"help": "show win results",
},
@@ -287,12 +311,43 @@ class Arguments:
}
def xset(self, *arg_name, **kwargs):
names, default = self.parameters[arg_name[0]]
self.ap.add_argument(
names, parameters = self.parameters[arg_name[0]]
if "overrides" in kwargs:
self._overrides[names[0]] = (kwargs["overrides"], kwargs["const"])
del kwargs["overrides"]
self.add_argument(
*names,
**{**default, **kwargs},
**{**parameters, **kwargs},
)
return self
def add_subparser(
self, dest="subcommand", help_text="help for subcommand"
):
self._subparser = self.add_subparsers(dest=dest, help=help_text)
def add_subparsers_options(self, subparser, arguments):
command, help_text = subparser
parser = self._subparser.add_parser(command, help=help_text)
for name, args in arguments:
try:
names, parameters = self.parameters[name]
except KeyError:
names = (name,)
parameters = {}
# Order of args is important
parser.add_argument(*names, **{**args, **parameters})
def add_exclusive(self, hyperparameters, required=False):
group = self.add_mutually_exclusive_group(required=required)
for name in hyperparameters:
names, parameters = self.parameters[name]
group.add_argument(*names, **parameters)
def parse(self, args=None):
return self.ap.parse_args(args)
for key, (dest_key, value) in self._overrides.items():
if args is None:
args = sys.argv[1:]
if key in args:
args.extend((f"--{dest_key}", value))
return super().parse_args(args)
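The refactor above turns `Arguments` into an `argparse.ArgumentParser` subclass and adds the `overrides` mechanism: when the triggering flag's first name (e.g. `-d`) appears in argv, `parse()` appends `--<dest> <const>` before delegating to `parse_args`. A usage sketch mirroring the `be_main` diff below; it assumes a project `.env` is on hand, since the `dataset` parameter reads its default through `EnvDefault`:

```python
from benchmark.Arguments import Arguments

parser = Arguments(prog="example")
parser.xset("title")  # target of the override below
parser.xset("dataset", overrides="title", const="Test with only one dataset")
# "-d" in argv triggers the override, so --title is injected automatically
args = parser.parse(["-d", "iris"])
print(args.dataset, "/", args.title)
```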

View File

@@ -1,8 +1,12 @@
import os
from types import SimpleNamespace
import pandas as pd
import numpy as np
import json
from scipy.io import arff
from .Utils import Files
from .Arguments import EnvData
from fimdlp.mdlp import FImdlp
class Diterator:
@@ -24,13 +28,23 @@ class DatasetsArff:
def folder():
return "datasets"
@staticmethod
def get_range_features(X, c_features):
if c_features.strip() == "all":
return list(range(X.shape[1]))
return json.loads(c_features)
def load(self, name, class_name):
file_name = os.path.join(self.folder(), self.dataset_names(name))
data = arff.loadarff(file_name)
df = pd.DataFrame(data[0])
df = df.dropna()
X = df.drop(class_name, axis=1).to_numpy()
df.dropna(axis=0, how="any", inplace=True)
self.dataset = df
X = df.drop(class_name, axis=1)
self.features = X.columns.to_list()
self.class_name = class_name
y, _ = pd.factorize(df[class_name])
X = X.to_numpy()
return X, y
@@ -43,15 +57,23 @@ class DatasetsTanveer:
def folder():
return "data"
def load(self, name, _):
@staticmethod
def get_range_features(X, name):
return []
def load(self, name, *args):
file_name = os.path.join(self.folder(), self.dataset_names(name))
data = pd.read_csv(
file_name,
sep="\t",
index_col=0,
)
X = data.drop("clase", axis=1).to_numpy()
X = data.drop("clase", axis=1)
self.features = X.columns
X = X.to_numpy()
y = data["clase"].to_numpy()
self.dataset = data
self.class_name = "clase"
return X, y
@@ -64,7 +86,11 @@ class DatasetsSurcov:
def folder():
return "datasets"
def load(self, name, _):
@staticmethod
def get_range_features(X, name):
return []
def load(self, name, *args):
file_name = os.path.join(self.folder(), self.dataset_names(name))
data = pd.read_csv(
file_name,
@@ -72,55 +98,142 @@ class DatasetsSurcov:
)
data.dropna(axis=0, how="any", inplace=True)
self.columns = data.columns
col_list = ["class"]
X = data.drop(col_list, axis=1).to_numpy()
X = data.drop(["class"], axis=1)
self.features = X.columns
self.class_name = "class"
self.dataset = data
X = X.to_numpy()
y = data["class"].to_numpy()
return X, y
class Datasets:
def __init__(self, dataset_name=None):
envData = EnvData.load()
class_name = getattr(
def __init__(self, dataset_name=None, discretize=None):
env_data = EnvData.load()
# DatasetsSurcov, DatasetsTanveer, DatasetsArff,...
source_name = getattr(
__import__(__name__),
f"Datasets{envData['source_data']}",
f"Datasets{env_data['source_data']}",
)
self.dataset = class_name()
self.class_names = []
self.load_names()
if dataset_name is not None:
try:
class_name = self.class_names[
self.data_sets.index(dataset_name)
]
self.class_names = [class_name]
except ValueError:
raise ValueError(f"Unknown dataset: {dataset_name}")
self.data_sets = [dataset_name]
self.discretize = (
env_data["discretize"] == "1"
if discretize is None
else discretize == "1"
)
self.dataset = source_name()
# initialize self.class_names & self.data_sets
class_names, sets = self._init_names(dataset_name)
self.class_names = class_names
self.data_sets = sets
self.states = {} # states of discretized variables
def load_names(self):
def _init_names(self, dataset_name):
file_name = os.path.join(self.dataset.folder(), Files.index)
default_class = "class"
self.continuous_features = {}
with open(file_name) as f:
self.data_sets = f.read().splitlines()
self.class_names = [default_class] * len(self.data_sets)
if "," in self.data_sets[0]:
sets = f.read().splitlines()
sets = [x for x in sets if not x.startswith("#")]
class_names = [default_class] * len(sets)
if "," in sets[0]:
result = []
class_names = []
for data in self.data_sets:
name, class_name = data.split(",")
for data in sets:
name, class_name, features = data.split(",", 2)
result.append(name)
class_names.append(class_name)
self.data_sets = result
self.class_names = class_names
self.continuous_features[name] = features
sets = result
else:
for name in sets:
self.continuous_features[name] = None
# Set as dataset list the dataset passed as argument
if dataset_name is None:
return class_names, sets
try:
class_name = class_names[sets.index(dataset_name)]
except ValueError:
raise ValueError(f"Unknown dataset: {dataset_name}")
return [class_name], [dataset_name]
def load(self, name):
def get_attributes(self, name):
tmp = self.discretize
self.discretize = False
X, y = self.load(name)
attr = SimpleNamespace()
attr.dataset = name
values, counts = np.unique(y, return_counts=True)
attr.classes = len(values)
attr.samples = X.shape[0]
attr.features = X.shape[1]
attr.cont_features = len(self.get_continuous_features())
attr.distribution = {}
comp = ""
sep = ""
for value, count in zip(values, counts):
comp += f"{sep}{count/sum(counts)*100:5.2f}% ({count}) "
sep = "/ "
attr.distribution[value.item()] = count / sum(counts)
attr.balance = comp
self.discretize = tmp
return attr
def get_features(self):
return self.dataset.features
def get_states(self, name):
return self.states[name] if name in self.states else None
def get_continuous_features(self):
return self.continuous_features_dataset
def get_class_name(self):
return self.dataset.class_name
def get_dataset(self):
return self.dataset.dataset
def build_states(self, name, X):
features = self.get_features()
self.states[name] = {
features[i]: np.unique(X[:, i]).tolist() for i in range(X.shape[1])
}
def load(self, name, dataframe=False):
try:
class_name = self.class_names[self.data_sets.index(name)]
return self.dataset.load(name, class_name)
X, y = self.dataset.load(name, class_name)
self.continuous_features_dataset = self.dataset.get_range_features(
X, self.continuous_features[name]
)
if self.discretize:
X = self.discretize_dataset(X, y)
self.build_states(name, X)
dataset = pd.DataFrame(X, columns=self.get_features())
dataset[self.get_class_name()] = y
self.dataset.dataset = dataset
if dataframe:
return self.get_dataset()
return X, y
except (ValueError, FileNotFoundError):
raise ValueError(f"Unknown dataset: {name}")
def discretize_dataset(self, X, y):
"""Supervised discretization with Fayyad and Irani's MDLP algorithm.
Parameters
----------
X : np.ndarray
array (n_samples, n_features) of features
y : np.ndarray
array (n_samples,) of labels
Returns
-------
tuple (X, y) of numpy.ndarray
"""
discretiz = FImdlp()
return discretiz.fit_transform(X, y)
def __iter__(self) -> Diterator:
return Diterator(self.data_sets)
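Taken together, the `Datasets` changes make discretization switchable per instance and expose the fitted metadata. A sketch under those assumptions (the dataset name `iris` is hypothetical; `discretize` follows the same `"0"`/`"1"` string convention as `.env`):

```python
from benchmark.Datasets import Datasets

dt = Datasets(dataset_name="iris", discretize="1")
X, y = dt.load("iris")                # features discretized with FImdlp
df = dt.load("iris", dataframe=True)  # same data as a pandas DataFrame
print(dt.get_features())              # feature names
print(dt.get_states("iris"))          # discrete states per feature
print(dt.get_attributes("iris").balance)  # class balance summary string
```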

View File

@@ -1,4 +1,5 @@
import os
import sys
import json
import random
import warnings
@@ -15,10 +16,13 @@ from sklearn.model_selection import (
from .Utils import Folders, Files, NO_RESULTS
from .Datasets import Datasets
from .Models import Models
from .Arguments import EnvData
class Randomized:
seeds = [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
@staticmethod
def seeds():
return json.loads(EnvData.load()["seeds"])
class BestResults:
@@ -108,8 +112,12 @@ class Experiment:
platform,
title,
progress_bar=True,
ignore_nan=True,
fit_features=None,
discretize=None,
folds=5,
):
env_data = EnvData.load()
today = datetime.now()
self.time = today.strftime("%H:%M:%S")
self.date = today.strftime("%Y-%m-%d")
@@ -127,7 +135,18 @@ class Experiment:
self.score_name = score_name
self.model_name = model_name
self.title = title
self.ignore_nan = ignore_nan
self.stratified = stratified == "1"
self.discretize = (
env_data["discretize"] == "1"
if discretize is None
else discretize == "1"
)
self.fit_features = (
env_data["fit_features"] == "1"
if fit_features is None
else fit_features == "1"
)
self.stratified_class = StratifiedKFold if self.stratified else KFold
self.datasets = datasets
dictionary = json.loads(hyperparams_dict)
@@ -154,7 +173,7 @@ class Experiment:
self.platform = platform
self.progress_bar = progress_bar
self.folds = folds
self.random_seeds = Randomized.seeds
self.random_seeds = Randomized.seeds()
self.results = []
self.duration = 0
self._init_experiment()
@@ -162,6 +181,10 @@ class Experiment:
def get_output_file(self):
return self.output_file
@staticmethod
def get_python_version():
return "{}.{}".format(sys.version_info.major, sys.version_info.minor)
def _build_classifier(self, random_state, hyperparameters):
self.model = Models.get_model(self.model_name, random_state)
clf = self.model
@@ -176,7 +199,20 @@ class Experiment:
self.leaves = []
self.depths = []
def _n_fold_crossval(self, X, y, hyperparameters):
def _build_fit_params(self, name):
if not self.fit_features:
return None
res = dict(
features=self.datasets.get_features(),
class_name=self.datasets.get_class_name(),
)
states = self.datasets.get_states(name)
if states is None:
return res
res["state_names"] = states
return res
def _n_fold_crossval(self, name, X, y, hyperparameters):
if self.scores != []:
raise ValueError("Must init experiment before!")
loop = tqdm(
@@ -193,7 +229,8 @@ class Experiment:
shuffle=True, random_state=random_state, n_splits=self.folds
)
clf = self._build_classifier(random_state, hyperparameters)
self.version = clf.version() if hasattr(clf, "version") else "-"
fit_params = self._build_fit_params(name)
self.version = Models.get_version(self.model_name, clf)
with warnings.catch_warnings():
warnings.filterwarnings("ignore")
res = cross_validate(
@@ -201,11 +238,19 @@ class Experiment:
X,
y,
cv=kfold,
fit_params=fit_params,
return_estimator=True,
scoring=self.score_name,
scoring=self.score_name.replace("-", "_"),
)
self.scores.append(res["test_score"])
self.times.append(res["fit_time"])
if np.isnan(res["test_score"]).any():
if not self.ignore_nan:
print(res["test_score"])
raise ValueError("NaN in results")
results = res["test_score"][~np.isnan(res["test_score"])]
else:
results = res["test_score"]
self.scores.extend(results)
self.times.extend(res["fit_time"])
for result_item in res["estimator"]:
nodes_item, leaves_item, depth_item = Models.get_complexity(
self.model_name, result_item
@@ -237,12 +282,15 @@ class Experiment:
output["model"] = self.model_name
output["version"] = self.version
output["stratified"] = self.stratified
output["discretized"] = self.discretize
output["folds"] = self.folds
output["date"] = self.date
output["time"] = self.time
output["duration"] = self.duration
output["seeds"] = self.random_seeds
output["platform"] = self.platform
output["language_version"] = self.get_python_version()
output["language"] = "Python"
output["results"] = self.results
with open(self.output_file, "w") as f:
json.dump(output, f)
@@ -263,7 +311,7 @@ class Experiment:
n_classes = len(np.unique(y))
hyperparameters = self.hyperparameters_dict[name][1]
self._init_experiment()
self._n_fold_crossval(X, y, hyperparameters)
self._n_fold_crossval(name, X, y, hyperparameters)
self._add_results(name, hyperparameters, samp, feat, n_classes)
self._output_results()
self.duration = time.time() - now
@@ -301,7 +349,7 @@ class GridSearch:
self.progress_bar = progress_bar
self.folds = folds
self.platform = platform
self.random_seeds = Randomized.seeds
self.random_seeds = Randomized.seeds()
self.grid_file = os.path.join(
Folders.results, Files.grid_input(score_name, model_name)
)
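The new `ignore_nan` path is easy to miss inside the hunk above: fold scores are now `extend`ed one by one, and NaN folds are either filtered out or abort the run. A standalone sketch of exactly that logic:

```python
import numpy as np

test_score = np.array([0.97, np.nan, 0.95])  # per-fold scores from cross_validate
ignore_nan = True

if np.isnan(test_score).any():
    if not ignore_nan:
        raise ValueError("NaN in results")
    test_score = test_score[~np.isnan(test_score)]  # keep only the valid folds
print(test_score)  # [0.97 0.95]
```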

benchmark/Manager.py Normal file (127 lines)
View File

@@ -0,0 +1,127 @@
import os
from types import SimpleNamespace
import xlsxwriter
from benchmark.Results import Report
from benchmark.ResultsFiles import Excel
from benchmark.Utils import Files, Folders, TextColor
def get_input(message="", is_test=False):
return "test" if is_test else input(message)
class Manage:
def __init__(self, summary):
self.summary = summary
self.cmd = SimpleNamespace(
quit="q", relist="r", delete="d", hide="h", excel="e"
)
def process_file(self, num, command, path):
num = int(num)
name = self.summary.data_filtered[num]["file"]
file_name_result = os.path.join(path, name)
verb1, verb2 = (
("delete", "Deleting")
if command == self.cmd.delete
else (
"hide",
"Hiding",
)
)
conf_message = (
TextColor.RED
+ f"Are you sure to {verb1} {file_name_result} (y/n)? "
)
confirm = get_input(message=conf_message)
if confirm == "y":
print(TextColor.YELLOW + f"{verb2} {file_name_result}")
if command == self.cmd.delete:
os.unlink(file_name_result)
else:
os.rename(
os.path.join(Folders.results, name),
os.path.join(Folders.hidden_results, name),
)
self.summary.data_filtered.pop(num)
get_input(message="Press enter to continue")
self.summary.list_results()
def manage_results(self):
"""Manage results showed in the summary
return True if excel file is created False otherwise
"""
message = (
TextColor.ENDC
+ f"Choose option {str(self.cmd).replace('namespace', '')}: "
)
path = (
Folders.hidden_results if self.summary.hidden else Folders.results
)
book = None
max_value = len(self.summary.data_filtered)
while True:
match get_input(message=message).split():
case [self.cmd.relist]:
self.summary.list_results()
case [self.cmd.quit]:
if book is not None:
book.close()
return True
return False
case [self.cmd.hide, num] if num.isdigit() and int(
num
) < max_value:
if self.summary.hidden:
print("Already hidden")
else:
self.process_file(
num, path=path, command=self.cmd.hide
)
case [self.cmd.delete, num] if num.isdigit() and int(
num
) < max_value:
self.process_file(
num=num, path=path, command=self.cmd.delete
)
case [self.cmd.excel, num] if num.isdigit() and int(
num
) < max_value:
# Add to excel file result #num
book = self.add_to_excel(num, path, book)
case [num] if num.isdigit() and int(num) < max_value:
# Report the result #num
self.report(num, path)
case _:
print("Invalid option. Try again!")
def report(self, num, path):
num = int(num)
file_name_result = os.path.join(
path, self.summary.data_filtered[num]["file"]
)
try:
rep = Report(file_name_result, compare=self.summary.compare)
rep.report()
except ValueError as e:
print(e)
def add_to_excel(self, num, path, book):
num = int(num)
file_name_result = os.path.join(
path, self.summary.data_filtered[num]["file"]
)
if book is None:
file_name = os.path.join(Folders.excel, Files.be_list_excel)
book = xlsxwriter.Workbook(file_name, {"nan_inf_to_errors": True})
excel = Excel(
file_name=file_name_result,
book=book,
compare=self.summary.compare,
)
excel.report()
print(f"Added {file_name_result} to {Files.be_list_excel}")
return book
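A sketch of how `be_list` wires the new `Manage` class to a populated `Summary` (see the `be_list.py` diff further down); `manage_results()` loops on single-letter commands against the numbered listing and returns `True` only when an Excel workbook was produced:

```python
from benchmark.ResultsBase import Summary
from benchmark.Manager import Manage

summary = Summary(hidden=False, compare=True)
summary.acquire()        # read every stored result file
summary.list_results()   # print the numbered listing the commands refer to
if Manage(summary).manage_results():
    print("Excel workbook written")
```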

View File

@@ -8,9 +8,30 @@ from sklearn.ensemble import (
)
from sklearn.svm import SVC
from stree import Stree
from bayesclass.clfs import TAN, KDB, AODE, KDBNew, TANNew, AODENew
from wodt import Wodt
from odte import Odte
from xgboost import XGBClassifier
import sklearn
import xgboost
import random
class MockModel(SVC):
# Only used for testing
def predict(self, X):
if random.random() < 0.1:
return [float("NaN")] * len(X)
return super().predict(X)
def nodes_leaves(self):
return 0, 0
def fit(self, X, y, **kwargs):
kwargs.pop("state_names", None)
kwargs.pop("features", None)
return super().fit(X, y, **kwargs)
class Models:
@@ -18,25 +39,31 @@ class Models:
def define_models(random_state):
return {
"STree": Stree(random_state=random_state),
"TAN": TAN(random_state=random_state),
"KDB": KDB(k=2),
"TANNew": TANNew(random_state=random_state),
"KDBNew": KDBNew(k=2),
"AODENew": AODENew(random_state=random_state),
"AODE": AODE(random_state=random_state),
"Cart": DecisionTreeClassifier(random_state=random_state),
"ExtraTree": ExtraTreeClassifier(random_state=random_state),
"Wodt": Wodt(random_state=random_state),
"SVC": SVC(random_state=random_state),
"ODTE": Odte(
base_estimator=Stree(random_state=random_state),
estimator=Stree(random_state=random_state),
random_state=random_state,
),
"BaggingStree": BaggingClassifier(
base_estimator=Stree(random_state=random_state),
estimator=Stree(random_state=random_state),
random_state=random_state,
),
"BaggingWodt": BaggingClassifier(
base_estimator=Wodt(random_state=random_state),
estimator=Wodt(random_state=random_state),
random_state=random_state,
),
"XGBoost": XGBClassifier(random_state=random_state),
"AdaBoostStree": AdaBoostClassifier(
base_estimator=Stree(
estimator=Stree(
random_state=random_state,
),
algorithm="SAMME",
@@ -44,6 +71,7 @@ class Models:
),
"GBC": GradientBoostingClassifier(random_state=random_state),
"RandomForest": RandomForestClassifier(random_state=random_state),
"Mock": MockModel(random_state=random_state),
}
@staticmethod
@@ -89,3 +117,15 @@ class Models:
nodes, leaves = result.nodes_leaves()
depth = result.depth_ if hasattr(result, "depth_") else 0
return nodes, leaves, depth
@staticmethod
def get_version(name, clf):
if hasattr(clf, "version"):
return clf.version()
if name in ["Cart", "ExtraTree", "RandomForest", "GBC", "SVC"]:
return sklearn.__version__
elif name.startswith("Bagging") or name.startswith("AdaBoost"):
return sklearn.__version__
elif name == "XGBoost":
return xgboost.__version__
return "Error"
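The new `get_version()` helper replaces the old `clf.version()` call in `Experiment`: models exposing `version()` still win, sklearn- and xgboost-backed models fall back to the library version, and anything else reports `"Error"`. A quick sketch, assuming the heavier model dependencies (bayesclass, wodt, odte, xgboost) are importable:

```python
from benchmark.Models import Models

clf = Models.get_model("RandomForest", random_state=0)
print(Models.get_version("RandomForest", clf))  # sklearn.__version__, e.g. "1.2.0"
```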

File diff suppressed because it is too large.

benchmark/ResultsBase.py Normal file (433 lines)
View File

@@ -0,0 +1,433 @@
import abc
import json
import math
import os
from operator import itemgetter
from benchmark.Datasets import Datasets
from benchmark.Utils import NO_RESULTS, Files, Folders, TextColor
from .Arguments import ALL_METRICS, EnvData
from .Datasets import Datasets
from .Experiments import BestResults
from .Utils import Folders, Symbols
class BestResultsEver:
def __init__(self):
self.data = {}
for i in ["Tanveer", "Surcov", "Arff"]:
self.data[i] = {}
for metric in ALL_METRICS:
self.data[i][metric.replace("-", "_")] = ["self", 1.0]
self.data[i][metric] = ["self", 1.0]
self.data["Tanveer"]["accuracy"] = [
"STree_default (liblinear-ovr)",
40.282203,
]
self.data["Arff"]["accuracy"] = [
"STree_default (linear-ovo)",
22.109799,
]
def get_name_value(self, key, score):
return self.data[key][score]
class BaseReport(abc.ABC):
def __init__(self, file_name, best_file=False):
self.file_name = file_name
if not os.path.isfile(file_name):
if not os.path.isfile(os.path.join(Folders.results, file_name)):
raise FileNotFoundError(f"{file_name} does not exists!")
else:
self.file_name = os.path.join(Folders.results, file_name)
with open(self.file_name) as f:
self.data = json.load(f)
self.best_acc_file = best_file
if best_file:
self.lines = self.data
else:
self.lines = self.data["results"]
self.score_name = self.data["score_name"]
self.__load_env_data()
self.__compute_best_results_ever()
def __load_env_data(self):
# Set the labels for nodes, leaves, depth
env_data = EnvData.load()
self.nodes_label = env_data["nodes"]
self.leaves_label = env_data["leaves"]
self.depth_label = env_data["depth"]
self.key = env_data["source_data"]
self.margin = float(env_data["margin"])
def __compute_best_results_ever(self):
best = BestResultsEver()
self.best_score_name, self.best_score_value = best.get_name_value(
self.key, self.score_name
)
def _get_accuracy(self, item):
return self.data[item][0] if self.best_acc_file else item["score"]
def report(self):
self.header()
accuracy_total = 0.0
for result in self.lines:
self.print_line(result)
accuracy_total += self._get_accuracy(result)
self.footer(accuracy_total)
def _load_best_results(self, score, model):
best = BestResults(score, model, Datasets())
self.best_results = best.load({})
def _compute_status(self, dataset, accuracy: float):
status = " "
if self.compare:
# Compare with best results
best = self.best_results[dataset][0]
if accuracy == best:
status = Symbols.equal_best
elif accuracy > best:
status = Symbols.better_best
else:
# compare with dataset label distribution only if its a binary one
# down_arrow if accuracy is less than the ZeroR
# black_star if accuracy is greater than the ZeroR + margin%
if self.score_name == "accuracy":
dt = Datasets()
attr = dt.get_attributes(dataset)
if attr.classes == 2:
max_category = max(attr.distribution.values())
max_value = max_category * (1 + self.margin)
if max_value > 1:
max_value = 0.9995
status = (
Symbols.cross
if accuracy <= max_value
else Symbols.upward_arrow
if accuracy > max_value
else " "
)
if status != " ":
if status not in self._compare_totals:
self._compare_totals[status] = 1
else:
self._compare_totals[status] += 1
return status
def _status_meaning(self, status):
meaning = {
Symbols.equal_best: "Equal to best",
Symbols.better_best: "Better than best",
Symbols.cross: "Less than or equal to ZeroR",
Symbols.upward_arrow: f"Better than ZeroR + "
f"{self.margin*100:3.1f}%",
}
return meaning[status]
def _get_best_accuracy(self):
return self.best_score_value
def _get_message_best_accuracy(self):
return f"{self.score_name} compared to {self.best_score_name} .:"
@abc.abstractmethod
def header(self) -> None:
pass
@abc.abstractmethod
def print_line(self, result) -> None:
pass
@abc.abstractmethod
def footer(self, accuracy: float) -> None:
pass
class StubReport(BaseReport):
def __init__(self, file_name):
super().__init__(file_name=file_name, best_file=False)
def print_line(self, line) -> None:
pass
def header(self) -> None:
self.title = self.data["title"]
self.duration = self.data["duration"]
def footer(self, accuracy: float) -> None:
self.accuracy = accuracy
self.score = accuracy / self._get_best_accuracy()
class Summary:
def __init__(self, hidden=False, compare=False) -> None:
self.results = Files().get_all_results(hidden=hidden)
self.data = []
self.data_filtered = []
self.datasets = {}
self.models = set()
self.hidden = hidden
self.compare = compare
def get_models(self):
return sorted(self.models)
def acquire(self, given_score="any") -> None:
"""Get all results"""
for result in self.results:
(
score,
model,
platform,
date,
time,
stratified,
) = Files().split_file_name(result)
if given_score in ("any", score):
self.models.add(model)
report = StubReport(
os.path.join(
Folders.hidden_results
if self.hidden
else Folders.results,
result,
)
)
report.report()
entry = dict(
score=score,
model=model,
title=report.title,
platform=platform,
date=date,
time=time,
stratified=stratified,
file=result,
metric=report.score,
duration=report.duration,
)
self.datasets[result] = report.lines
self.data.append(entry)
def get_results_criteria(
self, score, model, input_data, sort_key, number, nan=False
):
data = self.data.copy() if input_data is None else input_data
if score:
data = [x for x in data if x["score"] == score]
if model:
data = [x for x in data if x["model"] == model]
if nan:
data = [x for x in data if x["metric"] != x["metric"]]
keys = (
itemgetter(sort_key, "time")
if sort_key == "date"
else itemgetter(sort_key, "date", "time")
)
data = sorted(data, key=keys, reverse=True)
if number > 0:
data = data[:number]
return data
def list_results(
self,
score=None,
model=None,
input_data=None,
sort_key="date",
number=0,
nan=False,
) -> None:
"""Print the list of results"""
if self.data_filtered == []:
self.data_filtered = self.get_results_criteria(
score, model, input_data, sort_key, number, nan=nan
)
if self.data_filtered == []:
raise ValueError(NO_RESULTS)
max_file = max(len(x["file"]) for x in self.data_filtered)
max_title = max(len(x["title"]) for x in self.data_filtered)
if self.hidden:
color1 = TextColor.GREEN
color2 = TextColor.YELLOW
else:
color1 = TextColor.LINE1
color2 = TextColor.LINE2
print(color1, end="")
print(
f" # {'Date':10s} {'File':{max_file}s} {'Score':8s} "
f"{'Time(h)':7s} {'Title':s}"
)
print(
"===",
"=" * 10
+ " "
+ "=" * max_file
+ " "
+ "=" * 8
+ " "
+ "=" * 7
+ " "
+ "=" * max_title,
)
print(
"\n".join(
[
(color2 if n % 2 == 0 else color1) + f"{n:3d} "
f"{x['date']} {x['file']:{max_file}s} "
f"{x['metric']:8.5f} "
f"{x['duration']/3600:7.3f} "
f"{x['title']}"
for n, x in enumerate(self.data_filtered)
]
)
)
def show_result(self, data: dict, title: str = "") -> None:
def whites(n: int) -> str:
return " " * n + color1 + "*"
if data == {}:
print(f"** {title} has No data **")
return
color1 = TextColor.CYAN
color2 = TextColor.YELLOW
file_name = data["file"]
metric = data["metric"]
result = StubReport(os.path.join(Folders.results, file_name))
length = 81
print(color1 + "*" * length)
if title != "":
print(
"*"
+ color2
+ TextColor.BOLD
+ f"{title:^{length - 2}s}"
+ TextColor.ENDC
+ color1
+ "*"
)
print("*" + "-" * (length - 2) + "*")
print("*" + whites(length - 2))
print(
"* "
+ color2
+ f"{result.data['title']:^{length - 4}}"
+ color1
+ " *"
)
print("*" + whites(length - 2))
print(
"* Model: "
+ color2
+ f"{result.data['model']:15s} "
+ color1
+ "Ver. "
+ color2
+ f"{result.data['version']:10s} "
+ color1
+ "Score: "
+ color2
+ f"{result.data['score_name']:10s} "
+ color1
+ "Metric: "
+ color2
+ f"{metric:10.7f}"
+ whites(length - 78)
)
print(color1 + "*" + whites(length - 2))
print(
"* Date : "
+ color2
+ f"{result.data['date']:15s}"
+ color1
+ " Time: "
+ color2
+ f"{result.data['time']:18s} "
+ color1
+ "Time Spent: "
+ color2
+ f"{result.data['duration']:9,.2f}"
+ color1
+ " secs."
+ whites(length - 78)
)
seeds = str(result.data["seeds"])
seeds_len = len(seeds)
print(
"* Seeds: "
+ color2
+ f"{seeds:{seeds_len}s} "
+ color1
+ "Platform: "
+ color2
+ f"{result.data['platform']:17s} "
+ whites(length - 79)
)
print(
"* Stratified: "
+ color2
+ f"{str(result.data['stratified']):15s}"
+ whites(length - 30)
)
print("* " + color2 + f"{file_name:60s}" + whites(length - 63))
print(color1 + "*" + whites(length - 2))
print(color1 + "*" * length)
def best_results(self, criterion=None, value=None, score="accuracy", n=10):
# First filter the same score results (accuracy, f1, ...)
haystack = [x for x in self.data if x["score"] == score]
haystack = (
haystack
if criterion is None or value is None
else [x for x in haystack if x[criterion] == value]
)
if haystack == []:
raise ValueError(NO_RESULTS)
return (
sorted(
haystack,
key=lambda x: -1.0 if math.isnan(x["metric"]) else x["metric"],
reverse=True,
)[:n]
if len(haystack) > 0
else {}
)
def best_result(
self, criterion=None, value=None, score="accuracy"
) -> dict:
return self.best_results(criterion, value, score)[0]
def best_results_datasets(self, score="accuracy") -> dict:
"""Get the best results for each dataset"""
dt = Datasets()
best_results = {}
for dataset in dt:
best_results[dataset] = (1, "", "", "")
haystack = [x for x in self.data if x["score"] == score]
# Search for the best results for each dataset
for entry in haystack:
for dataset in self.datasets[entry["file"]]:
if dataset["score"] < best_results[dataset["dataset"]][0]:
best_results[dataset["dataset"]] = (
dataset["score"],
dataset["hyperparameters"],
entry["file"],
entry["title"],
)
return best_results
def show_top(self, score="accuracy", n=10):
try:
self.list_results(
score=score,
input_data=self.best_results(score=score, n=n),
sort_key="metric",
)
except ValueError as e:
print(e)
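The query side of the new `ResultsBase.Summary` in one sketch: acquire everything for a metric, show the top ten, then drill into the best result of a given model (`STree` here is just an example):

```python
from benchmark.ResultsBase import Summary

summary = Summary()
summary.acquire(given_score="accuracy")
summary.show_top(score="accuracy", n=10)
best = summary.best_result(criterion="model", value="STree")
summary.show_result(best, title="Best STree result")
```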

benchmark/ResultsFiles.py Normal file (1044 lines)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,8 @@
import os
import sys
import subprocess
PYTHON_VERSION = "{}.{}".format(sys.version_info.major, sys.version_info.minor)
NO_RESULTS = "** No results found **"
NO_ENV = "File .env not found"
@@ -11,6 +13,8 @@ class Folders:
exreport = "exreport"
report = os.path.join(exreport, "exreport_output")
img = "img"
excel = "excel"
sql = "sql"
@staticmethod
def src():
@@ -25,6 +29,8 @@ class Files:
exreport_pdf = "Rplots.pdf"
benchmark_r = "benchmark.r"
dot_env = ".env"
datasets_report_excel = "ReportDatasets.xlsx"
be_list_excel = "some_results.xlsx"
@staticmethod
def exreport_output(score):
@@ -102,7 +108,8 @@ class Files:
)
return None
def get_all_results(self, hidden) -> list[str]:
@staticmethod
def get_all_results(hidden) -> list[str]:
result_path = os.path.join(
".", Folders.hidden_results if hidden else Folders.results
)
@@ -111,7 +118,7 @@ class Files:
else:
raise ValueError(f"{result_path} does not exist")
result = []
prefix, suffix = self.results_suffixes()
prefix, suffix = Files.results_suffixes()
for result_file in files_list:
if result_file.startswith(prefix) and result_file.endswith(suffix):
result.append(result_file)
@@ -122,6 +129,9 @@ class Symbols:
check_mark = "\N{heavy check mark}"
exclamation = "\N{heavy exclamation mark symbol}"
black_star = "\N{black star}"
cross = "\N{Ballot X}"
upward_arrow = "\N{Black-feathered north east arrow}"
down_arrow = "\N{downwards black arrow}"
equal_best = check_mark
better_best = black_star
@@ -142,3 +152,7 @@ class TextColor:
ENDC = "\033[0m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
WHITE = "\033[97m"
GREY = "\033[90m"
BLACK = "\033[90m"
DEFAULT = "\033[99m"

View File

@@ -1,10 +1,17 @@
from .Datasets import Datasets, DatasetsSurcov, DatasetsTanveer, DatasetsArff
from .ResultsBase import Summary
from .Datasets import (
Datasets,
DatasetsSurcov,
DatasetsTanveer,
DatasetsArff,
)
from .Experiments import Experiment
from .Results import Report, Summary
from .Results import Report
from ._version import __version__
__author__ = "Ricardo Montañana Gómez"
__copyright__ = "Copyright 2020-2022, Ricardo Montañana Gómez"
__copyright__ = "Copyright 2020-2023, Ricardo Montañana Gómez"
__license__ = "MIT License"
__author_email__ = "ricardo.montanana@alu.uclm.es"
__all__ = ["Experiment", "Datasets", "Report", "Summary"]
__all__ = ["Experiment", "Datasets", "Report", "Summary", __version__]

View File

@@ -1 +1 @@
__version__ = "0.1.1"
__version__ = "0.5.0"

View File

@@ -1,11 +1,11 @@
#!/usr/bin/env python
from benchmark.Results import Benchmark
from benchmark.ResultsFiles import Benchmark
from benchmark.Utils import Files
from benchmark.Arguments import Arguments
def main(args_test=None):
arguments = Arguments()
arguments = Arguments(prog="be_benchmark")
arguments.xset("score").xset("excel").xset("tex_output").xset("quiet")
args = arguments.parse(args_test)
benchmark = Benchmark(score=args.score, visualize=not args.quiet)

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python
import json
from benchmark.Results import Summary
from benchmark.ResultsBase import Summary
from benchmark.Arguments import ALL_METRICS, Arguments

View File

@@ -21,5 +21,5 @@ def main(args_test=None):
print(e)
else:
if args.report:
report = ReportBest(args.score, args.model, best=True, grid=False)
report = ReportBest(args.score, args.model, best=True)
report.report()

View File

@@ -46,7 +46,7 @@ def main(args_test=None):
'{"C": 7, "gamma": 0.1, "kernel": "rbf", "multiclass_strategy": '
'"ovr"}',
'{"C": 5, "kernel": "rbf", "gamma": "auto"}',
'{"C": 0.05, "max_iter": 10000.0, "kernel": "liblinear", '
'{"C": 0.05, "max_iter": 10000, "kernel": "liblinear", '
'"multiclass_strategy": "ovr"}',
'{"C":0.0275, "kernel": "liblinear", "multiclass_strategy": "ovr"}',
'{"C": 7, "gamma": 10.0, "kernel": "rbf", "multiclass_strategy": '
@@ -97,7 +97,7 @@ def main(args_test=None):
for item in results:
results_tmp = {"n_jobs": [-1], "n_estimators": [100]}
for key, value in results[item].items():
new_key = f"base_estimator__{key}"
new_key = f"estimator__{key}"
try:
results_tmp[new_key] = sorted(value)
except TypeError:
@@ -111,6 +111,7 @@ def main(args_test=None):
t2 = sorted([x for x in value if isinstance(x, str)])
results_tmp[new_key] = t1 + t2
output.append(results_tmp)
# save results
file_name = Files.grid_input(args.score, args.model)
file_output = os.path.join(Folders.results, file_name)

benchmark/scripts/be_flask.py Executable file (14 lines)
View File

@@ -0,0 +1,14 @@
#!/usr/bin/env python
import webbrowser
from benchmark.scripts.flask_app.app import create_app
# Launch a flask server to serve the results
def main(args_test=None):
app = create_app()
app.config["TEST"] = args_test is not None
output = app.config["OUTPUT"]
print("Output is ", output)
if output == "local":
webbrowser.open_new("http://127.0.0.1:1234/")
app.run(port=1234, host="0.0.0.0")

View File

@@ -0,0 +1,36 @@
#!/usr/bin/env python
import os
from benchmark.Utils import Files, Folders
from benchmark.Arguments import Arguments
def main(args_test=None):
arguments = Arguments(prog="be_init_project")
arguments.add_argument("project_name", help="Project name")
args = arguments.parse(args_test)
folders = []
folders.append(args.project_name)
folders.append(os.path.join(args.project_name, Folders.results))
folders.append(os.path.join(args.project_name, Folders.hidden_results))
folders.append(os.path.join(args.project_name, Folders.exreport))
folders.append(os.path.join(args.project_name, Folders.report))
folders.append(os.path.join(args.project_name, Folders.img))
folders.append(os.path.join(args.project_name, Folders.excel))
folders.append(os.path.join(args.project_name, Folders.sql))
try:
for folder in folders:
print(f"Creating folder {folder}")
os.makedirs(folder)
except FileExistsError as e:
print(e)
exit(1)
env_src = os.path.join(Folders.src(), "..", f"{Files.dot_env}.dist")
env_to = os.path.join(args.project_name, Files.dot_env)
os.system(f"cp {env_src} {env_to}")
print("Done!")
print(
"Please, edit .env file with your settings and add a datasets folder"
)
print("with an all.txt file with the datasets you want to use.")
print("In that folder you have to include all the datasets you'll use.")

View File

@@ -1,19 +1,21 @@
#! /usr/bin/env python
import os
from benchmark.Results import Summary
from benchmark.Utils import Folders
from benchmark.ResultsBase import Summary
from benchmark.Utils import Files, Folders
from benchmark.Arguments import Arguments
from benchmark.Manager import Manage
"""List experiments of a model
"""
def main(args_test=None):
arguments = Arguments()
arguments = Arguments(prog="be_list")
arguments.xset("number").xset("model", required=False).xset("key")
arguments.xset("hidden").xset("nan").xset("score", required=False)
arguments.xset("score", required=False).xset("compare").xset("hidden")
arguments.xset("nan")
args = arguments.parse(args_test)
data = Summary(hidden=args.hidden)
data = Summary(hidden=args.hidden, compare=args.compare)
data.acquire()
try:
data.list_results(
@@ -21,33 +23,14 @@ def main(args_test=None):
model=args.model,
sort_key=args.key,
number=args.number,
nan=args.nan,
)
except ValueError as e:
print(e)
else:
if args.nan:
results_nan = []
results = data.get_results_criteria(
score=args.score,
model=args.model,
input_data=None,
sort_key=args.key,
number=args.number,
)
for result in results:
if result["metric"] != result["metric"]:
results_nan.append(result)
if results_nan != []:
print(
"\n"
+ "*" * 30
+ " Results with nan moved to hidden "
+ "*" * 30
)
data.list_results(input_data=results_nan)
for result in results_nan:
name = result["file"]
os.rename(
os.path.join(Folders.results, name),
os.path.join(Folders.hidden_results, name),
)
return
manager = Manage(data)
excel_generated = manager.manage_results()
if excel_generated:
name = os.path.join(Folders.excel, Files.be_list_excel)
print(f"Generated file: {name}")
Files.open(name, test=args_test is not None)

View File

@@ -10,28 +10,39 @@ from benchmark.Arguments import Arguments
def main(args_test=None):
arguments = Arguments()
arguments = Arguments(prog="be_main")
arguments.xset("stratified").xset("score").xset("model", mandatory=True)
arguments.xset("n_folds").xset("platform").xset("quiet").xset("title")
arguments.xset("hyperparameters").xset("paramfile").xset("report")
arguments.xset("grid_paramfile").xset("dataset")
arguments.xset("report").xset("ignore_nan").xset("discretize")
arguments.xset("fit_features")
arguments.add_exclusive(
["grid_paramfile", "best_paramfile", "hyperparameters"]
)
arguments.xset(
"dataset", overrides="title", const="Test with only one dataset"
)
args = arguments.parse(args_test)
report = args.report or args.dataset is not None
if args.grid_paramfile:
args.paramfile = False
args.best_paramfile = False
try:
job = Experiment(
score_name=args.score,
model_name=args.model,
stratified=args.stratified,
datasets=Datasets(dataset_name=args.dataset),
datasets=Datasets(
dataset_name=args.dataset, discretize=args.discretize
),
hyperparams_dict=args.hyperparameters,
hyperparams_file=args.paramfile,
hyperparams_file=args.best_paramfile,
grid_paramfile=args.grid_paramfile,
progress_bar=not args.quiet,
platform=args.platform,
ignore_nan=args.ignore_nan,
title=args.title,
folds=args.n_folds,
fit_features=args.fit_features,
discretize=args.discretize,
)
job.do_experiment()
except ValueError as e:

View File

@@ -1,46 +1,88 @@
#!/usr/bin/env python
from benchmark.Results import Report, Excel, SQL, ReportBest, ReportDatasets
from benchmark.Utils import Files
import os
from benchmark.Results import Report, ReportBest
from benchmark.ResultsFiles import Excel, SQLFile, ReportDatasets
from benchmark.Utils import Files, Folders
from benchmark.Arguments import Arguments
from pathlib import Path
"""Build report on screen of a result file, optionally generate excel and sql
file, and can compare results of report with best results obtained by model
file, and can compare results of report wibth best results obtained by model
If no argument is set, displays the datasets and its characteristics
"""
def main(args_test=None):
arguments = Arguments()
arguments.xset("file").xset("excel").xset("sql").xset("compare")
arguments.xset("best").xset("grid").xset("model", required=False)
arguments.xset("score", required=False)
is_test = args_test is not None
arguments = Arguments(prog="be_report")
arguments.add_subparser()
arguments.add_subparsers_options(
(
"best",
"Report best results obtained by any model/score. "
"See be_build_best",
),
[
("model", dict(required=False)),
("score", dict(required=False)),
],
)
arguments.add_subparsers_options(
(
"grid",
"Report grid results obtained by any model/score. "
"See be_build_grid",
),
[
("model", dict(required=False)),
("score", dict(required=False)),
],
)
arguments.add_subparsers_options(
("file", "Report file results"),
[
("file_name", {}),
("excel", {}),
("sql", {}),
("compare", {}),
],
)
arguments.add_subparsers_options(
("datasets", "Report datasets information"),
[
("excel", {}),
],
)
args = arguments.parse(args_test)
if args.best:
args.grid = None
if args.grid:
args.best = None
if args.file is None and args.best is None and args.grid is None:
ReportDatasets.report()
else:
if args.best is not None or args.grid is not None:
report = ReportBest(args.score, args.model, args.best, args.grid)
match args.subcommand:
case "best" | "grid":
best = args.subcommand == "best"
report = ReportBest(args.score, args.model, best)
report.report()
else:
case "file":
try:
report = Report(args.file, args.compare)
report = Report(args.file_name, args.compare)
report.report()
except FileNotFoundError as e:
print(e)
else:
report.report()
return
if args.sql:
sql = SQLFile(args.file_name)
sql.report()
if args.excel:
excel = Excel(
file_name=args.file,
file_name=Path(args.file_name).name,
compare=args.compare,
)
excel.report()
is_test = args_test is not None
Files.open(excel.get_file_name(), is_test)
if args.sql:
sql = SQL(args.file)
sql.report()
Files.open(
os.path.join(Folders.excel, excel.get_file_name()), is_test
)
case "datasets":
report = ReportDatasets(args.excel)
report.report()
if args.excel:
Files.open(report.get_file_name(), is_test)
case _:
arguments.print_help()
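With the flat flags replaced by subcommands, the `args_test` hook doubles as a compact way to show the new interface. A sketch, assuming the module lives at `benchmark.scripts.be_report`; the result file path is the one from the README example:

```python
from benchmark.scripts.be_report import main

main(args_test=["datasets"])   # dataset list (the old bare `be_report`)
main(args_test=["best"])       # best results, see be_build_best
main(args_test=["file",
                "results/results_STree_iMac27_2021-09-22_17:13:02.json"])
```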

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env python
from benchmark.Results import Summary
from benchmark.ResultsBase import Summary
from benchmark.Arguments import ALL_METRICS, Arguments

View File

@@ -0,0 +1,2 @@
OUTPUT="local"
FRAMEWORK="bulma"

View File

Binary file not shown.

View File

@@ -0,0 +1,39 @@
#!/usr/bin/env python
from flask import Flask
from flask_bootstrap import Bootstrap5
from flask_login import LoginManager
from .config import Config
from .models import User, db
from .results.main import results
from .main import main
bootstrap = Bootstrap5()
login_manager = LoginManager()
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
def make_shell_context():
return {"db": db, "User": User}
def create_app():
app = Flask(__name__)
bootstrap.init_app(app)
# app.register_blueprint(results)
app.config.from_object(Config)
db.init_app(app)
login_manager.init_app(app)
login_manager.login_view = "main.login"
app.jinja_env.auto_reload = True
app.register_blueprint(results, url_prefix="/results")
app.register_blueprint(main)
app.shell_context_processor(make_shell_context)
with app.app_context():
db.create_all()
return app

View File

@@ -0,0 +1,17 @@
import os
from dotenv import load_dotenv
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, ".env"))
class Config(object):
FRAMEWORKS = ["bootstrap", "bulma"]
FRAMEWORK = os.environ.get("FRAMEWORK") or FRAMEWORKS[0]
OUTPUT = os.environ.get("OUTPUT") or "local" # local or docker
TEMPLATES_AUTO_RELOAD = True
SECRET_KEY = os.environ.get("SECRET_KEY") or "really-hard-to-guess-key"
SQLALCHEMY_DATABASE_URI = os.environ.get(
"DATABASE_URL"
) or "sqlite:///" + os.path.join(basedir, "app.db")
SQLALCHEMY_TRACK_MODIFICATIONS = False
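A quick illustration of the precedence these settings implement: a variable already present in the environment wins over both the .env file (load_dotenv does not override existing variables by default) and the hardcoded fallback. The import path below is an assumption:

import os

os.environ["FRAMEWORK"] = "bulma"  # simulate a shell export before startup

from app.config import Config  # assumed import path

assert Config.FRAMEWORK == "bulma"  # environment beats the fallback
assert Config.FRAMEWORKS == ["bootstrap", "bulma"]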

View File

@@ -0,0 +1,22 @@
from flask_wtf import FlaskForm
from wtforms import (
StringField,
PasswordField,
BooleanField,
SubmitField,
)
from wtforms.validators import (
DataRequired,
Length,
)
class LoginForm(FlaskForm):
username = StringField(
"Username", validators=[DataRequired(), Length(1, 20)]
)
password = PasswordField(
"Password", validators=[DataRequired(), Length(4, 150)]
)
remember_me = BooleanField("Remember me")
submit = SubmitField()

View File

@@ -0,0 +1,51 @@
from flask import (
Blueprint,
render_template,
url_for,
flash,
redirect,
request,
)
from flask_login import login_user, current_user, logout_user, login_required
from werkzeug.urls import url_parse
from .forms import LoginForm
from .models import User
main = Blueprint("main", __name__)
@main.route("/")
@main.route("/index")
def index():
return render_template("index.html")
@main.route("/config")
@login_required
def config():
return render_template("config.html")
@main.route("/login", methods=["GET", "POST"])
def login():
if current_user.is_authenticated:
return redirect(url_for("main.index"))
form = LoginForm()
if form.validate_on_submit():
user = User.query.filter_by(username=form.username.data).first()
if user is None or not user.check_password(form.password.data):
flash("Invalid username or password")
return redirect(url_for("main.login"))
login_user(user, remember=form.remember_me.data)
flash("Logged in successfully.")
next_page = request.args.get("next")
if not next_page or url_parse(next_page).netloc != "":
next_page = url_for("main.index")
return redirect(next_page)
return render_template("login.html", title="Sign In", form=form)
@main.route("/logout")
def logout():
logout_user()
return redirect(url_for("main.index"))
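The netloc check in the login view is the standard guard against open redirects: only relative `next` targets are honored. A small illustration using the same url_parse helper the view imports (note that this helper was removed in newer Werkzeug releases):

from werkzeug.urls import url_parse

assert url_parse("/results/select").netloc == ""         # relative: allowed
assert url_parse("https://evil.example/x").netloc != ""  # absolute: rejected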

View File

@@ -0,0 +1,29 @@
from hashlib import md5
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Column, Integer, String
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash
db = SQLAlchemy()
class User(UserMixin, db.Model):
id = Column(Integer, primary_key=True)
username = Column(String(64), index=True, unique=True)
email = Column(String(120), index=True, unique=True)
password_hash = Column(String(128))
def __repr__(self):
return "<User {} {}>".format(self.username, self.email)
def set_password(self, password):
self.password_hash = generate_password_hash(password)
def check_password(self, password):
return check_password_hash(self.password_hash, password)
def avatar(self, size):
digest = md5(self.email.lower().encode("utf-8")).hexdigest()
return "https://www.gravatar.com/avatar/{}?d=identicon&s={}".format(
digest, size
)
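`db.create_all()` in the factory creates the tables, but no registration view is shown, so the first account presumably has to be created by hand. A sketch from a Flask shell; the package layout and every literal are assumptions:

from app import create_app  # assumed package name
from app.models import db, User

app = create_app()
with app.app_context():
    admin = User(username="admin", email="admin@example.com")
    admin.set_password("change-me")  # stores only the salted hash
    db.session.add(admin)
    db.session.commit()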

View File

@@ -0,0 +1,46 @@
import os
import json
import shutil
import xlsxwriter
from benchmark.Utils import Files, Folders
from benchmark.Arguments import EnvData
from benchmark.ResultsBase import StubReport
from benchmark.ResultsFiles import Excel, ReportDatasets
from benchmark.Datasets import Datasets
from flask import (
    Blueprint,
    current_app,
    redirect,
    render_template,
    request,
    send_file,
    url_for,
)
from flask_login import login_required
results = Blueprint("results", __name__, template_folder="templates")
@results.route("/select")
@login_required
def select(compare="False"):
# Get a list of files in a directory
files = {}
names = Files.get_all_results(hidden=False)
for name in names:
report = StubReport(os.path.join(Folders.results, name))
report.report()
files[name] = {
"duration": report.duration,
"score": report.score,
"title": report.title,
}
candidate = current_app.config["FRAMEWORKS"].copy()
candidate.remove(current_app.config["FRAMEWORK"])
return render_template(
"select.html",
files=files,
candidate=candidate[0],
framework=current_app.config["FRAMEWORK"],
compare=compare.capitalize() == "True",
)
return render_template("test.html")
@results.route("/datasets")
@login_required
def datasets(compare="False"):
return render_template("test.html")

View File

@@ -0,0 +1,212 @@
#!/usr/bin/env python
# import os
# import json
# import shutil
# import xlsxwriter
# from benchmark.Utils import Files, Folders
# from benchmark.Arguments import EnvData
# from benchmark.ResultsBase import StubReport
# from benchmark.ResultsFiles import Excel, ReportDatasets
# from benchmark.Datasets import Datasets
# from flask import Blueprint, current_app, send_file
# from flask import render_template, request, redirect, url_for
from flask import Blueprint, render_template
results = Blueprint("results", __name__, template_folder="results")
# FRAMEWORK = "framework"
# FRAMEWORKS = "frameworks"
# OUTPUT = "output"
# TEST = "test"
# class AjaxResponse:
# def __init__(self, success, file_name, code=200):
# self.success = success
# self.file_name = file_name
# self.code = code
# def to_string(self):
# return (
# json.dumps(
# {
# "success": self.success,
# "file": self.file_name,
# "output": current_app.config[OUTPUT],
# }
# ),
# self.code,
# {"ContentType": "application/json"},
# )
# def process_data(file_name, compare, data):
# report = StubReport(
# os.path.join(Folders.results, file_name), compare=compare
# )
# new_list = []
# for result in data["results"]:
# symbol = report._compute_status(result["dataset"], result["score"])
# result["symbol"] = symbol if symbol != " " else "&nbsp;"
# new_list.append(result)
# data["results"] = new_list
# # Compute summary with explanation of symbols
# summary = {}
# for key, value in report._compare_totals.items():
# summary[key] = (report._status_meaning(key), value)
# return summary
@results.route("/results/<compare>")
def results(compare="False"):
# # Get a list of files in a directory
# files = {}
# names = Files.get_all_results(hidden=False)
# for name in names:
# report = StubReport(os.path.join(Folders.results, name))
# report.report()
# files[name] = {
# "duration": report.duration,
# "score": report.score,
# "title": report.title,
# }
# candidate = current_app.config[FRAMEWORKS].copy()
# candidate.remove(current_app.config[FRAMEWORK])
# return render_template(
# "select.html",
# files=files,
# candidate=candidate[0],
# framework=current_app.config[FRAMEWORK],
# compare=compare.capitalize() == "True",
# )
return render_template("test.html")
"""
@results.route("/datasets/<compare>")
@results.route("datasets")
def datasets(compare=False):
dt = Datasets()
datos = []
for dataset in dt:
datos.append(dt.get_attributes(dataset))
return render_template(
"datasets.html",
datasets=datos,
compare=compare,
framework=current_app.config[FRAMEWORK],
)
@results.route("/showfile/<file_name>/<compare>")
def showfile(file_name, compare, back=None):
compare = compare.capitalize() == "True"
back = request.args["url"] if back is None else back
print(f"back [{back}]")
with open(os.path.join(Folders.results, file_name)) as f:
data = json.load(f)
try:
summary = process_data(file_name, compare, data)
except Exception as e:
return render_template("error.html", message=str(e), compare=compare)
return render_template(
"report.html",
data=data,
file=file_name,
summary=summary,
framework=current_app.config[FRAMEWORK],
back=back,
)
@results.route("/show", methods=["post"])
def show():
selected_file = request.form["selected-file"]
compare = request.form["compare"]
return showfile(
file_name=selected_file,
compare=compare,
back=url_for(
"main.index", compare=compare, output=current_app.config[OUTPUT]
),
)
@results.route("/excel", methods=["post"])
def excel():
selected_files = request.json["selectedFiles"]
compare = request.json["compare"]
book = None
if selected_files[0] == "datasets":
# Create a list of datasets
report = ReportDatasets(excel=True, output=False)
report.report()
excel_name = os.path.join(Folders.excel, Files.datasets_report_excel)
if current_app.config[OUTPUT] == "local":
Files.open(excel_name, test=current_app.config[TEST])
return AjaxResponse(True, Files.datasets_report_excel).to_string()
try:
for file_name in selected_files:
file_name_result = os.path.join(Folders.results, file_name)
if book is None:
file_excel = os.path.join(Folders.excel, Files.be_list_excel)
book = xlsxwriter.Workbook(
file_excel, {"nan_inf_to_errors": True}
)
excel = Excel(
file_name=file_name_result,
book=book,
compare=compare,
)
excel.report()
except Exception as e:
if book is not None:
book.close()
return AjaxResponse(
False, "Could not create excel file, " + str(e)
).to_string()
if book is not None:
book.close()
if current_app.config[OUTPUT] == "local":
Files.open(file_excel, test=current_app.config[TEST])
return AjaxResponse(True, Files.be_list_excel).to_string()
@results.route("/download/<file_name>")
def download(file_name):
src = os.path.join(Folders.current, Folders.excel, file_name)
dest = os.path.join(
Folders.src(), "scripts", "app", "static", "excel", file_name
)
shutil.copyfile(src, dest)
return send_file(dest, as_attachment=True)
@results.route("/config/<framework>/<compare>")
def config(framework, compare):
if framework not in current_app.config[FRAMEWORKS]:
message = f"framework {framework} not supported"
return render_template("error.html", message=message)
env = EnvData()
env.load()
env.args[FRAMEWORK] = framework
env.save()
current_app.config[FRAMEWORK] = framework
return redirect(url_for("main.index", compare=compare))
@results.route("/best_results/<file>/<compare>")
def best_results(file, compare):
compare = compare.capitalize() == "True"
try:
with open(os.path.join(Folders.results, file)) as f:
data = json.load(f)
except Exception as e:
return render_template("error.html", message=str(e), compare=compare)
return render_template(
"report_best.html",
data=data,
compare=compare,
framework=current_app.config[FRAMEWORK],
)
"""

View File

@@ -0,0 +1,50 @@
{%- macro get_button_tag(icon_name, method, visible=True, name="") -%}
<button class="btn btn-primary btn-small" onclick="{{ method }}" {{ "" if visible else "hidden='true'" }} {{ "" if name=="" else "name='" + name +"'"}}><i class="mdi mdi-{{ icon_name }}"></i>
</button>
{%- endmacro -%}
<table id="file-table"
class="table table-striped table-hover table-bordered">
<thead>
<tr>
<th>Model</th>
<th>Metric</th>
<th>Platform</th>
<th>Date</th>
<th>Time</th>
<th>Stratified</th>
<th>Title</th>
<th>Score</th>
<th>
<button class="btn btn-primary btn-small btn-danger"
onclick="setCheckBoxes(false)">
<i class="mdi mdi-checkbox-multiple-blank"></i>
</button>
<button class="btn btn-primary btn-small btn-success"
onclick="setCheckBoxes(true)">
<i class="mdi mdi-checkbox-multiple-marked"></i>
</button>
</th>
</tr>
</thead>
<tbody>
{% for file, data in files.items() %}
{% set parts = file.split('_') %}
{% set stratified = parts[6].split('.')[0] %}
<tr id="{{ file }}">
<td>{{ parts[2] }}</td>
<td>{{ parts[1] }}</td>
<td>{{ parts[3] }}</td>
<td>{{ parts[4] }}</td>
<td>{{ parts[5] }}</td>
<td>{{ 'True' if stratified == '1' else 'False' }}</td>
<td>{{ data["title"] }}</td>
<td class="text-end">{{ "%.6f" % data["score"] }}</td>
<td>
{{ get_button_tag("table-eye", "showFile('" ~ file ~ "') ") | safe }}
{% set file_best = "best_results_" ~ parts[1] ~ "_" ~ parts[2] ~ ".json" %}
{{ get_button_tag("star-circle-outline", "redirectDouble('best_results', '" ~ file_best ~ "') ", visible=False, name="best_buttons") | safe }}
<input type="checkbox" name="selected_files" value="{{ file }}" />
</td>
</tr>
{% endfor %}
</tbody>
</table>
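Every column in the table above is sliced positionally out of the result file name. A hedged Python illustration of the convention those indices imply, using a file name that appears elsewhere in this diff:

name = "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"
parts = name.split("_")
# parts[1] metric, parts[2] model, parts[3] platform,
# parts[4] date, parts[5] time, parts[6] stratified flag plus extension
stratified = parts[6].split(".")[0] == "1"
print(parts[2], parts[1], parts[3], parts[4], parts[5], stratified)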

View File

@@ -0,0 +1,9 @@
{% extends "base.html" %}
{% block content %}
{% include "_table_select.html" %}
{% endblock %}
{% block jscript %}
{{ super() }}
<script src="https://cdn.datatables.net/1.10.25/js/jquery.dataTables.min.js"></script>
<script src="{{ url_for('static', filename="js/select.js") }}"></script>
{% endblock %}

View File

@@ -0,0 +1,51 @@
.alternate-font {
font-family: Arial;
}
tbody {
font-family: Courier;
}
.tag {
cursor: pointer;
}
.ajaxLoading {
cursor: progress !important;
}
#file-table tbody tr.selected td {
background-color: #0dcaf0;
color: white;
}
#report-table tbody tr.selected td {
background-color: #0dcaf0;
color: white;
}
.btn-small {
padding: 0.25rem 0.5rem;
font-size: 0.75rem;
}
body {
padding-bottom: 20px;
}
.navbar {
margin-bottom: 20px;
}
pre {
background: #ddd;
padding: 10px;
}
h2 {
margin-top: 20px;
}
footer {
margin: 20px;
}

View File

@@ -0,0 +1,29 @@
function excelFiles(selectedFiles, compare) {
var data = {
"selectedFiles": selectedFiles,
"compare": compare
};
// send data to server with ajax post
$.ajax({
type:'POST',
url:'/excel',
data: JSON.stringify(data),
contentType: "application/json",
dataType: 'json',
success: function(data){
if (data.success) {
if (data.output == "local") {
alert("Se ha generado el archivo " + data.file);
} else {
window.open('/download/' + data.file, "_blank");
}
} else {
alert(data.file);
}
},
error: function (xhr, ajaxOptions, thrownError) {
    var message = "Undetermined error";
    try {
        // The error body is only JSON when the server built it; guard the parse.
        message = JSON.parse(xhr.responseText).message || message;
    } catch (e) {
        // Non-JSON error body (e.g. an HTML error page): keep the default.
    }
    alert(message);
}
});
}

View File

@@ -0,0 +1,97 @@
$(document).ready(function () {
var table = $("#file-table").DataTable({
paging: true,
searching: true,
ordering: true,
info: true,
"select.items": "row",
pageLength: 25,
columnDefs: [
{
targets: 8,
orderable: false,
},
],
//"language": {
// "lengthMenu": "_MENU_"
//}
});
$('#file-table').on( 'draw.dt', function () {
enable_disable_best_buttons();
} );
// Check if row is selected
$("#file-table tbody").on("click", "tr", function () {
if ($(this).hasClass("selected")) {
$(this).removeClass("selected");
} else {
table
.$("tr.selected")
.removeClass("selected");
$(this).addClass("selected");
}
});
// Show file with doubleclick
$("#file-table tbody").on("dblclick", "tr", function () {
showFile($(this).attr("id"));
});
$(document).ajaxStart(function () {
$("body").addClass("ajaxLoading");
});
$(document).ajaxStop(function () {
$("body").removeClass("ajaxLoading");
});
$('#compare').change(function() {
enable_disable_best_buttons();
});
enable_disable_best_buttons();
});
function enable_disable_best_buttons(){
if ($('#compare').is(':checked')) {
$("[name='best_buttons']").addClass("tag is-link is-normal");
$("[name='best_buttons']").removeAttr("hidden");
} else {
$("[name='best_buttons']").removeClass("tag is-link is-normal");
$("[name='best_buttons']").attr("hidden", true);
}
}
function showFile(selectedFile) {
var form = $(
'<form action="/show" method="post">' +
'<input type="hidden" name="selected-file" value="' +
selectedFile +
'" />' +
'<input type="hidden" name="compare" value=' +
$("#compare").is(":checked") +
" />" +
"</form>"
);
$("body").append(form);
form.submit();
}
function excel() {
var checkbox = document.getElementsByName("selected_files");
var selectedFiles = [];
for (var i = 0; i < checkbox.length; i++) {
if (checkbox[i].checked) {
selectedFiles.push(checkbox[i].value);
}
}
if (selectedFiles.length == 0) {
alert("Select at least one file");
return;
}
var compare = $("#compare").is(":checked");
excelFiles(selectedFiles, compare);
}
function setCheckBoxes(value) {
var checkbox = document.getElementsByName("selected_files");
for (var i = 0; i < checkbox.length; i++) {
checkbox[i].checked = value;
}
}
function redirectDouble(route, parameter) {
location.href = "/"+ route + "/" + parameter + "/" + $("#compare").is(":checked");
}
function redirectSimple(route) {
location.href = "/" + route + "/" + $("#compare").is(":checked");
}

View File

@@ -0,0 +1,30 @@
{% from 'bootstrap5/nav.html' import render_nav_item %}
<nav class="navbar navbar-expand-sm navbar-light bg-light mb-4 justify-content-end">
<div class="container">
<button class="navbar-toggler"
type="button"
data-bs-toggle="collapse"
data-bs-target="#navbarSupportedContent"
aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<!-- Left side of navbar -->
<ul class="navbar-nav me-auto">
{{ render_nav_item('main.index', 'Home') }}
</ul>
<ul class="navbar-nav justify-content-end">
{{ render_nav_item('results.select', 'Results') }}
{{ render_nav_item('results.datasets', 'Datasets') }}
{{ render_nav_item('main.config', 'Config') }}
{% if current_user.is_authenticated %}
{{ render_nav_item('main.logout', 'Logout') }}
{% else %}
{{ render_nav_item('main.login', 'Login') }}
{% endif %}
</ul>
</div>
</div>
</nav>

View File

@@ -0,0 +1,27 @@
<!DOCTYPE html>
<html lang="en">
<head>
{% block head %}
<meta charset="utf-8">
<meta name="viewport"
content="width=device-width, initial-scale=1, shrink-to-fit=no">
{% block styles %}{{ bootstrap.load_css() }}{% endblock %}
<title>Benchmark</title>
{% endblock %}
</head>
<body>
{% include "_nav.html" %}
{% with messages = get_flashed_messages() %}
{% if messages %}
{% for message in messages %}<div class="alert alert-info" role="alert">{{ message }}</div>{% endfor %}
{% endif %}
{% endwith %}
<div class="container">
{% block content %}{% endblock %}
</div>
</body>
{% block jscript %}
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
{{ bootstrap.load_js() }}
{% endblock %}
</html>

View File

@@ -0,0 +1,5 @@
{% extends "base.html" %}
{% block content %}
<h1>Home</h1>
<p>Welcome to the home page!</p>
{% endblock content %}

View File

@@ -0,0 +1,5 @@
{% extends "base.html" %}
{% block content %}
<h1>My First Heading</h1>
<p>My first paragraph.</p>
{% endblock %}

View File

@@ -0,0 +1,6 @@
{% extends 'base.html' %}
{% from 'bootstrap5/form.html' import render_form %}
{% block content %}
<h2>Login</h2>
{{ render_form(form) }}
{% endblock content %}

View File

@@ -5,3 +5,10 @@ model=ODTE
stratified=0
# Source of data Tanveer/Surcov
source_data=Tanveer
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
nodes=Nodes
leaves=Leaves
depth=Depth
fit_features=0
margin=0.1
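These .env additions are consumed as plain strings (see the EnvData expectations in the tests further down, where every value, including margin=0.1, arrives as a string). A minimal sketch of that kind of key=value parsing, assuming "#" starts a comment line:

def load_env(path=".env"):
    """Parse simple KEY=value lines into a dict of strings."""
    data = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            data[key] = value
    return data

# e.g. load_env()["margin"] == "0.1"  (a string, not a float)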

View File

@@ -4,3 +4,10 @@ n_folds=5
model=ODTE
stratified=0
source_data=Arff
seeds=[271, 314, 171]
discretize=1
nodes=Nodes
leaves=Leaves
depth=Depth
fit_features=1
margin=0.1

View File

@@ -5,3 +5,10 @@ model=ODTE
stratified=0
# Source of data Tanveer/Surcov
source_data=Tanveer
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
nodes=Nodes
leaves=Leaves
depth=Depth
fit_features=0
margin=0.1

View File

@@ -5,3 +5,10 @@ model=ODTE
stratified=0
# Source of data Tanveer/Surcov
source_data=Surcov
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
nodes=Nodes
leaves=Leaves
depth=Depth
fit_features=0
margin=0.1

benchmark/tests/.gitignore
View File

@@ -0,0 +1,2 @@
ReportDatasets.xlsx
some_results.xlsx

View File

@@ -24,13 +24,11 @@ class ArgumentsTest(TestBase):
def test_parameters(self):
expected_parameters = {
"best": ("-b", "--best"),
"best_paramfile": ("-b", "--best_paramfile"),
"color": ("-c", "--color"),
"compare": ("-c", "--compare"),
"dataset": ("-d", "--dataset"),
"excel": ("-x", "--excel"),
"file": ("-f", "--file"),
"grid": ("-g", "--grid"),
"grid_paramfile": ("-g", "--grid_paramfile"),
"hidden": ("--hidden",),
"hyperparameters": ("-p", "--hyperparameters"),
@@ -42,7 +40,6 @@ class ArgumentsTest(TestBase):
"nan": ("--nan",),
"number": ("-n", "--number"),
"n_folds": ("-n", "--n_folds"),
"paramfile": ("-f", "--paramfile"),
"platform": ("-P", "--platform"),
"quiet": ("-q", "--quiet"),
"report": ("-r", "--report"),
@@ -98,3 +95,27 @@ class ArgumentsTest(TestBase):
finally:
os.chdir(path)
self.assertEqual(stderr.getvalue(), f"{NO_ENV}\n")
@patch("sys.stderr", new_callable=StringIO)
def test_overrides(self, stderr):
arguments = self.build_args()
arguments.xset("title")
arguments.xset("dataset", overrides="title", const="sample text")
test_args = ["-n", "3", "-m", "SVC", "-k", "1", "-d", "dataset"]
args = arguments.parse(test_args)
self.assertEqual(stderr.getvalue(), "")
self.assertEqual(args.title, "sample text")
@patch("sys.stderr", new_callable=StringIO)
def test_overrides_no_args(self, stderr):
arguments = self.build_args()
arguments.xset("title")
arguments.xset("dataset", overrides="title", const="sample text")
test_args = None
with self.assertRaises(SystemExit):
arguments.parse(test_args)
self.assertRegex(
stderr.getvalue(),
r"error: the following arguments are required: -m/--model, "
"-k/--key, --title",
)

View File

@@ -4,7 +4,8 @@ from unittest.mock import patch
from openpyxl import load_workbook
from .TestBase import TestBase
from ..Utils import Folders, Files, NO_RESULTS
from ..Results import Benchmark
from ..ResultsFiles import Benchmark
from .._version import __version__
class BenchmarkTest(TestBase):
@@ -14,10 +15,10 @@ class BenchmarkTest(TestBase):
files.append(Files.exreport(score))
files.append(Files.exreport_output(score))
files.append(Files.exreport_err(score))
files.append(Files.exreport_excel(score))
files.append(Files.exreport_pdf)
files.append(Files.tex_output("accuracy"))
self.remove_files(files, Folders.exreport)
self.remove_files([Files.exreport_excel("accuracy")], Folders.excel)
self.remove_files(files, ".")
return super().tearDown()
@@ -98,9 +99,16 @@ class BenchmarkTest(TestBase):
benchmark.excel()
file_name = benchmark.get_excel_file_name()
book = load_workbook(file_name)
replace = None
with_this = None
for sheet_name in book.sheetnames:
sheet = book[sheet_name]
self.check_excel_sheet(sheet, f"exreport_excel_{sheet_name}")
# ExcelTest.generate_excel_sheet(
# self, sheet, f"exreport_excel_{sheet_name}"
# )
if sheet_name == "Datasets":
replace = self.benchmark_version
with_this = __version__
self.check_excel_sheet(
sheet,
f"exreport_excel_{sheet_name}",
replace=replace,
with_this=with_this,
)

View File

@@ -18,7 +18,7 @@ class BestResultTest(TestBase):
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"max_iter": 10000,
"multiclass_strategy": "ovr",
},
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",

View File

@@ -1,4 +1,3 @@
import shutil
from .TestBase import TestBase
from ..Experiments import Randomized
from ..Datasets import Datasets
@@ -17,13 +16,27 @@ class DatasetTest(TestBase):
self.set_env(".env.dist")
return super().tearDown()
@staticmethod
def set_env(env):
shutil.copy(env, ".env")
def test_Randomized(self):
expected = [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
self.assertSequenceEqual(Randomized.seeds, expected)
self.assertSequenceEqual(Randomized.seeds(), expected)
def test_Randomized_3_seeds(self):
self.set_env(".env.arff")
expected = [271, 314, 171]
self.assertSequenceEqual(Randomized.seeds(), expected)
def test_load_dataframe(self):
self.set_env(".env.arff")
dt = Datasets()
X, y = dt.load("iris", dataframe=False)
dataset = dt.load("iris", dataframe=True)
class_name = dt.get_class_name()
features = dt.get_features()
self.assertListEqual(y.tolist(), dataset[class_name].tolist())
for i in range(len(features)):
self.assertListEqual(
X[:, i].tolist(), dataset[features[i]].tolist()
)
def test_Datasets_iterator(self):
test = {

View File

@@ -2,8 +2,8 @@ import os
from openpyxl import load_workbook
from xlsxwriter import Workbook
from .TestBase import TestBase
from ..Results import Excel
from ..Utils import Folders
from ..ResultsFiles import Excel
from ..Utils import Folders, Files
class ExcelTest(TestBase):
@@ -13,7 +13,7 @@ class ExcelTest(TestBase):
"results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.xlsx",
"results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.xlsx",
]
self.remove_files(files, Folders.results)
self.remove_files(files, Folders.excel)
return super().tearDown()
def test_report_excel_compared(self):
@@ -21,7 +21,7 @@ class ExcelTest(TestBase):
report = Excel(file_name, compare=True)
report.report()
file_output = report.get_file_name()
book = load_workbook(file_output)
book = load_workbook(os.path.join(Folders.excel, file_output))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel_compared")
@@ -30,14 +30,14 @@ class ExcelTest(TestBase):
report = Excel(file_name, compare=False)
report.report()
file_output = report.get_file_name()
book = load_workbook(file_output)
book = load_workbook(os.path.join(Folders.excel, file_output))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel")
def test_Excel_Add_sheet(self):
file_name = "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"
excel_file_name = file_name.replace(".json", ".xlsx")
book = Workbook(os.path.join(Folders.results, excel_file_name))
excel_file_name = file_name.replace(Files.report_ext, ".xlsx")
book = Workbook(os.path.join(Folders.excel, excel_file_name))
excel = Excel(file_name=file_name, book=book)
excel.report()
report = Excel(
@@ -46,7 +46,7 @@ class ExcelTest(TestBase):
)
report.report()
book.close()
book = load_workbook(os.path.join(Folders.results, excel_file_name))
book = load_workbook(os.path.join(Folders.excel, excel_file_name))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel_add_STree")
sheet = book["ODTE"]

View File

@@ -1,4 +1,6 @@
import json
from io import StringIO
from unittest.mock import patch
from .TestBase import TestBase
from ..Experiments import Experiment
from ..Datasets import Datasets
@@ -8,10 +10,12 @@ class ExperimentTest(TestBase):
def setUp(self):
self.exp = self.build_exp()
def build_exp(self, hyperparams=False, grid=False):
def build_exp(
self, hyperparams=False, grid=False, model="STree", ignore_nan=False
):
params = {
"score_name": "accuracy",
"model_name": "STree",
"model_name": model,
"stratified": "0",
"datasets": Datasets(),
"hyperparams_dict": "{}",
@@ -21,6 +25,7 @@ class ExperimentTest(TestBase):
"title": "Test",
"progress_bar": False,
"folds": 2,
"ignore_nan": ignore_nan,
}
return Experiment(**params)
@@ -31,6 +36,7 @@ class ExperimentTest(TestBase):
],
".",
)
self.set_env(".env.dist")
return super().tearDown()
def test_build_hyperparams_file(self):
@@ -46,7 +52,7 @@ class ExperimentTest(TestBase):
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"max_iter": 10000,
"multiclass_strategy": "ovr",
},
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
@@ -89,7 +95,7 @@ class ExperimentTest(TestBase):
def test_exception_n_fold_crossval(self):
self.exp.do_experiment()
with self.assertRaises(ValueError):
self.exp._n_fold_crossval([], [], {})
self.exp._n_fold_crossval("", [], [], {})
def test_do_experiment(self):
self.exp.do_experiment()
@@ -131,3 +137,42 @@ class ExperimentTest(TestBase):
):
for key, value in expected_result.items():
self.assertEqual(computed_result[key], value)
def test_build_fit_parameters(self):
self.set_env(".env.arff")
expected = {
"state_names": {
"sepallength": [0, 1, 2],
"sepalwidth": [0, 1, 2, 3, 4, 5],
"petallength": [0, 1, 2, 3],
"petalwidth": [0, 1, 2],
},
"features": [
"sepallength",
"sepalwidth",
"petallength",
"petalwidth",
],
}
exp = self.build_exp(model="TAN")
X, y = exp.datasets.load("iris")
computed = exp._build_fit_params("iris")
for key, value in expected["state_names"].items():
self.assertEqual(computed["state_names"][key], value)
for feature in expected["features"]:
self.assertIn(feature, computed["features"])
# Ask for states of a dataset that does not exist
computed = exp._build_fit_params("not_existing")
self.assertTrue("states" not in computed)
@patch("sys.stdout", new_callable=StringIO)
def test_experiment_with_nan_not_ignored(self, mock_output):
exp = self.build_exp(model="Mock")
self.assertRaises(ValueError, exp.do_experiment)
output_text = mock_output.getvalue().splitlines()
expected = "[ nan 0.8974359]"
self.assertEqual(expected, output_text[0])
def test_experiment_with_nan_ignored(self):
self.exp = self.build_exp(model="Mock", ignore_nan=True)
self.exp.do_experiment()
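The expectations in test_build_fit_parameters suggest that _build_fit_params collects each feature's discrete states plus the feature list for the Bayesian models. A sketch of that shape under that assumption; the function name and the pandas-based implementation are invented, not shown in this diff:

import pandas as pd

def build_fit_params(frame: pd.DataFrame, features: list) -> dict:
    # Mirror the structure asserted above: sorted discrete states per feature.
    return {
        "state_names": {
            feature: sorted(frame[feature].unique().tolist())
            for feature in features
        },
        "features": features,
    }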

View File

@@ -15,6 +15,8 @@ from odte import Odte
from xgboost import XGBClassifier
from .TestBase import TestBase
from ..Models import Models
import xgboost
import sklearn
class ModelTest(TestBase):
@@ -33,22 +35,54 @@ class ModelTest(TestBase):
for key, value in test.items():
self.assertIsInstance(Models.get_model(key), value)
def test_Models_version(self):
def ver_stree():
return "1.2.3"
def ver_wodt():
return "h.j.k"
def ver_odte():
return "4.5.6"
test = {
"STree": [ver_stree, "1.2.3"],
"Wodt": [ver_wodt, "h.j.k"],
"ODTE": [ver_odte, "4.5.6"],
"RandomForest": [None, "7.8.9"],
"BaggingStree": [None, "x.y.z"],
"AdaBoostStree": [None, "w.x.z"],
"XGBoost": [None, "10.11.12"],
}
for key, value in test.items():
clf = Models.get_model(key)
if key in ["STree", "Wodt", "ODTE"]:
clf.version = value[0]
elif key == "XGBoost":
xgboost.__version__ = value[1]
else:
sklearn.__version__ = value[1]
self.assertEqual(Models.get_version(key, clf), value[1])
def test_bogus_Model_Version(self):
self.assertEqual(Models.get_version("unknown", None), "Error")
def test_BaggingStree(self):
clf = Models.get_model("BaggingStree")
self.assertIsInstance(clf, BaggingClassifier)
clf_base = clf.base_estimator
clf_base = clf.estimator
self.assertIsInstance(clf_base, Stree)
def test_BaggingWodt(self):
clf = Models.get_model("BaggingWodt")
self.assertIsInstance(clf, BaggingClassifier)
clf_base = clf.base_estimator
clf_base = clf.estimator
self.assertIsInstance(clf_base, Wodt)
def test_AdaBoostStree(self):
clf = Models.get_model("AdaBoostStree")
self.assertIsInstance(clf, AdaBoostClassifier)
clf_base = clf.base_estimator
clf_base = clf.estimator
self.assertIsInstance(clf_base, Stree)
def test_unknown_classifier(self):

View File

@@ -2,11 +2,17 @@ import os
from io import StringIO
from unittest.mock import patch
from .TestBase import TestBase
from ..Results import Report, BaseReport, ReportBest, ReportDatasets
from ..Results import Report, ReportBest
from ..ResultsFiles import ReportDatasets
from ..ResultsBase import BaseReport
from ..Manager import get_input
from ..Utils import Symbols
class ReportTest(TestBase):
def test_get_input(self):
self.assertEqual(get_input(is_test=True), "test")
def test_BaseReport(self):
with patch.multiple(BaseReport, __abstractmethods__=set()):
file_name = os.path.join(
@@ -60,19 +66,40 @@ class ReportTest(TestBase):
self.assertEqual(res, Symbols.better_best)
res = report._compute_status("balloons", 1.0)
self.assertEqual(res, Symbols.better_best)
report = Report(file_name=file_name)
with patch(self.output, new=StringIO()):
report.report()
res = report._compute_status("balloons", 0.99)
self.assertEqual(res, Symbols.upward_arrow)
report.margin = 0.9
res = report._compute_status("balloons", 0.99)
self.assertEqual(res, Symbols.cross)
def test_reportbase_compute_status(self):
with patch.multiple(BaseReport, __abstractmethods__=set()):
file_name = os.path.join(
"results",
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
)
temp = BaseReport(file_name)
temp.compare = False
temp._compare_totals = {}
temp.score_name = "f1"
res = temp._compute_status("balloons", 0.99)
self.assertEqual(res, " ")
def test_report_file_not_found(self):
with self.assertRaises(FileNotFoundError):
_ = Report("unknown_file")
def test_report_best(self):
report = ReportBest("accuracy", "STree", best=True, grid=False)
report = ReportBest("accuracy", "STree", best=True)
with patch(self.output, new=StringIO()) as stdout:
report.report()
self.check_output_file(stdout, "report_best")
def test_report_grid(self):
report = ReportBest("accuracy", "STree", best=False, grid=True)
report = ReportBest("accuracy", "STree", best=False)
with patch(self.output, new=StringIO()) as stdout:
report.report()
file_name = "report_grid.test"
@@ -81,20 +108,21 @@ class ReportTest(TestBase):
output_text = stdout.getvalue().splitlines()
# Compare replacing STree version
for line, index in zip(expected, range(len(expected))):
if "1.2.4" in line:
if self.stree_version in line:
# replace STree version
line = self.replace_STree_version(line, output_text, index)
self.assertEqual(line, output_text[index])
def test_report_best_both(self):
report = ReportBest("accuracy", "STree", best=True, grid=True)
with patch(self.output, new=StringIO()) as stdout:
report.report()
self.check_output_file(stdout, "report_best")
@patch("sys.stdout", new_callable=StringIO)
def test_report_datasets(self, mock_output):
report = ReportDatasets()
report.report()
self.check_output_file(mock_output, "report_datasets")
file_name = f"report_datasets{self.ext}"
with open(os.path.join(self.test_files, file_name)) as f:
expected = f.read()
output_text = mock_output.getvalue().splitlines()
for line, index in zip(expected.splitlines(), range(len(expected))):
if self.benchmark_version in line:
# replace benchmark version
line = self.replace_benchmark_version(line, output_text, index)
self.assertEqual(line, output_text[index])

View File

@@ -1,7 +1,7 @@
import os
from .TestBase import TestBase
from ..Results import SQL
from ..Utils import Folders
from ..ResultsFiles import SQLFile
from ..Utils import Folders, Files
class SQLTest(TestBase):
@@ -9,14 +9,14 @@ class SQLTest(TestBase):
files = [
"results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
]
self.remove_files(files, Folders.results)
self.remove_files(files, Folders.sql)
return super().tearDown()
def test_report_SQL(self):
file_name = "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json"
report = SQL(file_name)
report = SQLFile(file_name)
report.report()
file_name = os.path.join(
Folders.results, file_name.replace(".json", ".sql")
Folders.sql, file_name.replace(Files.report_ext, ".sql")
)
self.check_file_file(file_name, "sql")

View File

@@ -1,7 +1,7 @@
from io import StringIO
from unittest.mock import patch
from .TestBase import TestBase
from ..Results import Summary
from ..ResultsBase import Summary
from ..Utils import NO_RESULTS

View File

@@ -4,6 +4,7 @@ import pathlib
import sys
import csv
import unittest
import shutil
from importlib import import_module
from io import StringIO
from unittest.mock import patch
@@ -15,8 +16,14 @@ class TestBase(unittest.TestCase):
self.test_files = "test_files"
self.output = "sys.stdout"
self.ext = ".test"
self.benchmark_version = "0.2.0"
self.stree_version = "1.2.4"
super().__init__(*args, **kwargs)
@staticmethod
def set_env(env):
shutil.copy(env, ".env")
def remove_files(self, files, folder):
for file_name in files:
file_name = os.path.join(folder, file_name)
@@ -24,6 +31,7 @@ class TestBase(unittest.TestCase):
os.remove(file_name)
def generate_excel_sheet(self, sheet, file_name):
file_name += self.ext
with open(os.path.join(self.test_files, file_name), "w") as f:
for row in range(1, sheet.max_row + 1):
for col in range(1, sheet.max_column + 1):
@@ -31,7 +39,9 @@ class TestBase(unittest.TestCase):
if value is not None:
print(f'{row};{col};"{value}"', file=f)
def check_excel_sheet(self, sheet, file_name):
def check_excel_sheet(
self, sheet, file_name, replace=None, with_this=None
):
file_name += self.ext
with open(os.path.join(self.test_files, file_name), "r") as f:
expected = csv.reader(f, delimiter=";")
@@ -43,6 +53,9 @@ class TestBase(unittest.TestCase):
value = float(value)
except ValueError:
pass
if replace is not None and isinstance(value, str):
if replace in value:
value = value.replace(replace, with_this)
self.assertEqual(sheet.cell(int(row), int(col)).value, value)
def check_output_file(self, output, file_name):
@@ -51,10 +64,15 @@ class TestBase(unittest.TestCase):
expected = f.read()
self.assertEqual(output.getvalue(), expected)
@staticmethod
def replace_STree_version(line, output, index):
idx = line.find("1.2.4")
return line.replace("1.2.4", output[index][idx : idx + 5])
def replace_STree_version(self, line, output, index):
idx = line.find(self.stree_version)
return line.replace(self.stree_version, output[index][idx : idx + 5])
def replace_benchmark_version(self, line, output, index):
idx = line.find(self.benchmark_version)
return line.replace(
self.benchmark_version, output[index][idx : idx + 5]
)
def check_file_file(self, computed_file, expected_file):
with open(computed_file) as f:

View File

@@ -11,6 +11,8 @@ class UtilTest(TestBase):
self.assertEqual("results", Folders.results)
self.assertEqual("hidden_results", Folders.hidden_results)
self.assertEqual("exreport", Folders.exreport)
self.assertEqual("excel", Folders.excel)
self.assertEqual("img", Folders.img)
self.assertEqual(
os.path.join(Folders.exreport, "exreport_output"), Folders.report
)
@@ -178,6 +180,13 @@ class UtilTest(TestBase):
"model": "ODTE",
"stratified": "0",
"source_data": "Tanveer",
"seeds": "[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]",
"discretize": "0",
"nodes": "Nodes",
"leaves": "Leaves",
"depth": "Depth",
"fit_features": "0",
"margin": "0.1",
}
computed = EnvData().load()
self.assertDictEqual(computed, expected)

View File

@@ -13,6 +13,7 @@ from .PairCheck_test import PairCheckTest
from .Arguments_test import ArgumentsTest
from .scripts.Be_Pair_check_test import BePairCheckTest
from .scripts.Be_List_test import BeListTest
from .scripts.Be_Init_Project_test import BeInitProjectTest
from .scripts.Be_Report_test import BeReportTest
from .scripts.Be_Summary_test import BeSummaryTest
from .scripts.Be_Grid_test import BeGridTest

View File

@@ -1,2 +1,2 @@
iris,class
wine,class
iris,class,all
wine,class,[0, 1]

benchmark/tests/excel/.gitignore
View File

@@ -0,0 +1 @@
#

View File

@@ -1 +1 @@
{"balance-scale": [0.98, {"splitter": "best", "max_features": "auto"}, "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"], "balloons": [0.86, {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000.0, "multiclass_strategy": "ovr"}, "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"]}
{"balance-scale": [0.98, {"splitter": "best", "max_features": "auto"}, "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"], "balloons": [0.86, {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"]}

View File

@@ -6,7 +6,7 @@
"kernel": "liblinear",
"multiclass_strategy": "ovr"
},
"v. 1.3.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
"v. 1.3.1, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
],
"balloons": [
0.625,
@@ -15,6 +15,6 @@
"kernel": "linear",
"multiclass_strategy": "ovr"
},
"v. 1.3.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
"v. 1.3.1, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
]
}

View File

@@ -1,57 +1 @@
{
"score_name": "accuracy",
"title": "Gridsearched hyperparams v022.1b random_init",
"model": "ODTE",
"version": "0.3.2",
"stratified": false,
"folds": 5,
"date": "2022-04-20",
"time": "10:52:20",
"duration": 22591.471411943436,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "Galgo",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"base_estimator__C": 57,
"base_estimator__gamma": 0.1,
"base_estimator__kernel": "rbf",
"base_estimator__multiclass_strategy": "ovr",
"n_estimators": 100,
"n_jobs": -1
},
"nodes": 7.361199999999999,
"leaves": 4.180599999999999,
"depth": 3.536,
"score": 0.96352,
"score_std": 0.024949741481626608,
"time": 0.31663217544555666,
"time_std": 0.19918813895255585
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"base_estimator__C": 5,
"base_estimator__gamma": 0.14,
"base_estimator__kernel": "rbf",
"base_estimator__multiclass_strategy": "ovr",
"n_estimators": 100,
"n_jobs": -1
},
"nodes": 2.9951999999999996,
"leaves": 1.9975999999999998,
"depth": 1.9975999999999998,
"score": 0.785,
"score_std": 0.2461311755051675,
"time": 0.11560620784759522,
"time_std": 0.012784241828599895
}
]
}
{"score_name": "accuracy", "title": "Gridsearched hyperparams v022.1b random_init", "model": "ODTE", "version": "0.3.2", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2022-04-20", "time": "10:52:20", "duration": 22591.471411943436, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "Galgo", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"base_estimator__C": 57, "base_estimator__gamma": 0.1, "base_estimator__kernel": "rbf", "base_estimator__multiclass_strategy": "ovr", "n_estimators": 100, "n_jobs": -1}, "nodes": 7.361199999999999, "leaves": 4.180599999999999, "depth": 3.536, "score": 0.96352, "score_std": 0.024949741481626608, "time": 0.31663217544555666, "time_std": 0.19918813895255585}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"base_estimator__C": 5, "base_estimator__gamma": 0.14, "base_estimator__kernel": "rbf", "base_estimator__multiclass_strategy": "ovr", "n_estimators": 100, "n_jobs": -1}, "nodes": 2.9951999999999996, "leaves": 1.9975999999999998, "depth": 1.9975999999999998, "score": 0.785, "score_std": 0.2461311755051675, "time": 0.11560620784759522, "time_std": 0.012784241828599895}], "discretized": false}

View File

@@ -1,43 +1 @@
{
"score_name": "accuracy",
"title": "Test default paramters with RandomForest",
"model": "RandomForest",
"version": "-",
"stratified": false,
"folds": 5,
"date": "2022-01-14",
"time": "12:39:30",
"duration": 272.7363500595093,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {},
"nodes": 196.91440000000003,
"leaves": 98.42,
"depth": 10.681399999999998,
"score": 0.83616,
"score_std": 0.02649630917694009,
"time": 0.08222018241882324,
"time_std": 0.0013026326815120633
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {},
"nodes": 9.110800000000001,
"leaves": 4.58,
"depth": 3.0982,
"score": 0.625,
"score_std": 0.24958298553119898,
"time": 0.07016648769378662,
"time_std": 0.002460508923990468
}
]
}
{"score_name": "accuracy", "title": "Test default paramters with RandomForest", "model": "RandomForest", "version": "-", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2022-01-14", "time": "12:39:30", "duration": 272.7363500595093, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {}, "nodes": 196.91440000000003, "leaves": 98.42, "depth": 10.681399999999998, "score": 0.83616, "score_std": 0.02649630917694009, "time": 0.08222018241882324, "time_std": 0.0013026326815120633}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {}, "nodes": 9.110800000000001, "leaves": 4.58, "depth": 3.0982, "score": 0.625, "score_std": 0.24958298553119898, "time": 0.07016648769378662, "time_std": 0.002460508923990468}], "discretized": false}

View File

@@ -1,55 +1 @@
{
"score_name": "accuracy",
"model": "STree",
"stratified": false,
"folds": 5,
"date": "2021-09-30",
"time": "11:42:07",
"duration": 624.2505249977112,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"C": 10000.0,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"multiclass_strategy": "ovr"
},
"nodes": 7.0,
"leaves": 4.0,
"depth": 3.0,
"score": 0.97056,
"score_std": 0.015046806970251203,
"time": 0.01404867172241211,
"time_std": 0.002026269126958884
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"multiclass_strategy": "ovr"
},
"nodes": 3.0,
"leaves": 2.0,
"depth": 2.0,
"score": 0.86,
"score_std": 0.28501461950807594,
"time": 0.0008541679382324218,
"time_std": 3.629469326417878e-5
}
],
"title": "With gridsearched hyperparameters",
"version": "1.2.3"
}
{"score_name": "accuracy", "model": "STree", "stratified": false, "folds": 5, "language_version": "3.11x", "language": "Python", "date": "2021-09-30", "time": "11:42:07", "duration": 624.2505249977112, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"C": 10000, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "nodes": 7.0, "leaves": 4.0, "depth": 3.0, "score": 0.97056, "score_std": 0.015046806970251203, "time": 0.01404867172241211, "time_std": 0.002026269126958884}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "nodes": 3.0, "leaves": 2.0, "depth": 2.0, "score": 0.86, "score_std": 0.28501461950807594, "time": 0.0008541679382324218, "time_std": 3.629469326417878e-05}], "title": "With gridsearched hyperparameters", "version": "1.2.3", "discretized": false}

View File

@@ -1,49 +1 @@
{
"score_name": "accuracy",
"model": "STree",
"stratified": false,
"folds": 5,
"date": "2021-10-27",
"time": "09:40:40",
"duration": 3395.009148836136,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"splitter": "best",
"max_features": "auto"
},
"nodes": 11.08,
"leaves": 5.9,
"depth": 5.9,
"score": 0.98,
"score_std": 0.001,
"time": 0.28520655155181884,
"time_std": 0.06031593282605064
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"splitter": "best",
"max_features": "auto"
},
"nodes": 4.12,
"leaves": 2.56,
"depth": 2.56,
"score": 0.695,
"score_std": 0.2756860130252853,
"time": 0.021201000213623047,
"time_std": 0.003526023309468471
}
],
"title": "default A",
"version": "1.2.3"
}
{"score_name": "accuracy", "model": "STree", "language": "Python", "language_version": "3.11x", "stratified": false, "folds": 5, "date": "2021-10-27", "time": "09:40:40", "duration": 3395.009148836136, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"splitter": "best", "max_features": "auto"}, "nodes": 11.08, "leaves": 5.9, "depth": 5.9, "score": 0.98, "score_std": 0.001, "time": 0.28520655155181884, "time_std": 0.06031593282605064}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"splitter": "best", "max_features": "auto"}, "nodes": 4.12, "leaves": 2.56, "depth": 2.56, "score": 0.695, "score_std": 0.2756860130252853, "time": 0.021201000213623047, "time_std": 0.003526023309468471}], "title": "default A", "version": "1.2.3", "discretized": false}

View File

@@ -1,49 +1 @@
{
"score_name": "accuracy",
"model": "STree",
"stratified": false,
"folds": 5,
"date": "2021-11-01",
"time": "19:17:07",
"duration": 4115.042420864105,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "macbook-pro",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"max_features": "auto",
"splitter": "mutual"
},
"nodes": 18.78,
"leaves": 9.88,
"depth": 5.9,
"score": 0.97,
"score_std": 0.002,
"time": 0.23330417156219482,
"time_std": 0.048087665954193885
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"max_features": "auto",
"splitter": "mutual"
},
"nodes": 4.72,
"leaves": 2.86,
"depth": 2.78,
"score": 0.5566666666666668,
"score_std": 0.2941277122460771,
"time": 0.021352062225341795,
"time_std": 0.005808742398555902
}
],
"title": "default B",
"version": "1.2.3"
}
{"score_name": "accuracy", "model": "STree", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2021-11-01", "time": "19:17:07", "duration": 4115.042420864105, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "macbook-pro", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"max_features": "auto", "splitter": "mutual"}, "nodes": 18.78, "leaves": 9.88, "depth": 5.9, "score": 0.97, "score_std": 0.002, "time": 0.23330417156219482, "time_std": 0.048087665954193885}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"max_features": "auto", "splitter": "mutual"}, "nodes": 4.72, "leaves": 2.86, "depth": 2.78, "score": 0.5566666666666668, "score_std": 0.2941277122460771, "time": 0.021352062225341795, "time_std": 0.005808742398555902}], "title": "default B", "version": "1.2.3", "discretized": false}

View File

@@ -2,6 +2,7 @@ import os
from openpyxl import load_workbook
from ...Utils import NO_RESULTS, Folders, Files
from ..TestBase import TestBase
from ..._version import __version__
class BeBenchmarkTest(TestBase):
@@ -15,16 +16,16 @@ class BeBenchmarkTest(TestBase):
files.append(Files.exreport(score))
files.append(Files.exreport_output(score))
files.append(Files.exreport_err(score))
files.append(Files.exreport_excel(self.score))
files.append(Files.exreport_pdf)
files.append(Files.tex_output(self.score))
self.remove_files(files, Folders.exreport)
self.remove_files([Files.exreport_excel(self.score)], Folders.excel)
self.remove_files(files, ".")
return super().tearDown()
def test_be_benchmark_complete(self):
stdout, stderr = self.execute_script(
"be_benchmark", ["-s", self.score, "-q", "1", "-t", "1", "-x", "1"]
"be_benchmark", ["-s", self.score, "-q", "-t", "-x"]
)
self.assertEqual(stderr.getvalue(), "")
# Check output
@@ -40,16 +41,26 @@ class BeBenchmarkTest(TestBase):
self.check_file_file(file_name, "exreport_tex")
# Check excel file
file_name = os.path.join(
Folders.exreport, Files.exreport_excel(self.score)
Folders.excel, Files.exreport_excel(self.score)
)
book = load_workbook(file_name)
replace = None
with_this = None
for sheet_name in book.sheetnames:
sheet = book[sheet_name]
self.check_excel_sheet(sheet, f"exreport_excel_{sheet_name}")
if sheet_name == "Datasets":
replace = self.benchmark_version
with_this = __version__
self.check_excel_sheet(
sheet,
f"exreport_excel_{sheet_name}",
replace=replace,
with_this=with_this,
)
def test_be_benchmark_single(self):
stdout, stderr = self.execute_script(
"be_benchmark", ["-s", self.score, "-q", "1"]
"be_benchmark", ["-s", self.score, "-q"]
)
self.assertEqual(stderr.getvalue(), "")
# Check output

View File

@@ -67,7 +67,7 @@ class BeBestTest(TestBase):
def test_be_build_best_report(self):
stdout, _ = self.execute_script(
"be_build_best", ["-s", "accuracy", "-m", "ODTE", "-r", "1"]
"be_build_best", ["-s", "accuracy", "-m", "ODTE", "-r"]
)
expected_data = {
"balance-scale": [

View File

@@ -4,6 +4,10 @@ from ...Utils import Folders, Files
from ..TestBase import TestBase
def get_test():
return "hola"
class BeGridTest(TestBase):
def setUp(self):
self.prepare_scripts_env()
@@ -65,7 +69,7 @@ class BeGridTest(TestBase):
def test_be_grid_no_input(self):
stdout, stderr = self.execute_script(
"be_grid",
["-m", "ODTE", "-s", "f1-weighted", "-q", "1"],
["-m", "ODTE", "-s", "f1-weighted", "-q"],
)
self.assertEqual(stderr.getvalue(), "")
grid_file = os.path.join(

View File

@@ -0,0 +1,67 @@
import os
from io import StringIO
from unittest.mock import patch
from ..TestBase import TestBase
from ...Utils import Folders
class BeInitProjectTest(TestBase):
def setUp(self):
self.prepare_scripts_env()
def tearDown(self):
if os.path.exists("test_project"):
os.system("rm -rf test_project")
def assertIsFile(self, file_name):
if not os.path.isfile(file_name):
raise AssertionError(f"File {str(file_name)} does not exist")
def assertIsFolder(self, path):
if not os.path.exists(path):
raise AssertionError(f"Folder {str(path)} does not exist")
def test_be_init_project(self):
test_project = "test_project"
stdout, stderr = self.execute_script("be_init_project", [test_project])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_init_project")
# check folders
expected = [
Folders.results,
Folders.hidden_results,
Folders.exreport,
Folders.report,
Folders.img,
Folders.excel,
]
for folder in expected:
self.assertIsFolder(os.path.join(test_project, folder))
self.assertIsFile(os.path.join(test_project, ".env"))
os.system(f"rm -rf {test_project}")
@patch("sys.stdout", new_callable=StringIO)
@patch("sys.stderr", new_callable=StringIO)
def test_be_init_project_no_arguments(self, stdout, stderr):
with self.assertRaises(SystemExit) as cm:
module = self.search_script("be_init_project")
module.main("")
self.assertEqual(cm.exception.code, 2)
self.check_output_file(stdout, "be_init_project_no_arguments")
self.assertEqual(stderr.getvalue(), "")
@patch("sys.stdout", new_callable=StringIO)
@patch("sys.stderr", new_callable=StringIO)
def test_be_init_project_twice(self, stdout, stderr):
test_project = "test_project"
self.execute_script("be_init_project", [test_project])
with self.assertRaises(SystemExit) as cm:
module = self.search_script("be_init_project")
module.main([test_project])
self.assertEqual(cm.exception.code, 1)
self.assertEqual(
stderr.getvalue(),
f"Creating folder {test_project}\n"
f"[Errno 17] File exists: '{test_project}'\n",
)
self.assertEqual(stdout.getvalue(), "")

View File

@@ -1,5 +1,8 @@
import os
from ...Utils import Folders, NO_RESULTS
import shutil
from unittest.mock import patch
from openpyxl import load_workbook
from ...Utils import Folders, Files, NO_RESULTS
from ..TestBase import TestBase
@@ -7,19 +10,94 @@ class BeListTest(TestBase):
def setUp(self):
self.prepare_scripts_env()
def test_be_list(self):
@patch("benchmark.Manager.get_input", return_value="q")
def test_be_list(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "summary_list_model")
self.check_output_file(stdout, "be_list_model")
def test_be_list_no_data(self):
@patch("benchmark.Manager.get_input", side_effect=iter(["x", "q"]))
def test_be_list_invalid_option(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_model_invalid")
@patch("benchmark.Manager.get_input", side_effect=iter(["0", "q"]))
def test_be_list_report(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_report")
@patch("benchmark.Manager.get_input", side_effect=iter(["r", "q"]))
def test_be_list_twice(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_model_2")
@patch("benchmark.Manager.get_input", side_effect=iter(["e 2", "q"]))
def test_be_list_report_excel(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_report_excel")
book = load_workbook(os.path.join(Folders.excel, Files.be_list_excel))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel")
@patch(
"benchmark.Manager.get_input",
side_effect=iter(["e 2", "e 1", "q"]),
)
def test_be_list_report_excel_twice(self, input_data):
stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_report_excel_2")
book = load_workbook(os.path.join(Folders.excel, Files.be_list_excel))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel")
sheet = book["STree2"]
self.check_excel_sheet(sheet, "excel2")
@patch("benchmark.Manager.get_input", return_value="q")
def test_be_list_no_data(self, input_data):
stdout, stderr = self.execute_script(
"be_list", ["-m", "Wodt", "-s", "f1-macro"]
)
self.assertEqual(stderr.getvalue(), "")
self.assertEqual(stdout.getvalue(), f"{NO_RESULTS}\n")
def test_be_list_nan(self):
@patch(
"benchmark.Manager.get_input",
side_effect=iter(["d 0", "y", "", "q"]),
)
# @patch("benchmark.ResultsBase.get_input", side_effect=iter(["q"]))
def test_be_list_delete(self, input_data):
def copy_files(source_folder, target_folder, file_name):
source = os.path.join(source_folder, file_name)
target = os.path.join(target_folder, file_name)
shutil.copyfile(source, target)
file_name = (
"results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:"
"35_0.json"
)
# move nan result from hidden to results
copy_files(Folders.hidden_results, Folders.results, file_name)
try:
# list and delete result
stdout, stderr = self.execute_script("be_list", "")
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_delete")
except Exception:
# delete the result copied if be_list couldn't
os.unlink(os.path.join(Folders.results, file_name))
self.fail("test_be_list_delete() should not raise exception")
@patch(
"benchmark.Manager.get_input",
side_effect=iter(["h 0", "y", "", "q"]),
)
def test_be_list_hide(self, input_data):
def swap_files(source_folder, target_folder, file_name):
source = os.path.join(source_folder, file_name)
target = os.path.join(target_folder, file_name)
@@ -32,19 +110,44 @@ class BeListTest(TestBase):
# move nan result from hidden to results
swap_files(Folders.hidden_results, Folders.results, file_name)
try:
# list and move nan result to hidden
stdout, stderr = self.execute_script("be_list", ["--nan", "1"])
# list and move nan result to hidden again
stdout, stderr = self.execute_script("be_list", "")
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_nan")
self.check_output_file(stdout, "be_list_hide")
except Exception:
# move back nan result file if be_list couldn't
# delete the result copied if be_list couldn't
swap_files(Folders.results, Folders.hidden_results, file_name)
self.fail("test_be_list_nan() should not raise exception")
self.fail("test_be_list_hide() should not raise exception")
def test_be_list_nan_no_nan(self):
stdout, stderr = self.execute_script("be_list", ["--nan", "1"])
@patch("benchmark.Manager.get_input", side_effect=iter(["h 0", "q"]))
def test_be_list_already_hidden(self, input_data):
stdout, stderr = self.execute_script("be_list", ["--hidden"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_no_nan")
self.check_output_file(stdout, "be_list_already_hidden")
@patch("benchmark.Manager.get_input", side_effect=iter(["h 0", "n", "q"]))
def test_be_list_dont_hide(self, input_data):
stdout, stderr = self.execute_script("be_list", "")
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_default")
@patch("benchmark.Manager.get_input", side_effect=iter(["q"]))
def test_be_list_hidden_nan(self, input_data):
stdout, stderr = self.execute_script("be_list", ["--hidden", "--nan"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_hidden_nan")
@patch("benchmark.Manager.get_input", side_effect=iter(["q"]))
def test_be_list_hidden(self, input_data):
stdout, stderr = self.execute_script("be_list", ["--hidden"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_hidden")
@patch("benchmark.Manager.get_input", side_effect=iter(["0", "q"]))
def test_be_list_compare(self, input_data):
stdout, stderr = self.execute_script("be_list", ["--compare"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "be_list_compare_fault")
def test_be_no_env(self):
path = os.getcwd()


@@ -1,4 +1,5 @@
import os
import json
from io import StringIO
from unittest.mock import patch
from ...Results import Report
@@ -24,19 +25,21 @@ class BeMainTest(TestBase):
self.check_output_lines(
stdout=stdout,
file_name="be_main_dataset",
lines_to_compare=[0, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13],
lines_to_compare=[0, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14],
)
def test_be_main_complete(self):
stdout, _ = self.execute_script(
"be_main",
["-s", self.score, "-m", "STree", "--title", "test", "-r", "1"],
["-s", self.score, "-m", "STree", "--title", "test", "-r"],
)
# keep the report name to delete it after
report_name = stdout.getvalue().splitlines()[-1].split("in ")[1]
self.files.append(report_name)
self.check_output_lines(
stdout, "be_main_complete", [0, 2, 3, 5, 6, 7, 8, 9, 12, 13, 14]
stdout,
"be_main_complete",
[0, 2, 3, 5, 6, 7, 8, 9, 12, 13, 14, 15],
)
def test_be_main_no_report(self):
@@ -66,10 +69,8 @@ class BeMainTest(TestBase):
"STree",
"--title",
"test",
"-f",
"1",
"-b",
"-r",
"1",
],
)
# keep the report name to delete it after
@@ -79,6 +80,48 @@ class BeMainTest(TestBase):
stdout, "be_main_best", [0, 2, 3, 5, 6, 7, 8, 9, 12, 13, 14]
)
@patch("sys.stdout", new_callable=StringIO)
@patch("sys.stderr", new_callable=StringIO)
def test_be_main_incompatible_params(self, stdout, stderr):
m1 = (
"be_main: error: argument -b/--best_paramfile: not allowed with "
"argument -p/--hyperparameters"
)
m2 = (
"be_main: error: argument -g/--grid_paramfile: not allowed with "
"argument -p/--hyperparameters"
)
m3 = (
"be_main: error: argument -g/--grid_paramfile: not allowed with "
"argument -p/--hyperparameters"
)
m4 = m1
p0 = [
"-s",
self.score,
"-m",
"SVC",
"--title",
"test",
]
pset = json.dumps(dict(C=17))
p1 = p0.copy()
p1.extend(["-p", pset, "-b"])
p2 = p0.copy()
p2.extend(["-p", pset, "-g"])
p3 = p0.copy()
p3.extend(["-p", pset, "-g", "-b"])
p4 = p0.copy()
p4.extend(["-b", "-g"])
parameters = [(p1, m1), (p2, m2), (p3, m3), (p4, m4)]
for parameter, message in parameters:
with self.assertRaises(SystemExit) as msg:
module = self.search_script("be_main")
module.main(parameter)
self.assertEqual(msg.exception.code, 2)
self.assertEqual(stderr.getvalue(), "")
self.assertRegex(stdout.getvalue(), message)
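
The four error messages come from argparse rejecting conflicting flags with exit code 2. A hedged sketch of how such conflicts are typically declared; the real be_main parser is not shown in this diff, so the group layout below is an assumption:

import argparse

parser = argparse.ArgumentParser(prog="be_main")
group = parser.add_mutually_exclusive_group()
group.add_argument("-p", "--hyperparameters")
group.add_argument("-b", "--best_paramfile", action="store_true")
group.add_argument("-g", "--grid_paramfile", action="store_true")

# parse_args(["-p", "{}", "-b"]) prints
#   be_main: error: argument -b/--best_paramfile: not allowed with
#   argument -p/--hyperparameters
# to stderr and raises SystemExit(2)
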
def test_be_main_best_params_non_existent(self):
model = "GBC"
stdout, stderr = self.execute_script(
@@ -90,10 +133,8 @@ class BeMainTest(TestBase):
model,
"--title",
"test",
"-f",
"1",
"-b",
"-r",
"1",
],
)
self.assertEqual(stderr.getvalue(), "")
@@ -117,9 +158,7 @@ class BeMainTest(TestBase):
"--title",
"test",
"-g",
"1",
"-r",
"1",
],
)
self.assertEqual(stderr.getvalue(), "")
@@ -142,9 +181,7 @@ class BeMainTest(TestBase):
"--title",
"test",
"-g",
"1",
"-r",
"1",
],
)
# keep the report name to delete it after


@@ -18,7 +18,7 @@ class BePrintStrees(TestBase):
for name in self.datasets:
stdout, _ = self.execute_script(
"be_print_strees",
["-d", name, "-q", "1"],
["-d", name, "-q"],
)
file_name = os.path.join(Folders.img, f"stree_{name}.png")
self.files.append(file_name)
@@ -27,13 +27,13 @@ class BePrintStrees(TestBase):
stdout.getvalue(), f"File {file_name} generated\n"
)
computed_size = os.path.getsize(file_name)
self.assertGreater(computed_size, 25000)
self.assertGreater(computed_size, 24500)
def test_be_print_strees_dataset_color(self):
for name in self.datasets:
stdout, _ = self.execute_script(
"be_print_strees",
["-d", name, "-q", "1", "-c", "1"],
["-d", name, "-q", "-c"],
)
file_name = os.path.join(Folders.img, f"stree_{name}.png")
self.files.append(file_name)


@@ -1,7 +1,10 @@
import os
from openpyxl import load_workbook
from ...Utils import Folders
from io import StringIO
from unittest.mock import patch
from ...Utils import Folders, Files
from ..TestBase import TestBase
from ..._version import __version__
class BeReportTest(TestBase):
@@ -10,10 +13,17 @@ class BeReportTest(TestBase):
def tearDown(self) -> None:
files = [
"results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.xlsx",
]
self.remove_files(files, Folders.results)
self.remove_files(
[Files.datasets_report_excel],
os.path.join(os.getcwd(), Folders.excel),
)
files = [
"results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
]
self.remove_files(files, Folders.sql)
return super().tearDown()
def test_be_report(self):
@@ -21,38 +31,70 @@ class BeReportTest(TestBase):
"results",
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
)
stdout, stderr = self.execute_script("be_report", ["-f", file_name])
stdout, stderr = self.execute_script("be_report", ["file", file_name])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "report")
def test_be_report_not_found(self):
stdout, stderr = self.execute_script("be_report", ["-f", "unknown"])
stdout, stderr = self.execute_script("be_report", ["file", "unknown"])
self.assertEqual(stderr.getvalue(), "")
self.assertEqual(stdout.getvalue(), "unknown does not exists!\n")
def test_be_report_compare(self):
def test_be_report_compared(self):
file_name = "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
stdout, stderr = self.execute_script(
"be_report", ["-f", file_name, "-c", "1"]
"be_report", ["file", file_name, "-c"]
)
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "report_compared")
def test_be_report_datasets(self):
stdout, stderr = self.execute_script("be_report", [])
stdout, stderr = self.execute_script("be_report", ["datasets"])
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "report_datasets")
file_name = f"report_datasets{self.ext}"
with open(os.path.join(self.test_files, file_name)) as f:
expected = f.read()
output_text = stdout.getvalue().splitlines()
for index, line in enumerate(expected.splitlines()):
if self.benchmark_version in line:
# replace benchmark version
line = self.replace_benchmark_version(line, output_text, index)
self.assertEqual(line, output_text[index])
def test_be_report_datasets_excel(self):
stdout, stderr = self.execute_script("be_report", ["datasets", "-x"])
self.assertEqual(stderr.getvalue(), "")
file_name = f"report_datasets{self.ext}"
with open(os.path.join(self.test_files, file_name)) as f:
expected = f.read()
output_text = stdout.getvalue().splitlines()
for index, line in enumerate(expected.splitlines()):
if self.benchmark_version in line:
# replace benchmark version
line = self.replace_benchmark_version(line, output_text, index)
self.assertEqual(line, output_text[index])
file_name = os.path.join(
os.getcwd(), Folders.excel, Files.datasets_report_excel
)
book = load_workbook(file_name)
sheet = book["Datasets"]
self.check_excel_sheet(
sheet,
"exreport_excel_Datasets",
replace=self.benchmark_version,
with_this=__version__,
)
def test_be_report_best(self):
stdout, stderr = self.execute_script(
"be_report", ["-s", "accuracy", "-m", "STree", "-b", "1"]
"be_report", ["best", "-s", "accuracy", "-m", "STree"]
)
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "report_best")
def test_be_report_grid(self):
stdout, stderr = self.execute_script(
"be_report", ["-s", "accuracy", "-m", "STree", "-g", "1"]
"be_report", ["grid", "-s", "accuracy", "-m", "STree"]
)
self.assertEqual(stderr.getvalue(), "")
file_name = "report_grid.test"
@@ -66,22 +108,36 @@ class BeReportTest(TestBase):
line = self.replace_STree_version(line, output_text, index)
self.assertEqual(line, output_text[index])
def test_be_report_best_both(self):
stdout, stderr = self.execute_script(
"be_report",
["-s", "accuracy", "-m", "STree", "-b", "1", "-g", "1"],
)
@patch("sys.stderr", new_callable=StringIO)
def test_be_report_unknown_subcommand(self, stderr):
with self.assertRaises(SystemExit) as msg:
module = self.search_script("be_report")
module.main(["unknown"])
self.assertEqual(msg.exception.code, 2)
self.check_output_file(stderr, "report_unknown_subcommand")
def test_be_report_without_subcommand(self):
stdout, stderr = self.execute_script("be_report", "")
self.assertEqual(stderr.getvalue(), "")
self.check_output_file(stdout, "report_best")
self.maxDiff = None
# Can't use check_output_file because the console width
# differs across environments
file_name = "report_without_subcommand" + self.ext
with open(os.path.join(self.test_files, file_name)) as f:
expected = f.read()
if expected == stdout.getvalue():
self.assertEqual(stdout.getvalue(), expected)
else:
self.check_output_file(stdout, "report_without_subcommand2")
def test_be_report_excel_compared(self):
file_name = "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
stdout, stderr = self.execute_script(
"be_report",
["-f", file_name, "-x", "1", "-c", "1"],
["file", file_name, "-x", "-c"],
)
file_name = os.path.join(
Folders.results, file_name.replace(".json", ".xlsx")
Folders.excel, file_name.replace(Files.report_ext, ".xlsx")
)
book = load_workbook(file_name)
sheet = book["STree"]
@@ -93,10 +149,10 @@ class BeReportTest(TestBase):
file_name = "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
stdout, stderr = self.execute_script(
"be_report",
["-f", file_name, "-x", "1"],
["file", file_name, "-x"],
)
file_name = os.path.join(
Folders.results, file_name.replace(".json", ".xlsx")
Folders.excel, file_name.replace(Files.report_ext, ".xlsx")
)
book = load_workbook(file_name)
sheet = book["STree"]
@@ -108,10 +164,10 @@ class BeReportTest(TestBase):
file_name = "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json"
stdout, stderr = self.execute_script(
"be_report",
["-f", file_name, "-q", "1"],
["file", file_name, "-q"],
)
file_name = os.path.join(
Folders.results, file_name.replace(".json", ".sql")
Folders.sql, file_name.replace(Files.report_ext, ".sql")
)
self.check_file_file(file_name, "sql")
self.assertEqual(stderr.getvalue(), "")

benchmark/tests/sql/.gitignore

@@ -0,0 +1 @@
#


@@ -6,13 +6,13 @@
"n_estimators": [
100
],
"base_estimator__C": [
"estimator__C": [
1.0
],
"base_estimator__kernel": [
"estimator__kernel": [
"linear"
],
"base_estimator__multiclass_strategy": [
"estimator__multiclass_strategy": [
"ovo"
]
},
@@ -23,7 +23,7 @@
"n_estimators": [
100
],
"base_estimator__C": [
"estimator__C": [
0.001,
0.0275,
0.05,
@@ -36,10 +36,10 @@
7,
10000.0
],
"base_estimator__kernel": [
"estimator__kernel": [
"liblinear"
],
"base_estimator__multiclass_strategy": [
"estimator__multiclass_strategy": [
"ovr"
]
},
@@ -50,7 +50,7 @@
"n_estimators": [
100
],
"base_estimator__C": [
"estimator__C": [
0.05,
1.0,
1.05,
@@ -62,7 +62,7 @@
57,
10000.0
],
"base_estimator__gamma": [
"estimator__gamma": [
0.001,
0.1,
0.14,
@@ -70,10 +70,10 @@
"auto",
"scale"
],
"base_estimator__kernel": [
"estimator__kernel": [
"rbf"
],
"base_estimator__multiclass_strategy": [
"estimator__multiclass_strategy": [
"ovr"
]
},
@@ -84,20 +84,20 @@
"n_estimators": [
100
],
"base_estimator__C": [
"estimator__C": [
0.05,
0.2,
1.0,
8.25
],
"base_estimator__gamma": [
"estimator__gamma": [
0.1,
"scale"
],
"base_estimator__kernel": [
"estimator__kernel": [
"poly"
],
"base_estimator__multiclass_strategy": [
"estimator__multiclass_strategy": [
"ovo",
"ovr"
]
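
Every base_estimator__* key in this grid file becomes estimator__*. This tracks scikit-learn's rename of the ensemble constructor argument base_estimator to estimator (deprecated in scikit-learn 1.2): nested grid-search parameters are addressed as <argument>__<param>, so renaming the argument renames every derived key. An illustration with AdaBoostClassifier rather than the project's own ensemble:

from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# nested parameters follow <argument>__<param>, so the rename of the
# constructor argument propagates into every grid key
grid = GridSearchCV(
    AdaBoostClassifier(estimator=SVC(), algorithm="SAMME"),
    param_grid={
        "n_estimators": [100],
        "estimator__C": [1.0],            # formerly base_estimator__C
        "estimator__kernel": ["linear"],  # formerly base_estimator__kernel
    },
)
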


@@ -0,0 +1,12 @@
Creating folder test_project
Creating folder test_project/results
Creating folder test_project/hidden_results
Creating folder test_project/exreport
Creating folder test_project/exreport/exreport_output
Creating folder test_project/img
Creating folder test_project/excel
Creating folder test_project/sql
Done!
Please, edit .env file with your settings and add a datasets folder
with an all.txt file with the datasets you want to use.
In that folder you have to include all the datasets you'll use.
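
This fixture records the console output of be_init_project. A minimal sketch of the scaffolding it implies; the real script may do more, such as writing the template .env the closing message refers to:

import os

SUBFOLDERS = [
    "results",
    "hidden_results",
    "exreport",
    os.path.join("exreport", "exreport_output"),
    "img",
    "excel",
    "sql",
]


def init_project(project_name: str) -> None:
    print(f"Creating folder {project_name}")
    os.makedirs(project_name, exist_ok=True)
    for folder in SUBFOLDERS:
        path = os.path.join(project_name, folder)
        print(f"Creating folder {path}")
        os.makedirs(path, exist_ok=True)
    print("Done!")


init_project("test_project")
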


@@ -0,0 +1,2 @@
usage: be_init_project [-h] project_name
be_init_project: error: the following arguments are required: project_name


@@ -0,0 +1,5 @@
 # Date File Score Time(h) Title
=== ========== ================================================================ ======== ======= =======================
 0 2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters
 1 2021-11-01 results_accuracy_STree_iMac27_2021-11-01_23:55:16_0.json 0.97446 0.098 default
Already hidden


@@ -0,0 +1,8 @@
 # Date File Score Time(h) Title
=== ========== =============================================================== ======== ======= ============================================
 0 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 1 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 2 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 3 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 4 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
results/best_results_accuracy_ODTE.json does not exist


@@ -0,0 +1,7 @@
 # Date File Score Time(h) Title
=== ========== =============================================================== ======== ======= ============================================
 0 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 1 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 2 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 3 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 4 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,16 @@
 # Date File Score Time(h) Title
=== ========== ================================================================ ======== ======= ============================================
 0 2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters
 1 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 2 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 3 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 4 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 5 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
Deleting results/results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json
 # Date File Score Time(h) Title
=== ========== =============================================================== ======== ======= ============================================
 0 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 1 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 2 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 3 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 4 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,4 @@
 # Date File Score Time(h) Title
=== ========== ================================================================ ======== ======= =======================
 0 2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters
 1 2021-11-01 results_accuracy_STree_iMac27_2021-11-01_23:55:16_0.json 0.97446 0.098 default


@@ -0,0 +1,3 @@
 # Date File Score Time(h) Title
=== ========== ================================================================ ======== ======= =======================
 0 2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters


@@ -0,0 +1,16 @@
 # Date File Score Time(h) Title
=== ========== ================================================================ ======== ======= ============================================
 0 2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters
 1 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 2 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 3 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 4 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 5 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
Hiding results/results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json
 # Date File Score Time(h) Title
=== ========== =============================================================== ======== ======= ============================================
 0 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 1 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 2 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 3 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 4 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,5 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,10 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,6 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
Invalid option. Try again!


@@ -1,13 +0,0 @@
Date File Score Time(h) Title
========== ================================================================ ======== ======= ============================================
2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters
2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
****************************** Results with nan moved to hidden ******************************
Date File Score Time(h) Title
========== ================================================================ ======== ======= =======================
2022-05-04 results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_0.json nan 3.091 Default hyperparameters


@@ -1,7 +0,0 @@
Date File Score Time(h) Title
========== =============================================================== ======== ======= ============================================
2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters


@@ -0,0 +1,21 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
*************************************************************************************************************************
* STree ver. 1.2.3 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2021-11-01 19:17:07 *
* default B *
* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
* Execution took 4115.04 seconds, 1.14 hours, on macbook-pro *
* Score is accuracy *
*************************************************************************************************************************
Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
============================== ====== ===== === ======= ======= ======= =============== ================= ===============
balance-scale 625 4 3 18.78 9.88 5.90 0.970000±0.0020 0.233304±0.0481 {'max_features': 'auto', 'splitter': 'mutual'}
balloons 16 4 2 4.72 2.86 2.78 0.556667±0.2941✗ 0.021352±0.0058 {'max_features': 'auto', 'splitter': 'mutual'}
*************************************************************************************************************************
* ✗ Less than or equal to ZeroR...: 1 *
* accuracy compared to STree_default (liblinear-ovr) .: 0.0379 *
*************************************************************************************************************************


@@ -0,0 +1,7 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
Added results/results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json to some_results.xlsx
Generated file: excel/some_results.xlsx


@@ -0,0 +1,8 @@
 # Date File Score Time(h) Title
=== ========== ============================================================= ======== ======= =================================
 0 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
Added results/results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json to some_results.xlsx
Added results/results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json to some_results.xlsx
Generated file: excel/some_results.xlsx

Some files were not shown because too many files have changed in this diff.