79 Commits

Author SHA1 Message Date
Ricardo Montañana Gómez
cf8fd3454e Update README.md 2025-05-06 14:05:42 +02:00
Ricardo Montañana Gómez
162cdc2da1 Merge pull request #11 from Doctorado-ML/rmontanana-patch-1
Update README.md
2025-05-06 14:05:06 +02:00
Ricardo Montañana Gómez
765112073c Update README.md 2025-05-06 14:04:14 +02:00
69e21584bd Fix tests in python 3.13 2024-12-16 01:27:34 +01:00
419c899c94 Fix some errors in tests 2024-12-16 00:53:11 +01:00
2a2ed81a6c Fix Arff datasets mistake
Fix table_report partial mistake
2024-12-14 23:50:58 +01:00
4c5502611a Update version and copyright 2024-09-18 16:00:31 +02:00
Ricardo Montañana Gómez
70f1da5fc7 Merge pull request #10 from Doctorado-ML/flask
Flask
2024-03-13 16:18:55 +01:00
Ricardo Montañana Gómez
14dba5edb8 Merge branch 'main' into flask 2024-03-13 16:18:47 +01:00
a31d62263d Remove Bayesian classifiers 2024-03-13 16:16:35 +01:00
3f3a18e4fe Remove Bayesian Classifiers 2024-03-12 16:01:03 +01:00
6844d13973 Update format to report 2023-06-26 11:07:12 +02:00
4b17cc2230 Add boostAODE model 2023-06-26 10:09:01 +02:00
257cb8e95a Fix excel route in select 2023-06-02 16:51:19 +02:00
34b4cb6477 Add automatic download excel files 2023-06-01 20:48:53 +02:00
0b258595f9 Fix datasets 2023-06-01 11:44:03 +02:00
ff25581e99 Separate folders 2023-06-01 11:40:41 +02:00
aeec3a65af Fix checkboxes on change page 2023-06-01 01:49:23 +02:00
b7d26b82b1 flavour fix 2023-06-01 01:14:38 +02:00
f7ed11562b Add format to report_best 2023-06-01 01:11:06 +02:00
a51fed6281 Begin best results report 2023-05-31 23:30:51 +02:00
7d5f3058c3 add blueprint to app 2023-05-31 17:21:35 +02:00
54d141e861 Create separate app 2023-05-31 17:00:48 +02:00
04ea568c71 Refactor macros and font family 2023-05-31 14:29:25 +02:00
d8285eb2bb Add excel to report datasets 2023-05-31 01:35:40 +02:00
5f7fb7d5ac Begin add datasets report with excel 2023-05-30 17:04:33 +02:00
dd3cb91951 Fix compare problem in Excel files 2023-05-30 00:47:24 +02:00
40af738ed9 Add persistence of checkbox compare on app 2023-05-30 00:08:36 +02:00
10c352fdb5 Add double click to show file 2023-05-29 23:07:10 +02:00
1b362f2110 Add wait cursor during ajax 2023-05-29 22:37:05 +02:00
007c419979 Add generate excel fault tolerance with compare 2023-05-29 20:07:00 +02:00
2df055334c Add select all/none buttons with icons 2023-05-29 19:35:54 +02:00
395a64abb7 Add button reset and refactor buttons in select
Change position of excel button in report
2023-05-29 18:53:08 +02:00
8fe4b888b8 Add icons to actions 2023-05-29 16:47:03 +02:00
60086b3925 container-fluid and error tolerance in compare 2023-05-29 12:06:44 +02:00
c55a0b29ab Add compare with best results in reports 2023-05-29 11:50:49 +02:00
c2415576c9 Add excel to report 2023-05-28 23:14:14 +02:00
663a0b0258 Add select row to report 2023-05-28 20:06:33 +02:00
655c1db889 Fix row selection in bootstrap 2023-05-28 18:09:11 +02:00
e17e7d4e00 Generate excel file from results 2023-05-28 18:03:20 +02:00
be62e38e77 Add title and score to select page 2023-05-28 11:56:14 +02:00
83cfc3e5f5 Enhance report templates 2023-05-28 03:20:15 +02:00
3928b9c583 Change partials criteria 2023-05-28 03:12:12 +02:00
219b626061 Add flask templates 2023-05-28 00:04:30 +02:00
c10bf27a16 Fix tests 2023-05-22 11:15:19 +02:00
b6fc9096a1 Fix tests 2023-05-22 10:07:28 +02:00
83bd321dd6 Fix some excel issues 2023-05-21 22:22:15 +02:00
9041c412d5 Begin refactor Results 2023-05-21 21:05:58 +02:00
b55553847b Refactor folders structure (add excel) 2023-05-19 01:44:27 +02:00
0d02b690bb Allow to comment datasets in all.txt 2023-05-17 23:05:50 +02:00
1046c2e74b Update badges 2023-05-15 11:46:00 +02:00
e654aa9735 Update readme 2023-05-15 11:12:17 +02:00
e3d969c5d7 Add number of samples in report datasets balance 2023-05-09 10:25:54 +02:00
5c8b7062cc Fix max_value in manage list results 2023-04-07 22:48:09 +02:00
2ef30dfb80 Add AODENew model 2023-03-29 16:47:15 +02:00
d60df0cdf9 Update version number 2023-02-21 17:09:26 +01:00
e2504c7ae9 Add new models and repair tests 2023-02-21 17:08:50 +01:00
27bf414db9 Add TanNew model 2023-02-06 20:17:32 +01:00
d5cc2b2dcf Add discretize to reports and experiments 2023-02-05 20:18:27 +01:00
7df037b6f4 Add class name to fit_params 2023-02-05 11:29:34 +01:00
75ed3e8f6e Add KDBNew model and fit_feature hyperparameter 2023-02-04 18:29:10 +01:00
Ricardo Montañana Gómez
d454a318fc feat: Make nodes, leaves, depth labels customizable in .env 2023-01-22 11:37:03 +01:00
Ricardo Montañana Gómez
5ff6265a08 feat: Add discretize and fix stratified hyperparameters in be_main 2023-01-21 22:17:25 +01:00
Ricardo Montañana Gómez
520f8807e5 test: 🧪 Update a flaky test due to different console width in diff envs 2023-01-15 19:32:01 +01:00
Ricardo Montañana Gómez
149584be3d Update test results file 2023-01-15 11:28:21 +01:00
Ricardo Montañana Gómez
d327050b7c Merge pull request #9 from Doctorado-ML/continuous_features
Continuous features
2023-01-15 10:55:49 +01:00
Ricardo Montañana Gómez
d21e6cac0c ci: ⬆️ Update github actions 2023-01-15 10:29:06 +01:00
Ricardo Montañana Gómez
d84e0ffc6a Update print_strees test 2023-01-14 23:50:34 +01:00
Ricardo Montañana Gómez
6dc3a59df8 fix: 🧪 Fix tests with new scikit-learn version 2023-01-14 21:31:34 +01:00
Ricardo Montañana Gómez
7ef88bd5c7 Update Models_tests 2023-01-14 13:05:44 +01:00
Ricardo Montañana Gómez
acfbafbdce Update requirements 2023-01-08 12:41:11 +01:00
ae52148021 Remove ignore-nan from .env files
leave only as be_main hyperparameter
2023-01-08 12:25:59 +01:00
132d7827c3 Fix tests 100% coverage 2023-01-06 22:53:23 +01:00
d854d9ddf1 Fix tests 2023-01-06 14:29:52 +01:00
9ba6c55d49 Set k=2 in KDB to address memory problems 2023-01-06 14:29:22 +01:00
c21fd4849c Add ignore_nan and fit_params to experiments 2022-12-28 19:13:58 +01:00
671e5af45c Change discretizer algorithm 2022-12-25 12:11:00 +01:00
8e035ef196 feat: Add continuous features for datasets in Arff Files
Makes it possible to leave some already-discrete variables untouched when discretize is on in the .env file
2022-12-17 19:24:37 +01:00
Ricardo Montañana Gómez
9bff48832b Merge pull request #8 from Doctorado-ML/refactor_args
Refactor args and add be_init_project
2022-11-24 00:23:14 +01:00
104 changed files with 3462 additions and 2181 deletions


@@ -18,7 +18,7 @@ jobs:
     steps:
       - uses: actions/checkout@v3
       - name: Set up Python ${{ matrix.python }}
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
           python-version: ${{ matrix.python }}
       # Make dot command available in the environment
@@ -53,7 +53,7 @@ jobs:
           coverage run -m unittest -v benchmark.tests
           coverage xml
       - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v1
+        uses: codecov/codecov-action@v3
         with:
           token: ${{ secrets.CODECOV_TOKEN }}
           files: ./coverage.xml


@@ -1,12 +1,9 @@
 [![CI](https://github.com/Doctorado-ML/benchmark/actions/workflows/main.yml/badge.svg)](https://github.com/Doctorado-ML/benchmark/actions/workflows/main.yml)
 [![codecov](https://codecov.io/gh/Doctorado-ML/benchmark/branch/main/graph/badge.svg?token=ZRP937NDSG)](https://codecov.io/gh/Doctorado-ML/benchmark)
-[![Quality Gate Status](https://haystack.rmontanana.es:25000/api/project_badges/measure?project=benchmark&metric=alert_status&token=336a6e501988888543c3153baa91bad4b9914dd2)](https://haystack.rmontanana.es:25000/dashboard?id=benchmark)
-[![Technical Debt](https://haystack.rmontanana.es:25000/api/project_badges/measure?project=benchmark&metric=sqale_index&token=336a6e501988888543c3153baa91bad4b9914dd2)](https://haystack.rmontanana.es:25000/dashboard?id=benchmark)
 ![https://img.shields.io/badge/python-3.8%2B-blue](https://img.shields.io/badge/python-3.8%2B-brightgreen)
 # benchmark
-Benchmarking models
+Benchmarking Python models
 ## Experimentation
@@ -34,7 +31,7 @@ be_report -b STree
 ```python
 # Datasets list
-be_report
+be_report datasets
 # Report of given experiment
 be_report -f results/results_STree_iMac27_2021-09-22_17:13:02.json
 # Report of given experiment building excel file and compare with best results


@@ -13,21 +13,27 @@ ALL_METRICS = (
 class EnvData:
-    @staticmethod
-    def load():
-        args = {}
+    def __init__(self):
+        self.args = {}
+
+    def load(self):
         try:
             with open(Files.dot_env) as f:
                 for line in f.read().splitlines():
                     if line == "" or line.startswith("#"):
                         continue
                     key, value = line.split("=")
-                    args[key] = value
+                    self.args[key] = value
         except FileNotFoundError:
             print(NO_ENV, file=sys.stderr)
             exit(1)
         else:
-            return args
+            return self.args
+
+    def save(self):
+        with open(Files.dot_env, "w") as f:
+            for key, value in self.args.items():
+                f.write(f"{key}={value}\n")
 
 class EnvDefault(argparse.Action):
@@ -35,7 +41,7 @@ class EnvDefault(argparse.Action):
     def __init__(
         self, envvar, required=True, default=None, mandatory=False, **kwargs
     ):
-        self._args = EnvData.load()
+        self._args = EnvData().load()
         self._overrides = {}
         if required and not mandatory:
             default = self._args[envvar]
@@ -92,6 +98,17 @@ class Arguments(argparse.ArgumentParser):
                 "help": "dataset to work with",
             },
         ],
+        "discretize": [
+            ("--discretize",),
+            {
+                "action": EnvDefault,
+                "envvar": "discretize",
+                "required": True,
+                "help": "Discretize dataset",
+                "const": "1",
+                "nargs": "?",
+            },
+        ],
         "excel": [
             ("-x", "--excel"),
             {
@@ -101,6 +118,17 @@ class Arguments(argparse.ArgumentParser):
                 "help": "Generate Excel File",
             },
         ],
+        "fit_features": [
+            ("--fit_features",),
+            {
+                "action": EnvDefault,
+                "envvar": "fit_features",
+                "required": True,
+                "help": "Include features in fit call",
+                "const": "1",
+                "nargs": "?",
+            },
+        ],
         "grid_paramfile": [
             ("-g", "--grid_paramfile"),
             {
@@ -123,6 +151,15 @@ class Arguments(argparse.ArgumentParser):
             ("-p", "--hyperparameters"),
             {"type": str, "required": False, "default": "{}"},
         ],
+        "ignore_nan": [
+            ("--ignore-nan",),
+            {
+                "default": False,
+                "action": "store_true",
+                "required": False,
+                "help": "Ignore nan results",
+            },
+        ],
         "key": [
             ("-k", "--key"),
             {
@@ -198,6 +235,19 @@ class Arguments(argparse.ArgumentParser):
                 "help": "number of folds",
             },
         ],
+        "output": [
+            ("-o", "--output"),
+            {
+                "type": str,
+                "default": "local",
+                "choices": ["local", "docker"],
+                "required": False,
+                "help": (
+                    "in be_flask tells if it is running in local or "
+                    "in docker {local, docker}"
+                ),
+            },
+        ],
         "platform": [
             ("-P", "--platform"),
             {
@@ -251,6 +301,8 @@ class Arguments(argparse.ArgumentParser):
                 "envvar": "stratified",
                 "required": True,
                 "help": "Stratified",
+                "const": "1",
+                "nargs": "?",
             },
         ],
         "tex_output": [
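The refactored `EnvData` above turns a static `load()` into an instance API that can also `save()` the `.env` file back to disk. A self-contained sketch of that round-trip, with the file path passed in explicitly instead of the module's `Files.dot_env` (illustrative, not the project's code):

```python
import os
import tempfile


class EnvData:
    """Load and save KEY=value pairs from a .env-style file."""

    def __init__(self, path):
        self.path = path
        self.args = {}

    def load(self):
        with open(self.path) as f:
            for line in f.read().splitlines():
                if line == "" or line.startswith("#"):
                    continue  # skip blanks and comments
                key, value = line.split("=")
                self.args[key] = value
        return self.args

    def save(self):
        with open(self.path, "w") as f:
            for key, value in self.args.items():
                f.write(f"{key}={value}\n")


# round-trip: load, modify one key, save, reload
path = os.path.join(tempfile.mkdtemp(), ".env")
with open(path, "w") as f:
    f.write("# sample settings\ndiscretize=1\nstratified=0\n")
env = EnvData(path)
args = env.load()
env.args["stratified"] = "1"
env.save()
reloaded = EnvData(path).load()
print(reloaded)  # {'discretize': '1', 'stratified': '1'}
```

Because `load()` now fills `self.args`, a later `save()` can persist any override, which is what lets `EnvDefault` actions and the new `be_flask` tooling share one source of settings.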


@@ -2,10 +2,11 @@ import os
 from types import SimpleNamespace
 import pandas as pd
 import numpy as np
+import json
 from scipy.io import arff
 from .Utils import Files
 from .Arguments import EnvData
-from mdlp.discretization import MDLP
+from fimdlp.mdlp import FImdlp
 class Diterator:
@@ -27,6 +28,14 @@ class DatasetsArff:
     def folder():
         return "datasets"
+
+    @staticmethod
+    def get_range_features(X, c_features):
+        if c_features.strip() == "all":
+            return list(range(X.shape[1]))
+        if c_features.strip() == "none":
+            return []
+        return json.loads(c_features)
     def load(self, name, class_name):
         file_name = os.path.join(self.folder(), self.dataset_names(name))
         data = arff.loadarff(file_name)
@@ -34,7 +43,7 @@ class DatasetsArff:
         df.dropna(axis=0, how="any", inplace=True)
         self.dataset = df
         X = df.drop(class_name, axis=1)
-        self.features = X.columns
+        self.features = X.columns.to_list()
         self.class_name = class_name
         y, _ = pd.factorize(df[class_name])
         X = X.to_numpy()
@@ -50,6 +59,10 @@ class DatasetsTanveer:
     def folder():
         return "data"
+
+    @staticmethod
+    def get_range_features(X, name):
+        return []
     def load(self, name, *args):
         file_name = os.path.join(self.folder(), self.dataset_names(name))
         data = pd.read_csv(
@@ -75,6 +88,10 @@ class DatasetsSurcov:
     def folder():
         return "datasets"
+
+    @staticmethod
+    def get_range_features(X, name):
+        return []
     def load(self, name, *args):
         file_name = os.path.join(self.folder(), self.dataset_names(name))
         data = pd.read_csv(
@@ -93,41 +110,49 @@ class DatasetsSurcov:
 class Datasets:
-    def __init__(self, dataset_name=None):
-        envData = EnvData.load()
+    def __init__(self, dataset_name=None, discretize=None):
+        env_data = EnvData().load()
         # DatasetsSurcov, DatasetsTanveer, DatasetsArff,...
         source_name = getattr(
             __import__(__name__),
-            f"Datasets{envData['source_data']}",
+            f"Datasets{env_data['source_data']}",
         )
-        self.discretize = envData["discretize"] == "1"
+        self.discretize = (
+            env_data["discretize"] == "1"
+            if discretize is None
+            else discretize == "1"
+        )
         self.dataset = source_name()
+        self.class_names = []
+        self.data_sets = []
         # initialize self.class_names & self.data_sets
         class_names, sets = self._init_names(dataset_name)
         self.class_names = class_names
         self.data_sets = sets
+        self.states = {}  # states of discretized variables
     def _init_names(self, dataset_name):
         file_name = os.path.join(self.dataset.folder(), Files.index)
-        default_class = "class"
+        self.continuous_features = {}
         with open(file_name) as f:
             sets = f.read().splitlines()
-        class_names = [default_class] * len(sets)
-        if "," in sets[0]:
-            result = []
-            class_names = []
-            for data in sets:
-                name, class_name = data.split(",")
-                result.append(name)
-                class_names.append(class_name)
-            sets = result
+        sets = [x for x in sets if not x.startswith("#")]
+        results = []
+        class_names = []
+        for set_name in sets:
+            try:
+                name, class_name, features = set_name.split(";")
+            except ValueError:
+                class_name = "class"
+                features = "all"
+                name = set_name
+            results.append(name)
+            class_names.append(class_name)
+            features = features.strip()
+            self.continuous_features[name] = features
         # Set as dataset list the dataset passed as argument
         if dataset_name is None:
-            return class_names, sets
+            return class_names, results
         try:
-            class_name = class_names[sets.index(dataset_name)]
+            class_name = class_names[results.index(dataset_name)]
         except ValueError:
             raise ValueError(f"Unknown dataset: {dataset_name}")
         return [class_name], [dataset_name]
@@ -137,34 +162,54 @@ class Datasets:
         self.discretize = False
         X, y = self.load(name)
         attr = SimpleNamespace()
+        attr.dataset = name
         values, counts = np.unique(y, return_counts=True)
-        comp = ""
-        sep = ""
-        for count in counts:
-            comp += f"{sep}{count/sum(counts)*100:5.2f}%"
-            sep = "/ "
-        attr.balance = comp
-        attr.classes = len(np.unique(y))
+        attr.classes = len(values)
         attr.samples = X.shape[0]
         attr.features = X.shape[1]
+        attr.cont_features = len(self.get_continuous_features())
+        attr.distribution = {}
+        comp = ""
+        sep = ""
+        for value, count in zip(values, counts):
+            comp += f"{sep}{count/sum(counts)*100:5.2f}% ({count}) "
+            sep = "/ "
+            attr.distribution[value.item()] = count / sum(counts)
+        attr.balance = comp
         self.discretize = tmp
         return attr
     def get_features(self):
         return self.dataset.features
+    def get_states(self, name):
+        return self.states[name] if name in self.states else None
+    def get_continuous_features(self):
+        return self.continuous_features_dataset
     def get_class_name(self):
         return self.dataset.class_name
     def get_dataset(self):
         return self.dataset.dataset
+    def build_states(self, name, X):
+        features = self.get_features()
+        self.states[name] = {
+            features[i]: np.unique(X[:, i]).tolist() for i in range(X.shape[1])
+        }
     def load(self, name, dataframe=False):
         try:
             class_name = self.class_names[self.data_sets.index(name)]
             X, y = self.dataset.load(name, class_name)
+            self.continuous_features_dataset = self.dataset.get_range_features(
+                X, self.continuous_features[name]
+            )
             if self.discretize:
                 X = self.discretize_dataset(X, y)
+                self.build_states(name, X)
             dataset = pd.DataFrame(X, columns=self.get_features())
             dataset[self.get_class_name()] = y
             self.dataset.dataset = dataset
@@ -188,9 +233,8 @@ class Datasets:
         -------
         tuple (X, y) of numpy.ndarray
         """
-        discretiz = MDLP(random_state=17, dtype=np.int32)
-        Xdisc = discretiz.fit_transform(X, y)
-        return Xdisc
+        discretiz = FImdlp()
+        return discretiz.fit_transform(X, y)
     def __iter__(self) -> Diterator:
         return Diterator(self.data_sets)
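The new `get_range_features` contract accepts `all`, `none`, or a JSON list of column indices as the third field of a datasets index entry. The parsing rule in isolation (a sketch: it takes a column count instead of the `X` matrix used in the module):

```python
import json


def get_range_features(n_features, c_features):
    """Return the list of continuous-column indices described by c_features."""
    c_features = c_features.strip()
    if c_features == "all":
        return list(range(n_features))  # every column is continuous
    if c_features == "none":
        return []  # every column is already discrete
    return json.loads(c_features)  # explicit JSON list, e.g. "[0, 2]"


print(get_range_features(4, "all"))     # [0, 1, 2, 3]
print(get_range_features(4, " none "))  # []
print(get_range_features(4, "[0, 2]"))  # [0, 2]
```

This is what lets a dataset marked `name;class;[0, 2]` keep its already-discrete columns untouched while the listed continuous ones go through the discretizer.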


@@ -22,7 +22,7 @@ from .Arguments import EnvData
 class Randomized:
     @staticmethod
     def seeds():
-        return json.loads(EnvData.load()["seeds"])
+        return json.loads(EnvData().load()["seeds"])
 class BestResults:
@@ -112,8 +112,12 @@ class Experiment:
         platform,
         title,
         progress_bar=True,
+        ignore_nan=True,
+        fit_features=None,
+        discretize=None,
         folds=5,
     ):
+        env_data = EnvData().load()
         today = datetime.now()
         self.time = today.strftime("%H:%M:%S")
         self.date = today.strftime("%Y-%m-%d")
@@ -131,7 +135,18 @@ class Experiment:
         self.score_name = score_name
         self.model_name = model_name
         self.title = title
+        self.ignore_nan = ignore_nan
         self.stratified = stratified == "1"
+        self.discretize = (
+            env_data["discretize"] == "1"
+            if discretize is None
+            else discretize == "1"
+        )
+        self.fit_features = (
+            env_data["fit_features"] == "1"
+            if fit_features is None
+            else fit_features == "1"
+        )
         self.stratified_class = StratifiedKFold if self.stratified else KFold
         self.datasets = datasets
         dictionary = json.loads(hyperparams_dict)
@@ -184,7 +199,20 @@ class Experiment:
         self.leaves = []
         self.depths = []
-    def _n_fold_crossval(self, X, y, hyperparameters):
+    def _build_fit_params(self, name):
+        if not self.fit_features:
+            return None
+        res = dict(
+            features=self.datasets.get_features(),
+            class_name=self.datasets.get_class_name(),
+        )
+        states = self.datasets.get_states(name)
+        if states is None:
+            return res
+        res["state_names"] = states
+        return res
+
+    def _n_fold_crossval(self, name, X, y, hyperparameters):
         if self.scores != []:
             raise ValueError("Must init experiment before!")
         loop = tqdm(
@@ -201,6 +229,7 @@ class Experiment:
             shuffle=True, random_state=random_state, n_splits=self.folds
         )
         clf = self._build_classifier(random_state, hyperparameters)
+        fit_params = self._build_fit_params(name)
         self.version = Models.get_version(self.model_name, clf)
         with warnings.catch_warnings():
             warnings.filterwarnings("ignore")
@@ -209,11 +238,19 @@ class Experiment:
                 X,
                 y,
                 cv=kfold,
+                fit_params=fit_params,
                 return_estimator=True,
-                scoring=self.score_name,
+                scoring=self.score_name.replace("-", "_"),
             )
-            self.scores.append(res["test_score"])
-            self.times.append(res["fit_time"])
+            if np.isnan(res["test_score"]).any():
+                if not self.ignore_nan:
+                    print(res["test_score"])
+                    raise ValueError("NaN in results")
+                results = res["test_score"][~np.isnan(res["test_score"])]
+            else:
+                results = res["test_score"]
+            self.scores.extend(results)
+            self.times.extend(res["fit_time"])
             for result_item in res["estimator"]:
                 nodes_item, leaves_item, depth_item = Models.get_complexity(
                     self.model_name, result_item
@@ -245,6 +282,7 @@ class Experiment:
         output["model"] = self.model_name
         output["version"] = self.version
         output["stratified"] = self.stratified
+        output["discretized"] = self.discretize
         output["folds"] = self.folds
         output["date"] = self.date
         output["time"] = self.time
@@ -273,7 +311,7 @@ class Experiment:
             n_classes = len(np.unique(y))
             hyperparameters = self.hyperparameters_dict[name][1]
             self._init_experiment()
-            self._n_fold_crossval(X, y, hyperparameters)
+            self._n_fold_crossval(name, X, y, hyperparameters)
             self._add_results(name, hyperparameters, samp, feat, n_classes)
             self._output_results()
         self.duration = time.time() - now
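The NaN handling added to `_n_fold_crossval` either raises or silently drops NaN fold scores depending on `ignore_nan`. The same rule in isolation, using only the standard library instead of NumPy (the `filter_scores` helper is illustrative, not part of the module):

```python
import math


def filter_scores(test_scores, ignore_nan=True):
    """Drop NaN fold scores, or raise if ignore_nan is False."""
    if any(math.isnan(s) for s in test_scores):
        if not ignore_nan:
            raise ValueError("NaN in results")
        # keep only the folds that produced a valid score
        return [s for s in test_scores if not math.isnan(s)]
    return test_scores


print(filter_scores([0.9, float("nan"), 0.8]))  # [0.9, 0.8]
```

Note the diff switches from `append` to `extend`: per-fold scores are now stored flat, so dropping individual NaN folds does not skew the per-seed aggregation.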

benchmark/Manager.py (new file, 127 lines)

@@ -0,0 +1,127 @@
import os
from types import SimpleNamespace
import xlsxwriter
from benchmark.Results import Report
from benchmark.ResultsFiles import Excel
from benchmark.Utils import Files, Folders, TextColor


def get_input(message="", is_test=False):
    return "test" if is_test else input(message)


class Manage:
    def __init__(self, summary):
        self.summary = summary
        self.cmd = SimpleNamespace(
            quit="q", relist="r", delete="d", hide="h", excel="e"
        )

    def process_file(self, num, command, path):
        num = int(num)
        name = self.summary.data_filtered[num]["file"]
        file_name_result = os.path.join(path, name)
        verb1, verb2 = (
            ("delete", "Deleting")
            if command == self.cmd.delete
            else (
                "hide",
                "Hiding",
            )
        )
        conf_message = (
            TextColor.RED
            + f"Are you sure to {verb1} {file_name_result} (y/n)? "
        )
        confirm = get_input(message=conf_message)
        if confirm == "y":
            print(TextColor.YELLOW + f"{verb2} {file_name_result}")
            if command == self.cmd.delete:
                os.unlink(file_name_result)
            else:
                os.rename(
                    os.path.join(Folders.results, name),
                    os.path.join(Folders.hidden_results, name),
                )
            self.summary.data_filtered.pop(num)
        get_input(message="Press enter to continue")
        self.summary.list_results()

    def manage_results(self):
        """Manage results showed in the summary
        return True if excel file is created False otherwise
        """
        message = (
            TextColor.ENDC
            + f"Choose option {str(self.cmd).replace('namespace', '')}: "
        )
        path = (
            Folders.hidden_results if self.summary.hidden else Folders.results
        )
        book = None
        max_value = len(self.summary.data_filtered)
        while True:
            match get_input(message=message).split():
                case [self.cmd.relist]:
                    self.summary.list_results()
                case [self.cmd.quit]:
                    if book is not None:
                        book.close()
                        return True
                    return False
                case [self.cmd.hide, num] if num.isdigit() and int(
                    num
                ) < max_value:
                    if self.summary.hidden:
                        print("Already hidden")
                    else:
                        self.process_file(
                            num, path=path, command=self.cmd.hide
                        )
                case [self.cmd.delete, num] if num.isdigit() and int(
                    num
                ) < max_value:
                    self.process_file(
                        num=num, path=path, command=self.cmd.delete
                    )
                case [self.cmd.excel, num] if num.isdigit() and int(
                    num
                ) < max_value:
                    # Add to excel file result #num
                    book = self.add_to_excel(num, path, book)
                case [num] if num.isdigit() and int(num) < max_value:
                    # Report the result #num
                    self.report(num, path)
                case _:
                    print("Invalid option. Try again!")

    def report(self, num, path):
        num = int(num)
        file_name_result = os.path.join(
            path, self.summary.data_filtered[num]["file"]
        )
        try:
            rep = Report(file_name_result, compare=self.summary.compare)
            rep.report()
        except ValueError as e:
            print(e)

    def add_to_excel(self, num, path, book):
        num = int(num)
        file_name_result = os.path.join(
            path, self.summary.data_filtered[num]["file"]
        )
        if book is None:
            file_name = os.path.join(Folders.excel, Files.be_list_excel)
            book = xlsxwriter.Workbook(file_name, {"nan_inf_to_errors": True})
        excel = Excel(
            file_name=file_name_result,
            book=book,
            compare=self.summary.compare,
        )
        excel.report()
        print(f"Added {file_name_result} to {Files.be_list_excel}")
        return book


@@ -8,41 +8,64 @@ from sklearn.ensemble import (
 )
 from sklearn.svm import SVC
 from stree import Stree
-from bayesclass.clfs import TAN, KDB, AODE
+# from bayesclass.clfs import TAN, KDB, AODE, KDBNew, TANNew, AODENew, BoostAODE
 from wodt import Wodt
 from odte import Odte
 from xgboost import XGBClassifier
 import sklearn
 import xgboost
+import random
+
+
+class MockModel(SVC):
+    # Only used for testing
+    def predict(self, X):
+        if random.random() < 0.1:
+            return [float("NaN")] * len(X)
+        return super().predict(X)
+
+    def nodes_leaves(self):
+        return 0, 0
+
+    def fit(self, X, y, **kwargs):
+        kwargs.pop("state_names", None)
+        kwargs.pop("features", None)
+        return super().fit(X, y, **kwargs)
 class Models:
     @staticmethod
     def define_models(random_state):
         return {
             "STree": Stree(random_state=random_state),
-            "TAN": TAN(random_state=random_state),
-            "KDB": KDB(k=3),
-            "AODE": AODE(random_state=random_state),
+            # "TAN": TAN(random_state=random_state),
+            # "KDB": KDB(k=2),
+            # "TANNew": TANNew(random_state=random_state),
+            # "KDBNew": KDBNew(k=2),
+            # "AODENew": AODENew(random_state=random_state),
+            # "AODE": AODE(random_state=random_state),
+            # "BoostAODE": BoostAODE(random_state=random_state),
             "Cart": DecisionTreeClassifier(random_state=random_state),
             "ExtraTree": ExtraTreeClassifier(random_state=random_state),
             "Wodt": Wodt(random_state=random_state),
             "SVC": SVC(random_state=random_state),
             "ODTE": Odte(
-                base_estimator=Stree(random_state=random_state),
+                estimator=Stree(random_state=random_state),
                 random_state=random_state,
             ),
             "BaggingStree": BaggingClassifier(
-                base_estimator=Stree(random_state=random_state),
+                estimator=Stree(random_state=random_state),
                 random_state=random_state,
             ),
             "BaggingWodt": BaggingClassifier(
-                base_estimator=Wodt(random_state=random_state),
+                estimator=Wodt(random_state=random_state),
                 random_state=random_state,
             ),
             "XGBoost": XGBClassifier(random_state=random_state),
             "AdaBoostStree": AdaBoostClassifier(
-                base_estimator=Stree(
+                estimator=Stree(
                     random_state=random_state,
                 ),
                 algorithm="SAMME",
@@ -50,6 +73,7 @@ class Models:
             ),
             "GBC": GradientBoostingClassifier(random_state=random_state),
             "RandomForest": RandomForestClassifier(random_state=random_state),
+            "Mock": MockModel(random_state=random_state),
         }
     @staticmethod
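A note on the `base_estimator` → `estimator` renames in the diff above: scikit-learn deprecated the `base_estimator` keyword of its ensemble classifiers in version 1.2 in favour of `estimator`. Code that has to run on both sides of that change can pick the keyword from the installed version; a minimal dependency-free sketch (the `estimator_kwarg` helper is illustrative):

```python
def estimator_kwarg(sklearn_version):
    """Return the ensemble keyword name for a given scikit-learn version."""
    major, minor = (int(x) for x in sklearn_version.split(".")[:2])
    # renamed from base_estimator to estimator in scikit-learn 1.2
    return "estimator" if (major, minor) >= (1, 2) else "base_estimator"


print(estimator_kwarg("1.2.0"))   # estimator
print(estimator_kwarg("0.24.2"))  # base_estimator
```

The repository instead pins a single keyword, which is simpler but ties the code to scikit-learn ≥ 1.2 (consistent with the "Fix tests with new scikit-learn version" commit in the log).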

File diff suppressed because it is too large

446
benchmark/ResultsBase.py Normal file
View File

@@ -0,0 +1,446 @@
import abc
import json
import math
import os
from operator import itemgetter
from benchmark.Datasets import Datasets
from benchmark.Utils import NO_RESULTS, Files, Folders, TextColor
from .Arguments import ALL_METRICS, EnvData
from .Datasets import Datasets
from .Experiments import BestResults
from .Utils import Folders, Symbols
class BestResultsEver:
def __init__(self):
self.data = {}
for i in ["Tanveer", "Surcov", "Arff"]:
self.data[i] = {}
for metric in ALL_METRICS:
self.data[i][metric.replace("-", "_")] = ["self", 1.0]
self.data[i][metric] = ["self", 1.0]
self.data["Tanveer"]["accuracy"] = [
"STree_default (liblinear-ovr)",
40.282203,
]
self.data["Arff"]["accuracy"] = [
"STree_default (linear-ovo)",
22.109799,
]
def get_name_value(self, key, score):
return self.data[key][score]
class BaseReport(abc.ABC):
def __init__(self, file_name, best_file=False):
self.file_name = file_name
if not os.path.isfile(file_name):
if not os.path.isfile(os.path.join(Folders.results, file_name)):
raise FileNotFoundError(f"{file_name} does not exists!")
else:
self.file_name = os.path.join(Folders.results, file_name)
with open(self.file_name) as f:
self.data = json.load(f)
self.best_acc_file = best_file
if best_file:
self.lines = self.data
else:
self.lines = self.data["results"]
self.score_name = self.data["score_name"]
self.__load_env_data()
self.__compute_best_results_ever()
self._compare_totals = {}
def __load_env_data(self):
# Set the labels for nodes, leaves, depth
env_data = EnvData().load()
self.nodes_label = env_data["nodes"]
self.leaves_label = env_data["leaves"]
self.depth_label = env_data["depth"]
self.key = env_data["source_data"]
self.margin = float(env_data["margin"])
def __compute_best_results_ever(self):
best = BestResultsEver()
self.best_score_name, self.best_score_value = best.get_name_value(
self.key, self.score_name
)
def _get_accuracy(self, item):
return self.data[item][0] if self.best_acc_file else item["score"]
def report(self):
self.header()
accuracy_total = 0.0
for result in self.lines:
self.print_line(result)
accuracy_total += self._get_accuracy(result)
self.footer(accuracy_total)
def _load_best_results(self, score, model):
best = BestResults(score, model, Datasets())
self.best_results = best.load({})
def _compute_status(self, dataset, accuracy: float):
status = " "
if self.compare:
# Compare with best results
best = self.best_results[dataset][0]
if accuracy == best:
status = Symbols.equal_best
elif accuracy > best:
status = Symbols.better_best
else:
# compare with dataset label distribution only if it's a binary one
# cross if accuracy is not better than the ZeroR baseline + margin%
# upward_arrow if accuracy is greater than the ZeroR baseline + margin%
if self.score_name == "accuracy":
dt = Datasets()
attr = dt.get_attributes(dataset)
if attr.classes == 2:
max_category = max(attr.distribution.values())
max_value = max_category * (1 + self.margin)
if max_value > 1:
max_value = 0.9995
status = (
Symbols.cross
if accuracy <= max_value
else Symbols.upward_arrow
)
if status != " ":
if status not in self._compare_totals:
self._compare_totals[status] = 1
else:
self._compare_totals[status] += 1
return status
def _status_meaning(self, status):
meaning = {
Symbols.equal_best: "Equal to best",
Symbols.better_best: "Better than best",
Symbols.cross: "Less than or equal to ZeroR",
Symbols.upward_arrow: f"Better than ZeroR + "
f"{self.margin*100:3.1f}%",
}
return meaning[status]
def _get_best_accuracy(self):
return self.best_score_value
def _get_message_best_accuracy(self):
return f"{self.score_name} compared to {self.best_score_name} .:"
@abc.abstractmethod
def header(self) -> None:
pass
@abc.abstractmethod
def print_line(self, result) -> None:
pass
@abc.abstractmethod
def footer(self, accuracy: float) -> None:
pass
class StubReport(BaseReport):
def __init__(self, file_name, compare=False):
self.compare = compare
super().__init__(file_name=file_name, best_file=False)
if self.compare:
self._load_best_results(self.score_name, self.data["model"])
def print_line(self, line) -> None:
pass
def header(self) -> None:
self.title = self.data["title"]
self.duration = self.data["duration"]
self.model = self.data["model"]
self.date = self.data["date"]
self.time = self.data["time"]
self.metric = self.data["score_name"]
self.platform = self.data["platform"]
def footer(self, accuracy: float) -> None:
self.accuracy = accuracy
self.score = accuracy / self._get_best_accuracy()
class Summary:
def __init__(self, hidden=False, compare=False) -> None:
self.results = Files.get_all_results(hidden=hidden)
self.data = []
self.data_filtered = []
self.datasets = {}
self.models = set()
self.hidden = hidden
self.compare = compare
def get_models(self):
return sorted(self.models)
def acquire(self, given_score="any") -> None:
"""Get all results"""
for result in self.results:
(
score,
model,
platform,
date,
time,
stratified,
) = Files().split_file_name(result)
if given_score in ("any", score):
self.models.add(model)
report = StubReport(
os.path.join(
(
Folders.hidden_results
if self.hidden
else Folders.results
),
result,
)
)
report.report()
entry = dict(
score=score,
model=model,
title=report.title,
platform=platform,
date=date,
time=time,
stratified=stratified,
file=result,
metric=report.score,
duration=report.duration,
)
self.datasets[result] = report.lines
self.data.append(entry)
def get_results_criteria(
self, score, model, input_data, sort_key, number, nan=False
):
data = self.data.copy() if input_data is None else input_data
if score:
data = [x for x in data if x["score"] == score]
if model:
data = [x for x in data if x["model"] == model]
if nan:
# NaN is the only value not equal to itself
data = [x for x in data if x["metric"] != x["metric"]]
keys = (
itemgetter(sort_key, "time")
if sort_key == "date"
else itemgetter(sort_key, "date", "time")
)
data = sorted(data, key=keys, reverse=True)
if number > 0:
data = data[:number]
return data
def list_results(
self,
score=None,
model=None,
input_data=None,
sort_key="date",
number=0,
nan=False,
) -> None:
"""Print the list of results"""
if self.data_filtered == []:
self.data_filtered = self.get_results_criteria(
score, model, input_data, sort_key, number, nan=nan
)
if self.data_filtered == []:
raise ValueError(NO_RESULTS)
max_file = max(len(x["file"]) for x in self.data_filtered)
max_title = max(len(x["title"]) for x in self.data_filtered)
if self.hidden:
color1 = TextColor.GREEN
color2 = TextColor.YELLOW
else:
color1 = TextColor.LINE1
color2 = TextColor.LINE2
print(color1, end="")
print(
f" # {'Date':10s} {'File':{max_file}s} {'Score':8s} "
f"{'Time(h)':7s} {'Title':s}"
)
print(
"===",
"=" * 10
+ " "
+ "=" * max_file
+ " "
+ "=" * 8
+ " "
+ "=" * 7
+ " "
+ "=" * max_title,
)
print(
"\n".join(
[
(color2 if n % 2 == 0 else color1) + f"{n:3d} "
f"{x['date']} {x['file']:{max_file}s} "
f"{x['metric']:8.5f} "
f"{x['duration']/3600:7.3f} "
f"{x['title']}"
for n, x in enumerate(self.data_filtered)
]
)
)
def show_result(self, data: dict, title: str = "") -> None:
def whites(n: int) -> str:
return " " * n + color1 + "*"
if data == {}:
print(f"** {title} has No data **")
return
color1 = TextColor.CYAN
color2 = TextColor.YELLOW
file_name = data["file"]
metric = data["metric"]
result = StubReport(os.path.join(Folders.results, file_name))
length = 81
print(color1 + "*" * length)
if title != "":
print(
"*"
+ color2
+ TextColor.BOLD
+ f"{title:^{length - 2}s}"
+ TextColor.ENDC
+ color1
+ "*"
)
print("*" + "-" * (length - 2) + "*")
print("*" + whites(length - 2))
print(
"* "
+ color2
+ f"{result.data['title']:^{length - 4}}"
+ color1
+ " *"
)
print("*" + whites(length - 2))
print(
"* Model: "
+ color2
+ f"{result.data['model']:15s} "
+ color1
+ "Ver. "
+ color2
+ f"{result.data['version']:10s} "
+ color1
+ "Score: "
+ color2
+ f"{result.data['score_name']:10s} "
+ color1
+ "Metric: "
+ color2
+ f"{metric:10.7f}"
+ whites(length - 78)
)
print(color1 + "*" + whites(length - 2))
print(
"* Date : "
+ color2
+ f"{result.data['date']:15s}"
+ color1
+ " Time: "
+ color2
+ f"{result.data['time']:18s} "
+ color1
+ "Time Spent: "
+ color2
+ f"{result.data['duration']:9,.2f}"
+ color1
+ " secs."
+ whites(length - 78)
)
seeds = str(result.data["seeds"])
seeds_len = len(seeds)
print(
"* Seeds: "
+ color2
+ f"{seeds:{seeds_len}s} "
+ color1
+ "Platform: "
+ color2
+ f"{result.data['platform']:17s} "
+ whites(length - 79)
)
print(
"* Stratified: "
+ color2
+ f"{str(result.data['stratified']):15s}"
+ whites(length - 30)
)
print("* " + color2 + f"{file_name:60s}" + whites(length - 63))
print(color1 + "*" + whites(length - 2))
print(color1 + "*" * length)
def best_results(self, criterion=None, value=None, score="accuracy", n=10):
# First filter the same score results (accuracy, f1, ...)
haystack = [x for x in self.data if x["score"] == score]
haystack = (
haystack
if criterion is None or value is None
else [x for x in haystack if x[criterion] == value]
)
if haystack == []:
raise ValueError(NO_RESULTS)
return sorted(
haystack,
key=lambda x: -1.0 if math.isnan(x["metric"]) else x["metric"],
reverse=True,
)[:n]
def best_result(
self, criterion=None, value=None, score="accuracy"
) -> dict:
return self.best_results(criterion, value, score)[0]
def best_results_datasets(self, score="accuracy") -> dict:
"""Get the best results for each dataset"""
dt = Datasets()
best_results = {}
for dataset in dt:
best_results[dataset] = (0, "", "", "")
haystack = [x for x in self.data if x["score"] == score]
# Search for the best results for each dataset
for entry in haystack:
for dataset in self.datasets[entry["file"]]:
if dataset["score"] > best_results[dataset["dataset"]][0]:
best_results[dataset["dataset"]] = (
dataset["score"],
dataset["hyperparameters"],
entry["file"],
entry["title"],
)
return best_results
def show_top(self, score="accuracy", n=10):
try:
self.list_results(
score=score,
input_data=self.best_results(score=score, n=n),
sort_key="metric",
)
except ValueError as e:
print(e)
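Two NaN idioms appear in `Summary` above: `get_results_criteria` keeps NaN metrics via `x != x`, and `best_results` sorts NaN entries last by mapping them to `-1.0`. A standalone sketch of both, with hypothetical sample rows:

```python
# x != x keeps only NaN metrics (NaN is the only value unequal to
# itself); mapping NaN to -1.0 in the sort key pushes NaN rows to the
# end when sorting descending.
import math

rows = [{"metric": 0.9}, {"metric": float("nan")}, {"metric": 0.8}]

nan_rows = [x for x in rows if x["metric"] != x["metric"]]  # NaN only
ranked = sorted(
    rows,
    key=lambda x: -1.0 if math.isnan(x["metric"]) else x["metric"],
    reverse=True,
)
```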

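The `_compute_status` comparison in the listing above can be condensed into a standalone sketch; `zero_r_status` is a hypothetical name, and plain strings stand in for `Symbols`:

```python
# Baseline rule from BaseReport._compute_status: on binary datasets,
# an accuracy that does not beat the majority-class (ZeroR) frequency
# plus a margin is flagged.

def zero_r_status(distribution, accuracy, margin=0.1):
    """Compare accuracy against the ZeroR baseline plus a margin.

    distribution maps class label -> relative frequency (sums to 1).
    Returns "cross" (not better than baseline) or "upward_arrow".
    """
    max_category = max(distribution.values())  # ZeroR accuracy
    threshold = max_category * (1 + margin)
    if threshold > 1:
        threshold = 0.9995  # cap, mirroring the listing above
    return "cross" if accuracy <= threshold else "upward_arrow"
```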
benchmark/ResultsFiles.py (new file, 1044 lines)

File diff suppressed because it is too large

@@ -13,6 +13,9 @@ class Folders:
exreport = "exreport"
report = os.path.join(exreport, "exreport_output")
img = "img"
+excel = "excel"
+sql = "sql"
+current = os.getcwd()
@staticmethod @staticmethod
def src(): def src():
@@ -106,7 +109,8 @@ class Files:
)
return None
-def get_all_results(self, hidden) -> list[str]:
+@staticmethod
+def get_all_results(hidden) -> list[str]:
result_path = os.path.join(
".", Folders.hidden_results if hidden else Folders.results
)
@@ -115,7 +119,7 @@ class Files:
else:
raise ValueError(f"{result_path} does not exist")
result = []
-prefix, suffix = self.results_suffixes()
+prefix, suffix = Files.results_suffixes()
for result_file in files_list:
if result_file.startswith(prefix) and result_file.endswith(suffix):
result.append(result_file)
@@ -126,6 +130,9 @@ class Symbols:
check_mark = "\N{heavy check mark}"
exclamation = "\N{heavy exclamation mark symbol}"
black_star = "\N{black star}"
+cross = "\N{Ballot X}"
+upward_arrow = "\N{Black-feathered north east arrow}"
+down_arrow = "\N{downwards black arrow}"
equal_best = check_mark
better_best = black_star

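The `Symbols` additions above use Python's `\N{...}` named escapes, which are resolved at compile time. A small standalone sketch (names uppercased here for clarity; `unicodedata.name` recovers the canonical name from a character):

```python
# \N{...} escapes turn a Unicode character name into the character;
# unicodedata.name goes the other way.
import unicodedata

check_mark = "\N{HEAVY CHECK MARK}"
black_star = "\N{BLACK STAR}"

print(unicodedata.name(black_star))  # prints BLACK STAR
```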

@@ -1,3 +1,4 @@
+from .ResultsBase import Summary
from .Datasets import (
Datasets,
DatasetsSurcov,
@@ -5,11 +6,11 @@ from .Datasets import (
DatasetsArff,
)
from .Experiments import Experiment
-from .Results import Report, Summary
+from .Results import Report
from ._version import __version__
__author__ = "Ricardo Montañana Gómez"
-__copyright__ = "Copyright 2020-2023, Ricardo Montañana Gómez"
+__copyright__ = "Copyright 2020-2024, Ricardo Montañana Gómez"
__license__ = "MIT License"
__author_email__ = "ricardo.montanana@alu.uclm.es"


@@ -1 +1 @@
-__version__ = "0.4.0"
+__version__ = "1.0.1"


benchmark/scripts/app/app.py (new executable file, 20 lines)

@@ -0,0 +1,20 @@
#!/usr/bin/env python
from benchmark.Arguments import EnvData
from flask import Flask
from .main import main, OUTPUT
FRAMEWORK = "framework"
FRAMEWORKS = "frameworks"
TEST = "test"
def create_app(output="local"):
app = Flask(__name__)
config = EnvData().load()
app.register_blueprint(main)
app.config[FRAMEWORK] = config[FRAMEWORK]
app.config[FRAMEWORKS] = ["bootstrap", "bulma"]
app.config[OUTPUT] = output
app.jinja_env.auto_reload = True
app.config["TEMPLATES_AUTO_RELOAD"] = True
return app

benchmark/scripts/app/main.py (new executable file, 210 lines)

@@ -0,0 +1,210 @@
#!/usr/bin/env python
import os
import json
import shutil
import xlsxwriter
from dotenv import dotenv_values
from benchmark.Utils import Files, Folders
from benchmark.Arguments import EnvData
from benchmark.ResultsBase import StubReport
from benchmark.ResultsFiles import Excel, ReportDatasets
from benchmark.Datasets import Datasets
from flask import Blueprint, current_app, send_file
from flask import render_template, request, redirect, url_for
main = Blueprint("main", __name__)
FRAMEWORK = "framework"
FRAMEWORKS = "frameworks"
OUTPUT = "output"
TEST = "test"
class AjaxResponse:
def __init__(self, success, file_name, code=200):
self.success = success
self.file_name = file_name
self.code = code
def to_string(self):
return (
json.dumps(
{
"success": self.success,
"file": self.file_name,
"output": current_app.config[OUTPUT],
}
),
self.code,
{"ContentType": "application/json"},
)
def process_data(file_name, compare, data):
report = StubReport(
os.path.join(Folders.results, file_name), compare=compare
)
new_list = []
for result in data["results"]:
symbol = report._compute_status(result["dataset"], result["score"])
result["symbol"] = symbol if symbol != " " else "&nbsp;"
new_list.append(result)
data["results"] = new_list
# Compute summary with explanation of symbols
summary = {}
for key, value in report._compare_totals.items():
summary[key] = (report._status_meaning(key), value)
return summary
@main.route("/index/<compare>")
@main.route("/")
def index(compare="False"):
# Get a list of files in a directory
files = {}
names = Files.get_all_results(hidden=False)
for name in names:
report = StubReport(os.path.join(Folders.results, name))
report.report()
files[name] = {
"duration": report.duration,
"score": report.score,
"title": report.title,
}
candidate = current_app.config[FRAMEWORKS].copy()
candidate.remove(current_app.config[FRAMEWORK])
return render_template(
"select.html",
files=files,
candidate=candidate[0],
framework=current_app.config[FRAMEWORK],
compare=compare.capitalize() == "True",
)
@main.route("/datasets/<compare>")
def datasets(compare):
dt = Datasets()
datos = []
for dataset in dt:
datos.append(dt.get_attributes(dataset))
return render_template(
"datasets.html",
datasets=datos,
compare=compare,
framework=current_app.config[FRAMEWORK],
)
@main.route("/showfile/<file_name>/<compare>")
def showfile(file_name, compare, back=None):
compare = compare.capitalize() == "True"
back = request.args["url"] if back is None else back
app_config = dotenv_values(".env")
with open(os.path.join(Folders.results, file_name)) as f:
data = json.load(f)
try:
summary = process_data(file_name, compare, data)
except Exception as e:
return render_template("error.html", message=str(e), compare=compare)
return render_template(
"report.html",
data=data,
file=file_name,
summary=summary,
framework=current_app.config[FRAMEWORK],
back=back,
app_config=app_config,
)
@main.route("/show", methods=["post"])
def show():
selected_file = request.form["selected-file"]
compare = request.form["compare"]
return showfile(
file_name=selected_file,
compare=compare,
back=url_for(
"main.index", compare=compare, output=current_app.config[OUTPUT]
),
)
@main.route("/excel", methods=["post"])
def excel():
selected_files = request.json["selectedFiles"]
compare = request.json["compare"]
book = None
if selected_files[0] == "datasets":
# Create a list of datasets
report = ReportDatasets(excel=True, output=False)
report.report()
excel_name = os.path.join(Folders.excel, Files.datasets_report_excel)
if current_app.config[OUTPUT] == "local":
Files.open(excel_name, test=current_app.config[TEST])
return AjaxResponse(True, Files.datasets_report_excel).to_string()
try:
for file_name in selected_files:
file_name_result = os.path.join(Folders.results, file_name)
if book is None:
file_excel = os.path.join(Folders.excel, Files.be_list_excel)
book = xlsxwriter.Workbook(
file_excel, {"nan_inf_to_errors": True}
)
excel = Excel(
file_name=file_name_result,
book=book,
compare=compare,
)
excel.report()
except Exception as e:
if book is not None:
book.close()
return AjaxResponse(
False, "Could not create excel file, " + str(e)
).to_string()
if book is not None:
book.close()
if current_app.config[OUTPUT] == "local":
Files.open(file_excel, test=current_app.config[TEST])
return AjaxResponse(True, Files.be_list_excel).to_string()
@main.route("/download/<file_name>")
def download(file_name):
src = os.path.join(Folders.current, Folders.excel, file_name)
dest = os.path.join(
Folders.src(), "scripts", "app", "static", "excel", file_name
)
shutil.copyfile(src, dest)
return send_file(dest, as_attachment=True)
@main.route("/config/<framework>/<compare>")
def config(framework, compare):
if framework not in current_app.config[FRAMEWORKS]:
message = f"framework {framework} not supported"
return render_template("error.html", message=message)
env = EnvData()
env.load()
env.args[FRAMEWORK] = framework
env.save()
current_app.config[FRAMEWORK] = framework
return redirect(url_for("main.index", compare=compare))
@main.route("/best_results/<file>/<compare>")
def best_results(file, compare):
compare = compare.capitalize() == "True"
try:
with open(os.path.join(Folders.results, file)) as f:
data = json.load(f)
except Exception as e:
return render_template("error.html", message=str(e), compare=compare)
return render_template(
"report_best.html",
data=data,
compare=compare,
framework=current_app.config[FRAMEWORK],
)

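`AjaxResponse.to_string` in the listing above builds a `(body, status, headers)` tuple, which Flask accepts as a view response. A framework-free sketch of that contract; `ajax_payload` is a hypothetical helper, and the `"ContentType"` header key mirrors the listing rather than the standard `Content-Type` spelling:

```python
# Build the same 3-tuple shape that AjaxResponse.to_string returns,
# without needing a Flask app context.
import json

def ajax_payload(success, file_name, output="local", code=200):
    body = json.dumps(
        {"success": success, "file": file_name, "output": output}
    )
    return body, code, {"ContentType": "application/json"}
```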

@@ -0,0 +1,30 @@
.alternate-font {
font-family: Arial;
}
tbody {
font-family: Courier;
}
.tag {
cursor: pointer;
}
.ajaxLoading {
cursor: progress !important;
}
#file-table tbody tr.selected td {
background-color: #0dcaf0;
color: white;
}
#report-table tbody tr.selected td {
background-color: #0dcaf0;
color: white;
}
.btn-small {
padding: 0.25rem 0.5rem;
font-size: 0.75rem;
}


@@ -0,0 +1 @@
*.xlsx


@@ -0,0 +1,29 @@
function excelFiles(selectedFiles, compare) {
var data = {
"selectedFiles": selectedFiles,
"compare": compare
};
// send data to server with ajax post
$.ajax({
type:'POST',
url:'/excel',
data: JSON.stringify(data),
contentType: "application/json",
dataType: 'json',
success: function(data){
if (data.success) {
if (data.output == "local") {
alert("File generated: " + data.file);
} else {
window.open('/download/' + data.file, "_blank");
}
} else {
alert(data.file);
}
},
error: function (xhr, ajaxOptions, thrownError) {
var mensaje = JSON.parse(xhr.responseText || '{\"mensaje\": \"Unknown error\"}');
alert(mensaje.mensaje);
}
});
}


@@ -0,0 +1,20 @@
<!DOCTYPE html>
<html>
<head>
<title>{{ title }}</title>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha3/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-KK94CHFLLe+nY2dmCWGMq91rCGa5gtU4mk92HdvYe+M/SXH301p5ILy+dN9+nJOZ" crossorigin="anonymous" />
<link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,500,600" rel="stylesheet" />
<link rel="stylesheet" href="https://cdn.datatables.net/1.10.25/css/jquery.dataTables.min.css" />
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/MaterialDesign-Webfont/7.1.96/css/materialdesignicons.css" integrity="sha512-lD1LHcZ8tFHvMFNeo6qOLY/HjzSPCasPJOAoir22byDxlZI1R71S5lZel8zRL2TZ+Dut1wOHfYgSU2lHXuL00w==" crossorigin="anonymous" referrerpolicy="no-referrer" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
</head>
<body>
{% block content %}
{% endblock %}
</body>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
{% block jscript %}
{% endblock %}
</html>


@@ -0,0 +1,19 @@
<!DOCTYPE html>
<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.9.3/css/bulma.min.css" />
<link rel="stylesheet" href="https://cdn.datatables.net/1.10.25/css/jquery.dataTables.min.css" />
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/MaterialDesign-Webfont/7.1.96/css/materialdesignicons.css" integrity="sha512-lD1LHcZ8tFHvMFNeo6qOLY/HjzSPCasPJOAoir22byDxlZI1R71S5lZel8zRL2TZ+Dut1wOHfYgSU2lHXuL00w==" crossorigin="anonymous" referrerpolicy="no-referrer" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}" />
</head>
<body>
{% block content %}
{% endblock %}
</body>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
{% block jscript %}
{% endblock %}
</html>


@@ -0,0 +1,68 @@
{% extends 'base_' ~ framework ~ '.html' %}
{% macro javascript(file) %}
<script src="{{ url_for('static', filename=file) }}"></script>
{% endmacro %}
{% if framework == 'bootstrap' %}
{% set button_class = 'btn btn-primary btn-small' %}
{% set h1_class = 'text-center' %}
{% set table_class = 'table table-striped table-hover table-bordered' %}
{% set head_class = 'bg-primary text-white' %}
{% set text_right = 'text-end' %}
{% set container = 'container' %}
{% set selected = 'selected' %}
{%- macro header(title, close, url) -%}
<div class="p-4 bg-primary text-white">
{%- if close -%}
<button type="button" class="btn-close" aria-label="Close" onclick="location.href = '{{ url }}'"></button>
{%- endif -%}
<h1 class="alternate-font">{{ title }}</h1>
</div>
{%- endmacro -%}
{% else %}
{% set button_class = 'button is-primary is-small' %}
{% set h1_class = 'title is-1 has-text-centered' %}
{% set table_class = 'table is-striped is-hoverable cell-border is-bordered' %}
{% set head_class = 'is-selected' %}
{% set text_right = 'has-text-right' %}
{% set container = 'container' %}
{% set selected = 'is-selected' %}
{%- macro header(title, close, url) -%}
<div class="hero is-info is-bold">
<div class="hero-body">
{%- if close -%}
<button class="delete is-large" onclick="location.href = '{{ url }}'"></button>
{%- endif -%}
<h1 class="is-size-3 alternate-font">{{ title }}</h1>
</div>
</div>
{%- endmacro -%}
{% endif %}
{% block content %}
<div class="{{ container }}">
{{ header('Benchmark Datasets Report', True, url_for('main.index', compare = compare)) }}
<button class="{{ button_class }}" onclick="excelFiles(['datasets'], false)"><i class="mdi mdi-file-excel"></i> Excel</button>
{% include 'partials/datasets_table.html' %}
</div>
{% endblock %}
{% block jscript %}
{{ javascript("js/excelFiles.js") }}
<script>
$(document).ready(function () {
$(document).ajaxStart(function(){
$("body").addClass('ajaxLoading');
});
$(document).ajaxStop(function(){
$("body").removeClass('ajaxLoading');
});
});
// Check if row is selected
$('#file-table tbody').on('click', 'tr', function () {
if ($(this).hasClass('{{ selected }}')) {
$(this).removeClass('{{ selected }}');
} else {
$('#file-table tbody tr.{{ selected }}').removeClass("{{ selected }}")
$(this).addClass('{{ selected }}');
}
});
</script>
{% endblock %}


@@ -0,0 +1,20 @@
<!DOCTYPE html>
<html>
<head>
<title>Error</title>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha3/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384-KK94CHFLLe+nY2dmCWGMq91rCGa5gtU4mk92HdvYe+M/SXH301p5ILy+dN9+nJOZ" crossorigin="anonymous">
</head>
<body>
<div class="container">
<div class="alert alert-danger my-5" role="alert">
<h4 class="alert-heading"><button class="btn-close btn-sm" type="button"
onclick="location.href='/index/{{ compare }}';"></button>Error</h4>
<p>There was an error processing the action: {{ message }}. Please try again later.</p>
<hr>
<p class="mb-0">If the problem persists, please contact support.</p>
</div>
</div>
</body>
</html>


@@ -0,0 +1,22 @@
$(document).ready(function () {
// Check if row is selected
$('#report-table tbody').on('click', 'tr', function () {
if ($(this).hasClass('{{ selected }}')) {
$(this).removeClass('{{ selected }}');
} else {
$('#report-table tbody tr.{{ selected }}').removeClass("{{ selected }}")
$(this).addClass('{{ selected }}');
}
});
$(document).ajaxStart(function(){
$("body").addClass('ajaxLoading');
});
$(document).ajaxStop(function(){
$("body").removeClass('ajaxLoading');
});
});
function excelFile() {
var selectedFiles = ["{{ file }}"];
var compare = "{{ compare }}" == "True";
excelFiles(selectedFiles, compare)
}


@@ -0,0 +1,97 @@
$(document).ready(function () {
var table = $("#file-table").DataTable({
paging: true,
searching: true,
ordering: true,
info: true,
"select.items": "row",
pageLength: 25,
columnDefs: [
{
targets: 8,
orderable: false,
},
],
//"language": {
// "lengthMenu": "_MENU_"
//}
});
$('#file-table').on( 'draw.dt', function () {
enable_disable_best_buttons();
} );
// Check if row is selected
$("#file-table tbody").on("click", "tr", function () {
if ($(this).hasClass("{{ select.selected() }}")) {
$(this).removeClass("{{ select.selected() }}");
} else {
table
.$("tr.{{ select.selected() }}")
.removeClass("{{ select.selected() }}");
$(this).addClass("{{ select.selected() }}");
}
});
// Show file with doubleclick
$("#file-table tbody").on("dblclick", "tr", function () {
showFile($(this).attr("id"));
});
$(document).ajaxStart(function () {
$("body").addClass("ajaxLoading");
});
$(document).ajaxStop(function () {
$("body").removeClass("ajaxLoading");
});
$('#compare').change(function() {
enable_disable_best_buttons();
});
enable_disable_best_buttons();
});
function enable_disable_best_buttons(){
if ($('#compare').is(':checked')) {
$("[name='best_buttons']").addClass("tag is-link is-normal");
$("[name='best_buttons']").removeAttr("hidden");
} else {
$("[name='best_buttons']").removeClass("tag is-link is-normal");
$("[name='best_buttons']").attr("hidden", true);
}
}
function showFile(selectedFile) {
var form = $(
'<form action="/show" method="post">' +
'<input type="hidden" name="selected-file" value="' +
selectedFile +
'" />' +
'<input type="hidden" name="compare" value=' +
$("#compare").is(":checked") +
" />" +
"</form>"
);
$("body").append(form);
form.submit();
}
function excel() {
var checkbox = document.getElementsByName("selected_files");
var selectedFiles = [];
for (var i = 0; i < checkbox.length; i++) {
if (checkbox[i].checked) {
selectedFiles.push(checkbox[i].value);
}
}
if (selectedFiles.length == 0) {
alert("Select at least one file");
return;
}
var compare = $("#compare").is(":checked");
excelFiles(selectedFiles, compare);
}
function setCheckBoxes(value) {
var checkbox = document.getElementsByName("selected_files");
for (var i = 0; i < checkbox.length; i++) {
checkbox[i].checked = value;
}
}
function redirectDouble(route, parameter) {
location.href = "/"+ route + "/" + parameter + "/" + $("#compare").is(":checked");
}
function redirectSimple(route) {
location.href = "/" + route + "/" + $("#compare").is(":checked");
}


@@ -0,0 +1,56 @@
{%- macro header(title, close=False, url="") -%}
<div class="p-4 bg-primary text-white">
{%- if close -%}
<button type="button" class="btn-close" aria-label="Close" onclick="location.href = '{{url}}'"></button>
{%- endif -%}
<h1 class="alternate-font">{{ title }}</h1>
</div>
{%- endmacro -%}
{%- macro get_table_class() -%}
table table-striped table-hover table-bordered
{%- endmacro -%}
{%- macro icon(icon_name) -%}
<i class="mdi mdi-{{icon_name}}"></i>
{%- endmacro -%}
{%- macro get_button(text, action) -%}
<button class="btn btn-primary btn-small" onclick="{{ action }}">{{ text|safe }}</button>
{%- endmacro -%}
{%- macro get_button_class() -%}
btn btn-primary btn-small
{%- endmacro %}
{%- macro get_button_tag(icon_name, method, visible=True, name="") -%}
<button class="btn btn-primary btn-small" onclick="{{ method }}" {{ "" if visible else "hidden='true'" }} {{ "" if name=="" else "name='" + name +"'"}}><i class="mdi mdi-{{ icon_name }}"></i></button>
{%- endmacro -%}
{%- macro get_button_reset() -%}
<button class="btn btn-primary btn-small btn-danger" onclick="setCheckBoxes(false)"><i class="mdi mdi-checkbox-multiple-blank"></i></button>
{%- endmacro -%}
{%- macro get_button_all() -%}
<button class="btn btn-primary btn-small btn-success" onclick="setCheckBoxes(true)"><i class="mdi mdi-checkbox-multiple-marked"></i></button>
{%- endmacro -%}
{%- macro get_tag_class() -%}
badge bg-info bg-small
{%- endmacro -%}
{%- macro get_container_class() -%}
container-fluid
{%- endmacro -%}
{%- macro selected() -%}
selected
{%- endmacro -%}
{%- macro get_level_class() -%}
navbar
{%- endmacro -%}
{%- macro get_align_right() -%}
text-end
{%- endmacro -%}
{%- macro get_left_position() -%}
float-left
{%- endmacro -%}
{%- macro get_right_position() -%}
float-right
{%- endmacro -%}
{%- macro get_row_head_class() -%}
bg-primary text-white
{%- endmacro -%}
{%- macro get_align_center() -%}
text-center
{%- endmacro -%}


@@ -0,0 +1,58 @@
{%- macro header(title, close=False, url="") -%}
<div class="hero is-info is-bold">
<div class="hero-body">
{%- if close -%}
<button class="delete is-large" onclick="location.href = '{{ url }}'"></button>
{%- endif -%}
<h1 class="is-size-3 alternate-font">{{ title }}</h1>
</div>
</div>
{%- endmacro -%}
{%- macro get_table_class() -%}
table is-striped is-hoverable cell-border is-bordered
{%- endmacro -%}
{%- macro icon(icon_name) -%}
<i class="mdi mdi-{{icon_name}}"></i>
{%- endmacro -%}
{%- macro get_button(text, action) -%}
<button class="button is-primary is-small" onclick="{{ action }}">{{ text|safe }}</button>
{%- endmacro -%}
{%- macro get_button_tag(icon_name, method, visible=True, name="") -%}
<span class="{{ "tag is-link is-normal" if visible else "" }}" type="button" onclick="{{ method }}" {{ "" if visible else "hidden='true'" }} {{ "" if name=="" else "name='" + name +"'"}}>{{icon(icon_name)}}</span>
{%- endmacro -%}
{%- macro get_button_reset() -%}
<span class="tag is-link is-danger" type="button" onclick="setCheckBoxes(false)"><i class="mdi mdi-checkbox-multiple-blank"></i></span>
{%- endmacro -%}
{%- macro get_button_all() -%}
<span class="tag is-link is-success" type="button" onclick="setCheckBoxes(true)"><i class="mdi mdi-checkbox-multiple-marked"></i></span>
{%- endmacro -%}
{%- macro get_tag_class() -%}
tag is-info is-small
{%- endmacro -%}
{%- macro get_container_class() -%}
container is-fluid
{%- endmacro -%}
{%- macro selected() -%}
is-selected
{%- endmacro -%}
{%- macro get_level_class() -%}
level
{%- endmacro -%}
{%- macro get_align_right() -%}
has-text-right
{%- endmacro -%}
{%- macro get_align_center() -%}
has-text-centered
{%- endmacro -%}
{%- macro get_left_position() -%}
float-left
{%- endmacro -%}
{%- macro get_right_position() -%}
float-right
{%- endmacro -%}
{%- macro get_row_head_class() -%}
is-selected
{%- endmacro -%}


@@ -0,0 +1,27 @@
{% extends "base_" ~ framework ~ ".html" %}
{% block content %}
<table id="file-table" class="{{ table_class }}">
<thead>
<tr class="{{ head_class }}">
<th class="{{ text_center }}">Dataset</th>
<th class="{{ text_center }}">Samples</th>
<th class="{{ text_center }}">Features</th>
<th class="{{ text_center }}">Cont. Feat.</th>
<th class="{{ text_center }}">Classes</th>
<th class="{{ text_center }}">Balance</th>
</tr>
</thead>
<tbody>
{% for dataset in datasets %}
<tr>
<td>{{ dataset.dataset }}</td>
<td class="{{ text_right }}">{{ "{:,}".format(dataset.samples) }}</td>
<td class="{{ text_right }}">{{ "{:,}".format(dataset.features) }}</td>
<td class="{{ text_right }}">{{ dataset.cont_features }}</td>
<td class="{{ text_right }}">{{ dataset.classes }}</td>
<td>{{ dataset.balance }}</td>
</tr>
{% endfor %}
</tbody>
</table>
{% endblock %}


@@ -0,0 +1,14 @@
{% for item in data.results %}
<tr>
<td>{{ item.dataset }}</td>
<td class="{{ right }}">{{ '{:,}'.format(item.samples) }}</td>
<td class="{{ right }}">{{"%d" % item.features}}</td>
<td class="{{ right }}">{{"%d" % item.classes}}</td>
<td class="{{ right }}">{{ '{:,.2f}'.format(item.nodes|float) }}</td>
<td class="{{ right }}">{{ '{:,.2f}'.format(item.leaves|float) }}</td>
<td class="{{ right }}">{{ '{:,.2f}'.format(item.depth|float) }}</td>
<td class="{{ right }}">{{"%.6f±%.4f" % (item.score, item.score_std)}} {{ item.symbol|safe }}</td>
<td class="{{ right }}">{{"%.6f±%.4f" % (item.time, item.time_std)}}</td>
<td class="{{ center }}">{{ item.hyperparameters }}</td>
</tr>
{% endfor %}


@@ -0,0 +1,102 @@
<div id="app">
<section class="section">
<div class="container-fluid">
<div class="p-4 bg-primary text-white">
<button type="button"
class="btn-close"
aria-label="Close"
onclick="location.href = '{{ back }}'"></button>
<h1>{{ data.title }}</h1>
</div>
<div>
<table class="table table-bordered">
<thead>
<tr class="bg-info text-white">
<th class="text-center">Platform</th>
<th class="text-center">Model</th>
<th class="text-center">Date</th>
<th class="text-center">Time</th>
{% if data.duration > 7200 %}
{% set unit = "h" %}
{% set divider = 3600 %}
{% else %}
{% set unit = "min" %}
{% set divider = 60 %}
{% endif %}
<th class="text-center">Duration ({{ unit }})</th>
<th class="text-center">Stratified</th>
<th class="text-center">Discretized</th>
<th class="text-center"># Folds</th>
</tr>
<tr>
<th class="text-center">{{ data.platform }}</th>
<th class="text-center">{{ data.model }} {{ data.version }}</th>
<th class="text-center">{{ data.date }}</th>
<th class="text-center">{{ data.time }}</th>
<th class="text-center">{{ "%.2f" % (data.duration/divider) }}</th>
<th class="text-center">{{ data.stratified }}</th>
<th class="text-center">{{ data.discretized }}</th>
<th class="text-center">{{ data.folds }}</th>
</tr>
<tr>
<th class="text-center bg-info text-white">Language</th>
<th class="text-center" colspan=3>{{ data.language }} {{ data.language_version }}</th>
<th class="text-center bg-info text-white">Seeds</th>
<th class="text-center" colspan=6>{{ data.seeds }}</th>
</tr>
</thead>
</table>
<div>
<button class="{{ button }}" onclick="excelFile()">
<i class="mdi mdi-file-excel"></i> Excel
</button>
</div>
<table id="report-table"
class="table table-striped table-hover table-bordered">
<thead>
<tr class="bg-primary text-white">
<th class="text-center">Dataset</th>
<th class="text-center">Samples</th>
<th class="text-center">Features</th>
<th class="text-center">Classes</th>
<th class="text-center">{{ app_config.nodes }}</th>
<th class="text-center">{{ app_config.leaves }}</th>
<th class="text-center">{{ app_config.depth }}</th>
<th class="text-center">{{ data.score_name|capitalize }}</th>
<th class="text-center">Time</th>
<th class="text-center">Hyperparameters</th>
</tr>
</thead>
<tbody>
{% include "partials/table_report.html" %}
</tbody>
</table>
{% if summary|length > 0 %}
<div class="col-4 col-lg-4">
<table class="table table-bordered">
<thead>
<tr>
<th class="text-center bg-primary text-white">Symbol</th>
<th class="text-center bg-primary text-white">Meaning</th>
<th class="text-center bg-primary text-white">Count</th>
</tr>
</thead>
{% include "partials/table_summary.html" %}
</table>
</div>
{% endif %}
<button type="button"
class="btn-close"
aria-label="Close"
onclick="location.href = '{{ back }}'"></button>
<h6>
<b>
Total score: {{ "%.6f" % (data.results | sum(attribute="score") ) }}
</b>
</h6>
<h6>
Number of files: {{ data.results | length }}
</h6>
</div>
</section>
</div>
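The duration header in the templates above switches to hours once a run exceeds two hours, otherwise reports minutes; the same rule as a small Python helper (the name `duration_unit` is illustrative, not part of the project):

```python
def duration_unit(seconds: float) -> tuple[str, int]:
    """Mirror the template logic: hours past the two-hour
    mark (7200 s), minutes otherwise."""
    return ("h", 3600) if seconds > 7200 else ("min", 60)

unit, divider = duration_unit(9000)
print(f"{9000 / divider:.2f} {unit}")  # 2.50 h
```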

View File

@@ -0,0 +1,100 @@
<div id="app">
<header>
<div class="container is-fluid">
<div class="hero is-info is-bold">
<div class="hero-body">
<button class="delete is-large" onclick="location.href = '{{ back }}'"></button>
<h1 class="is-size-3">{{ data.title }}</h1>
</div>
</div>
</div>
</header>
<section class="section">
<div class="container is-fluid">
<div>
<table class="table is-fullwidth is-striped is-bordered">
<thead>
<tr class="is-selected">
<th class="has-text-centered">Platform</th>
<th class="has-text-centered">Model</th>
<th class="has-text-centered">Date</th>
<th class="has-text-centered">Time</th>
{% if data.duration > 7200 %}
{% set unit = "h" %}
{% set divider = 3600 %}
{% else %}
{% set unit = "min" %}
{% set divider = 60 %}
{% endif %}
<th class="has-text-centered">Duration ({{ unit }})</th>
<th class="has-text-centered">Stratified</th>
<th class="has-text-centered">Discretized</th>
<th class="has-text-centered"># Folds</th>
</tr>
<tr>
<th class="has-text-centered">{{ data.platform }}</th>
<th class="has-text-centered">{{ data.model }} {{ data.version }}</th>
<th class="has-text-centered">{{ data.date }}</th>
<th class="has-text-centered">{{ data.time }}</th>
<th class="has-text-centered">{{ "%.2f" % (data.duration/divider) }}</th>
<th class="has-text-centered">{{ data.stratified }}</th>
<th class="has-text-centered">{{ data.discretized }}</th>
<th class="has-text-centered">{{ data.folds }}</th>
</tr>
<tr>
<th class="has-text-centered is-selected">Language</th>
<th class="has-text-centered" colspan=3>{{ data.language }} {{ data.language_version }}</th>
<th class="has-text-centered is-selected">Seeds</th>
<th class="has-text-centered" colspan=6>{{ data.seeds }}</th>
</tr>
</thead>
</table>
<div>
<button class="{{ button }}" onclick="excelFile()">
<i class="mdi mdi-file-excel"></i> Excel
</button>
</div>
<table id="report-table"
class="table is-fullwidth is-striped is-hoverable is-bordered">
<thead>
<tr class="is-selected">
<th class="has-text-centered">Dataset</th>
<th class="has-text-centered">Samples</th>
<th class="has-text-centered">Features</th>
<th class="has-text-centered">Classes</th>
<th class="has-text-centered">{{ app_config.nodes }}</th>
<th class="has-text-centered">{{ app_config.leaves }}</th>
<th class="has-text-centered">{{ app_config.depth }}</th>
<th class="has-text-centered">{{ data.score_name|capitalize }}</th>
<th class="has-text-centered">Time</th>
<th class="has-text-centered">Hyperparameters</th>
</tr>
</thead>
<tbody>
{% include "partials/table_report.html" %}
</tbody>
</table>
{% if summary|length > 0 %}
<div class="col-2 col-lg-2">
<table class="table is-bordered">
<thead>
<tr class="is-selected">
<th class="has-text-centered">Symbol</th>
<th class="has-text-centered">Meaning</th>
<th class="has-text-centered">Count</th>
</tr>
</thead>
{% include "partials/table_summary.html" %}
</table>
</div>
{% endif %}
<h2 class="has-text-white has-background-primary">
<b>
<button class="delete" onclick="location.href = '{{ back }}'"></button>
Total score: {{ "%.6f" % (data.results | sum(attribute="score") ) }}
</b>
</h2>
<h2>Number of files: {{ data.results | length }}</h2>
</div>
</section>
</div>

View File

@@ -0,0 +1,41 @@
<table id="file-table" class="{{ select.get_table_class() }}">
<thead>
<tr>
<th>Model</th>
<th>Metric</th>
<th>Platform</th>
<th>Date</th>
<th>Time</th>
<th>Stratified</th>
<th>Title</th>
<th>Score</th>
<th>{{ select.get_button_reset()|safe }} {{ select.get_button_all()|safe }}</th>
</tr>
</thead>
<tbody>
{% for file, data in files.items() %}
{% set parts = file.split('_') %}
{% set stratified = parts[6].split('.')[0] %}
<tr id="{{ file }}">
<td>{{ parts[2] }}</td>
<td>{{ parts[1] }}</td>
<td>{{ parts[3] }}</td>
<td>{{ parts[4] }}</td>
<td>{{ parts[5] }}</td>
<td>{{ 'True' if stratified == '1' else 'False' }}</td>
<td>{{ "%s" % data["title"] }}</td>
<td class="{{ select.get_align_right() }}">{{ "%.6f" % data["score"] }}</td>
<td>
{{ select.get_button_tag("table-eye", "showFile('" ~ file ~ "')") | safe }}
{% set file_best = "best_results_" ~ parts[1] ~ "_" ~ parts[2] ~ ".json" %}
{{ select.get_button_tag("star-circle-outline", "redirectDouble('best_results', '" ~ file_best ~ "')", visible=False, name="best_buttons") | safe }}
<input
type="checkbox"
name="selected_files"
value="{{ file }}"
/>
</td>
</tr>
{% endfor %}
</tbody>
</table>
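The positional `split('_')` above assumes result files follow a fixed underscore-delimited naming scheme; a hypothetical stand-alone parser for that convention:

```python
def parse_result_name(file: str) -> dict:
    # results_<metric>_<model>_<platform>_<date>_<time>_<stratified>.json
    parts = file.split("_")
    return {
        "metric": parts[1],
        "model": parts[2],
        "platform": parts[3],
        "date": parts[4],
        "time": parts[5],
        # the last part carries the extension, e.g. "0.json"
        "stratified": parts[6].split(".")[0] == "1",
    }

info = parse_result_name(
    "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
)
```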

View File

@@ -0,0 +1,15 @@
<div class="{{ select.get_container_class() }}">
{{ select.header("Benchmark Results") }}
<div class="{{ select.get_level_class() }}">
<div class="{{ select.get_left_position() }}">
{{ select.get_button("Use " ~ candidate, "redirectDouble('config', '" ~ candidate ~ "')")|safe }}
{{ select.get_button(select.icon("excel") ~ " Excel", "excel()")|safe }}
{{ select.get_button(select.icon("database-eye") ~ " Datasets", "redirectSimple('datasets')")|safe }}
</div>
<div class="{{ select.get_right_position() }}">
<input type="checkbox" id="compare" name="compare" {% if compare %} {{ "checked" }} {% endif %}>
<span class="{{ select.get_tag_class() }}">Comparing with best results</span>
</div>
</div>
{% include "partials/table_select.html" %}
</div>

View File

@@ -0,0 +1,13 @@
{% for key, value in summary.items() %}
<tr>
<td class="{{ center }}">
{{ key }}
</td>
<td>
{{ value[0] }}
</td>
<td class="{{ right }}">
{{ '{:,}'.format(value[1]) }}
</td>
</tr>
{% endfor %}

View File

@@ -0,0 +1,29 @@
{% macro javascript(file) %}
<script src="{{ url_for('static', filename=file) }}"></script>
{% endmacro %}
{% set title = 'Report Viewer' %}
{% extends 'base_' ~ framework ~ '.html' %}
{% block content %}
{% if framework == 'bootstrap' %}
{% set center = 'text-center' %}
{% set right = 'text-end' %}
{% set button = 'btn btn-primary' %}
{% include 'partials/table_report_bootstrap.html' %}
{% else %}
{% set center = 'has-text-centered' %}
{% set right = 'has-text-right' %}
{% set button = 'button is-primary' %}
{% include 'partials/table_report_bulma.html' %}
{% endif %}
{% endblock %}
{% block jscript %}
{% if framework == 'bootstrap' %}
{% set selected = 'selected' %}
{% else %}
{% set selected = 'is-selected' %}
{% endif %}
<script>
{% include "js/report.js" %}
</script>
{{ javascript("js/excelFiles.js") }}
{% endblock %}

View File

@@ -0,0 +1,47 @@
{% set title = "Best Results" %}
{% extends "base_" ~ framework ~ ".html" %}
{% import "partials/cfg_select_" ~ framework ~ ".jinja" as select %}
{% block content %}
<div class="container">
{{ select.header(title, True, url_for("main.index", compare=compare)) }}
<table id="file-table" class="{{ select.get_table_class() }}">
<thead>
<tr class="{{ select.get_row_head_class() }}">
<th class="{{ select.get_align_center() }}">Dataset</th>
<th class="{{ select.get_align_center() }}">Score</th>
<th class="{{ select.get_align_center() }}">Hyperparameters</th>
<th class="{{ select.get_align_center() }}">File</th>
</tr>
</thead>
<tbody>
{% for dataset, info in data.items() %}
<tr>
<td>{{ dataset }}</td>
<td class="{{ select.get_align_right() }}">{{ '%9.7f' % info[0] }}</td>
<td class="{{ select.get_align_center() }}">{{ info[1] }}</td>
<td>
{% set url = url_for(request.endpoint, **request.view_args)|urlencode %}
<a href="{{ url_for('main.showfile', file_name = info[2], compare = compare) }}?url={{ url }}">{{ info[2] }}</a>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% endblock %}
{% block jscript %}
<script>
$(document).ready(function () {
// Check if row is selected
$('#file-table tbody').on('click', 'tr', function () {
if ($(this).hasClass('{{ select.selected() }}')) {
$(this).removeClass('{{ select.selected() }}');
} else {
$('#file-table tbody tr.{{ select.selected() }}').removeClass("{{ select.selected() }}")
$(this).addClass('{{ select.selected() }}');
}
});
});
</script>
{% endblock %}

View File

@@ -0,0 +1,20 @@
{% macro javascript(file) %}
<script src="{{ url_for('static', filename=file) }}"></script>
{% endmacro %}
{% set title = 'Benchmark Results' %}
{% extends 'base_' ~ framework ~ '.html' %}
{% import 'partials/cfg_select_' ~ framework ~ '.jinja' as select %}
{% block content %}
{% include 'partials/table_select_design.html' %}
{% endblock %}
{% block jscript %}
<script src="https://cdn.datatables.net/1.10.25/js/jquery.dataTables.min.js"></script>
{% if framework == 'bootstrap' %}
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.3/dist/js/bootstrap.min.js" integrity="sha384-cuYeSxntonz0PPNlHhBs68uyIAVpIIOZZ5JqeqvYYIcEL727kskC66kF92t6Xl2V" crossorigin="anonymous"></script>
{% endif %}
<script>
{% include '/js/select.js' %}
</script>
{{ javascript('js/excelFiles.js') }}
{% endblock %}

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env python
-from benchmark.Results import Benchmark
+from benchmark.ResultsFiles import Benchmark
from benchmark.Utils import Files
from benchmark.Arguments import Arguments

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python
import json
-from benchmark.Results import Summary
+from benchmark.ResultsBase import Summary
from benchmark.Arguments import ALL_METRICS, Arguments

View File

@@ -46,7 +46,7 @@ def main(args_test=None):
'{"C": 7, "gamma": 0.1, "kernel": "rbf", "multiclass_strategy": '
'"ovr"}',
'{"C": 5, "kernel": "rbf", "gamma": "auto"}',
-'{"C": 0.05, "max_iter": 10000.0, "kernel": "liblinear", '
+'{"C": 0.05, "max_iter": 10000, "kernel": "liblinear", '
'"multiclass_strategy": "ovr"}',
'{"C":0.0275, "kernel": "liblinear", "multiclass_strategy": "ovr"}',
'{"C": 7, "gamma": 10.0, "kernel": "rbf", "multiclass_strategy": '
@@ -97,7 +97,7 @@ def main(args_test=None):
for item in results:
results_tmp = {"n_jobs": [-1], "n_estimators": [100]}
for key, value in results[item].items():
-new_key = f"base_estimator__{key}"
+new_key = f"estimator__{key}"
try:
results_tmp[new_key] = sorted(value)
except TypeError:
@@ -111,6 +111,7 @@ def main(args_test=None):
t2 = sorted([x for x in value if isinstance(x, str)])
results_tmp[new_key] = t1 + t2
output.append(results_tmp)
# save results
file_name = Files.grid_input(args.score, args.model)
file_output = os.path.join(Folders.results, file_name)
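The hunk above renames grid keys from `base_estimator__` to `estimator__`, tracking scikit-learn's rename of the ensemble `base_estimator` parameter to `estimator` (deprecated in 1.2). A minimal sketch of that re-keying step, with `prefix_params` as an illustrative name:

```python
def prefix_params(grid: dict, prefix: str = "estimator__") -> dict:
    """Re-key a hyperparameter grid for use inside an ensemble,
    sorting each value list; mixed-type lists are sorted with
    numbers first, then strings."""
    out = {"n_jobs": [-1], "n_estimators": [100]}
    for key, value in grid.items():
        new_key = prefix + key
        try:
            out[new_key] = sorted(value)
        except TypeError:
            # mixed types are not directly comparable
            nums = sorted(x for x in value if not isinstance(x, str))
            strs = sorted(x for x in value if isinstance(x, str))
            out[new_key] = nums + strs
    return out

prefix_params({"C": [1, 0.5], "kernel": ["rbf", "linear"]})
```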

benchmark/scripts/be_flask.py (Executable file)
View File

@@ -0,0 +1,18 @@
#!/usr/bin/env python
import webbrowser
from benchmark.Arguments import Arguments
from benchmark.scripts.app.app import create_app, TEST, OUTPUT
# Launch a flask server to serve the results
def main(args_test=None):
arguments = Arguments(prog="be_flask")
arguments.xset("output")
args = arguments.parse(args_test)
app = create_app()
app.config[TEST] = args_test is not None
app.config[OUTPUT] = args.output
print("Output is ", args.output)
if args.output == "local":
webbrowser.open_new("http://127.0.0.1:1234/")
app.run(port=1234, host="0.0.0.0")

View File

@@ -15,6 +15,9 @@ def main(args_test=None):
folders.append(os.path.join(args.project_name, Folders.exreport))
folders.append(os.path.join(args.project_name, Folders.report))
folders.append(os.path.join(args.project_name, Folders.img))
+folders.append(os.path.join(args.project_name, Folders.excel))
+folders.append(os.path.join(args.project_name, Folders.sql))
try:
for folder in folders:
print(f"Creating folder {folder}")

View File

@@ -1,7 +1,9 @@
#! /usr/bin/env python
-from benchmark.Results import Summary
-from benchmark.Utils import Files
+import os
+from benchmark.ResultsBase import Summary
+from benchmark.Utils import Files, Folders
from benchmark.Arguments import Arguments
+from benchmark.Manager import Manage
"""List experiments of a model
"""
@@ -26,7 +28,9 @@ def main(args_test=None):
except ValueError as e:
print(e)
return
-excel_generated = data.manage_results()
+manager = Manage(data)
+excel_generated = manager.manage_results()
if excel_generated:
-print(f"Generated file: {Files.be_list_excel}")
-Files.open(Files.be_list_excel, test=args_test is not None)
+name = os.path.join(Folders.excel, Files.be_list_excel)
+print(f"Generated file: {name}")
+Files.open(name, test=args_test is not None)

View File

@@ -13,7 +13,8 @@ def main(args_test=None):
arguments = Arguments(prog="be_main")
arguments.xset("stratified").xset("score").xset("model", mandatory=True)
arguments.xset("n_folds").xset("platform").xset("quiet").xset("title")
-arguments.xset("report")
+arguments.xset("report").xset("ignore_nan").xset("discretize")
+arguments.xset("fit_features")
arguments.add_exclusive(
["grid_paramfile", "best_paramfile", "hyperparameters"]
)
@@ -29,14 +30,19 @@ def main(args_test=None):
score_name=args.score,
model_name=args.model,
stratified=args.stratified,
-datasets=Datasets(dataset_name=args.dataset),
+datasets=Datasets(
+dataset_name=args.dataset, discretize=args.discretize
+),
hyperparams_dict=args.hyperparameters,
hyperparams_file=args.best_paramfile,
grid_paramfile=args.grid_paramfile,
progress_bar=not args.quiet,
platform=args.platform,
+ignore_nan=args.ignore_nan,
title=args.title,
folds=args.n_folds,
+fit_features=args.fit_features,
+discretize=args.discretize,
)
job.do_experiment()
except ValueError as e:

View File

@@ -1,7 +1,10 @@
#!/usr/bin/env python
-from benchmark.Results import Report, Excel, SQL, ReportBest, ReportDatasets
-from benchmark.Utils import Files
+import os
+from benchmark.Results import Report, ReportBest
+from benchmark.ResultsFiles import Excel, SQLFile, ReportDatasets
+from benchmark.Utils import Files, Folders
from benchmark.Arguments import Arguments
+from pathlib import Path
"""Build report on screen of a result file, optionally generate excel and sql
@@ -65,15 +68,17 @@ def main(args_test=None):
print(e)
return
if args.sql:
-sql = SQL(args.file_name)
+sql = SQLFile(args.file_name)
sql.report()
if args.excel:
excel = Excel(
-file_name=args.file_name,
+file_name=Path(args.file_name).name,
compare=args.compare,
)
excel.report()
-Files.open(excel.get_file_name(), is_test)
+Files.open(
+os.path.join(Folders.excel, excel.get_file_name()), is_test
+)
case "datasets":
report = ReportDatasets(args.excel)
report.report()

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env python
-from benchmark.Results import Summary
+from benchmark.ResultsBase import Summary
from benchmark.Arguments import ALL_METRICS, Arguments

View File

@@ -7,3 +7,8 @@ stratified=0
source_data=Tanveer
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
+nodes=Nodes
+leaves=Leaves
+depth=Depth
+fit_features=0
+margin=0.1
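The new keys extend the project's flat `key=value` environment files; a minimal parser sketch for this format (`parse_env` is hypothetical — the project presumably loads `.env` through a dotenv-style helper):

```python
def parse_env(text: str) -> dict:
    """Parse a flat key=value configuration, skipping blank
    lines and comments; values stay as strings."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            cfg[key] = value
    return cfg

cfg = parse_env("discretize=0\nnodes=Nodes\nmargin=0.1")
```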

View File

@@ -6,3 +6,8 @@ stratified=0
source_data=Arff
seeds=[271, 314, 171]
discretize=1
+nodes=Nodes
+leaves=Leaves
+depth=Depth
+fit_features=1
+margin=0.1

View File

@@ -7,3 +7,8 @@ stratified=0
source_data=Tanveer
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
+nodes=Nodes
+leaves=Leaves
+depth=Depth
+fit_features=0
+margin=0.1

View File

@@ -7,3 +7,8 @@ stratified=0
source_data=Surcov
seeds=[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
discretize=0
+nodes=Nodes
+leaves=Leaves
+depth=Depth
+fit_features=0
+margin=0.1

View File

@@ -68,7 +68,7 @@ class ArgumentsTest(TestBase):
test_args = ["-n", "3", "-k", "date"]
with self.assertRaises(SystemExit):
arguments.parse(test_args)
-self.assertRegexpMatches(
+self.assertRegex(
stderr.getvalue(),
r"error: the following arguments are required: -m/--model",
)
@@ -79,7 +79,7 @@ class ArgumentsTest(TestBase):
test_args = ["-n", "3", "-m", "SVC"]
with self.assertRaises(SystemExit):
arguments.parse(test_args)
-self.assertRegexpMatches(
+self.assertRegex(
stderr.getvalue(),
r"error: the following arguments are required: -k/--key",
)
@@ -114,7 +114,7 @@ class ArgumentsTest(TestBase):
test_args = None
with self.assertRaises(SystemExit):
arguments.parse(test_args)
-self.assertRegexpMatches(
+self.assertRegex(
stderr.getvalue(),
r"error: the following arguments are required: -m/--model, "
"-k/--key, --title",

View File

@@ -4,7 +4,7 @@ from unittest.mock import patch
from openpyxl import load_workbook
from .TestBase import TestBase
from ..Utils import Folders, Files, NO_RESULTS
-from ..Results import Benchmark
+from ..ResultsFiles import Benchmark
from .._version import __version__
@@ -15,10 +15,10 @@ class BenchmarkTest(TestBase):
files.append(Files.exreport(score))
files.append(Files.exreport_output(score))
files.append(Files.exreport_err(score))
-files.append(Files.exreport_excel(score))
files.append(Files.exreport_pdf)
files.append(Files.tex_output("accuracy"))
self.remove_files(files, Folders.exreport)
+self.remove_files([Files.exreport_excel("accuracy")], Folders.excel)
self.remove_files(files, ".")
return super().tearDown()
@@ -90,15 +90,6 @@ class BenchmarkTest(TestBase):
self.assertTrue(os.path.exists(benchmark.get_tex_file()))
self.check_file_file(benchmark.get_tex_file(), "exreport_tex")
-@staticmethod
-def generate_excel_sheet(test, sheet, file_name):
-with open(os.path.join("test_files", file_name), "w") as f:
-for row in range(1, sheet.max_row + 1):
-for col in range(1, sheet.max_column + 1):
-value = sheet.cell(row=row, column=col).value
-if value is not None:
-print(f'{row};{col};"{value}"', file=f)
def test_excel_output(self):
benchmark = Benchmark("accuracy", visualize=False)
benchmark.compile_results()

View File

@@ -18,7 +18,7 @@ class BestResultTest(TestBase):
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
-"max_iter": 10000.0,
+"max_iter": 10000,
"multiclass_strategy": "ovr",
},
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",

View File

@@ -1,4 +1,3 @@
-import shutil
from .TestBase import TestBase
from ..Experiments import Randomized
from ..Datasets import Datasets
@@ -17,10 +16,6 @@ class DatasetTest(TestBase):
self.set_env(".env.dist")
return super().tearDown()
-@staticmethod
-def set_env(env):
-shutil.copy(env, ".env")
def test_Randomized(self):
expected = [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]
self.assertSequenceEqual(Randomized.seeds(), expected)

View File

@@ -2,8 +2,8 @@ import os
from openpyxl import load_workbook
from xlsxwriter import Workbook
from .TestBase import TestBase
-from ..Results import Excel
-from ..Utils import Folders
+from ..ResultsFiles import Excel
+from ..Utils import Folders, Files
class ExcelTest(TestBase):
@@ -13,7 +13,7 @@ class ExcelTest(TestBase):
"results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.xlsx",
"results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.xlsx",
]
-self.remove_files(files, Folders.results)
+self.remove_files(files, Folders.excel)
return super().tearDown()
def test_report_excel_compared(self):
@@ -21,7 +21,7 @@ class ExcelTest(TestBase):
report = Excel(file_name, compare=True)
report.report()
file_output = report.get_file_name()
-book = load_workbook(file_output)
+book = load_workbook(os.path.join(Folders.excel, file_output))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel_compared")
@@ -30,14 +30,14 @@ class ExcelTest(TestBase):
report = Excel(file_name, compare=False)
report.report()
file_output = report.get_file_name()
-book = load_workbook(file_output)
+book = load_workbook(os.path.join(Folders.excel, file_output))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel")
def test_Excel_Add_sheet(self):
file_name = "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"
-excel_file_name = file_name.replace(".json", ".xlsx")
-book = Workbook(os.path.join(Folders.results, excel_file_name))
+excel_file_name = file_name.replace(Files.report_ext, ".xlsx")
+book = Workbook(os.path.join(Folders.excel, excel_file_name))
excel = Excel(file_name=file_name, book=book)
excel.report()
report = Excel(
report = Excel( report = Excel(
@@ -46,7 +46,7 @@ class ExcelTest(TestBase):
)
report.report()
book.close()
-book = load_workbook(os.path.join(Folders.results, excel_file_name))
+book = load_workbook(os.path.join(Folders.excel, excel_file_name))
sheet = book["STree"]
self.check_excel_sheet(sheet, "excel_add_STree")
sheet = book["ODTE"]

View File

@@ -1,4 +1,6 @@
import json
+from io import StringIO
+from unittest.mock import patch
from .TestBase import TestBase
from ..Experiments import Experiment
from ..Datasets import Datasets
@@ -8,10 +10,12 @@ class ExperimentTest(TestBase):
def setUp(self):
self.exp = self.build_exp()
-def build_exp(self, hyperparams=False, grid=False):
+def build_exp(
+self, hyperparams=False, grid=False, model="STree", ignore_nan=False
+):
params = {
"score_name": "accuracy",
-"model_name": "STree",
+"model_name": model,
"stratified": "0",
"datasets": Datasets(),
"hyperparams_dict": "{}",
@@ -21,6 +25,7 @@ class ExperimentTest(TestBase):
"title": "Test",
"progress_bar": False,
"folds": 2,
+"ignore_nan": ignore_nan,
}
return Experiment(**params)
@@ -31,6 +36,7 @@ class ExperimentTest(TestBase):
],
".",
)
+self.set_env(".env.dist")
return super().tearDown()
def test_build_hyperparams_file(self):
@@ -46,7 +52,7 @@ class ExperimentTest(TestBase):
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
-"max_iter": 10000.0,
+"max_iter": 10000,
"multiclass_strategy": "ovr",
},
"results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
@@ -89,7 +95,7 @@ class ExperimentTest(TestBase):
def test_exception_n_fold_crossval(self):
self.exp.do_experiment()
with self.assertRaises(ValueError):
-self.exp._n_fold_crossval([], [], {})
+self.exp._n_fold_crossval("", [], [], {})
def test_do_experiment(self):
self.exp.do_experiment()
@@ -131,3 +137,42 @@ class ExperimentTest(TestBase):
):
for key, value in expected_result.items():
self.assertEqual(computed_result[key], value)
def test_build_fit_parameters(self):
self.set_env(".env.arff")
expected = {
"state_names": {
"sepallength": [0, 1, 2],
"sepalwidth": [0, 1, 2, 3, 4, 5],
"petallength": [0, 1, 2, 3],
"petalwidth": [0, 1, 2],
},
"features": [
"sepallength",
"sepalwidth",
"petallength",
"petalwidth",
],
}
exp = self.build_exp(model="TAN")
X, y = exp.datasets.load("iris")
computed = exp._build_fit_params("iris")
for key, value in expected["state_names"].items():
self.assertEqual(computed["state_names"][key], value)
for feature in expected["features"]:
self.assertIn(feature, computed["features"])
# Ask for states of a dataset that does not exist
computed = exp._build_fit_params("not_existing")
self.assertTrue("states" not in computed)
@patch("sys.stdout", new_callable=StringIO)
def test_experiment_with_nan_not_ignored(self, mock_output):
exp = self.build_exp(model="Mock")
self.assertRaises(ValueError, exp.do_experiment)
output_text = mock_output.getvalue().splitlines()
expected = "[ nan 0.8974359]"
self.assertEqual(expected, output_text[0])
def test_experiment_with_nan_ignored(self):
self.exp = self.build_exp(model="Mock", ignore_nan=True)
self.exp.do_experiment()

View File

@@ -70,19 +70,19 @@ class ModelTest(TestBase):
def test_BaggingStree(self):
clf = Models.get_model("BaggingStree")
self.assertIsInstance(clf, BaggingClassifier)
-clf_base = clf.base_estimator
+clf_base = clf.estimator
self.assertIsInstance(clf_base, Stree)
def test_BaggingWodt(self):
clf = Models.get_model("BaggingWodt")
self.assertIsInstance(clf, BaggingClassifier)
-clf_base = clf.base_estimator
+clf_base = clf.estimator
self.assertIsInstance(clf_base, Wodt)
def test_AdaBoostStree(self):
clf = Models.get_model("AdaBoostStree")
self.assertIsInstance(clf, AdaBoostClassifier)
-clf_base = clf.base_estimator
+clf_base = clf.estimator
self.assertIsInstance(clf_base, Stree)
def test_unknown_classifier(self):
@@ -102,7 +102,7 @@ class ModelTest(TestBase):
test = { test = {
"STree": ((11, 6, 4), 1.0), "STree": ((11, 6, 4), 1.0),
"Wodt": ((303, 152, 50), 0.9382022471910112), "Wodt": ((303, 152, 50), 0.9382022471910112),
"ODTE": ((7.86, 4.43, 3.37), 1.0), "ODTE": ((786, 443, 337), 1.0),
"Cart": ((23, 12, 5), 1.0), "Cart": ((23, 12, 5), 1.0),
"SVC": ((0, 0, 0), 0.7078651685393258), "SVC": ((0, 0, 0), 0.7078651685393258),
"RandomForest": ((21.3, 11, 5.26), 1.0), "RandomForest": ((21.3, 11, 5.26), 1.0),

View File

@@ -2,7 +2,10 @@ import os
 from io import StringIO
 from unittest.mock import patch
 from .TestBase import TestBase
-from ..Results import Report, BaseReport, ReportBest, ReportDatasets, get_input
+from ..Results import Report, ReportBest
+from ..ResultsFiles import ReportDatasets
+from ..ResultsBase import BaseReport
+from ..Manager import get_input
 from ..Utils import Symbols
@@ -63,6 +66,27 @@ class ReportTest(TestBase):
         self.assertEqual(res, Symbols.better_best)
         res = report._compute_status("balloons", 1.0)
         self.assertEqual(res, Symbols.better_best)
+        report = Report(file_name=file_name)
+        with patch(self.output, new=StringIO()):
+            report.report()
+        res = report._compute_status("balloons", 0.99)
+        self.assertEqual(res, Symbols.upward_arrow)
+        report.margin = 0.9
+        res = report._compute_status("balloons", 0.99)
+        self.assertEqual(res, Symbols.cross)
+
+    def test_reportbase_compute_status(self):
+        with patch.multiple(BaseReport, __abstractmethods__=set()):
+            file_name = os.path.join(
+                "results",
+                "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
+            )
+            temp = BaseReport(file_name)
+            temp.compare = False
+            temp._compare_totals = {}
+            temp.score_name = "f1"
+            res = temp._compute_status("balloons", 0.99)
+            self.assertEqual(res, " ")

     def test_report_file_not_found(self):
         with self.assertRaises(FileNotFoundError):
@@ -87,7 +111,6 @@ class ReportTest(TestBase):
             if self.stree_version in line:
                 # replace STree version
                 line = self.replace_STree_version(line, output_text, index)
             self.assertEqual(line, output_text[index])

     @patch("sys.stdout", new_callable=StringIO)

View File

@@ -1,7 +1,7 @@
 import os
 from .TestBase import TestBase
-from ..Results import SQL
-from ..Utils import Folders
+from ..ResultsFiles import SQLFile
+from ..Utils import Folders, Files

 class SQLTest(TestBase):
@@ -9,14 +9,14 @@ class SQLTest(TestBase):
         files = [
             "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
         ]
-        self.remove_files(files, Folders.results)
+        self.remove_files(files, Folders.sql)
         return super().tearDown()

     def test_report_SQL(self):
         file_name = "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json"
-        report = SQL(file_name)
+        report = SQLFile(file_name)
         report.report()
         file_name = os.path.join(
-            Folders.results, file_name.replace(".json", ".sql")
+            Folders.sql, file_name.replace(Files.report_ext, ".sql")
         )
         self.check_file_file(file_name, "sql")

View File

@@ -1,7 +1,7 @@
 from io import StringIO
 from unittest.mock import patch
 from .TestBase import TestBase
-from ..Results import Summary
+from ..ResultsBase import Summary
 from ..Utils import NO_RESULTS

View File

@@ -4,6 +4,7 @@ import pathlib
 import sys
 import csv
 import unittest
+import shutil
 from importlib import import_module
 from io import StringIO
 from unittest.mock import patch
@@ -19,6 +20,10 @@ class TestBase(unittest.TestCase):
         self.stree_version = "1.2.4"
         super().__init__(*args, **kwargs)

+    @staticmethod
+    def set_env(env):
+        shutil.copy(env, ".env")
+
     def remove_files(self, files, folder):
         for file_name in files:
             file_name = os.path.join(folder, file_name)
@@ -26,6 +31,7 @@ class TestBase(unittest.TestCase):
                 os.remove(file_name)

     def generate_excel_sheet(self, sheet, file_name):
+        file_name += self.ext
         with open(os.path.join(self.test_files, file_name), "w") as f:
             for row in range(1, sheet.max_row + 1):
                 for col in range(1, sheet.max_column + 1):
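The new `set_env` helper above installs a prepared environment file into the working directory with `shutil.copy`. A minimal stdlib sketch of that idea (the file names and the standalone `set_env` function here are hypothetical, not the project's actual helper):

```python
import os
import shutil
import tempfile

def set_env(env_path: str, target_dir: str) -> str:
    # Copy a prepared environment file into place as ".env";
    # shutil.copy copies both contents and permission bits.
    target = os.path.join(target_dir, ".env")
    shutil.copy(env_path, target)
    return target

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "env.dist")
    with open(src, "w") as f:
        f.write("score=accuracy\n")
    installed = set_env(src, tmp)
    with open(installed) as f:
        print(f.read().strip())  # the copied contents
```

Copying a per-test `.env` this way lets each test select its own configuration without mutating a shared file.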

View File

@@ -11,6 +11,8 @@ class UtilTest(TestBase):
         self.assertEqual("results", Folders.results)
         self.assertEqual("hidden_results", Folders.hidden_results)
         self.assertEqual("exreport", Folders.exreport)
+        self.assertEqual("excel", Folders.excel)
+        self.assertEqual("img", Folders.img)
         self.assertEqual(
             os.path.join(Folders.exreport, "exreport_output"), Folders.report
         )
@@ -116,7 +118,7 @@ class UtilTest(TestBase):
     def test_Files_get_results(self):
         os.chdir(os.path.dirname(os.path.abspath(__file__)))
         self.assertCountEqual(
-            Files().get_all_results(hidden=False),
+            Files.get_all_results(hidden=False),
             [
                 "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json",
                 "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json",
@@ -128,7 +130,7 @@ class UtilTest(TestBase):
             ],
         )
         self.assertCountEqual(
-            Files().get_all_results(hidden=True),
+            Files.get_all_results(hidden=True),
            [
                "results_accuracy_STree_iMac27_2021-11-01_23:55:16_0.json",
                "results_accuracy_XGBoost_MacBookpro16_2022-05-04_11:00:35_"
@@ -141,7 +143,7 @@ class UtilTest(TestBase):
         # check with results
         os.rename(Folders.results, f"{Folders.results}.test")
         try:
-            Files().get_all_results(hidden=False)
+            Files.get_all_results(hidden=False)
         except ValueError:
             pass
         else:
@@ -151,7 +153,7 @@ class UtilTest(TestBase):
         # check with hidden_results
         os.rename(Folders.hidden_results, f"{Folders.hidden_results}.test")
         try:
-            Files().get_all_results(hidden=True)
+            Files.get_all_results(hidden=True)
         except ValueError:
             pass
         else:
@@ -180,6 +182,11 @@ class UtilTest(TestBase):
             "source_data": "Tanveer",
             "seeds": "[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]",
             "discretize": "0",
+            "nodes": "Nodes",
+            "leaves": "Leaves",
+            "depth": "Depth",
+            "fit_features": "0",
+            "margin": "0.1",
         }
         computed = EnvData().load()
         self.assertDictEqual(computed, expected)

View File

@@ -1,2 +1,2 @@
-iris,class
-wine,class
+iris;class;all
+wine;class;[0, 1]
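The datasets fixture switches from comma separation to `;` and gains a third field (a feature selection such as `all` or a list of column indices). A sketch of parsing that format with the stdlib `csv` module; this parser is illustrative, not the project's actual loader:

```python
import csv
from io import StringIO

# Semicolon-delimited rows: dataset name, class column, feature selection.
# Using ";" keeps the comma inside "[0, 1]" from being split.
data = "iris;class;all\nwine;class;[0, 1]\n"
reader = csv.reader(StringIO(data), delimiter=";")
rows = [tuple(row) for row in reader]
print(rows)  # [('iris', 'class', 'all'), ('wine', 'class', '[0, 1]')]
```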

benchmark/tests/excel/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
+#

View File

@@ -1 +1 @@
-{"balance-scale": [0.98, {"splitter": "best", "max_features": "auto"}, "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"], "balloons": [0.86, {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000.0, "multiclass_strategy": "ovr"}, "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"]}
+{"balance-scale": [0.98, {"splitter": "best", "max_features": "auto"}, "results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json"], "balloons": [0.86, {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"]}

View File

@@ -6,7 +6,7 @@
             "kernel": "liblinear",
             "multiclass_strategy": "ovr"
         },
-        "v. 1.3.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
+        "v. 1.4.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
     ],
     "balloons": [
         0.625,
@@ -15,6 +15,6 @@
             "kernel": "linear",
             "multiclass_strategy": "ovr"
         },
-        "v. 1.3.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
+        "v. 1.4.0, Computed on Test on 2022-02-22 at 12:00:00 took 1s"
     ]
 }

View File

@@ -1,59 +1 @@
{ {"score_name": "accuracy", "title": "Gridsearched hyperparams v022.1b random_init", "model": "ODTE", "version": "0.3.2", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2022-04-20", "time": "10:52:20", "duration": 22591.471411943436, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "Galgo", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"base_estimator__C": 57, "base_estimator__gamma": 0.1, "base_estimator__kernel": "rbf", "base_estimator__multiclass_strategy": "ovr", "n_estimators": 100, "n_jobs": -1}, "nodes": 7.361199999999999, "leaves": 4.180599999999999, "depth": 3.536, "score": 0.96352, "score_std": 0.024949741481626608, "time": 0.31663217544555666, "time_std": 0.19918813895255585}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"base_estimator__C": 5, "base_estimator__gamma": 0.14, "base_estimator__kernel": "rbf", "base_estimator__multiclass_strategy": "ovr", "n_estimators": 100, "n_jobs": -1}, "nodes": 2.9951999999999996, "leaves": 1.9975999999999998, "depth": 1.9975999999999998, "score": 0.785, "score_std": 0.2461311755051675, "time": 0.11560620784759522, "time_std": 0.012784241828599895}], "discretized": false}
"score_name": "accuracy",
"title": "Gridsearched hyperparams v022.1b random_init",
"model": "ODTE",
"version": "0.3.2",
"language_version": "3.11x",
"language": "Python",
"stratified": false,
"folds": 5,
"date": "2022-04-20",
"time": "10:52:20",
"duration": 22591.471411943436,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "Galgo",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"base_estimator__C": 57,
"base_estimator__gamma": 0.1,
"base_estimator__kernel": "rbf",
"base_estimator__multiclass_strategy": "ovr",
"n_estimators": 100,
"n_jobs": -1
},
"nodes": 7.361199999999999,
"leaves": 4.180599999999999,
"depth": 3.536,
"score": 0.96352,
"score_std": 0.024949741481626608,
"time": 0.31663217544555666,
"time_std": 0.19918813895255585
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"base_estimator__C": 5,
"base_estimator__gamma": 0.14,
"base_estimator__kernel": "rbf",
"base_estimator__multiclass_strategy": "ovr",
"n_estimators": 100,
"n_jobs": -1
},
"nodes": 2.9951999999999996,
"leaves": 1.9975999999999998,
"depth": 1.9975999999999998,
"score": 0.785,
"score_std": 0.2461311755051675,
"time": 0.11560620784759522,
"time_std": 0.012784241828599895
}
]
}
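The pretty-printed result fixtures above were collapsed to single-line JSON. A short sketch showing that `json.dumps` without `indent` yields exactly that compact form, and that both forms parse identically (the sample keys here are a reduced, illustrative subset of the fixture):

```python
import json

# A two-key excerpt standing in for the full fixture.
pretty = """{
    "score_name": "accuracy",
    "folds": 5
}"""
data = json.loads(pretty)
one_line = json.dumps(data)  # no indent -> compact, single-line output
print(one_line)
assert json.loads(one_line) == data  # round-trips to the same object
```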

View File

@@ -1,45 +1 @@
{ {"score_name": "accuracy", "title": "Test default paramters with RandomForest", "model": "RandomForest", "version": "-", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2022-01-14", "time": "12:39:30", "duration": 272.7363500595093, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {}, "nodes": 196.91440000000003, "leaves": 98.42, "depth": 10.681399999999998, "score": 0.83616, "score_std": 0.02649630917694009, "time": 0.08222018241882324, "time_std": 0.0013026326815120633}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {}, "nodes": 9.110800000000001, "leaves": 4.58, "depth": 3.0982, "score": 0.625, "score_std": 0.24958298553119898, "time": 0.07016648769378662, "time_std": 0.002460508923990468}], "discretized": false}
"score_name": "accuracy",
"title": "Test default paramters with RandomForest",
"model": "RandomForest",
"version": "-",
"language_version": "3.11x",
"language": "Python",
"stratified": false,
"folds": 5,
"date": "2022-01-14",
"time": "12:39:30",
"duration": 272.7363500595093,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {},
"nodes": 196.91440000000003,
"leaves": 98.42,
"depth": 10.681399999999998,
"score": 0.83616,
"score_std": 0.02649630917694009,
"time": 0.08222018241882324,
"time_std": 0.0013026326815120633
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {},
"nodes": 9.110800000000001,
"leaves": 4.58,
"depth": 3.0982,
"score": 0.625,
"score_std": 0.24958298553119898,
"time": 0.07016648769378662,
"time_std": 0.002460508923990468
}
]
}

View File

@@ -1,57 +1 @@
{ {"score_name": "accuracy", "model": "STree", "stratified": false, "folds": 5, "language_version": "3.11x", "language": "Python", "date": "2021-09-30", "time": "11:42:07", "duration": 624.2505249977112, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"C": 10000, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "nodes": 7.0, "leaves": 4.0, "depth": 3.0, "score": 0.97056, "score_std": 0.015046806970251203, "time": 0.01404867172241211, "time_std": 0.002026269126958884}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"C": 7, "gamma": 0.1, "kernel": "rbf", "max_iter": 10000, "multiclass_strategy": "ovr"}, "nodes": 3.0, "leaves": 2.0, "depth": 2.0, "score": 0.86, "score_std": 0.28501461950807594, "time": 0.0008541679382324218, "time_std": 3.629469326417878e-05}], "title": "With gridsearched hyperparameters", "version": "1.2.3", "discretized": false}
"score_name": "accuracy",
"model": "STree",
"stratified": false,
"folds": 5,
"language_version": "3.11x",
"language": "Python",
"date": "2021-09-30",
"time": "11:42:07",
"duration": 624.2505249977112,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"C": 10000.0,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"multiclass_strategy": "ovr"
},
"nodes": 7.0,
"leaves": 4.0,
"depth": 3.0,
"score": 0.97056,
"score_std": 0.015046806970251203,
"time": 0.01404867172241211,
"time_std": 0.002026269126958884
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"C": 7,
"gamma": 0.1,
"kernel": "rbf",
"max_iter": 10000.0,
"multiclass_strategy": "ovr"
},
"nodes": 3.0,
"leaves": 2.0,
"depth": 2.0,
"score": 0.86,
"score_std": 0.28501461950807594,
"time": 0.0008541679382324218,
"time_std": 3.629469326417878e-5
}
],
"title": "With gridsearched hyperparameters",
"version": "1.2.3"
}

View File

@@ -1,51 +1 @@
{ {"score_name": "accuracy", "model": "STree", "language": "Python", "language_version": "3.11x", "stratified": false, "folds": 5, "date": "2021-10-27", "time": "09:40:40", "duration": 3395.009148836136, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "iMac27", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"splitter": "best", "max_features": "auto"}, "nodes": 11.08, "leaves": 5.9, "depth": 5.9, "score": 0.98, "score_std": 0.001, "time": 0.28520655155181884, "time_std": 0.06031593282605064}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"splitter": "best", "max_features": "auto"}, "nodes": 4.12, "leaves": 2.56, "depth": 2.56, "score": 0.695, "score_std": 0.2756860130252853, "time": 0.021201000213623047, "time_std": 0.003526023309468471}], "title": "default A", "version": "1.2.3", "discretized": false}
"score_name": "accuracy",
"model": "STree",
"language": "Python",
"language_version": "3.11x",
"stratified": false,
"folds": 5,
"date": "2021-10-27",
"time": "09:40:40",
"duration": 3395.009148836136,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "iMac27",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"splitter": "best",
"max_features": "auto"
},
"nodes": 11.08,
"leaves": 5.9,
"depth": 5.9,
"score": 0.98,
"score_std": 0.001,
"time": 0.28520655155181884,
"time_std": 0.06031593282605064
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"splitter": "best",
"max_features": "auto"
},
"nodes": 4.12,
"leaves": 2.56,
"depth": 2.56,
"score": 0.695,
"score_std": 0.2756860130252853,
"time": 0.021201000213623047,
"time_std": 0.003526023309468471
}
],
"title": "default A",
"version": "1.2.3"
}

View File

@@ -1,51 +1 @@
{ {"score_name": "accuracy", "model": "STree", "language_version": "3.11x", "language": "Python", "stratified": false, "folds": 5, "date": "2021-11-01", "time": "19:17:07", "duration": 4115.042420864105, "seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1], "platform": "macbook-pro", "results": [{"dataset": "balance-scale", "samples": 625, "features": 4, "classes": 3, "hyperparameters": {"max_features": "auto", "splitter": "mutual"}, "nodes": 18.78, "leaves": 9.88, "depth": 5.9, "score": 0.97, "score_std": 0.002, "time": 0.23330417156219482, "time_std": 0.048087665954193885}, {"dataset": "balloons", "samples": 16, "features": 4, "classes": 2, "hyperparameters": {"max_features": "auto", "splitter": "mutual"}, "nodes": 4.72, "leaves": 2.86, "depth": 2.78, "score": 0.5566666666666668, "score_std": 0.2941277122460771, "time": 0.021352062225341795, "time_std": 0.005808742398555902}], "title": "default B", "version": "1.2.3", "discretized": false}
"score_name": "accuracy",
"model": "STree",
"language_version": "3.11x",
"language": "Python",
"stratified": false,
"folds": 5,
"date": "2021-11-01",
"time": "19:17:07",
"duration": 4115.042420864105,
"seeds": [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1],
"platform": "macbook-pro",
"results": [
{
"dataset": "balance-scale",
"samples": 625,
"features": 4,
"classes": 3,
"hyperparameters": {
"max_features": "auto",
"splitter": "mutual"
},
"nodes": 18.78,
"leaves": 9.88,
"depth": 5.9,
"score": 0.97,
"score_std": 0.002,
"time": 0.23330417156219482,
"time_std": 0.048087665954193885
},
{
"dataset": "balloons",
"samples": 16,
"features": 4,
"classes": 2,
"hyperparameters": {
"max_features": "auto",
"splitter": "mutual"
},
"nodes": 4.72,
"leaves": 2.86,
"depth": 2.78,
"score": 0.5566666666666668,
"score_std": 0.2941277122460771,
"time": 0.021352062225341795,
"time_std": 0.005808742398555902
}
],
"title": "default B",
"version": "1.2.3"
}

View File

@@ -16,10 +16,10 @@ class BeBenchmarkTest(TestBase):
         files.append(Files.exreport(score))
         files.append(Files.exreport_output(score))
         files.append(Files.exreport_err(score))
-        files.append(Files.exreport_excel(self.score))
         files.append(Files.exreport_pdf)
         files.append(Files.tex_output(self.score))
         self.remove_files(files, Folders.exreport)
+        self.remove_files([Files.exreport_excel(self.score)], Folders.excel)
         self.remove_files(files, ".")
         return super().tearDown()
@@ -41,7 +41,7 @@ class BeBenchmarkTest(TestBase):
         self.check_file_file(file_name, "exreport_tex")
         # Check excel file
         file_name = os.path.join(
-            Folders.exreport, Files.exreport_excel(self.score)
+            Folders.excel, Files.exreport_excel(self.score)
         )
         book = load_workbook(file_name)
         replace = None

View File

@@ -33,6 +33,7 @@ class BeInitProjectTest(TestBase):
             Folders.exreport,
             Folders.report,
             Folders.img,
+            Folders.excel,
         ]
         for folder in expected:
             self.assertIsFolder(os.path.join(test_project, folder))

View File

@@ -10,53 +10,55 @@ class BeListTest(TestBase):
     def setUp(self):
        self.prepare_scripts_env()

-    @patch("benchmark.Results.get_input", return_value="q")
+    @patch("benchmark.Manager.get_input", return_value="q")
     def test_be_list(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_model")

-    @patch("benchmark.Results.get_input", side_effect=iter(["x", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["x", "q"]))
     def test_be_list_invalid_option(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_model_invalid")

-    @patch("benchmark.Results.get_input", side_effect=iter(["0", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["0", "q"]))
     def test_be_list_report(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_report")

-    @patch("benchmark.Results.get_input", side_effect=iter(["r", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["r", "q"]))
     def test_be_list_twice(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_model_2")

-    @patch("benchmark.Results.get_input", side_effect=iter(["e 2", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["e 2", "q"]))
     def test_be_list_report_excel(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_report_excel")
-        book = load_workbook(Files.be_list_excel)
+        book = load_workbook(os.path.join(Folders.excel, Files.be_list_excel))
         sheet = book["STree"]
         self.check_excel_sheet(sheet, "excel")

     @patch(
-        "benchmark.Results.get_input", side_effect=iter(["e 2", "e 1", "q"])
+        "benchmark.Manager.get_input",
+        side_effect=iter(["e 2", "e 1", "q"]),
     )
     def test_be_list_report_excel_twice(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["-m", "STree"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_report_excel_2")
-        book = load_workbook(Files.be_list_excel)
+        book = load_workbook(os.path.join(Folders.excel, Files.be_list_excel))
         sheet = book["STree"]
         self.check_excel_sheet(sheet, "excel")
         sheet = book["STree2"]
         self.check_excel_sheet(sheet, "excel2")

-    @patch("benchmark.Results.get_input", return_value="q")
+    @patch("benchmark.Manager.get_input", return_value="q")
     def test_be_list_no_data(self, input_data):
         stdout, stderr = self.execute_script(
             "be_list", ["-m", "Wodt", "-s", "f1-macro"]
@@ -65,9 +67,10 @@ class BeListTest(TestBase):
         self.assertEqual(stdout.getvalue(), f"{NO_RESULTS}\n")

     @patch(
-        "benchmark.Results.get_input", side_effect=iter(["d 0", "y", "", "q"])
+        "benchmark.Manager.get_input",
+        side_effect=iter(["d 0", "y", "", "q"]),
     )
-    # @patch("benchmark.Results.get_input", side_effect=iter(["q"]))
+    # @patch("benchmark.ResultsBase.get_input", side_effect=iter(["q"]))
     def test_be_list_delete(self, input_data):
         def copy_files(source_folder, target_folder, file_name):
             source = os.path.join(source_folder, file_name)
@@ -91,7 +94,8 @@ class BeListTest(TestBase):
             self.fail("test_be_list_delete() should not raise exception")

     @patch(
-        "benchmark.Results.get_input", side_effect=iter(["h 0", "y", "", "q"])
+        "benchmark.Manager.get_input",
+        side_effect=iter(["h 0", "y", "", "q"]),
     )
     def test_be_list_hide(self, input_data):
         def swap_files(source_folder, target_folder, file_name):
@@ -115,30 +119,36 @@ class BeListTest(TestBase):
             swap_files(Folders.results, Folders.hidden_results, file_name)
             self.fail("test_be_list_hide() should not raise exception")

-    @patch("benchmark.Results.get_input", side_effect=iter(["h 0", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["h 0", "q"]))
     def test_be_list_already_hidden(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["--hidden"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_already_hidden")

-    @patch("benchmark.Results.get_input", side_effect=iter(["h 0", "n", "q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["h 0", "n", "q"]))
     def test_be_list_dont_hide(self, input_data):
         stdout, stderr = self.execute_script("be_list", "")
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_default")

-    @patch("benchmark.Results.get_input", side_effect=iter(["q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["q"]))
     def test_be_list_hidden_nan(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["--hidden", "--nan"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_hidden_nan")

-    @patch("benchmark.Results.get_input", side_effect=iter(["q"]))
+    @patch("benchmark.Manager.get_input", side_effect=iter(["q"]))
     def test_be_list_hidden(self, input_data):
         stdout, stderr = self.execute_script("be_list", ["--hidden"])
         self.assertEqual(stderr.getvalue(), "")
         self.check_output_file(stdout, "be_list_hidden")

+    @patch("benchmark.Manager.get_input", side_effect=iter(["0", "q"]))
+    def test_be_list_compare(self, input_data):
+        stdout, stderr = self.execute_script("be_list", ["--compare"])
+        self.assertEqual(stderr.getvalue(), "")
+        self.check_output_file(stdout, "be_list_compare_fault")
+
     def test_be_no_env(self):
         path = os.getcwd()
         os.chdir("..")
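The patch targets above move from `benchmark.Results.get_input` to `benchmark.Manager.get_input` because `unittest.mock.patch` must replace a name in the module where it is looked up, not where it is defined; when `get_input` moved modules, every patch target had to follow. A stdlib-only sketch of the same mechanism (the `menu` helper is hypothetical, standing in for the interactive listing loop):

```python
from unittest.mock import patch

def menu() -> str:
    # Hypothetical stand-in for a prompt driven by get_input: patch()
    # replaces "input" in the namespace where this call resolves it.
    return input("option: ")

# side_effect=iter([...]) returns one queued answer per call, exactly as
# the tests feed command sequences like "e 2" then "q".
with patch("builtins.input", side_effect=iter(["e 2", "q"])):
    answers = [menu(), menu()]
print(answers)  # ['e 2', 'q']
```

Patching at the lookup site is why a simple module move breaks mocks even though the function itself is unchanged.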

View File

@@ -25,7 +25,7 @@ class BeMainTest(TestBase):
         self.check_output_lines(
             stdout=stdout,
             file_name="be_main_dataset",
-            lines_to_compare=[0, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13],
+            lines_to_compare=[0, 2, 3, 5, 6, 7, 8, 9, 11, 12, 13, 14],
         )

     def test_be_main_complete(self):
@@ -37,7 +37,9 @@ class BeMainTest(TestBase):
         report_name = stdout.getvalue().splitlines()[-1].split("in ")[1]
         self.files.append(report_name)
         self.check_output_lines(
-            stdout, "be_main_complete", [0, 2, 3, 5, 6, 7, 8, 9, 12, 13, 14]
+            stdout,
+            "be_main_complete",
+            [0, 2, 3, 5, 6, 7, 8, 9, 12, 13, 14, 15],
         )

     def test_be_main_no_report(self):
@@ -118,7 +120,7 @@ class BeMainTest(TestBase):
                 module.main(parameter)
             self.assertEqual(msg.exception.code, 2)
             self.assertEqual(stderr.getvalue(), "")
-            self.assertRegexpMatches(stdout.getvalue(), message)
+            self.assertRegex(stdout.getvalue(), message)

     def test_be_main_best_params_non_existent(self):
         model = "GBC"
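The `assertRegexpMatches` call above is updated to `assertRegex`: the old spelling was a long-deprecated alias that recent Python versions removed from `unittest`, which is what broke these tests on newer interpreters. A minimal sketch of the surviving API (the probe class and sample string are illustrative):

```python
import unittest

class _Probe(unittest.TestCase):
    def runTest(self):
        # Minimal concrete test method so the class can be instantiated
        # directly, just to exercise the assertion API below.
        pass

t = _Probe()
# assertRegex checks that the text contains a match for the pattern,
# with the same semantics the removed assertRegexpMatches alias had.
t.assertRegex("usage: be_main [-h]", r"usage: \w+")
print("ok")
```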

View File

@@ -27,7 +27,7 @@ class BePrintStrees(TestBase):
             stdout.getvalue(), f"File {file_name} generated\n"
         )
         computed_size = os.path.getsize(file_name)
-        self.assertGreater(computed_size, 25000)
+        self.assertGreater(computed_size, 24500)

     def test_be_print_strees_dataset_color(self):
         for name in self.datasets:

View File

@@ -13,11 +13,17 @@ class BeReportTest(TestBase):
     def tearDown(self) -> None:
         files = [
-            "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
             "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.xlsx",
         ]
         self.remove_files(files, Folders.results)
-        self.remove_files([Files.datasets_report_excel], os.getcwd())
+        self.remove_files(
+            [Files.datasets_report_excel],
+            os.path.join(os.getcwd(), Folders.excel),
+        )
+        files = [
+            "results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.sql",
+        ]
+        self.remove_files(files, Folders.sql)
         return super().tearDown()

     def test_be_report(self):
@@ -34,7 +40,7 @@ class BeReportTest(TestBase):
         self.assertEqual(stderr.getvalue(), "")
         self.assertEqual(stdout.getvalue(), "unknown does not exists!\n")

-    def test_be_report_compare(self):
+    def test_be_report_compared(self):
         file_name = "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
         stdout, stderr = self.execute_script(
             "be_report", ["file", file_name, "-c"]
@@ -67,7 +73,9 @@ class BeReportTest(TestBase):
             # replace benchmark version
             line = self.replace_benchmark_version(line, output_text, index)
             self.assertEqual(line, output_text[index])
-        file_name = os.path.join(os.getcwd(), Files.datasets_report_excel)
+        file_name = os.path.join(
+            os.getcwd(), Folders.excel, Files.datasets_report_excel
+        )
         book = load_workbook(file_name)
         sheet = book["Datasets"]
         self.check_excel_sheet(
@@ -111,7 +119,16 @@ class BeReportTest(TestBase):
     def test_be_report_without_subcommand(self):
         stdout, stderr = self.execute_script("be_report", "")
         self.assertEqual(stderr.getvalue(), "")
-        self.check_output_file(stdout, "report_without_subcommand")
+        self.maxDiff = None
+        # Can't use check_output_file because the console output width
+        # differs between environments
+        file_name = "report_without_subcommand" + self.ext
+        with open(os.path.join(self.test_files, file_name)) as f:
+            expected = f.read()
+        if expected == stdout.getvalue():
+            self.assertEqual(stdout.getvalue(), expected)
+        else:
+            self.check_output_file(stdout, "report_without_subcommand2")

     def test_be_report_excel_compared(self):
         file_name = "results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json"
@@ -120,7 +137,7 @@ class BeReportTest(TestBase):
             ["file", file_name, "-x", "-c"],
         )
         file_name = os.path.join(
-            Folders.results, file_name.replace(".json", ".xlsx")
+            Folders.excel, file_name.replace(Files.report_ext, ".xlsx")
         )
         book = load_workbook(file_name)
         sheet = book["STree"]
@@ -135,7 +152,7 @@ class BeReportTest(TestBase):
             ["file", file_name, "-x"],
         )
         file_name = os.path.join(
-            Folders.results, file_name.replace(".json", ".xlsx")
+            Folders.excel, file_name.replace(Files.report_ext, ".xlsx")
         )
         book = load_workbook(file_name)
         sheet = book["STree"]
@@ -150,7 +167,7 @@ class BeReportTest(TestBase):
             ["file", file_name, "-q"],
         )
         file_name = os.path.join(
-            Folders.results, file_name.replace(".json", ".sql")
+            Folders.sql, file_name.replace(Files.report_ext, ".sql")
         )
         self.check_file_file(file_name, "sql")
         self.assertEqual(stderr.getvalue(), "")
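These tests all capture the scripts' console output before asserting on it. A minimal sketch of that pattern with stdlib `contextlib` redirection; `run_captured` and `demo_report` are illustrative helpers, not the suite's actual `execute_script`:

```python
import io
from contextlib import redirect_stdout, redirect_stderr


def run_captured(func, *args):
    """Run func while capturing stdout/stderr, as the test helpers do."""
    stdout, stderr = io.StringIO(), io.StringIO()
    with redirect_stdout(stdout), redirect_stderr(stderr):
        func(*args)
    return stdout, stderr


def demo_report(name):
    print(f"Results in results/{name}.json")


out, err = run_captured(demo_report, "demo")
# Same parsing the tests use to recover the generated file name
report_name = out.getvalue().splitlines()[-1].split("in ")[1]
# report_name == "results/demo.json"
```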

benchmark/tests/sql/.gitignore vendored Normal file

@@ -0,0 +1 @@
+#


@@ -6,13 +6,13 @@
         "n_estimators": [
             100
         ],
-        "base_estimator__C": [
+        "estimator__C": [
             1.0
         ],
-        "base_estimator__kernel": [
+        "estimator__kernel": [
             "linear"
         ],
-        "base_estimator__multiclass_strategy": [
+        "estimator__multiclass_strategy": [
             "ovo"
         ]
     },
@@ -23,7 +23,7 @@
         "n_estimators": [
             100
         ],
-        "base_estimator__C": [
+        "estimator__C": [
             0.001,
             0.0275,
             0.05,
@@ -36,10 +36,10 @@
             7,
             10000.0
         ],
-        "base_estimator__kernel": [
+        "estimator__kernel": [
             "liblinear"
         ],
-        "base_estimator__multiclass_strategy": [
+        "estimator__multiclass_strategy": [
             "ovr"
         ]
     },
@@ -50,7 +50,7 @@
         "n_estimators": [
             100
         ],
-        "base_estimator__C": [
+        "estimator__C": [
             0.05,
             1.0,
             1.05,
@@ -62,7 +62,7 @@
             57,
             10000.0
         ],
-        "base_estimator__gamma": [
+        "estimator__gamma": [
             0.001,
             0.1,
             0.14,
@@ -70,10 +70,10 @@
             "auto",
             "scale"
         ],
-        "base_estimator__kernel": [
+        "estimator__kernel": [
             "rbf"
         ],
-        "base_estimator__multiclass_strategy": [
+        "estimator__multiclass_strategy": [
             "ovr"
         ]
     },
@@ -84,20 +84,20 @@
         "n_estimators": [
             100
         ],
-        "base_estimator__C": [
+        "estimator__C": [
             0.05,
             0.2,
             1.0,
             8.25
         ],
-        "base_estimator__gamma": [
+        "estimator__gamma": [
             0.1,
             "scale"
         ],
-        "base_estimator__kernel": [
+        "estimator__kernel": [
             "poly"
         ],
-        "base_estimator__multiclass_strategy": [
+        "estimator__multiclass_strategy": [
             "ovo",
             "ovr"
         ]
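The key rename above mirrors scikit-learn's deprecation of the `base_estimator` ensemble argument in favor of `estimator` (deprecated in 1.2, removed in 1.4), which also changes the `__`-prefixed parameter names used in grid-search files like this one. A small sketch of migrating such a grid as a plain dict transform; the sample values are taken from this file, the helper name is hypothetical:

```python
def migrate_param_grid(grid: dict) -> dict:
    """Rename base_estimator__* keys to estimator__* (scikit-learn >= 1.2)."""
    return {
        key.replace("base_estimator__", "estimator__", 1): value
        for key, value in grid.items()
    }


old_grid = {
    "n_estimators": [100],
    "base_estimator__C": [1.0],
    "base_estimator__kernel": ["linear"],
    "base_estimator__multiclass_strategy": ["ovo"],
}
new_grid = migrate_param_grid(old_grid)
# keys without the prefix (n_estimators) pass through unchanged
```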


@@ -4,6 +4,8 @@
 Creating folder test_project/hidden_results
 Creating folder test_project/exreport
 Creating folder test_project/exreport/exreport_output
 Creating folder test_project/img
+Creating folder test_project/excel
+Creating folder test_project/sql
 Done!
 Please, edit .env file with your settings and add a datasets folder
 with an all.txt file with the datasets you want to use.


@@ -0,0 +1,8 @@
 # Date File Score Time(h) Title
=== ========== =============================================================== ======== ======= ============================================
 0 2022-04-20 results_accuracy_ODTE_Galgo_2022-04-20_10:52:20_0.json 0.04341 6.275 Gridsearched hyperparams v022.1b random_init
 1 2022-01-14 results_accuracy_RandomForest_iMac27_2022-01-14_12:39:30_0.json 0.03627 0.076 Test default paramters with RandomForest
 2 2021-11-01 results_accuracy_STree_macbook-pro_2021-11-01_19:17:07_0.json 0.03790 1.143 default B
 3 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
 4 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
results/best_results_accuracy_ODTE.json does not exist


@@ -6,7 +6,7 @@
 *************************************************************************************************************************
 * STree ver. 1.2.3 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2021-11-01 19:17:07 *
 * default B *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 4115.04 seconds, 1.14 hours, on macbook-pro *
 * Score is accuracy *
 *************************************************************************************************************************
@@ -14,7 +14,8 @@
 Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
 ============================== ====== ===== === ======= ======= ======= =============== ================= ===============
 balance-scale 625 4 3 18.78 9.88 5.90 0.970000±0.0020 0.233304±0.0481 {'max_features': 'auto', 'splitter': 'mutual'}
 balloons 16 4 2 4.72 2.86 2.78 0.556667±0.2941 0.021352±0.0058 {'max_features': 'auto', 'splitter': 'mutual'}
 *************************************************************************************************************************
+* ✗ Less than or equal to ZeroR...: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0379 *
 *************************************************************************************************************************


@@ -4,4 +4,4 @@
  1 2021-10-27 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json 0.04158 0.943 default A
  2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
 Added results/results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json to some_results.xlsx
-Generated file: some_results.xlsx
+Generated file: excel/some_results.xlsx


@@ -5,4 +5,4 @@
  2 2021-09-30 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json 0.04544 0.173 With gridsearched hyperparameters
 Added results/results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json to some_results.xlsx
 Added results/results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json to some_results.xlsx
-Generated file: some_results.xlsx
+Generated file: excel/some_results.xlsx


@@ -1,7 +1,7 @@
 *************************************************************************************************************************
 * STree ver. 1.2.4 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2022-05-09 00:15:25 *
 * test *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 0.80 seconds, 0.00 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
@@ -9,8 +9,9 @@
 Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
 ============================== ====== ===== === ======= ======= ======= =============== ================= ===============
 balance-scale 625 4 3 23.32 12.16 6.44 0.840160±0.0304 0.013745±0.0019 {'splitter': 'best', 'max_features': 'auto'}
-balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850 0.000388±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
+balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850 0.000388±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
 *************************************************************************************************************************
+* ➶ Better than ZeroR + 10.0%.....: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0422 *
 *************************************************************************************************************************
 Results in results/results_accuracy_STree_iMac27_2022-05-09_00:15:25_0.json


@@ -1,7 +1,7 @@
 *************************************************************************************************************************
 * STree ver. 1.2.4 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2022-05-08 20:14:43 *
 * test *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 0.48 seconds, 0.00 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
@@ -11,6 +11,7 @@ Dataset Sampl. Feat. Cls Nodes Leaves Depth Score
 balance-scale 625 4 3 17.36 9.18 6.18 0.908480±0.0247 0.007388±0.0013 {}
 balloons 16 4 2 4.64 2.82 2.66 0.663333±0.3009 0.000664±0.0002 {}
 *************************************************************************************************************************
+* ➶ Better than ZeroR + 10.0%.....: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0390 *
 *************************************************************************************************************************
 Results in results/results_accuracy_STree_iMac27_2022-05-08_20:14:43_0.json


@@ -1,15 +1,16 @@
 *************************************************************************************************************************
 * STree ver. 1.2.4 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2022-05-08 19:38:28 *
 * Test with only one dataset *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 0.06 seconds, 0.00 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
 Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
 ============================== ====== ===== === ======= ======= ======= =============== ================= ===============
 balloons 16 4 2 4.64 2.82 2.66 0.663333±0.3009 0.000671±0.0001 {}
 *************************************************************************************************************************
+* ➶ Better than ZeroR + 10.0%.....: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0165 *
 *************************************************************************************************************************
 Partial result file removed: results/results_accuracy_STree_iMac27_2022-05-08_19:38:28_0.json


@@ -1,7 +1,7 @@
 *************************************************************************************************************************
 * STree ver. 1.2.4 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2022-05-09 00:21:06 *
 * test *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 0.89 seconds, 0.00 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
@@ -11,6 +11,7 @@ Dataset Sampl. Feat. Cls Nodes Leaves Depth Score
 balance-scale 625 4 3 26.12 13.56 7.94 0.910720±0.0249 0.015852±0.0027 {'C': 1.0, 'kernel': 'liblinear', 'multiclass_strategy': 'ovr'}
 balloons 16 4 2 4.64 2.82 2.66 0.663333±0.3009 0.000640±0.0001 {'C': 1.0, 'kernel': 'linear', 'multiclass_strategy': 'ovr'}
 *************************************************************************************************************************
+* ➶ Better than ZeroR + 10.0%.....: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0391 *
 *************************************************************************************************************************
 Results in results/results_accuracy_STree_iMac27_2022-05-09_00:21:06_0.json


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;" 624.25 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 0.17 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"4"
 7;7;"3"
 7;8;"0.97056"
-7;9;"0.0150468069702512"
-7;10;"0.01404867172241211"
-7;11;"0.002026269126958884"
-7;12;"{'C': 10000.0, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
+7;9;" "
+7;10;"0.0150468069702512"
+7;11;"0.01404867172241211"
+7;12;"0.002026269126958884"
+7;13;"{'C': 10000, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"2"
 8;7;"2"
 8;8;"0.86"
-8;9;"0.2850146195080759"
-8;10;"0.0008541679382324218"
-8;11;"3.629469326417878e-05"
-8;12;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0454"
+8;9;""
+8;10;"0.2850146195080759"
+8;11;"0.0008541679382324218"
+8;12;"3.629469326417878e-05"
+8;13;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0454"


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;"3,395.01 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 0.94 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"5.9"
 7;7;"5.9"
 7;8;"0.98"
-7;9;"0.001"
-7;10;"0.2852065515518188"
-7;11;"0.06031593282605064"
-7;12;"{'splitter': 'best', 'max_features': 'auto'}"
+7;9;" "
+7;10;"0.001"
+7;11;"0.2852065515518188"
+7;12;"0.06031593282605064"
+7;13;"{'splitter': 'best', 'max_features': 'auto'}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"2.56"
 8;7;"2.56"
 8;8;"0.695"
-8;9;"0.2756860130252853"
-8;10;"0.02120100021362305"
-8;11;"0.003526023309468471"
-8;12;"{'splitter': 'best', 'max_features': 'auto'}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0416"
+8;9;""
+8;10;"0.2756860130252853"
+8;11;"0.02120100021362305"
+8;12;"0.003526023309468471"
+8;13;"{'splitter': 'best', 'max_features': 'auto'}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0416"


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;"22,591.47 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"Galgo"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 6.28 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"4.180599999999999"
 7;7;"3.536"
 7;8;"0.96352"
-7;9;"0.02494974148162661"
-7;10;"0.3166321754455567"
-7;11;"0.1991881389525559"
-7;12;"{'base_estimator__C': 57, 'base_estimator__gamma': 0.1, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
+7;9;" "
+7;10;"0.02494974148162661"
+7;11;"0.3166321754455567"
+7;12;"0.1991881389525559"
+7;13;"{'base_estimator__C': 57, 'base_estimator__gamma': 0.1, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"1.9976"
 8;7;"1.9976"
 8;8;"0.785"
-8;9;"0.2461311755051675"
-8;10;"0.1156062078475952"
-8;11;"0.0127842418285999"
-8;12;"{'base_estimator__C': 5, 'base_estimator__gamma': 0.14, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0434"
+8;9;""
+8;10;"0.2461311755051675"
+8;11;"0.1156062078475952"
+8;12;"0.0127842418285999"
+8;13;"{'base_estimator__C': 5, 'base_estimator__gamma': 0.14, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0434"


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;"3,395.01 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 0.94 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,9 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -28,9 +30,11 @@
 7;6;"5.9"
 7;7;"5.9"
 7;8;"0.98"
-7;9;"0.001"
-7;10;"0.2852065515518188"
-7;11;"0.06031593282605064"
+7;9;" "
+7;10;"0.001"
+7;11;"0.2852065515518188"
+7;12;"0.06031593282605064"
+7;13;"{'splitter': 'best', 'max_features': 'auto'}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -39,8 +43,12 @@
 8;6;"2.56"
 8;7;"2.56"
 8;8;"0.695"
-8;9;"0.2756860130252853"
-8;10;"0.02120100021362305"
-8;11;"0.003526023309468471"
-8;12;"{'splitter': 'best', 'max_features': 'auto'}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0416"
+8;9;""
+8;10;"0.2756860130252853"
+8;11;"0.02120100021362305"
+8;12;"0.003526023309468471"
+8;13;"{'splitter': 'best', 'max_features': 'auto'}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0416"


@@ -3,10 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;" 624.25 s"
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
-4;10;"Stratified: False"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+4;5;" 0.17 h"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -21,32 +23,32 @@
 6;12;"Time Std."
 6;13;"Hyperparameters"
 7;1;"balance-scale"
-7;2;625
-7;3;4
-7;4;3
-7;5;7
-7;6;4
-7;7;3
-7;8;0.97056
+7;2;"625"
+7;3;"4"
+7;4;"3"
+7;5;"7"
+7;6;"4"
+7;7;"3"
+7;8;"0.97056"
 7;9;" "
-7;10;0.0150468069702512
-7;11;0.01404867172241211
-7;12;0.002026269126958884
-7;13;"{'C': 10000.0, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
+7;10;"0.0150468069702512"
+7;11;"0.01404867172241211"
+7;12;"0.002026269126958884"
+7;13;"{'C': 10000, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
 8;1;"balloons"
-8;2;16
-8;3;4
-8;4;2
-8;5;3
-8;6;2
-8;7;2
-8;8;0.86
+8;2;"16"
+8;3;"4"
+8;4;"2"
+8;5;"3"
+8;6;"2"
+8;7;"2"
+8;8;"0.86"
 8;9;"✔"
-8;10;0.2850146195080759
-8;11;0.0008541679382324218
-8;12;3.629469326417878e-05
-8;13;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
+8;10;"0.2850146195080759"
+8;11;"0.0008541679382324218"
+8;12;"3.629469326417878e-05"
+8;13;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
 11;2;"✔"
-11;3;1
+11;3;"1"
 11;4;"Equal to best"
 13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0454"


@@ -1,25 +1,28 @@
-1;1;"Datasets used in benchmark ver. 0.2.0"
+1;1;"Datasets used in benchmark ver. 1.0.1"
 2;1;" Default score accuracy"
 2;2;"Cross validation"
-2;5;"5 Folds"
+2;6;"5 Folds"
 3;2;"Stratified"
-3;5;"False"
+3;6;"False"
 4;2;"Discretized"
-4;5;"False"
+4;6;"False"
 5;2;"Seeds"
-5;5;"[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+5;6;"[57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
-6;4;"Classes"
-6;5;"Balance"
+6;4;"Continuous"
+6;5;"Classes"
+6;6;"Balance"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
-7;4;"3"
-7;5;" 7.84%/ 46.08%/ 46.08%"
+7;4;"0"
+7;5;"3"
+7;6;" 7.84% (49) / 46.08% (288) / 46.08% (288) "
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
-8;4;"2"
-8;5;"56.25%/ 43.75%"
+8;4;"0"
+8;5;"2"
+8;6;"56.25% (9) / 43.75% (7) "


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;"22,591.47 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"Galgo"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 6.28 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"4.180599999999999"
 7;7;"3.536"
 7;8;"0.96352"
-7;9;"0.02494974148162661"
-7;10;"0.3166321754455567"
-7;11;"0.1991881389525559"
-7;12;"{'base_estimator__C': 57, 'base_estimator__gamma': 0.1, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
+7;9;" "
+7;10;"0.02494974148162661"
+7;11;"0.3166321754455567"
+7;12;"0.1991881389525559"
+7;13;"{'base_estimator__C': 57, 'base_estimator__gamma': 0.1, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"1.9976"
 8;7;"1.9976"
 8;8;"0.785"
-8;9;"0.2461311755051675"
-8;10;"0.1156062078475952"
-8;11;"0.0127842418285999"
-8;12;"{'base_estimator__C': 5, 'base_estimator__gamma': 0.14, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0434"
+8;9;""
+8;10;"0.2461311755051675"
+8;11;"0.1156062078475952"
+8;12;"0.0127842418285999"
+8;13;"{'base_estimator__C': 5, 'base_estimator__gamma': 0.14, 'base_estimator__kernel': 'rbf', 'base_estimator__multiclass_strategy': 'ovr', 'n_estimators': 100, 'n_jobs': -1}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0434"


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;" 272.74 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 0.08 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"98.42"
 7;7;"10.6814"
 7;8;"0.83616"
-7;9;"0.02649630917694009"
-7;10;"0.08222018241882324"
-7;11;"0.001302632681512063"
-7;12;"{}"
+7;9;" "
+7;10;"0.02649630917694009"
+7;11;"0.08222018241882324"
+7;12;"0.001302632681512063"
+7;13;"{}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"4.58"
 8;7;"3.0982"
 8;8;"0.625"
-8;9;"0.249582985531199"
-8;10;"0.07016648769378662"
-8;11;"0.002460508923990468"
-8;12;"{}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0363"
+8;9;""
+8;10;"0.249582985531199"
+8;11;"0.07016648769378662"
+8;12;"0.002460508923990468"
+8;13;"{}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0363"


@@ -3,12 +3,12 @@
 3;1;" Score is accuracy"
 3;2;" Execution time"
 3;5;" 624.25 s"
-3;7;" "
-3;8;"Platform"
+3;7;"Platform"
 3;9;"iMac27"
-3;10;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
+3;11;"Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1]"
 4;5;" 0.17 h"
-4;10;"Stratified: False"
+4;11;"Stratified: False"
+4;13;"Discretized: False"
 6;1;"Dataset"
 6;2;"Samples"
 6;3;"Features"
@@ -17,10 +17,11 @@
 6;6;"Leaves"
 6;7;"Depth"
 6;8;"Score"
-6;9;"Score Std."
-6;10;"Time"
-6;11;"Time Std."
-6;12;"Hyperparameters"
+6;9;"Stat"
+6;10;"Score Std."
+6;11;"Time"
+6;12;"Time Std."
+6;13;"Hyperparameters"
 7;1;"balance-scale"
 7;2;"625"
 7;3;"4"
@@ -29,10 +30,11 @@
 7;6;"4"
 7;7;"3"
 7;8;"0.97056"
-7;9;"0.0150468069702512"
-7;10;"0.01404867172241211"
-7;11;"0.002026269126958884"
-7;12;"{'C': 10000.0, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
+7;9;" "
+7;10;"0.0150468069702512"
+7;11;"0.01404867172241211"
+7;12;"0.002026269126958884"
+7;13;"{'C': 10000, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
 8;1;"balloons"
 8;2;"16"
 8;3;"4"
@@ -41,8 +43,12 @@
 8;6;"2"
 8;7;"2"
 8;8;"0.86"
-8;9;"0.2850146195080759"
-8;10;"0.0008541679382324218"
-8;11;"3.629469326417878e-05"
-8;12;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}"
-10;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0454"
+8;9;""
+8;10;"0.2850146195080759"
+8;11;"0.0008541679382324218"
+8;12;"3.629469326417878e-05"
+8;13;"{'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}"
+11;2;"➶"
+11;3;"1"
+11;4;"Better than ZeroR + 10.0%"
+13;1;"** accuracy compared to STree_default (liblinear-ovr) .: 0.0454"


@@ -1,15 +1,16 @@
 *************************************************************************************************************************
 * STree ver. 1.2.3 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2021-09-30 11:42:07 *
 * With gridsearched hyperparameters *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 624.25 seconds, 0.17 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
 Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
 ============================== ====== ===== === ======= ======= ======= =============== ================= ===============
-balance-scale 625 4 3 7.00 4.00 3.00 0.970560±0.0150 0.014049±0.0020 {'C': 10000.0, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
-balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850 0.000854±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
+balance-scale 625 4 3 7.00 4.00 3.00 0.970560±0.0150 0.014049±0.0020 {'C': 10000, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
+balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850 0.000854±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
 *************************************************************************************************************************
+* ➶ Better than ZeroR + 10.0%.....: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0454 *
 *************************************************************************************************************************


@@ -5,7 +5,7 @@
 Dataset Score File/Message Hyperparameters
 ============================== ======== ============================================================================ =============================================
 balance-scale 0.980000 results_accuracy_STree_iMac27_2021-10-27_09:40:40_0.json {'splitter': 'best', 'max_features': 'auto'}
-balloons 0.860000 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
+balloons 0.860000 results_accuracy_STree_iMac27_2021-09-30_11:42:07_0.json {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
 ******************************************************************************************************************************************************************
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0457 *
 ******************************************************************************************************************************************************************


@@ -1,16 +1,16 @@
 *************************************************************************************************************************
 * STree ver. 1.2.3 Python ver. 3.11x with 5 Folds cross validation and 10 random seeds. 2021-09-30 11:42:07 *
 * With gridsearched hyperparameters *
-* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False *
+* Random seeds: [57, 31, 1714, 17, 23, 79, 83, 97, 7, 1] Stratified: False Discretized: False *
 * Execution took 624.25 seconds, 0.17 hours, on iMac27 *
 * Score is accuracy *
 *************************************************************************************************************************
 Dataset Sampl. Feat. Cls Nodes Leaves Depth Score Time Hyperparameters
 ============================== ====== ===== === ======= ======= ======= =============== ================= ===============
-balance-scale 625 4 3 7.00 4.00 3.00 0.970560±0.0150 0.014049±0.0020 {'C': 10000.0, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
-balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850✔ 0.000854±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000.0, 'multiclass_strategy': 'ovr'}
+balance-scale 625 4 3 7.00 4.00 3.00 0.970560±0.0150 0.014049±0.0020 {'C': 10000, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
+balloons 16 4 2 3.00 2.00 2.00 0.860000±0.2850✔ 0.000854±0.0000 {'C': 7, 'gamma': 0.1, 'kernel': 'rbf', 'max_iter': 10000, 'multiclass_strategy': 'ovr'}
 *************************************************************************************************************************
-* ✔ Equal to best .....: 1 *
+* ✔ Equal to best.................: 1 *
 * accuracy compared to STree_default (liblinear-ovr) .: 0.0454 *
 *************************************************************************************************************************

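The spreadsheet fixtures in the diffs above use a simple `row;column;value` cell-dump format, where string cells are wrapped in double quotes and numeric cells may appear bare. As a hypothetical illustration only (this helper is not part of the repository), a minimal sketch of parsing one such line:

```python
# Minimal sketch: parse one "row;col;value" fixture line into (row, col, value).
# Assumes exactly three ';'-separated fields; a double-quoted value stays a
# string, while an unquoted value is interpreted as a number.
def parse_cell(line: str):
    row, col, value = line.split(";", 2)
    if value.startswith('"') and value.endswith('"'):
        return int(row), int(col), value[1:-1]
    try:
        num = float(value)
    except ValueError:
        return int(row), int(col), value
    return int(row), int(col), int(num) if num.is_integer() else num

print(parse_cell('8;2;"16"'))  # quoted -> kept as the string '16'
print(parse_cell("8;8;0.86"))  # bare -> parsed as the float 0.86
```

Distinguishing quoted from bare values mirrors the change many of these diffs record: cells that were stored as raw numbers (`8;2;16`) became quoted strings (`8;2;"16"`).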