50 Commits
v1.1.3 ... 2.1.1

Author SHA1 Message Date
c86c7f9ef0 Update github workflow remove coverage report 2025-07-19 21:59:54 +02:00
828e6a28c0 Update github workflow 2025-07-19 21:53:59 +02:00
8f2a0015d9 Update github workflow 2025-07-19 21:36:53 +02:00
8945a3f16e Update github workflow 2025-07-19 20:47:24 +02:00
80b7d6e6f7 Update github build workflow 2025-07-19 20:40:32 +02:00
b1d550f211 Fix release build 2025-07-19 20:29:06 +02:00
8a1b68376d Update debug build 2025-07-17 11:45:56 +02:00
Ricardo Montañana Gómez
563a84659f Fix conan and create new version (#11)
* First approach

* Fix debug conan build target

* Add viewcoverage and fix coverage generation

* Add more tests to cover new integrity checks

* Add tests to accomplish 100%

* Fix conan-create makefile target
2025-07-17 00:14:18 +02:00
1b9d924ebe Update version and dependencies 2025-07-16 23:40:33 +02:00
08d8910b34 Add version 2.7.1 2025-07-16 16:11:16 +02:00
Ricardo Montañana Gómez
6d8b55a808 Fix conan (#10)
* Fix debug conan build target

* Add viewcoverage and fix coverage generation

* Add more tests to cover new integrity checks

* Add tests to accomplish 100%

* Fix conan-create makefile target
2025-07-02 20:09:34 +02:00
c1759ba1ce Fix conan build 2025-06-28 19:17:44 +02:00
f1dae498ac Fix tests 2025-06-28 18:41:33 +02:00
4418ea8a6f Compiling right 2025-06-28 17:18:57 +02:00
159e24b5cb Remove submodule 2025-06-28 16:38:43 +02:00
77e28e728e Remove submodule 2025-06-28 16:38:19 +02:00
18db982dec Update build method 2025-06-28 13:55:04 +02:00
99b751a4d4 Claude enhancement proposal 2025-06-28 13:17:31 +02:00
059fd33b4e Begin adding conan dependency manager 2025-06-28 01:27:22 +02:00
e068bf0a54 Add technical analysis report 2025-06-27 12:35:48 +02:00
Ricardo Montañana Gómez
cfb993f5ec Update README.md 2024-11-29 14:43:37 +01:00
7d62d6af4a Remove unneeded ; 2024-11-20 20:07:09 +01:00
ea70535984 Update config variable names 2024-09-29 13:28:44 +02:00
2d8b949abd Refactor library version and installation 2024-07-23 00:36:31 +02:00
ab12622009 Add install cmake/make target 2024-07-22 22:01:33 +02:00
248a511972 Add flag to build sample in Makefile 2024-07-22 19:38:12 +02:00
d9bd0126f9 Fix version number in tests 2024-07-22 12:23:21 +02:00
210af46a88 Change library name to fimdlp 2024-07-22 11:26:16 +02:00
2db60e007d Update version in test 2024-07-04 18:21:26 +02:00
1cf245fa49 Update version number 2024-07-04 18:19:05 +02:00
Ricardo Montañana Gómez
e36d9af8f9 Fix BinDisc quantile mistakes (#9)
* Fix BinDisc quantile mistakes

* Fix FImdlp tests

* Fix tests, samples and remove unneeded support files

* Add copyright header to sources
Fix coverage report
Add coverage badge to README

* Update sonar github action

* Move sources to a folder and change ArffFiles files to library

* Add recursive submodules to github action
2024-07-04 17:27:39 +02:00
7b0673fd4b Update README 2024-06-24 11:47:03 +02:00
a1346e1943 Fix Error in percentile method 2024-06-24 10:55:26 +02:00
b3fc598c29 Update build.yml 2024-06-14 22:04:29 +02:00
cc1efa0b4e Update README 2024-06-14 22:01:11 +02:00
90965877eb Add Makefile with build & test actions 2024-06-14 21:17:30 +02:00
c4e6c041fe Fix int type 2024-06-09 00:29:55 +02:00
7938df7f0f Update sonar mdlp version 2024-06-08 13:25:28 +02:00
7ee9896734 Fix mistake in github action 2024-06-08 12:36:56 +02:00
8f7f605670 Fix mistake in github action 2024-06-08 12:32:18 +02:00
2f55b27691 Fix mistake in github action 2024-06-08 12:28:23 +02:00
378fbd51ef Fix mistake in github action 2024-06-08 12:25:17 +02:00
402d0da878 Fix mistake in github action 2024-06-08 12:23:28 +02:00
f34bcc2ed7 Add libtorch to github action 2024-06-08 12:20:51 +02:00
c9ba35fb58 update test script 2024-06-08 12:02:16 +02:00
e205668906 Add torch methods to discretize
Add fit_transform methods
2024-06-07 23:54:42 +02:00
633aa52849 Refactor sample build 2024-06-06 12:04:55 +02:00
61de687476 Fix library creation problem 2024-06-06 11:13:50 +02:00
7ff88c8e4b Update Discretizer version 2024-06-05 17:55:45 +02:00
Ricardo Montañana Gómez
638bb2a59e Discretizer (#8)
* Add better check in testKBins.py

* Add Discretizer base class for Both discretizers

* Refactor order of constructors init
2024-06-05 17:53:08 +02:00
58 changed files with 3752 additions and 1169 deletions

11
.conan/profiles/default Normal file

@@ -0,0 +1,11 @@
[settings]
os=Linux
arch=x86_64
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
build_type=Release
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True

View File

@@ -13,28 +13,35 @@ jobs:
env:
BUILD_WRAPPER_OUT_DIR: build_wrapper_output_directory # Directory where build-wrapper output will be placed
steps:
- uses: actions/checkout@v4.1.6
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
submodules: recursive
- name: Install sonar-scanner and build-wrapper
uses: SonarSource/sonarcloud-github-c-cpp@v2
- name: Install Python and Conan
run: |
sudo apt-get update
sudo apt-get -y install python3 python3-pip
pip3 install conan
- name: Install lcov & gcovr
run: |
sudo apt-get -y install lcov
sudo apt-get -y install gcovr
- name: Setup Conan profiles
run: |
conan profile detect --force
conan remote add cimmeria https://conan.rmontanana.es/artifactory/api/conan/Cimmeria
- name: Install dependencies with Conan
run: |
conan install . --build=missing -of build_debug -s build_type=Debug -o enable_testing=True
- name: Configure with CMake
run: |
cmake -S . -B build_debug -DCMAKE_TOOLCHAIN_FILE=build_debug/build/Debug/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Debug -DENABLE_TESTING=ON
- name: Tests & build-wrapper
run: |
cmake -S . -B build -Wno-dev
build-wrapper-linux-x86-64 --out-dir ${{ env.BUILD_WRAPPER_OUT_DIR }} cmake --build build/ --config Release
cd build
make
ctest -C Release --output-on-failure --test-dir tests
cd ..
gcovr -f CPPFImdlp.cpp -f Metrics.cpp -f BinDisc.cpp --txt --sonarqube=coverage.xml
- name: Run sonar-scanner
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
run: |
sonar-scanner --define sonar.cfamily.build-wrapper-output="${{ env.BUILD_WRAPPER_OUT_DIR }}" \
--define sonar.coverageReportPaths=coverage.xml
build-wrapper-linux-x86-64 --out-dir ${{ env.BUILD_WRAPPER_OUT_DIR }} cmake --build build_debug --config Debug -j 4
cp -r tests/datasets build_debug/tests/datasets
cd build_debug/tests
ctest --output-on-failure -j 4

5
.gitignore vendored

@@ -33,8 +33,11 @@
**/build
build_Debug
build_Release
build_debug
build_release
**/lcoverage
.idea
cmake-*
**/CMakeFiles
**/gcovr-report
CMakeUserPresets.json

11
.vscode/launch.json vendored

@@ -8,15 +8,10 @@
"name": "C++ Launch config",
"type": "cppdbg",
"request": "launch",
"program": "${workspaceFolder}/build/sample/sample",
"cwd": "${workspaceFolder}/build/sample",
"args": [
"-f",
"glass"
],
"targetArchitecture": "arm64",
"program": "${workspaceFolder}/tests/build/BinDisc_unittest",
"cwd": "${workspaceFolder}/tests/build",
"args": [],
"launchCompleteCommand": "exec-run",
"preLaunchTask": "CMake: build",
"stopAtEntry": false,
"linux": {
"MIMode": "gdb",

106
.vscode/settings.json vendored

@@ -5,5 +5,109 @@
},
"C_Cpp.default.configurationProvider": "ms-vscode.cmake-tools",
"cmake.configureOnOpen": true,
"sonarlint.pathToCompileCommands": "${workspaceFolder}/build/compile_commands.json"
"sonarlint.pathToCompileCommands": "${workspaceFolder}/build/compile_commands.json",
"files.associations": {
"*.rmd": "markdown",
"*.py": "python",
"vector": "cpp",
"__bit_reference": "cpp",
"__bits": "cpp",
"__config": "cpp",
"__debug": "cpp",
"__errc": "cpp",
"__hash_table": "cpp",
"__locale": "cpp",
"__mutex_base": "cpp",
"__node_handle": "cpp",
"__nullptr": "cpp",
"__split_buffer": "cpp",
"__string": "cpp",
"__threading_support": "cpp",
"__tuple": "cpp",
"array": "cpp",
"atomic": "cpp",
"bitset": "cpp",
"cctype": "cpp",
"chrono": "cpp",
"clocale": "cpp",
"cmath": "cpp",
"compare": "cpp",
"complex": "cpp",
"concepts": "cpp",
"cstdarg": "cpp",
"cstddef": "cpp",
"cstdint": "cpp",
"cstdio": "cpp",
"cstdlib": "cpp",
"cstring": "cpp",
"ctime": "cpp",
"cwchar": "cpp",
"cwctype": "cpp",
"exception": "cpp",
"initializer_list": "cpp",
"ios": "cpp",
"iosfwd": "cpp",
"istream": "cpp",
"limits": "cpp",
"locale": "cpp",
"memory": "cpp",
"mutex": "cpp",
"new": "cpp",
"optional": "cpp",
"ostream": "cpp",
"ratio": "cpp",
"sstream": "cpp",
"stdexcept": "cpp",
"streambuf": "cpp",
"string": "cpp",
"string_view": "cpp",
"system_error": "cpp",
"tuple": "cpp",
"type_traits": "cpp",
"typeinfo": "cpp",
"unordered_map": "cpp",
"variant": "cpp",
"algorithm": "cpp",
"iostream": "cpp",
"iomanip": "cpp",
"numeric": "cpp",
"set": "cpp",
"__tree": "cpp",
"deque": "cpp",
"list": "cpp",
"map": "cpp",
"unordered_set": "cpp",
"any": "cpp",
"condition_variable": "cpp",
"forward_list": "cpp",
"fstream": "cpp",
"stack": "cpp",
"thread": "cpp",
"__memory": "cpp",
"filesystem": "cpp",
"*.toml": "toml",
"utility": "cpp",
"span": "cpp",
"*.tcc": "cpp",
"bit": "cpp",
"charconv": "cpp",
"cinttypes": "cpp",
"codecvt": "cpp",
"functional": "cpp",
"iterator": "cpp",
"memory_resource": "cpp",
"random": "cpp",
"source_location": "cpp",
"format": "cpp",
"numbers": "cpp",
"semaphore": "cpp",
"stop_token": "cpp",
"text_encoding": "cpp",
"typeindex": "cpp",
"valarray": "cpp",
"csignal": "cpp",
"regex": "cpp",
"future": "cpp",
"shared_mutex": "cpp"
}
}

View File

@@ -1,138 +0,0 @@
#include <algorithm>
#include <limits>
#include <cmath>
#include "BinDisc.h"
#include <iostream>
#include <string>
namespace mdlp {
BinDisc::BinDisc(int n_bins, strategy_t strategy) : n_bins{ n_bins }, strategy{ strategy }
{
if (n_bins < 3) {
throw std::invalid_argument("n_bins must be greater than 2");
}
}
BinDisc::~BinDisc() = default;
void BinDisc::fit(samples_t& X)
{
cutPoints.clear();
if (X.empty()) {
cutPoints.push_back(std::numeric_limits<precision_t>::max());
return;
}
if (strategy == strategy_t::QUANTILE) {
fit_quantile(X);
} else if (strategy == strategy_t::UNIFORM) {
fit_uniform(X);
}
}
std::vector<precision_t> linspace(precision_t start, precision_t end, int num)
{
// Doesn't include end point as it is not needed
if (start == end) {
return { 0 };
}
precision_t delta = (end - start) / static_cast<precision_t>(num - 1);
std::vector<precision_t> linspc;
for (size_t i = 0; i + 1 < static_cast<size_t>(num); ++i) {
precision_t val = start + delta * static_cast<precision_t>(i);
linspc.push_back(val);
}
return linspc;
}
size_t clip(const size_t n, size_t lower, size_t upper)
{
return std::max(lower, std::min(n, upper));
}
std::vector<precision_t> percentile(samples_t& data, std::vector<precision_t>& percentiles)
{
// Implementation taken from https://dpilger26.github.io/NumCpp/doxygen/html/percentile_8hpp_source.html
std::vector<precision_t> results;
results.reserve(percentiles.size());
for (auto percentile : percentiles) {
const size_t i = static_cast<size_t>(std::floor(static_cast<double>(data.size() - 1) * percentile / 100.));
const auto indexLower = clip(i, 0, data.size() - 1);
const double percentI = static_cast<double>(indexLower) / static_cast<double>(data.size() - 1);
const double fraction =
(percentile / 100.0 - percentI) /
(static_cast<double>(indexLower + 1) / static_cast<double>(data.size() - 1) - percentI);
const auto value = data[indexLower] + (data[indexLower + 1] - data[indexLower]) * fraction;
if (results.empty() || value != results.back()) // guard back() on an empty vector
results.push_back(value);
}
return results;
}
void BinDisc::fit_quantile(samples_t& X)
{
auto quantiles = linspace(0.0, 100.0, n_bins + 1);
auto data = X;
std::sort(data.begin(), data.end());
if (data.front() == data.back() || data.size() == 1) {
// if X is constant
cutPoints.push_back(std::numeric_limits<precision_t>::max());
return;
}
cutPoints = percentile(data, quantiles);
normalizeCutPoints();
}
void BinDisc::fit_uniform(samples_t& X)
{
auto minmax = std::minmax_element(X.begin(), X.end());
cutPoints = linspace(*minmax.first, *minmax.second, n_bins + 1);
normalizeCutPoints();
}
void BinDisc::normalizeCutPoints()
{
// Add max value to the end
cutPoints.push_back(std::numeric_limits<precision_t>::max());
// Remove first as it is not needed
cutPoints.erase(cutPoints.begin());
}
labels_t& BinDisc::transform(const samples_t& X)
{
discretizedData.clear();
discretizedData.reserve(X.size());
for (const precision_t& item : X) {
auto upper = std::upper_bound(cutPoints.begin(), cutPoints.end(), item);
discretizedData.push_back(upper - cutPoints.begin());
}
return discretizedData;
}
}
// void BinDisc::fit_quantile(samples_t& X)
// {
// cutPoints.clear();
// if (X.empty()) {
// cutPoints.push_back(std::numeric_limits<float>::max());
// return;
// }
// samples_t data = X;
// std::sort(data.begin(), data.end());
// float min_val = data.front();
// float max_val = data.back();
// // Handle case of all data points having the same value
// if (min_val == max_val) {
// cutPoints.push_back(std::numeric_limits<float>::max());
// return;
// }
// int first = X.size() / n_bins;
// cutPoints.push_back(data.at(first - 1));
// int bins_done = 1;
// int prev = first - 1;
// while (bins_done < n_bins) {
// int next = first * (bins_done + 1) - 1;
// while (next < X.size() && data.at(next) == data[prev]) {
// ++next;
// }
// if (next == X.size() || bins_done == n_bins - 1) {
// cutPoints.push_back(std::numeric_limits<float>::max());
// break;
// } else {
// cutPoints.push_back(data[next]);
// bins_done++;
// prev = next;
// }
// }
// }
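The quantile path above (a linspace of percentiles, linear interpolation between neighboring samples, duplicate suppression) can be condensed into a standalone sketch. This is a simplified, self-contained illustration, not the library's exact code: it computes only the interior percentiles directly rather than reusing `linspace` and `percentile`.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Compute n_bins - 1 interior cut points from sorted data by linear
// interpolation at evenly spaced percentiles, skipping duplicates.
std::vector<double> quantile_cuts(std::vector<double> data, int n_bins) {
    std::sort(data.begin(), data.end());
    std::vector<double> cuts;
    for (int b = 1; b < n_bins; ++b) {           // interior percentiles only
        double p = 100.0 * b / n_bins;           // e.g. 25, 50, 75 for 4 bins
        double pos = (data.size() - 1) * p / 100.0;
        size_t lo = static_cast<size_t>(std::floor(pos));
        size_t hi = (lo + 1 < data.size()) ? lo + 1 : lo;
        double frac = pos - static_cast<double>(lo);
        double value = data[lo] + (data[hi] - data[lo]) * frac;
        if (cuts.empty() || value != cuts.back())  // drop duplicate cut points
            cuts.push_back(value);
    }
    return cuts;
}
```

With `{1, 2, 3, 4}` and two bins this yields the single median cut `2.5`, matching the interpolation scheme the removed `percentile` helper implements.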

View File

@@ -1,31 +0,0 @@
#ifndef BINDISC_H
#define BINDISC_H
#include "typesFImdlp.h"
#include <string>
namespace mdlp {
enum class strategy_t {
UNIFORM,
QUANTILE
};
class BinDisc {
public:
BinDisc(int n_bins = 3, strategy_t strategy = strategy_t::UNIFORM);
~BinDisc();
void fit(samples_t&);
inline cutPoints_t getCutPoints() const { return cutPoints; };
labels_t& transform(const samples_t&);
static inline std::string version() { return "1.0.0"; };
private:
void fit_uniform(samples_t&);
void fit_quantile(samples_t&);
void normalizeCutPoints();
int n_bins;
strategy_t strategy;
labels_t discretizedData = labels_t();
cutPoints_t cutPoints;
};
}
#endif

222
CHANGELOG.md Normal file


@@ -0,0 +1,222 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.1.1] - 2025-07-17
### Internal Changes
- Updated Libtorch to version 2.7.1
- Updated ArffFiles library to version 1.2.1
- Enhanced CMake configuration for better compatibility
## [2.1.0] - 2025-06-28
### Added
- Conan dependency manager support
- Technical analysis report
### Changed
- Updated README.md
- Refactored library version and installation system
- Updated config variable names
### Fixed
- Removed unneeded semicolon
## [2.0.1] - 2024-07-22
### Added
- CMake install target and make install command
- Flag to control sample building in Makefile
### Changed
- Library name changed to `fimdlp`
- Updated version numbers across test files
### Fixed
- Version number consistency in tests
## [2.0.0] - 2024-07-04
### Added
- Makefile with build & test actions for easier development
- PyTorch (libtorch) integration for tensor operations
### Changed
- Major refactoring of build system
- Updated build workflows and CI configuration
### Fixed
- BinDisc quantile calculation errors (#9)
- Error in percentile method calculation
- Integer type issues in calculations
- Multiple GitHub Actions configuration fixes
## [1.2.1] - 2024-06-08
### Added
- PyTorch tensor methods for discretization
- Improved library build system
### Changed
- Refactored sample build process
### Fixed
- Library creation and linking issues
- Multiple GitHub Actions workflow fixes
## [1.2.0] - 2024-06-05
### Added
- **Discretizer** - Abstract base class for all discretization algorithms (#8)
- **BinDisc** - K-bins discretization with quantile and uniform strategies (#7)
- Transform method to discretize values using existing cut points
- Support for multiple datasets in sample program
- Docker development container configuration
### Changed
- Refactored system types throughout the library
- Improved sample program with better dataset handling
- Enhanced build system with debug options
### Fixed
- Transform method initialization issues
- ARFF file attribute name extraction
- Sample program library binary separation
## [1.1.3] - 2024-06-05
### Added
- `max_cutpoints` hyperparameter for controlling algorithm complexity
- `max_depth` and `min_length` as configurable hyperparameters
- Enhanced sample program with hyperparameter support
- Additional datasets for testing
### Changed
- Improved constructor design and parameter handling
- Enhanced test coverage and reporting
- Refactored build system configuration
### Fixed
- Depth initialization in fit method
- Code quality improvements and smell fixes
- Exception handling in value cut point calculations
## [1.1.2] - 2023-04-01
### Added
- Comprehensive test suite with GitHub Actions CI
- SonarCloud integration for code quality analysis
- Enhanced build system with automated testing
### Changed
- Improved GitHub Actions workflow configuration
- Updated project structure for better maintainability
### Fixed
- Build system configuration issues
- Test execution and coverage reporting
## [1.1.1] - 2023-02-22
### Added
- Limits header for proper compilation
- Enhanced build system support
### Changed
- Updated version numbering system
- Improved SonarCloud configuration
### Fixed
- ValueCutPoint exception handling (removed unnecessary exception)
- Build system compatibility issues
- GitHub Actions token configuration
## [1.1.0] - 2023-02-21
### Added
- Classic algorithm implementation for performance comparison
- Enhanced ValueCutPoint logic with same_values detection
- Glass dataset support in sample program
- Debug configuration for development
### Changed
- Refactored ValueCutPoint algorithm for better accuracy
- Improved candidate selection logic
- Enhanced sample program with multiple datasets
### Fixed
- Sign error in valueCutPoint calculation
- Final cut value computation
- Duplicate dataset handling in sample
## [1.0.0.0] - 2022-12-21
### Added
- Initial release of MDLP (Minimum Description Length Principle) discretization library
- Core CPPFImdlp algorithm implementation based on Fayyad & Irani's paper
- Entropy and information gain calculation methods
- Sample program demonstrating library usage
- CMake build system
- Basic test suite
- ARFF file format support for datasets
### Features
- Recursive discretization using entropy-based criteria
- Stable sorting with tie-breaking for identical values
- Configurable algorithm parameters
- Cross-platform C++ implementation
---
## Release Notes
### Version 2.x
- **Breaking Changes**: Library renamed to `fimdlp`
- **Major Enhancement**: PyTorch integration for improved performance
- **New Features**: Comprehensive discretization framework with multiple algorithms
### Version 1.x
- **Core Algorithm**: MDLP discretization implementation
- **Extensibility**: Hyperparameter support and algorithm variants
- **Quality**: Comprehensive testing and CI/CD pipeline
### Version 1.0.x
- **Foundation**: Initial stable implementation
- **Algorithm**: Core MDLP discretization functionality

77
CLAUDE.md Normal file

@@ -0,0 +1,77 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a C++ implementation of the MDLP (Minimum Description Length Principle) discretization algorithm based on Fayyad & Irani's paper. The library provides discretization methods for continuous-valued attributes in classification learning.
## Build System
The project uses CMake with a Makefile wrapper for common tasks:
### Common Commands
- `make build` - Build release version with sample program
- `make test` - Run full test suite with coverage report
- `make install` - Install the library
### Build Configurations
- **Release**: Built in `build_release/` directory
- **Debug**: Built in `build_debug/` directory (for testing)
### Dependencies
- PyTorch (libtorch) - Required dependency
- GoogleTest - Fetched automatically for testing
- Coverage tools: lcov, genhtml
## Code Architecture
### Core Components
1. **Discretizer** (`src/Discretizer.h/cpp`) - Abstract base class for all discretizers
2. **CPPFImdlp** (`src/CPPFImdlp.h/cpp`) - Main MDLP algorithm implementation
3. **BinDisc** (`src/BinDisc.h/cpp`) - K-bins discretization (quantile/uniform strategies)
4. **Metrics** (`src/Metrics.h/cpp`) - Entropy and information gain calculations
### Key Data Types
- `samples_t` - Input data samples
- `labels_t` - Classification labels
- `indices_t` - Index arrays for sorting/processing
- `precision_t` - Floating-point precision type
### Algorithm Flow
1. Data is sorted using labels as tie-breakers for identical values
2. MDLP recursively finds optimal cut points using entropy-based criteria
3. Cut points are validated to ensure meaningful splits
4. Transform method maps continuous values to discrete bins
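Step 4 of the flow above can be sketched in a few lines: each value maps to the index of the first cut point greater than it. This standalone version mirrors the `std::upper_bound` approach the transform method uses, but is simplified and independent of the library's types.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Map each continuous value to a bin index: the position of the first
// cut point strictly greater than the value.
std::vector<int> discretize(const std::vector<double>& X,
                            const std::vector<double>& cuts) {
    std::vector<int> result;
    result.reserve(X.size());
    for (double v : X) {
        auto it = std::upper_bound(cuts.begin(), cuts.end(), v);
        result.push_back(static_cast<int>(it - cuts.begin()));
    }
    return result;
}
```

Because the library appends `numeric_limits<precision_t>::max()` as the final cut point, every input value falls into some bin.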
## Testing
Tests are built with GoogleTest and include:
- `Metrics_unittest` - Entropy/information gain tests
- `FImdlp_unittest` - Core MDLP algorithm tests
- `BinDisc_unittest` - K-bins discretization tests
- `Discretizer_unittest` - Base class functionality tests
### Running Tests
```bash
make test # Runs all tests and generates coverage report
cd build_debug/tests && ctest # Run tests directly
```
Coverage reports are generated at `build_debug/tests/coverage/index.html`.
## Sample Usage
The sample program demonstrates basic usage:
```bash
build_release/sample/sample -f iris -m 2
```
## Development Notes
- The library uses PyTorch tensors for efficient numerical operations
- Code follows C++17 standards
- Coverage is maintained at 100%
- The implementation handles edge cases like duplicate values and small intervals
- Conan package manager support is available via `conanfile.py`

View File

@@ -1,13 +1,81 @@
cmake_minimum_required(VERSION 3.20)
project(mdlp)
if (POLICY CMP0135)
cmake_policy(SET CMP0135 NEW)
endif ()
project(fimdlp
LANGUAGES CXX
DESCRIPTION "Discretization algorithm based on the paper by Fayyad & Irani Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning."
HOMEPAGE_URL "https://github.com/rmontanana/mdlp"
VERSION 2.1.1
)
set(CMAKE_CXX_STANDARD 17)
cmake_policy(SET CMP0135 NEW)
set(CMAKE_CXX_STANDARD 11)
# Find dependencies
find_package(Torch CONFIG REQUIRED)
add_library(mdlp CPPFImdlp.cpp Metrics.cpp)
# Options
# -------
option(ENABLE_TESTING OFF)
option(COVERAGE OFF)
add_subdirectory(config)
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -fno-elide-constructors")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3")
if (NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -fno-default-inline")
endif()
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
message(STATUS "Debug mode")
else()
message(STATUS "Release mode")
endif()
if (ENABLE_TESTING)
message(STATUS "Testing is enabled")
enable_testing()
set(CODE_COVERAGE ON)
set(GCC_COVERAGE_LINK_FLAGS "${GCC_COVERAGE_LINK_FLAGS} -lgcov --coverage")
add_subdirectory(tests)
else()
message(STATUS "Testing is disabled")
endif()
message(STATUS "Building sample")
add_subdirectory(sample)
add_subdirectory(tests)
include_directories(
${fimdlp_SOURCE_DIR}/src
${CMAKE_BINARY_DIR}/configured_files/include
)
add_library(fimdlp src/CPPFImdlp.cpp src/Metrics.cpp src/BinDisc.cpp src/Discretizer.cpp)
target_link_libraries(fimdlp PRIVATE torch::torch)
# Installation
# ------------
include(CMakePackageConfigHelpers)
write_basic_package_version_file(
"${CMAKE_CURRENT_BINARY_DIR}/fimdlpConfigVersion.cmake"
VERSION ${PROJECT_VERSION}
COMPATIBILITY AnyNewerVersion
)
install(TARGETS fimdlp
EXPORT fimdlpTargets
ARCHIVE DESTINATION lib
LIBRARY DESTINATION lib)
install(DIRECTORY src/ DESTINATION include/fimdlp FILES_MATCHING PATTERN "*.h")
install(FILES ${CMAKE_BINARY_DIR}/configured_files/include/config.h DESTINATION include/fimdlp)
install(EXPORT fimdlpTargets
FILE fimdlpTargets.cmake
NAMESPACE fimdlp::
DESTINATION lib/cmake/fimdlp)
configure_file(fimdlpConfig.cmake.in "${CMAKE_CURRENT_BINARY_DIR}/fimdlpConfig.cmake" @ONLY)
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/fimdlpConfig.cmake"
"${CMAKE_CURRENT_BINARY_DIR}/fimdlpConfigVersion.cmake"
DESTINATION lib/cmake/fimdlp)

155
CONAN_README.md Normal file

@@ -0,0 +1,155 @@
# Conan Package for fimdlp
This directory contains the Conan package configuration for the fimdlp library.
## Dependencies
The package manages the following dependencies:
### Build Requirements
- **libtorch/2.4.1** - PyTorch C++ library for tensor operations
### Test Requirements (when testing enabled)
- **catch2/3.8.1** - Modern C++ testing framework
- **arff-files** - ARFF file format support (included locally in tests/lib/Files/)
## Building with Conan
### 1. Install Dependencies and Build
```bash
# Install dependencies
conan install . --output-folder=build --build=missing
# Build the project
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build .
```
### 2. Using the Build Script
```bash
# Build release version
./scripts/build_conan.sh
# Build with tests
./scripts/build_conan.sh --test
```
## Creating a Package
### 1. Create Package Locally
```bash
conan create . --profile:build=default --profile:host=default
```
### 2. Create Package with Options
```bash
# Create with testing enabled
conan create . -o enable_testing=True --profile:build=default --profile:host=default
# Create shared library version
conan create . -o shared=True --profile:build=default --profile:host=default
```
### 3. Using the Package Creation Script
```bash
./scripts/create_package.sh
```
## Uploading to Cimmeria
### 1. Configure Remote
```bash
# Add Cimmeria remote
conan remote add cimmeria https://conan.rmontanana.es/artifactory/api/conan/Cimmeria
# Login to Cimmeria
conan remote login cimmeria <username>
```
### 2. Upload Package
```bash
# Upload the package
conan upload fimdlp/2.1.0 --remote=cimmeria
# Or use the script (will configure remote instructions if not set up)
./scripts/create_package.sh
```
## Using the Package
### In conanfile.txt
```ini
[requires]
fimdlp/2.1.0
[generators]
CMakeDeps
CMakeToolchain
```
### In conanfile.py
```python
def requirements(self):
self.requires("fimdlp/2.1.0")
```
### In CMakeLists.txt
```cmake
find_package(fimdlp REQUIRED)
target_link_libraries(your_target fimdlp::fimdlp)
```
## Package Options
| Option | Values | Default | Description |
|--------|--------|---------|-------------|
| shared | True/False | False | Build shared library |
| fPIC | True/False | True | Position independent code |
| enable_testing | True/False | False | Enable test suite |
| enable_sample | True/False | False | Build sample program |
## Example Usage
```cpp
#include <fimdlp/CPPFImdlp.h>
#include <fimdlp/Metrics.h>
int main() {
// Create MDLP discretizer
CPPFImdlp discretizer;
// Calculate entropy
Metrics metrics;
std::vector<int> labels = {0, 1, 0, 1, 1};
double entropy = metrics.entropy(labels);
return 0;
}
```
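For a rough idea of what `metrics.entropy` computes, here is a standalone sketch of Shannon entropy over a label vector. This is an illustrative simplification, not the library's API: the real `Metrics` class is constructed over labels and index ranges of the sorted data.

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <vector>

// Shannon entropy of a label vector: H = -sum(p_i * log2(p_i))
// over the relative frequency p_i of each distinct label.
double shannon_entropy(const std::vector<int>& labels) {
    std::map<int, int> counts;
    for (int l : labels) ++counts[l];
    double h = 0.0;
    const double n = static_cast<double>(labels.size());
    for (const auto& [label, count] : counts) {
        double p = static_cast<double>(count) / n;
        h -= p * std::log2(p);
    }
    return h;
}
```

A perfectly balanced binary label vector gives 1 bit of entropy; a single-class vector gives 0, which is the quantity MDLP compares before and after each candidate split.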
## Testing
The package includes comprehensive tests that can be enabled with:
```bash
conan create . -o enable_testing=True
```
## Requirements
- C++17 compatible compiler
- CMake 3.20 or later
- Conan 2.0 or later

View File

@@ -1,42 +0,0 @@
#ifndef CPPFIMDLP_H
#define CPPFIMDLP_H
#include "typesFImdlp.h"
#include <limits>
#include <utility>
#include <string>
#include "Metrics.h"
namespace mdlp {
class CPPFImdlp {
public:
CPPFImdlp();
CPPFImdlp(size_t, int, float);
~CPPFImdlp();
void fit(samples_t&, labels_t&);
inline cutPoints_t getCutPoints() const { return cutPoints; };
labels_t& transform(const samples_t&);
inline int get_depth() const { return depth; };
static inline std::string version() { return "1.1.3"; };
protected:
size_t min_length = 3;
int depth = 0;
int max_depth = numeric_limits<int>::max();
float proposed_cuts = 0;
indices_t indices = indices_t();
samples_t X = samples_t();
labels_t y = labels_t();
Metrics metrics = Metrics(y, indices);
cutPoints_t cutPoints;
size_t num_cut_points = numeric_limits<size_t>::max();
labels_t discretizedData = labels_t();
static indices_t sortIndices(samples_t&, labels_t&);
void computeCutPoints(size_t, size_t, int);
void resizeCutPoints();
bool mdlp(size_t, size_t, size_t);
size_t getCandidate(size_t, size_t);
size_t compute_max_num_cut_points() const;
pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t);
};
}
#endif

85
Makefile Normal file

@@ -0,0 +1,85 @@
SHELL := /bin/bash
.DEFAULT_GOAL := help
.PHONY: debug release install test conan-create viewcoverage help
lcov := lcov
f_debug = build_debug
f_release = build_release
genhtml = genhtml
docscdir = docs
define build_target
@echo ">>> Building the project for $(1)..."
@if [ -d $(2) ]; then rm -fr $(2); fi
@conan install . --build=missing -of $(2) -s build_type=$(1) $(4)
@cmake -S . -B $(2) -DCMAKE_TOOLCHAIN_FILE=$(2)/build/$(1)/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=$(1) -D$(3)
@cmake --build $(2) --config $(1) -j 8
endef
debug: ## Build Debug version of the library
@$(call build_target,"Debug","$(f_debug)", "ENABLE_TESTING=ON", "-o enable_testing=True")
release: ## Build Release version of the library
@$(call build_target,"Release","$(f_release)", "ENABLE_TESTING=OFF", "-o enable_testing=False")
install: ## Install the library
@echo ">>> Installing the project..."
@cmake --build $(f_release) --target install -j 8
test: ## Build Debug version and run tests
@echo ">>> Building Debug version and running tests..."
@$(MAKE) debug;
@cp -r tests/datasets $(f_debug)/tests/datasets
@cd $(f_debug)/tests && ctest --output-on-failure -j 8
@echo ">>> Generating coverage report..."
@cd $(f_debug)/tests && $(lcov) --capture --directory ../ --demangle-cpp --ignore-errors source,source --ignore-errors mismatch --ignore-errors inconsistent --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info '/usr/*' --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info 'lib/*' --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info 'libtorch/*' --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info 'tests/*' --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info 'gtest/*' --output-file coverage.info >/dev/null 2>&1; \
$(lcov) --remove coverage.info '*/.conan2/*' --ignore-errors unused --output-file coverage.info >/dev/null 2>&1;
@$(genhtml) $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(f_debug)/tests/coverage --title "Discretizer mdlp Coverage Report" -s -k -f --legend
@echo "* Coverage report is generated at $(f_debug)/tests/coverage/index.html"
@which python || (echo ">>> Please install python"; exit 1)
@if [ ! -f $(f_debug)/tests/coverage.info ]; then \
echo ">>> No coverage.info file found!"; \
exit 1; \
fi
@echo ">>> Updating coverage badge..."
@env python update_coverage.py $(f_debug)/tests
@echo ">>> Done"
viewcoverage: ## View the html coverage report
	@which $(genhtml) >/dev/null || (echo ">>> Please install lcov (genhtml not found)"; exit 1)
	@if [ ! -d $(docscdir)/coverage ]; then mkdir -p $(docscdir)/coverage; fi
	@if [ ! -f $(f_debug)/tests/coverage.info ]; then \
		echo ">>> No coverage.info file found. Run make coverage first!"; \
		exit 1; \
	fi
	@$(genhtml) $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(docscdir)/coverage --title "FImdlp Coverage Report" -s -k -f --legend >/dev/null 2>&1;
	@xdg-open $(docscdir)/coverage/index.html || open $(docscdir)/coverage/index.html 2>/dev/null
	@echo ">>> Done";

conan-create: ## Create the conan package
	@echo ">>> Creating the conan package..."
	conan create . --build=missing -tf "" -s:a build_type=Release
	conan create . --build=missing -tf "" -s:a build_type=Debug -o "&:enable_testing=False"
	@echo ">>> Done"
help: ## Show help message
	@IFS=$$'\n' ; \
	help_lines=(`fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##/:/'`); \
	printf "%s\n\n" "Usage: make [task]"; \
	printf "%-20s %s\n" "task" "help" ; \
	printf "%-20s %s\n" "------" "----" ; \
	for help_line in $${help_lines[@]}; do \
		IFS=$$':' ; \
		help_split=($$help_line) ; \
		help_command=`echo $${help_split[0]} | sed -e 's/^ *//' -e 's/ *$$//'` ; \
		help_info=`echo $${help_split[2]} | sed -e 's/^ *//' -e 's/ *$$//'` ; \
		printf '\033[36m'; \
		printf "%-20s %s" $$help_command ; \
		printf '\033[0m'; \
		printf "%s\n" $$help_info; \
	done


@@ -1,6 +1,9 @@
[![Build](https://github.com/rmontanana/mdlp/actions/workflows/build.yml/badge.svg)](https://github.com/rmontanana/mdlp/actions/workflows/build.yml)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=rmontanana_mdlp&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=rmontanana_mdlp)
[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=rmontanana_mdlp&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=rmontanana_mdlp)
[![Coverage Badge](https://img.shields.io/badge/Coverage-100,0%25-green)](html/index.html)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/rmontanana/mdlp)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.16025501.svg)](https://doi.org/10.5281/zenodo.16025501)
# <img src="logo.png" alt="logo" width="50"/> mdlp
@@ -14,28 +17,30 @@ The implementation tries to mitigate the problem of different label values with
Other features:
- Intervals with the same value of the variable are not taken into account for cutpoints.
- Intervals have to have more than two examples to be evaluated (mdlp).
- The algorithm returns the cut points for the variable.
- The transform method maps each value to the index of its interval, i.e. the index i such that cut[i - 1] <= x < cut[i], found with the [std::upper_bound](https://en.cppreference.com/w/cpp/algorithm/upper_bound) method.
- K-Bins discretization is also implemented, and "quantile" and "uniform" strategies are available.
## Sample
To run the sample, just execute the following commands:
```bash
cd sample
cmake -B build
cd build
make
./sample -f iris -m 2
./sample -h
make build
build_release/sample/sample -f iris -m 2
build_release/sample/sample -h
```
## Test
To run the tests and see coverage (llvm with lcov and genhtml have to be installed), execute the following commands:
```bash
cd tests
./test
make test
```


@@ -0,0 +1,525 @@
# Technical Analysis Report: MDLP Discretization Library
## Executive Summary
This document presents a comprehensive technical analysis of the MDLP (Minimum Description Length Principle) discretization library. The analysis covers project structure, code quality, architecture, testing methodology, documentation, and security assessment.
**Overall Rating: B+ (Good with Notable Issues)**
The library demonstrates solid software engineering practices with excellent test coverage and clean architectural design, but contains several security vulnerabilities and code quality issues that require attention before production deployment.
---
## Table of Contents
1. [Project Overview](#project-overview)
2. [Architecture & Design Analysis](#architecture--design-analysis)
3. [Code Quality Assessment](#code-quality-assessment)
4. [Testing Framework Analysis](#testing-framework-analysis)
5. [Security Analysis](#security-analysis)
6. [Documentation & Maintainability](#documentation--maintainability)
7. [Build System Evaluation](#build-system-evaluation)
8. [Strengths & Weaknesses Summary](#strengths--weaknesses-summary)
9. [Recommendations](#recommendations)
10. [Risk Assessment](#risk-assessment)
---
## Project Overview
### Description
The MDLP discretization library is a C++ implementation of Fayyad & Irani's Multi-Interval Discretization algorithm for continuous-valued attributes in classification learning. The library provides both traditional binning strategies and advanced MDLP-based discretization.
### Key Features
- **MDLP Algorithm**: Implementation of information-theoretic discretization
- **Multiple Strategies**: Uniform and quantile-based binning options
- **PyTorch Integration**: Native support for PyTorch tensors
- **High Performance**: Optimized algorithms with caching mechanisms
- **Complete Testing**: 100% code coverage with comprehensive test suite
### Technology Stack
- **Language**: C++17
- **Build System**: CMake 3.20+
- **Dependencies**: PyTorch (libtorch 2.7.0)
- **Testing**: Google Test (GTest)
- **Coverage**: lcov/genhtml
- **Package Manager**: Conan
---
## Architecture & Design Analysis
### Class Hierarchy
```
Discretizer (Abstract Base Class)
├── CPPFImdlp (MDLP Implementation)
└── BinDisc (Simple Binning)
Metrics (Standalone Utility Class)
```
### Design Patterns Identified
#### ✅ **Well-Implemented Patterns**
- **Template Method Pattern**: Base class provides `fit_transform()` while derived classes implement `fit()`
- **Facade Pattern**: Unified interface for both C++ vectors and PyTorch tensors
- **Composition**: `CPPFImdlp` composes `Metrics` for statistical calculations
#### ⚠️ **Pattern Issues**
- **Strategy Pattern**: `BinDisc` uses enum-based strategy instead of proper object-oriented strategy pattern
- **Interface Segregation**: `BinDisc.fit()` ignores `y` parameter, violating interface contract
### SOLID Principles Adherence
| Principle | Rating | Notes |
|-----------|--------|-------|
| **Single Responsibility** | ✅ Good | Each class has clear, focused responsibility |
| **Open/Closed** | ✅ Good | Easy to extend with new discretization algorithms |
| **Liskov Substitution** | ⚠️ Issues | `BinDisc` doesn't properly handle supervised interface |
| **Interface Segregation** | ✅ Good | Focused interfaces, not overly broad |
| **Dependency Inversion** | ✅ Good | Depends on abstractions, not implementations |
### Architectural Strengths
- **Clean Separation**: Algorithm logic, metrics, and data handling well-separated
- **Extensible Design**: Easy to add new discretization methods
- **Multi-Interface Support**: Both C++ native and PyTorch integration
- **Performance Optimized**: Caching and efficient data structures
### Architectural Weaknesses
- **Interface Inconsistency**: Mixed supervised/unsupervised interface handling
- **Complex Single Methods**: `computeCutPoints()` handles too many responsibilities
- **Tight Coupling**: Direct access to internal data structures
- **Limited Configuration**: Algorithm parameters scattered across classes
---
## Code Quality Assessment
### Code Style & Standards
- **Consistent Naming**: Good use of camelCase and snake_case conventions
- **Header Organization**: Proper SPDX licensing and copyright headers
- **Type Safety**: Centralized type definitions in `typesFImdlp.h`
- **Modern C++**: Good use of C++17 features
### Critical Code Issues
#### 🔴 **High Priority Issues**
**Memory Safety - Unsafe Pointer Operations**
```cpp
// Location: Discretizer.cpp:35-36
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements);
```
- **Issue**: Direct pointer arithmetic without bounds checking
- **Risk**: Buffer overflow if tensor data is malformed
- **Fix**: Add tensor validation before pointer operations
#### 🟡 **Medium Priority Issues**
**Integer Underflow Risk**
```cpp
// Location: CPPFImdlp.cpp:98-100
n = cut - 1 - idxPrev; // Could underflow if cut <= idxPrev
m = idxNext - cut - 1; // Could underflow if idxNext <= cut
```
- **Issue**: Size arithmetic without underflow protection
- **Risk**: Extremely large values from underflow
- **Fix**: Add underflow validation
**Vector Access Without Bounds Checking**
```cpp
// Location: Multiple locations
X[indices[idx]] // No bounds validation
```
- **Issue**: Direct vector access using potentially invalid indices
- **Risk**: Out-of-bounds memory access
- **Fix**: Use `at()` method or add explicit bounds checking
### Performance Considerations
- **Caching Strategy**: Good use of entropy and information gain caching
- **Memory Efficiency**: Smart use of indices to avoid data copying
- **Algorithmic Complexity**: Efficient O(n log n) sorting with optimized cutpoint selection
---
## Testing Framework Analysis
### Test Organization
| Test File | Focus Area | Key Features |
|-----------|------------|-------------|
| `BinDisc_unittest.cpp` | Binning strategies | Parametric testing, multiple bin counts |
| `Discretizer_unittest.cpp` | Base interface | PyTorch integration, transform methods |
| `FImdlp_unittest.cpp` | MDLP algorithm | Real datasets, comprehensive scenarios |
| `Metrics_unittest.cpp` | Statistical calculations | Entropy, information gain validation |
### Testing Strengths
- **100% Code Coverage**: Complete line and branch coverage
- **Real Dataset Testing**: Uses Iris, Diabetes, Glass datasets from ARFF files
- **Edge Case Coverage**: Empty datasets, constant values, single elements
- **Parametric Testing**: Multiple configurations and strategies
- **Data-Driven Approach**: Systematic test generation with `tests.txt`
- **Multiple APIs**: Tests both C++ vectors and PyTorch tensors
### Testing Methodology
- **Framework**: Google Test with proper fixture usage
- **Precision Testing**: Consistent floating-point comparison margins
- **Exception Testing**: Proper error condition validation
- **Integration Testing**: End-to-end algorithm validation
### Testing Gaps
- **Performance Testing**: No benchmarks or performance regression tests
- **Memory Testing**: Limited memory pressure or leak testing
- **Thread Safety**: No concurrent access testing
- **Fuzzing**: No randomized input testing
---
## Security Analysis
### Overall Security Risk: **MEDIUM**
### Critical Security Vulnerabilities
#### 🔴 **HIGH RISK - Memory Safety**
**Unsafe PyTorch Tensor Operations**
- **Location**: `Discretizer.cpp:35-36, 42, 49-50`
- **Vulnerability**: Direct pointer arithmetic without validation
- **Impact**: Buffer overflow, memory corruption
- **Exploit Scenario**: Malformed tensor data causing out-of-bounds access
- **Mitigation**:
```cpp
if (!X_.is_contiguous() || !y_.is_contiguous()) {
    throw std::invalid_argument("Tensors must be contiguous");
}
if (X_.dtype() != torch::kFloat32 || y_.dtype() != torch::kInt32) {
    throw std::invalid_argument("Invalid tensor types");
}
```
#### 🟡 **MEDIUM RISK - Input Validation**
**Insufficient Parameter Validation**
- **Location**: Multiple entry points
- **Vulnerability**: Missing bounds checking on user inputs
- **Impact**: Integer overflow, out-of-bounds access
- **Examples**:
- `proposed_cuts` parameter without overflow protection
- Tensor dimensions not validated
- Array indices not bounds-checked
**Thread Safety Issues**
- **Location**: `Metrics` class cache containers
- **Vulnerability**: Shared state without synchronization
- **Impact**: Race conditions, data corruption
- **Mitigation**: Add mutex protection or document thread requirements
#### 🟢 **LOW RISK - Information Disclosure**
**Debug Information Leakage**
- **Location**: Sample code and test files
- **Vulnerability**: Detailed internal data exposure
- **Impact**: Minor information disclosure
- **Mitigation**: Remove or conditionalize debug output
### Security Recommendations
#### Immediate Actions
1. **Add Tensor Validation**: Comprehensive validation before pointer operations
2. **Implement Bounds Checking**: Explicit validation for all array access
3. **Add Overflow Protection**: Safe arithmetic operations
#### Short-term Actions
1. **Enhance Input Validation**: Parameter validation at all public interfaces
2. **Add Thread Safety**: Documentation or synchronization mechanisms
3. **Update Dependencies**: Ensure PyTorch is current and secure
---
## Documentation & Maintainability
### Current Documentation Status
#### ✅ **Available Documentation**
- **README.md**: Basic usage instructions and build commands
- **Code Comments**: SPDX headers and licensing information
- **Build Instructions**: CMake configuration and make targets
#### ❌ **Missing Documentation**
- **API Documentation**: No comprehensive API reference
- **Algorithm Documentation**: Limited explanation of MDLP implementation
- **Usage Examples**: Minimal code examples beyond basic sample
- **Configuration Guide**: No detailed parameter explanation
- **Architecture Documentation**: No design document or UML diagrams
### Maintainability Assessment
#### Strengths
- **Clear Code Structure**: Well-organized class hierarchy
- **Consistent Style**: Uniform naming and formatting conventions
- **Separation of Concerns**: Clear module boundaries
- **Version Control**: Proper git repository with meaningful commits
#### Weaknesses
- **Complex Methods**: Some functions handle multiple responsibilities
- **Magic Numbers**: Hardcoded values without explanation
- **Limited Comments**: Algorithm logic lacks explanatory comments
- **Configuration Scattered**: Parameters spread across multiple classes
### Documentation Recommendations
1. **Generate API Documentation**: Use Doxygen for comprehensive API docs
2. **Add Algorithm Explanation**: Document MDLP implementation details
3. **Create Usage Guide**: Comprehensive examples and tutorials
4. **Architecture Document**: High-level design documentation
5. **Configuration Reference**: Centralized parameter documentation
---
## Build System Evaluation
### CMake Configuration Analysis
#### Strengths
- **Modern CMake**: Uses version 3.20+ with current best practices
- **Multi-Configuration**: Separate debug/release builds
- **Dependency Management**: Proper PyTorch integration
- **Installation Support**: Complete install targets and package config
- **Testing Integration**: CTest integration with coverage
#### Build Features
```cmake
# Key configurations
set(CMAKE_CXX_STANDARD 17)
find_package(Torch CONFIG REQUIRED)
option(ENABLE_TESTING OFF)
option(ENABLE_SAMPLE OFF)
option(COVERAGE OFF)
```
### Build System Issues
#### Security Concerns
- **Debug Flags**: May affect release builds
- **Dependency Versions**: Fixed PyTorch version without security updates
#### Usability Issues
- **Complex Makefile**: Manual build directory management
- **Coverage Complexity**: Complex lcov command chain
### Build Recommendations
1. **Simplify Build Process**: Use CMake presets for common configurations
2. **Improve Dependency Management**: Flexible version constraints
3. **Add Build Validation**: Compiler and platform checks
4. **Enhance Documentation**: Detailed build instructions
---
## Strengths & Weaknesses Summary
### 🏆 **Key Strengths**
#### Technical Excellence
- **Algorithmic Correctness**: Faithful implementation of Fayyad & Irani algorithm
- **Performance Optimization**: Efficient caching and data structures
- **Code Coverage**: 100% test coverage with comprehensive edge cases
- **Modern C++**: Good use of C++17 features and best practices
#### Software Engineering
- **Clean Architecture**: Well-structured OOP design with clear separation
- **SOLID Principles**: Generally good adherence to design principles
- **Multi-Platform**: CMake-based build system for cross-platform support
- **Professional Quality**: Proper licensing, version control, CI/CD integration
#### API Design
- **Multiple Interfaces**: Both C++ native and PyTorch tensor support
- **Sklearn-like API**: Familiar `fit()`/`transform()`/`fit_transform()` pattern
- **Extensible**: Easy to add new discretization algorithms
### ⚠️ **Critical Weaknesses**
#### Security Issues
- **Memory Safety**: Unsafe pointer operations in PyTorch integration
- **Input Validation**: Insufficient bounds checking and parameter validation
- **Thread Safety**: Shared state without proper synchronization
#### Code Quality
- **Interface Consistency**: LSP violation in `BinDisc` class
- **Method Complexity**: Some functions handle too many responsibilities
- **Error Handling**: Inconsistent exception handling patterns
#### Documentation
- **API Documentation**: Minimal inline documentation
- **Usage Examples**: Limited practical examples
- **Architecture Documentation**: No high-level design documentation
---
## Recommendations
### 🚨 **Immediate Actions (HIGH Priority)**
#### Security Fixes
```cpp
// 1. Add tensor validation in Discretizer::fit_t()
void Discretizer::fit_t(const torch::Tensor& X_, const torch::Tensor& y_) {
    // Validate tensor properties
    if (!X_.is_contiguous() || !y_.is_contiguous()) {
        throw std::invalid_argument("Tensors must be contiguous");
    }
    if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
        throw std::invalid_argument("Only 1D tensors supported");
    }
    if (X_.dtype() != torch::kFloat32 || y_.dtype() != torch::kInt32) {
        throw std::invalid_argument("Invalid tensor types");
    }
    // ... rest of implementation
}
```
```cpp
// 2. Add bounds checking for vector access
inline precision_t safe_vector_access(const samples_t& vec, size_t idx) {
    if (idx >= vec.size()) {
        throw std::out_of_range("Vector index out of bounds");
    }
    return vec[idx];
}
```
```cpp
// 3. Add underflow protection in arithmetic operations
size_t safe_subtract(size_t a, size_t b) {
    if (b > a) {
        throw std::underflow_error("Subtraction would cause underflow");
    }
    return a - b;
}
```
### 📋 **Short-term Actions (MEDIUM Priority)**
#### Code Quality Improvements
1. **Fix Interface Consistency**: Separate supervised/unsupervised interfaces
2. **Refactor Complex Methods**: Break down `computeCutPoints()` function
3. **Standardize Error Handling**: Consistent exception types and messages
4. **Add Input Validation**: Comprehensive parameter checking
#### Thread Safety
```cpp
// Add thread safety to Metrics class
class Metrics {
private:
    mutable std::mutex cache_mutex;
    cacheEnt_t entropyCache;
    cacheIg_t igCache;
public:
    precision_t entropy(size_t start, size_t end) const {
        std::lock_guard<std::mutex> lock(cache_mutex);
        // ... implementation
    }
};
```
### 📚 **Long-term Actions (LOW Priority)**
#### Documentation & Usability
1. **API Documentation**: Generate comprehensive Doxygen documentation
2. **Usage Examples**: Create detailed tutorial and example repository
3. **Performance Testing**: Add benchmarking and regression tests
4. **Architecture Documentation**: Create design documents and UML diagrams
#### Code Modernization
1. **Strategy Pattern**: Proper implementation for `BinDisc` strategies
2. **Configuration Management**: Centralized parameter handling
3. **Factory Pattern**: Discretizer creation factory
4. **Resource Management**: RAII patterns for memory safety
---
## Risk Assessment
### Risk Priority Matrix
| Risk Category | High | Medium | Low | Total |
|---------------|------|--------|-----|-------|
| **Security** | 1 | 7 | 2 | 10 |
| **Code Quality** | 2 | 5 | 3 | 10 |
| **Maintainability** | 0 | 3 | 4 | 7 |
| **Performance** | 0 | 1 | 2 | 3 |
| **Total** | **3** | **16** | **11** | **30** |
### Risk Impact Assessment
#### Critical Risks (Immediate Attention Required)
1. **Memory Safety Vulnerabilities**: Could lead to crashes or security exploits
2. **Interface Consistency Issues**: Violates expected behavior contracts
3. **Input Validation Gaps**: Potential for crashes with malformed input
#### Moderate Risks (Address in Next Release)
1. **Thread Safety Issues**: Problems in multi-threaded environments
2. **Complex Method Design**: Maintenance and debugging difficulties
3. **Documentation Gaps**: Reduced adoption and maintainability
#### Low Risks (Future Improvements)
1. **Performance Optimization**: Minor efficiency improvements
2. **Code Style Consistency**: Enhanced readability
3. **Build System Enhancements**: Improved developer experience
---
## Conclusion
The MDLP discretization library represents a solid implementation of an important machine learning algorithm with excellent test coverage and clean architectural design. However, it requires attention to security vulnerabilities and code quality issues before production deployment.
### Final Verdict
**Rating: B+ (Good with Notable Issues)**
- **Core Algorithm**: Excellent implementation of MDLP with proper mathematical foundations
- **Software Engineering**: Good OOP design following most best practices
- **Testing**: Exemplary test coverage and methodology
- **Security**: Notable vulnerabilities requiring immediate attention
- **Documentation**: Adequate but could be significantly improved
### Deployment Recommendation
**Not Ready for Production** without addressing HIGH priority security issues, particularly around memory safety and input validation. Once these are resolved, the library would be suitable for production use in most contexts.
### Next Steps
1. **Security Audit**: Address all HIGH and MEDIUM priority security issues
2. **Code Review**: Implement fixes for interface consistency and method complexity
3. **Documentation**: Create comprehensive API documentation and usage guides
4. **Testing**: Add performance benchmarks and stress testing
5. **Release**: Prepare version 2.1.0 with security and quality improvements
---
## Appendix
### Files Analyzed
- `src/CPPFImdlp.h` & `src/CPPFImdlp.cpp` - MDLP algorithm implementation
- `src/Discretizer.h` & `src/Discretizer.cpp` - Base class and PyTorch integration
- `src/BinDisc.h` & `src/BinDisc.cpp` - Simple binning strategies
- `src/Metrics.h` & `src/Metrics.cpp` - Statistical calculations
- `src/typesFImdlp.h` - Type definitions
- `CMakeLists.txt` - Build configuration
- `conanfile.py` - Dependency management
- `tests/*` - Comprehensive test suite
### Analysis Date
**Report Generated**: June 27, 2025
### Tools Used
- **Static Analysis**: Manual code review with security focus
- **Architecture Analysis**: SOLID principles and design pattern evaluation
- **Test Analysis**: Coverage and methodology assessment
- **Security Analysis**: Vulnerability assessment with risk prioritization
---
*This report provides a comprehensive technical analysis of the MDLP discretization library. For questions or clarifications, please refer to the project repository or contact the development team.*

conandata.yml Normal file

@@ -0,0 +1,16 @@
sources:
"2.1.0":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.1.0.tar.gz"
sha256: "placeholder_sha256_hash"
"2.0.1":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.0.1.tar.gz"
sha256: "placeholder_sha256_hash"
"2.0.0":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.0.0.tar.gz"
sha256: "placeholder_sha256_hash"
patches:
"2.1.0":
- patch_file: "patches/001-cmake-fix.patch"
patch_description: "Fix CMake configuration for Conan compatibility"
patch_type: "portability"

conanfile.py Normal file

@@ -0,0 +1,111 @@
import os
import re
from conan import ConanFile
from conan.tools.cmake import CMakeToolchain, CMake, cmake_layout, CMakeDeps
from conan.tools.files import load, copy


class FimdlpConan(ConanFile):
    name = "fimdlp"
    version = "X.X.X"
    license = "MIT"
    author = "Ricardo Montañana <rmontanana@gmail.com>"
    url = "https://github.com/rmontanana/mdlp"
    description = "Discretization algorithm based on the paper by Fayyad & Irani Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning."
    topics = ("machine-learning", "discretization", "mdlp", "classification")

    # Package configuration
    settings = "os", "compiler", "build_type", "arch"
    options = {
        "shared": [True, False],
        "fPIC": [True, False],
        "enable_testing": [True, False],
        "enable_sample": [True, False],
    }
    default_options = {
        "shared": False,
        "fPIC": True,
        "enable_testing": False,
        "enable_sample": False,
    }

    # Sources are located in the same place as this recipe, copy them to the recipe
    exports_sources = "CMakeLists.txt", "src/*", "sample/*", "tests/*", "config/*", "fimdlpConfig.cmake.in"

    def set_version(self):
        content = load(self, "CMakeLists.txt")
        version_pattern = re.compile(r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)', re.IGNORECASE | re.DOTALL)
        match = version_pattern.search(content)
        if match:
            self.version = match.group(1)
        else:
            raise Exception("Version not found in CMakeLists.txt")

    def config_options(self):
        if self.settings.os == "Windows":
            self.options.rm_safe("fPIC")

    def configure(self):
        if self.options.shared:
            self.options.rm_safe("fPIC")

    def requirements(self):
        # PyTorch dependency for tensor operations
        self.requires("libtorch/2.7.1")

    def build_requirements(self):
        self.requires("arff-files/1.2.1")  # for tests and sample
        if self.options.enable_testing:
            self.test_requires("gtest/1.16.0")

    def layout(self):
        cmake_layout(self)

    def generate(self):
        # Generate CMake configuration files
        deps = CMakeDeps(self)
        deps.generate()
        tc = CMakeToolchain(self)
        # Set CMake variables based on options
        tc.variables["ENABLE_TESTING"] = self.options.enable_testing
        tc.variables["ENABLE_SAMPLE"] = self.options.enable_sample
        tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
        tc.generate()

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()
        # Run tests if enabled
        if self.options.enable_testing:
            cmake.test()

    def package(self):
        # Install using CMake
        cmake = CMake(self)
        cmake.install()
        # Copy license file
        copy(self, "LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))

    def package_info(self):
        # Library configuration
        self.cpp_info.libs = ["fimdlp"]
        self.cpp_info.includedirs = ["include"]
        # CMake package configuration
        self.cpp_info.set_property("cmake_file_name", "fimdlp")
        self.cpp_info.set_property("cmake_target_name", "fimdlp::fimdlp")
        # Compiler features
        self.cpp_info.cppstd = "17"
        # System libraries (if needed)
        if self.settings.os in ["Linux", "FreeBSD"]:
            self.cpp_info.system_libs.append("m")  # Math library
            self.cpp_info.system_libs.append("pthread")  # Threading
        # Build information for consumers
        self.cpp_info.builddirs = ["lib/cmake/fimdlp"]

config/CMakeLists.txt Normal file

@@ -0,0 +1,4 @@
configure_file(
    "config.h.in"
    "${CMAKE_BINARY_DIR}/configured_files/include/config.h" ESCAPE_QUOTES
)

config/config.h.in Normal file

@@ -0,0 +1,13 @@
#pragma once
#include <string>
#include <string_view>
#define PROJECT_VERSION_MAJOR @PROJECT_VERSION_MAJOR@
#define PROJECT_VERSION_MINOR @PROJECT_VERSION_MINOR@
#define PROJECT_VERSION_PATCH @PROJECT_VERSION_PATCH@
static constexpr std::string_view project_mdlp_name = "@PROJECT_NAME@";
static constexpr std::string_view project_mdlp_version = "@PROJECT_VERSION@";
static constexpr std::string_view project_mdlp_description = "@PROJECT_DESCRIPTION@";
static constexpr std::string_view git_mdlp_sha = "@GIT_SHA@";

fimdlpConfig.cmake.in Normal file

@@ -0,0 +1,2 @@
@PACKAGE_INIT@
include("${CMAKE_CURRENT_LIST_DIR}/fimdlpTargets.cmake")

getversion.py Normal file

@@ -0,0 +1,47 @@
# read the version from the CMakeLists.txt file
import re
import sys
from pathlib import Path


def get_version_from_cmakelists(cmakelists_path):
    # Read the CMakeLists.txt file
    try:
        with open(cmakelists_path, 'r') as file:
            content = file.read()
    except IOError as e:
        print(f"Error reading {cmakelists_path}: {e}")
        sys.exit(1)
    # Use regex to find the version line
    # The regex pattern looks for a line that starts with 'project' and captures the version number
    # in the format VERSION x.y.z where x, y, and z are digits.
    # It allows for optional whitespace around the parentheses and the version number.
    version_pattern = re.compile(
        r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)', re.IGNORECASE | re.DOTALL
    )
    match = version_pattern.search(content)
    if match:
        return match.group(1)
    else:
        return None


def main():
    # Get the path to the CMakeLists.txt file
    cmakelists_path = Path(__file__).parent / "CMakeLists.txt"
    # Check if the file exists
    if not cmakelists_path.exists():
        print(f"Error: {cmakelists_path} does not exist.")
        sys.exit(1)
    # Get the version from the CMakeLists.txt file
    version = get_version_from_cmakelists(cmakelists_path)
    if version:
        print(f"Version: {version}")
    else:
        print("Version not found in CMakeLists.txt.")
        sys.exit(1)


if __name__ == "__main__":
    main()


@@ -1,21 +0,0 @@
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "lldb puro",
            "type": "cppdbg",
            // "targetArchitecture": "arm64",
            "request": "launch",
            "program": "${workspaceRoot}/build/sample",
            "args": [
                "-f",
                "iris"
            ],
            "stopAtEntry": false,
            "cwd": "${workspaceRoot}/build/",
            "environment": [],
            "externalConsole": false,
            "MIMode": "lldb"
        },
    ]
}


@@ -1,5 +1,12 @@
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_BUILD_TYPE Debug)
find_package(arff-files REQUIRED)
add_executable(sample sample.cpp ../tests/ArffFiles.cpp ../Metrics.cpp ../CPPFImdlp.cpp)
include_directories(
    ${fimdlp_SOURCE_DIR}/src
    ${CMAKE_BINARY_DIR}/configured_files/include
    ${arff-files_INCLUDE_DIRS}
)
add_executable(sample sample.cpp)
target_link_libraries(sample PRIVATE fimdlp torch::torch arff-files::arff-files)


@@ -1,3 +1,9 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#include <iostream>
#include <vector>
#include <iomanip>
@@ -5,13 +11,13 @@
#include <algorithm>
#include <cstring>
#include <getopt.h>
#include "../CPPFImdlp.h"
#include "../tests/ArffFiles.h"
#include <torch/torch.h>
#include <ArffFiles.hpp>
#include "Discretizer.h"
#include "CPPFImdlp.h"
#include "BinDisc.h"
using namespace std;
using namespace mdlp;
const string PATH = "../../tests/datasets/";
const string PATH = "tests/datasets/";
/* print a description of all supported options */
void usage(const char* path)
@@ -20,17 +26,17 @@ void usage(const char* path)
const char* basename = strrchr(path, '/');
basename = basename ? basename + 1 : path;
cout << "usage: " << basename << "[OPTION]" << endl;
cout << " -h, --help\t\t Print this help and exit." << endl;
cout
std::cout << "usage: " << basename << " [OPTION]" << std::endl;
std::cout << " -h, --help\t\t Print this help and exit." << std::endl;
std::cout
<< " -f, --file[=FILENAME]\t {all, diabetes, glass, iris, kdd_JapaneseVowels, letter, liver-disorders, mfeat-factors, test}."
<< endl;
cout << " -p, --path[=FILENAME]\t folder where the arff dataset is located, default " << PATH << endl;
cout << " -m, --max_depth=INT\t max_depth pased to discretizer. Default = MAX_INT" << endl;
cout
<< std::endl;
std::cout << " -p, --path[=FILENAME]\t folder where the arff dataset is located, default " << PATH << std::endl;
std::cout << " -m, --max_depth=INT\t max_depth passed to discretizer. Default = MAX_INT" << std::endl;
std::cout
<< " -c, --max_cutpoints=FLOAT\t maximum cut points, as a decimal fraction of the number of lines or as an integer count. Default = 0 -> any"
<< endl;
cout << " -n, --min_length=INT\t interval min_length pased to discretizer. Default = 3" << endl;
<< std::endl;
std::cout << " -n, --min_length=INT\t interval min_length passed to discretizer. Default = 3" << std::endl;
}
tuple<string, string, int, int, float> parse_arguments(int argc, char** argv)
@@ -96,56 +102,79 @@ void process_file(const string& path, const string& file_name, bool class_last,
file.load(path + file_name + ".arff", class_last);
const auto attributes = file.getAttributes();
const auto items = file.getSize();
cout << "Number of lines: " << items << endl;
cout << "Attributes: " << endl;
std::cout << "Number of lines: " << items << std::endl;
std::cout << "Attributes: " << std::endl;
for (auto attribute : attributes) {
cout << "Name: " << get<0>(attribute) << " Type: " << get<1>(attribute) << endl;
std::cout << "Name: " << get<0>(attribute) << " Type: " << get<1>(attribute) << std::endl;
}
cout << "Class name: " << file.getClassName() << endl;
cout << "Class type: " << file.getClassType() << endl;
cout << "Data: " << endl;
vector<samples_t>& X = file.getX();
labels_t& y = file.getY();
std::cout << "Class name: " << file.getClassName() << std::endl;
std::cout << "Class type: " << file.getClassType() << std::endl;
std::cout << "Data: " << std::endl;
std::vector<mdlp::samples_t>& X = file.getX();
mdlp::labels_t& y = file.getY();
for (int i = 0; i < 5; i++) {
for (auto feature : X) {
cout << fixed << setprecision(1) << feature[i] << " ";
std::cout << fixed << setprecision(1) << feature[i] << " ";
}
cout << y[i] << endl;
std::cout << y[i] << std::endl;
}
auto test = mdlp::CPPFImdlp(min_length, max_depth, max_cutpoints);
size_t total = 0;
for (auto i = 0; i < attributes.size(); i++) {
auto min_max = minmax_element(X[i].begin(), X[i].end());
cout << "Cut points for feature " << get<0>(attributes[i]) << ": [" << setprecision(3);
std::cout << "Cut points for feature " << get<0>(attributes[i]) << ": [" << setprecision(3);
test.fit(X[i], y);
auto cut_points = test.getCutPoints();
for (auto item : cut_points) {
cout << item;
std::cout << item;
if (item != cut_points.back())
cout << ", ";
std::cout << ", ";
}
total += test.getCutPoints().size();
cout << "]" << endl;
cout << "Min: " << *min_max.first << " Max: " << *min_max.second << endl;
cout << "--------------------------" << endl;
std::cout << "]" << std::endl;
std::cout << "Min: " << *min_max.first << " Max: " << *min_max.second << std::endl;
std::cout << "--------------------------" << std::endl;
}
std::cout << "Total cut points ...: " << total << std::endl;
std::cout << "Total feature states: " << total + attributes.size() << std::endl;
std::cout << "Version ............: " << test.version() << std::endl;
std::cout << "Transformed data (vector)..: " << std::endl;
test.fit(X[0], y);
auto data = test.transform(X[0]);
for (int i = 130; i < 135; i++) {
std::cout << std::fixed << std::setprecision(1) << X[0][i] << " " << data[i] << std::endl;
}
auto Xt = torch::tensor(X[0], torch::kFloat32);
auto yt = torch::tensor(y, torch::kInt32);
//test.fit_t(Xt, yt);
auto result = test.fit_transform_t(Xt, yt);
std::cout << "Transformed data (torch)...: " << std::endl;
for (int i = 130; i < 135; i++) {
std::cout << std::fixed << std::setprecision(1) << Xt[i].item<mdlp::precision_t>() << " " << result[i].item<int>() << std::endl;
}
auto disc = mdlp::BinDisc(3);
auto res_v = disc.fit_transform(X[0], y);
disc.fit_t(Xt, yt);
auto res_t = disc.transform_t(Xt);
std::cout << "Transformed data (BinDisc)...: " << std::endl;
for (int i = 130; i < 135; i++) {
std::cout << std::fixed << std::setprecision(1) << Xt[i].item<mdlp::precision_t>() << " " << res_v[i] << " " << res_t[i].item<int>() << std::endl;
}
cout << "Total cut points ...: " << total << endl;
cout << "Total feature states: " << total + attributes.size() << endl;
}
void process_all_files(const map<string, bool>& datasets, const string& path, int max_depth, int min_length,
float max_cutpoints)
{
cout << "Results: " << "Max_depth: " << max_depth << " Min_length: " << min_length << " Max_cutpoints: "
<< max_cutpoints << endl << endl;
std::cout << "Results: " << "Max_depth: " << max_depth << " Min_length: " << min_length << " Max_cutpoints: "
<< max_cutpoints << std::endl << std::endl;
printf("%-20s %4s %4s %8s\n", "Dataset", "Feat", "Cuts", "Time(ms)");
printf("==================== ==== ==== ========\n");
for (const auto& dataset : datasets) {
ArffFiles file;
file.load(path + dataset.first + ".arff", dataset.second);
auto attributes = file.getAttributes();
vector<samples_t>& X = file.getX();
labels_t& y = file.getY();
std::vector<mdlp::samples_t>& X = file.getX();
mdlp::labels_t& y = file.getY();
size_t timing = 0;
size_t cut_points = 0;
for (auto i = 0; i < attributes.size(); i++) {
@@ -163,7 +192,7 @@ void process_all_files(const map<string, bool>& datasets, const string& path, in
int main(int argc, char** argv)
{
map<string, bool> datasets = {
std::map<std::string, bool> datasets = {
{"diabetes", true},
{"glass", true},
{"iris", true},
@@ -173,14 +202,14 @@ int main(int argc, char** argv)
{"mfeat-factors", true},
{"test", true}
};
string file_name;
string path;
std::string file_name;
std::string path;
int max_depth;
int min_length;
float max_cutpoints;
tie(file_name, path, max_depth, min_length, max_cutpoints) = parse_arguments(argc, argv);
if (datasets.find(file_name) == datasets.end() && file_name != "all") {
cout << "Invalid file name: " << file_name << endl;
std::cout << "Invalid file name: " << file_name << std::endl;
usage(argv[0]);
exit(1);
}
@@ -188,10 +217,10 @@ int main(int argc, char** argv)
process_all_files(datasets, path, max_depth, min_length, max_cutpoints);
else {
process_file(path, file_name, datasets[file_name], max_depth, min_length, max_cutpoints);
cout << "File name ....: " << file_name << endl;
cout << "Max depth ....: " << max_depth << endl;
cout << "Min length ...: " << min_length << endl;
cout << "Max cutpoints : " << max_cutpoints << endl;
std::cout << "File name ....: " << file_name << std::endl;
std::cout << "Max depth ....: " << max_depth << std::endl;
std::cout << "Min length ...: " << min_length << std::endl;
std::cout << "Max cutpoints : " << max_cutpoints << std::endl;
}
return 0;
}

scripts/build_conan.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
# Build script for fimdlp using Conan
set -e
echo "Building fimdlp with Conan..."
# Clean previous builds
rm -rf build_conan
# Install dependencies and build
conan install . --output-folder=build_conan --build=missing --profile:build=default --profile:host=default
# Build the project
cd build_conan
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build .
echo "Build completed successfully!"
# Run tests if requested
if [ "$1" = "--test" ]; then
echo "Running tests..."
ctest --output-on-failure
fi

scripts/create_package.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
# Script to create and upload fimdlp Conan package
set -e
PACKAGE_NAME="fimdlp"
PACKAGE_VERSION="2.1.0"
REMOTE_NAME="cimmeria"
echo "Creating Conan package for $PACKAGE_NAME/$PACKAGE_VERSION..."
# Create the package
conan create . --profile:build=default --profile:host=default
echo "Package created successfully!"
# Test the package
echo "Testing package..."
conan test test_package $PACKAGE_NAME/$PACKAGE_VERSION@ --profile:build=default --profile:host=default
echo "Package tested successfully!"
# Upload to Cimmeria (if remote is configured)
if conan remote list | grep -q "$REMOTE_NAME"; then
echo "Uploading package to $REMOTE_NAME..."
conan upload $PACKAGE_NAME/$PACKAGE_VERSION --remote=$REMOTE_NAME --all
echo "Package uploaded to $REMOTE_NAME successfully!"
else
echo "Remote '$REMOTE_NAME' not configured. To upload the package:"
echo "1. Add the remote: conan remote add $REMOTE_NAME <cimmeria-url>"
echo "2. Login: conan remote login $REMOTE_NAME <username>"
echo "3. Upload: conan upload $PACKAGE_NAME/$PACKAGE_VERSION --remote=$REMOTE_NAME --all"
fi


@@ -3,7 +3,7 @@ sonar.organization=rmontanana
# This is the name and version displayed in the SonarCloud UI.
sonar.projectName=mdlp
sonar.projectVersion=1.1.3
sonar.projectVersion=2.0.1
# sonar.test.exclusions=tests/**
# sonar.tests=tests/
# sonar.coverage.exclusions=tests/**,sample/**

src/BinDisc.cpp Normal file

@@ -0,0 +1,125 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#include <algorithm>
#include <cmath>
#include "BinDisc.h"
#include <iostream>
#include <string>
namespace mdlp {
BinDisc::BinDisc(int n_bins, strategy_t strategy) :
Discretizer(), n_bins{ n_bins }, strategy{ strategy }
{
if (n_bins < 3) {
throw std::invalid_argument("n_bins must be greater than 2");
}
}
BinDisc::~BinDisc() = default;
void BinDisc::fit(samples_t& X)
{
// Input validation
if (X.empty()) {
throw std::invalid_argument("Input data X cannot be empty");
}
if (X.size() < static_cast<size_t>(n_bins)) {
throw std::invalid_argument("Input data size must be at least equal to n_bins");
}
cutPoints.clear();
if (strategy == strategy_t::QUANTILE) {
direction = bound_dir_t::RIGHT;
fit_quantile(X);
} else if (strategy == strategy_t::UNIFORM) {
direction = bound_dir_t::RIGHT;
fit_uniform(X);
}
}
void BinDisc::fit(samples_t& X, labels_t& y)
{
if (X.empty()) {
throw std::invalid_argument("X cannot be empty");
}
// BinDisc is inherently unsupervised, but we validate inputs for consistency
// Note: y parameter is validated but not used in binning strategy
fit(X);
}
std::vector<precision_t> BinDisc::linspace(precision_t start, precision_t end, int num)
{
// Input validation
if (num < 2) {
throw std::invalid_argument("Number of points must be at least 2 for linspace");
}
if (std::isnan(start) || std::isnan(end)) {
throw std::invalid_argument("Start and end values cannot be NaN");
}
if (std::isinf(start) || std::isinf(end)) {
throw std::invalid_argument("Start and end values cannot be infinite");
}
if (start == end) {
return { start, end };
}
precision_t delta = (end - start) / static_cast<precision_t>(num - 1);
std::vector<precision_t> linspc;
for (int i = 0; i < num; ++i) {
precision_t val = start + delta * static_cast<precision_t>(i);
linspc.push_back(val);
}
return linspc;
}
size_t clip(const size_t n, const size_t lower, const size_t upper)
{
return std::max(lower, std::min(n, upper));
}
std::vector<precision_t> BinDisc::percentile(samples_t& data, const std::vector<precision_t>& percentiles)
{
// Input validation
if (data.empty()) {
throw std::invalid_argument("Data cannot be empty for percentile calculation");
}
if (percentiles.empty()) {
throw std::invalid_argument("Percentiles cannot be empty");
}
// Implementation taken from https://dpilger26.github.io/NumCpp/doxygen/html/percentile_8hpp_source.html
std::vector<precision_t> results;
bool first = true;
results.reserve(percentiles.size());
for (auto percentile : percentiles) {
const auto i = static_cast<size_t>(std::floor(static_cast<precision_t>(data.size() - 1) * percentile / 100.));
const auto indexLower = clip(i, 0, data.size() - 2);
const precision_t percentI = static_cast<precision_t>(indexLower) / static_cast<precision_t>(data.size() - 1);
const precision_t fraction =
(percentile / 100.0 - percentI) /
(static_cast<precision_t>(indexLower + 1) / static_cast<precision_t>(data.size() - 1) - percentI);
if (const auto value = data[indexLower] + (data[indexLower + 1] - data[indexLower]) * fraction; value != results.back() || first) // first needed as results.back() return is undefined for empty vectors
results.push_back(value);
first = false;
}
return results;
}
void BinDisc::fit_quantile(const samples_t& X)
{
auto quantiles = linspace(0.0, 100.0, n_bins + 1);
auto data = X;
std::sort(data.begin(), data.end());
if (data.front() == data.back() || data.size() == 1) {
// if X is constant, push two equal points; both will be ignored in transform
cutPoints.push_back(data.front());
cutPoints.push_back(data.front());
return;
}
cutPoints = percentile(data, quantiles);
}
void BinDisc::fit_uniform(const samples_t& X)
{
auto [vmin, vmax] = std::minmax_element(X.begin(), X.end());
cutPoints = linspace(*vmin, *vmax, n_bins + 1);
}
}

src/BinDisc.h Normal file

@@ -0,0 +1,36 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#ifndef BINDISC_H
#define BINDISC_H
#include "typesFImdlp.h"
#include "Discretizer.h"
#include <string>
namespace mdlp {
enum class strategy_t {
UNIFORM,
QUANTILE
};
class BinDisc : public Discretizer {
public:
BinDisc(int n_bins = 3, strategy_t strategy = strategy_t::UNIFORM);
~BinDisc();
// y is included for compatibility with the Discretizer interface
void fit(samples_t& X_, labels_t& y) override;
void fit(samples_t& X);
protected:
std::vector<precision_t> linspace(precision_t start, precision_t end, int num);
std::vector<precision_t> percentile(samples_t& data, const std::vector<precision_t>& percentiles);
private:
void fit_uniform(const samples_t&);
void fit_quantile(const samples_t&);
int n_bins;
strategy_t strategy;
};
}
#endif


@@ -1,33 +1,50 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#include <numeric>
#include <algorithm>
#include <set>
#include <cmath>
#include <stdexcept>
#include "CPPFImdlp.h"
namespace mdlp {
CPPFImdlp::CPPFImdlp(size_t min_length_, int max_depth_, float proposed) : min_length(min_length_),
CPPFImdlp::CPPFImdlp(size_t min_length_, int max_depth_, float proposed) :
Discretizer(),
min_length(min_length_),
max_depth(max_depth_),
proposed_cuts(proposed)
{
// Input validation for constructor parameters
if (min_length_ < 3) {
throw std::invalid_argument("min_length must be greater than 2");
}
if (max_depth_ < 1) {
throw std::invalid_argument("max_depth must be greater than 0");
}
if (proposed < 0.0f) {
throw std::invalid_argument("proposed_cuts must be non-negative");
}
direction = bound_dir_t::RIGHT;
}
CPPFImdlp::CPPFImdlp() = default;
CPPFImdlp::~CPPFImdlp() = default;
size_t CPPFImdlp::compute_max_num_cut_points() const
{
// Set the actual maximum number of cut points as a number or as a percentage of the number of samples
if (proposed_cuts == 0) {
return numeric_limits<size_t>::max();
}
if (proposed_cuts < 0 || proposed_cuts > static_cast<float>(X.size())) {
if (proposed_cuts > static_cast<precision_t>(X.size())) {
throw invalid_argument("wrong proposed num_cuts value");
}
if (proposed_cuts < 1)
return static_cast<size_t>(round(static_cast<float>(X.size()) * proposed_cuts));
return static_cast<size_t>(proposed_cuts);
return static_cast<size_t>(round(static_cast<precision_t>(X.size()) * proposed_cuts));
return static_cast<size_t>(proposed_cuts); // The two sentinel cut points are not counted here; this limit is applied before they are added
}
void CPPFImdlp::fit(samples_t& X_, labels_t& y_)
@@ -39,17 +56,11 @@ namespace mdlp {
discretizedData.clear();
cutPoints.clear();
if (X.size() != y.size()) {
throw invalid_argument("X and y must have the same size");
throw std::invalid_argument("X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
}
if (X.empty() || y.empty()) {
throw invalid_argument("X and y must have at least one element");
}
if (min_length < 3) {
throw invalid_argument("min_length must be greater than 2");
}
if (max_depth < 1) {
throw invalid_argument("max_depth must be greater than 0");
}
indices = sortIndices(X_, y_);
metrics.setData(y, indices);
computeCutPoints(0, X.size(), 1);
@@ -60,6 +71,10 @@ namespace mdlp {
resizeCutPoints();
}
}
// Insert the first & last X values into the cut points; they will be ignored in transform
auto [vmin, vmax] = std::minmax_element(X.begin(), X.end());
cutPoints.push_back(*vmax);
cutPoints.insert(cutPoints.begin(), *vmin);
}
pair<precision_t, size_t> CPPFImdlp::valueCutPoint(size_t start, size_t cut, size_t end)
@@ -72,26 +87,33 @@ namespace mdlp {
precision_t previous;
precision_t actual;
precision_t next;
previous = X[indices[idxPrev]];
actual = X[indices[cut]];
next = X[indices[idxNext]];
previous = safe_X_access(idxPrev);
actual = safe_X_access(cut);
next = safe_X_access(idxNext);
// definition 2 of the paper => X[t-1] < X[t]
// get the first equal value of X in the interval
while (idxPrev > start && actual == previous) {
previous = X[indices[--idxPrev]];
--idxPrev;
previous = safe_X_access(idxPrev);
}
backWall = idxPrev == start && actual == previous;
// get the last equal value of X in the interval
while (idxNext < end - 1 && actual == next) {
next = X[indices[++idxNext]];
++idxNext;
next = safe_X_access(idxNext);
}
// # of duplicates before cutpoint
n = cut - 1 - idxPrev;
n = safe_subtract(safe_subtract(cut, 1), idxPrev);
// # of duplicates after cutpoint
m = idxNext - cut - 1;
// Decide which values to use
cut = cut + (backWall ? m + 1 : -n);
actual = X[indices[cut]];
if (backWall) {
m = int(idxNext - cut - 1) < 0 ? 0 : m; // Ensure m is not negative
cut = cut + m + 1;
} else {
cut = safe_subtract(cut, n);
}
actual = safe_X_access(cut);
return { (actual + previous) / 2, cut };
}
@@ -100,7 +122,7 @@ namespace mdlp {
size_t cut;
pair<precision_t, size_t> result;
// Check if the interval length and the depth are Ok
if (end - start < min_length || depth_ > max_depth)
if (end < start || safe_subtract(end, start) < min_length || depth_ > max_depth)
return;
depth = depth_ > depth ? depth_ : depth;
cut = getCandidate(start, end);
@@ -120,14 +142,14 @@ namespace mdlp {
/* Definition 1: A binary discretization for A is determined by selecting the cut point TA for which
E(A, TA; S) is minimal amongst all the candidate cut points. */
size_t candidate = numeric_limits<size_t>::max();
size_t elements = end - start;
size_t elements = safe_subtract(end, start);
bool sameValues = true;
precision_t entropy_left;
precision_t entropy_right;
precision_t minEntropy;
// Check if all the values of the variable in the interval are the same
for (size_t idx = start + 1; idx < end; idx++) {
if (X[indices[idx]] != X[indices[start]]) {
if (safe_X_access(idx) != safe_X_access(start)) {
sameValues = false;
break;
}
@@ -137,7 +159,7 @@ namespace mdlp {
minEntropy = metrics.entropy(start, end);
for (size_t idx = start + 1; idx < end; idx++) {
// Cutpoints are always on boundaries (definition 2)
if (y[indices[idx]] == y[indices[idx - 1]])
if (safe_y_access(idx) == safe_y_access(idx - 1))
continue;
entropy_left = precision_t(idx - start) / static_cast<precision_t>(elements) * metrics.entropy(start, idx);
entropy_right = precision_t(end - idx) / static_cast<precision_t>(elements) * metrics.entropy(idx, end);
@@ -159,7 +181,7 @@ namespace mdlp {
precision_t ent;
precision_t ent1;
precision_t ent2;
auto N = precision_t(end - start);
auto N = precision_t(safe_subtract(end, start));
k = metrics.computeNumClasses(start, end);
k1 = metrics.computeNumClasses(start, cut);
k2 = metrics.computeNumClasses(cut, end);
@@ -179,6 +201,9 @@ namespace mdlp {
indices_t idx(X_.size());
std::iota(idx.begin(), idx.end(), 0);
stable_sort(idx.begin(), idx.end(), [&X_, &y_](size_t i1, size_t i2) {
if (i1 >= X_.size() || i2 >= X_.size() || i1 >= y_.size() || i2 >= y_.size()) {
throw std::out_of_range("Index out of bounds in sort comparison");
}
if (X_[i1] == X_[i2])
return y_[i1] < y_[i2];
else
@@ -197,7 +222,7 @@ namespace mdlp {
size_t end;
for (size_t idx = 0; idx < cutPoints.size(); idx++) {
end = begin;
while (X[indices[end]] < cutPoints[idx] && end < X.size())
while (end < indices.size() && safe_X_access(end) < cutPoints[idx] && end < X.size())
end++;
entropy = metrics.entropy(begin, end);
if (entropy > maxEntropy) {
@@ -208,14 +233,5 @@ namespace mdlp {
}
cutPoints.erase(cutPoints.begin() + static_cast<long>(maxEntropyIdx));
}
labels_t& CPPFImdlp::transform(const samples_t& data)
{
discretizedData.clear();
discretizedData.reserve(data.size());
for (const precision_t& item : data) {
auto upper = std::upper_bound(cutPoints.begin(), cutPoints.end(), item);
discretizedData.push_back(upper - cutPoints.begin());
}
return discretizedData;
}
}

src/CPPFImdlp.h Normal file

@@ -0,0 +1,73 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#ifndef CPPFIMDLP_H
#define CPPFIMDLP_H
#include "typesFImdlp.h"
#include <limits>
#include <utility>
#include <string>
#include "Metrics.h"
#include "Discretizer.h"
namespace mdlp {
class CPPFImdlp : public Discretizer {
public:
CPPFImdlp() = default;
CPPFImdlp(size_t min_length_, int max_depth_, float proposed);
virtual ~CPPFImdlp() = default;
void fit(samples_t& X_, labels_t& y_) override;
inline int get_depth() const { return depth; };
protected:
size_t min_length = 3;
int depth = 0;
int max_depth = numeric_limits<int>::max();
float proposed_cuts = 0;
indices_t indices = indices_t();
samples_t X = samples_t();
labels_t y = labels_t();
Metrics metrics = Metrics(y, indices);
size_t num_cut_points = numeric_limits<size_t>::max();
static indices_t sortIndices(samples_t&, labels_t&);
void computeCutPoints(size_t, size_t, int);
void resizeCutPoints();
bool mdlp(size_t, size_t, size_t);
size_t getCandidate(size_t, size_t);
size_t compute_max_num_cut_points() const;
pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t);
inline precision_t safe_X_access(size_t idx) const
{
if (idx >= indices.size()) {
throw std::out_of_range("Index out of bounds for indices array");
}
size_t real_idx = indices[idx];
if (real_idx >= X.size()) {
throw std::out_of_range("Index out of bounds for X array");
}
return X[real_idx];
}
inline label_t safe_y_access(size_t idx) const
{
if (idx >= indices.size()) {
throw std::out_of_range("Index out of bounds for indices array");
}
size_t real_idx = indices[idx];
if (real_idx >= y.size()) {
throw std::out_of_range("Index out of bounds for y array");
}
return y[real_idx];
}
inline size_t safe_subtract(size_t a, size_t b) const
{
if (b > a) {
throw std::underflow_error("Subtraction would cause underflow");
}
return a - b;
}
};
}
#endif

src/Discretizer.cpp Normal file

@@ -0,0 +1,107 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#include "Discretizer.h"
namespace mdlp {
labels_t& Discretizer::transform(const samples_t& data)
{
// Input validation
if (data.empty()) {
throw std::invalid_argument("Data for transformation cannot be empty");
}
if (cutPoints.size() < 2) {
throw std::runtime_error("Discretizer not fitted yet or no valid cut points found");
}
discretizedData.clear();
discretizedData.reserve(data.size());
// Cut points always contain at least two items;
// the first and last cut points provided must be ignored
auto first = cutPoints.begin() + 1;
auto last = cutPoints.end() - 1;
auto bound = direction == bound_dir_t::LEFT ? std::lower_bound<std::vector<precision_t>::iterator, precision_t> : std::upper_bound<std::vector<precision_t>::iterator, precision_t>;
for (const precision_t& item : data) {
auto pos = bound(first, last, item);
auto number = pos - first;
discretizedData.push_back(static_cast<label_t>(number));
}
return discretizedData;
}
labels_t& Discretizer::fit_transform(samples_t& X_, labels_t& y_)
{
fit(X_, y_);
return transform(X_);
}
void Discretizer::fit_t(const torch::Tensor& X_, const torch::Tensor& y_)
{
// Validate tensor properties for security
if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (y_.dtype() != torch::kInt32) {
throw std::invalid_argument("y tensor must be Int32 type");
}
if (X_.numel() != y_.numel()) {
throw std::invalid_argument("X and y tensors must have same number of elements");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensors cannot be empty");
}
auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements);
fit(X, y);
}
torch::Tensor Discretizer::transform_t(const torch::Tensor& X_)
{
// Validate tensor properties for security
if (X_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensor cannot be empty");
}
auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
auto result = transform(X);
return torch::tensor(result, torch_label_t);
}
torch::Tensor Discretizer::fit_transform_t(const torch::Tensor& X_, const torch::Tensor& y_)
{
// Validate tensor properties for security
if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (y_.dtype() != torch::kInt32) {
throw std::invalid_argument("y tensor must be Int32 type");
}
if (X_.numel() != y_.numel()) {
throw std::invalid_argument("X and y tensors must have same number of elements");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensors cannot be empty");
}
auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements);
auto result = fit_transform(X, y);
return torch::tensor(result, torch_label_t);
}
}

src/Discretizer.h Normal file

@@ -0,0 +1,40 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#ifndef DISCRETIZER_H
#define DISCRETIZER_H
#include <string>
#include <algorithm>
#include "typesFImdlp.h"
#include <torch/torch.h>
#include "config.h"
namespace mdlp {
enum class bound_dir_t {
LEFT,
RIGHT
};
const auto torch_label_t = torch::kInt32;
class Discretizer {
public:
Discretizer() = default;
virtual ~Discretizer() = default;
inline cutPoints_t getCutPoints() const { return cutPoints; };
virtual void fit(samples_t& X_, labels_t& y_) = 0;
labels_t& transform(const samples_t& data);
labels_t& fit_transform(samples_t& X_, labels_t& y_);
void fit_t(const torch::Tensor& X_, const torch::Tensor& y_);
torch::Tensor transform_t(const torch::Tensor& X_);
torch::Tensor fit_transform_t(const torch::Tensor& X_, const torch::Tensor& y_);
static inline std::string version() { return { project_mdlp_version.begin(), project_mdlp_version.end() }; };
protected:
labels_t discretizedData = labels_t();
cutPoints_t cutPoints; // At least two cut points must be provided; the first and last are ignored in transform
bound_dir_t direction; // used in transform
};
}
#endif


@@ -1,11 +1,17 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#include "Metrics.h"
#include <set>
#include <cmath>
using namespace std;
namespace mdlp {
Metrics::Metrics(labels_t& y_, indices_t& indices_): y(y_), indices(indices_),
numClasses(computeNumClasses(0, indices.size()))
Metrics::Metrics(labels_t& y_, indices_t& indices_) : y(y_), indices(indices_),
numClasses(computeNumClasses(0, indices_.size()))
{
}
@@ -20,6 +26,7 @@ namespace mdlp {
void Metrics::setData(const labels_t& y_, const indices_t& indices_)
{
std::lock_guard<std::mutex> lock(cache_mutex);
indices = indices_;
y = y_;
numClasses = computeNumClasses(0, indices.size());
@@ -29,15 +36,23 @@ namespace mdlp {
precision_t Metrics::entropy(size_t start, size_t end)
{
if (end - start < 2)
return 0;
// Check cache first with read lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
if (entropyCache.find({ start, end }) != entropyCache.end()) {
return entropyCache[{start, end}];
}
}
// Compute entropy outside of lock
precision_t p;
precision_t ventropy = 0;
int nElements = 0;
labels_t counts(numClasses + 1, 0);
if (end - start < 2)
return 0;
if (entropyCache.find({ start, end }) != entropyCache.end()) {
return entropyCache[{start, end}];
}
for (auto i = &indices[start]; i != &indices[end]; ++i) {
counts[y[*i]]++;
nElements++;
@@ -48,12 +63,27 @@ namespace mdlp {
ventropy -= p * log2(p);
}
}
entropyCache[{start, end}] = ventropy;
// Update cache with write lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
entropyCache[{start, end}] = ventropy;
}
return ventropy;
}
precision_t Metrics::informationGain(size_t start, size_t cut, size_t end)
{
// Check cache first with read lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
if (igCache.find(make_tuple(start, cut, end)) != igCache.end()) {
return igCache[make_tuple(start, cut, end)];
}
}
// Compute information gain outside of lock
precision_t iGain;
precision_t entropyInterval;
precision_t entropyLeft;
@@ -61,9 +91,7 @@ namespace mdlp {
size_t nElementsLeft = cut - start;
size_t nElementsRight = end - cut;
size_t nElements = end - start;
if (igCache.find(make_tuple(start, cut, end)) != igCache.end()) {
return igCache[make_tuple(start, cut, end)];
}
entropyInterval = entropy(start, end);
entropyLeft = entropy(start, cut);
entropyRight = entropy(cut, end);
@@ -71,7 +99,13 @@ namespace mdlp {
(static_cast<precision_t>(nElementsLeft) * entropyLeft +
static_cast<precision_t>(nElementsRight) * entropyRight) /
static_cast<precision_t>(nElements);
igCache[make_tuple(start, cut, end)] = iGain;
// Update cache with write lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
igCache[make_tuple(start, cut, end)] = iGain;
}
return iGain;
}


@@ -1,7 +1,14 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#ifndef CCMETRICS_H
#define CCMETRICS_H
#include "typesFImdlp.h"
#include <mutex>
namespace mdlp {
class Metrics {
@@ -9,6 +16,7 @@ namespace mdlp {
labels_t& y;
indices_t& indices;
int numClasses;
mutable std::mutex cache_mutex;
cacheEnt_t entropyCache = cacheEnt_t();
cacheIg_t igCache = cacheIg_t();
public:


@@ -1,3 +1,9 @@
// ****************************************************************
// SPDX - FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX - FileType: SOURCE
// SPDX - License - Identifier: MIT
// ****************************************************************
#ifndef TYPES_H
#define TYPES_H
@@ -8,8 +14,9 @@
using namespace std;
namespace mdlp {
typedef float precision_t;
typedef int label_t;
typedef std::vector<precision_t> samples_t;
typedef std::vector<int> labels_t;
typedef std::vector<label_t> labels_t;
typedef std::vector<size_t> indices_t;
typedef std::vector<precision_t> cutPoints_t;
typedef std::map<std::pair<int, int>, precision_t> cacheEnt_t;


@@ -0,0 +1,9 @@
cmake_minimum_required(VERSION 3.20)
project(test_fimdlp)
set(CMAKE_CXX_STANDARD 17)
find_package(fimdlp REQUIRED)
add_executable(test_fimdlp test_fimdlp.cpp)
target_link_libraries(test_fimdlp fimdlp::fimdlp)


@@ -0,0 +1,9 @@
{
"version": 4,
"vendor": {
"conan": {}
},
"include": [
"build/Release/generators/CMakePresets.json"
]
}


@@ -0,0 +1,9 @@
[requires]
fimdlp/2.0.1
[generators]
CMakeDeps
CMakeToolchain
[layout]
cmake_layout


@@ -0,0 +1,39 @@
#include <iostream>
#include <vector>
#include <fimdlp/CPPFImdlp.h>
#include <fimdlp/BinDisc.h>
int main() {
std::cout << "Testing FIMDLP package..." << std::endl;
// Test data - simple continuous values with binary classification
mdlp::samples_t data = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0};
mdlp::labels_t labels = {0, 0, 0, 1, 1, 0, 1, 1, 1, 1};
std::cout << "Created test data with " << data.size() << " samples" << std::endl;
// Test MDLP discretizer
mdlp::CPPFImdlp discretizer;
discretizer.fit(data, labels);
auto cut_points = discretizer.getCutPoints();
std::cout << "MDLP found " << cut_points.size() << " cut points" << std::endl;
for (size_t i = 0; i < cut_points.size(); ++i) {
std::cout << "Cut point " << i << ": " << cut_points[i] << std::endl;
}
// Test BinDisc discretizer
mdlp::BinDisc bin_discretizer(3, mdlp::strategy_t::UNIFORM); // 3 bins, uniform strategy
bin_discretizer.fit(data, labels);
auto bin_cut_points = bin_discretizer.getCutPoints();
std::cout << "BinDisc found " << bin_cut_points.size() << " cut points" << std::endl;
for (size_t i = 0; i < bin_cut_points.size(); ++i) {
std::cout << "Bin cut point " << i << ": " << bin_cut_points[i] << std::endl;
}
std::cout << "FIMDLP package test completed successfully!" << std::endl;
return 0;
}

View File

@@ -0,0 +1,9 @@
cmake_minimum_required(VERSION 3.20)
project(test_fimdlp)
find_package(fimdlp REQUIRED)
find_package(Torch REQUIRED)
add_executable(test_fimdlp src/test_fimdlp.cpp)
target_link_libraries(test_fimdlp fimdlp::fimdlp torch::torch)
target_compile_features(test_fimdlp PRIVATE cxx_std_17)

View File

@@ -0,0 +1,10 @@
{
"version": 4,
"vendor": {
"conan": {}
},
"include": [
"build/gcc-14-x86_64-gnu17-release/generators/CMakePresets.json",
"build/gcc-14-x86_64-gnu17-debug/generators/CMakePresets.json"
]
}

test_package/conanfile.py Normal file
View File

@@ -0,0 +1,28 @@
import os
from conan import ConanFile
from conan.tools.cmake import CMake, cmake_layout
from conan.tools.build import can_run
class FimdlpTestConan(ConanFile):
settings = "os", "compiler", "build_type", "arch"
# VirtualBuildEnv and VirtualRunEnv can be avoided if "tools.env:CONAN_RUN_TESTS" is false
generators = "CMakeDeps", "CMakeToolchain", "VirtualRunEnv"
apply_env = False # avoid the default VirtualBuildEnv from the base class
test_type = "explicit"
def requirements(self):
self.requires(self.tested_reference_str)
def layout(self):
cmake_layout(self)
def build(self):
cmake = CMake(self)
cmake.configure()
cmake.build()
def test(self):
if can_run(self):
cmd = os.path.join(self.cpp.build.bindir, "test_fimdlp")
self.run(cmd, env="conanrun")

View File

@@ -0,0 +1,27 @@
#include <iostream>
#include <vector>
#include <fimdlp/CPPFImdlp.h>
#include <fimdlp/Metrics.h>
int main() {
std::cout << "Testing fimdlp library..." << std::endl;
// Simple test of the library
try {
// Test Metrics class
Metrics metrics;
std::vector<int> labels = {0, 0, 1, 1, 0, 1};
double entropy = metrics.entropy(labels);
std::cout << "Entropy calculated: " << entropy << std::endl;
// Test CPPFImdlp creation
CPPFImdlp discretizer;
std::cout << "CPPFImdlp instance created successfully" << std::endl;
std::cout << "fimdlp library test completed successfully!" << std::endl;
return 0;
} catch (const std::exception& e) {
std::cerr << "Error testing fimdlp library: " << e.what() << std::endl;
return 1;
}
}

View File

@@ -1,132 +0,0 @@
#include "ArffFiles.h"
#include <fstream>
#include <sstream>
#include <map>
using namespace std;
ArffFiles::ArffFiles() = default;
vector<string> ArffFiles::getLines() const
{
return lines;
}
unsigned long int ArffFiles::getSize() const
{
return lines.size();
}
vector<pair<string, string>> ArffFiles::getAttributes() const
{
return attributes;
}
string ArffFiles::getClassName() const
{
return className;
}
string ArffFiles::getClassType() const
{
return classType;
}
vector<mdlp::samples_t>& ArffFiles::getX()
{
return X;
}
vector<int>& ArffFiles::getY()
{
return y;
}
void ArffFiles::load(const string& fileName, bool classLast)
{
ifstream file(fileName);
if (!file.is_open()) {
throw invalid_argument("Unable to open file");
}
string line;
string keyword;
string attribute;
string type;
string type_w;
while (getline(file, line)) {
if (line.empty() || line[0] == '%' || line == "\r" || line == " ") {
continue;
}
if (line.find("@attribute") != string::npos || line.find("@ATTRIBUTE") != string::npos) {
stringstream ss(line);
ss >> keyword >> attribute;
type = "";
while (ss >> type_w)
type += type_w + " ";
attributes.emplace_back(trim(attribute), trim(type));
continue;
}
if (line[0] == '@') {
continue;
}
lines.push_back(line);
}
file.close();
if (attributes.empty())
throw invalid_argument("No attributes found");
if (classLast) {
className = get<0>(attributes.back());
classType = get<1>(attributes.back());
attributes.pop_back();
} else {
className = get<0>(attributes.front());
classType = get<1>(attributes.front());
attributes.erase(attributes.begin());
}
generateDataset(classLast);
}
void ArffFiles::generateDataset(bool classLast)
{
X = vector<mdlp::samples_t>(attributes.size(), mdlp::samples_t(lines.size()));
auto yy = vector<string>(lines.size(), "");
int labelIndex = classLast ? static_cast<int>(attributes.size()) : 0;
for (size_t i = 0; i < lines.size(); i++) {
stringstream ss(lines[i]);
string value;
int pos = 0;
int xIndex = 0;
while (getline(ss, value, ',')) {
if (pos++ == labelIndex) {
yy[i] = value;
} else {
X[xIndex++][i] = stof(value);
}
}
}
y = factorize(yy);
}
string ArffFiles::trim(const string& source)
{
string s(source);
s.erase(0, s.find_first_not_of(" '\n\r\t"));
s.erase(s.find_last_not_of(" '\n\r\t") + 1);
return s;
}
vector<int> ArffFiles::factorize(const vector<string>& labels_t)
{
vector<int> yy;
yy.reserve(labels_t.size());
map<string, int> labelMap;
int i = 0;
for (const string& label : labels_t) {
if (labelMap.find(label) == labelMap.end()) {
labelMap[label] = i++;
}
yy.push_back(labelMap[label]);
}
return yy;
}

View File

@@ -1,35 +0,0 @@
#ifndef ARFFFILES_H
#define ARFFFILES_H
#include <string>
#include <vector>
#include "../typesFImdlp.h"
using namespace std;
class ArffFiles {
private:
vector<string> lines;
vector<pair<string, string>> attributes;
string className;
string classType;
vector<mdlp::samples_t> X;
vector<int> y;
void generateDataset(bool);
public:
ArffFiles();
void load(const string&, bool = true);
vector<string> getLines() const;
unsigned long int getSize() const;
string getClassName() const;
string getClassType() const;
static string trim(const string&);
vector<mdlp::samples_t>& getX();
vector<int>& getY();
vector<pair<string, string>> getAttributes() const;
static vector<int> factorize(const vector<string>& labels_t);
};
#endif

View File

@@ -1,21 +1,38 @@
// ****************************************************************
// SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX-FileType: SOURCE
// SPDX-License-Identifier: MIT
// ****************************************************************
#include <fstream>
#include <string>
#include <iostream>
#include "gtest/gtest.h"
#include "ArffFiles.h"
#include "../BinDisc.h"
#include <ArffFiles.hpp>
#include "BinDisc.h"
#include "Experiments.hpp"
#include <cmath>
#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
try { \
stmt; \
} catch (const etype& ex) { \
EXPECT_EQ(whatstring, std::string(ex.what())); \
throw; \
} \
, etype)
namespace mdlp {
const float margin = 1e-4;
static std::string set_data_path()
{
std::string path = "../datasets/";
std::string path = "datasets/";
std::ifstream file(path + "iris.arff");
if (file.is_open()) {
file.close();
return path;
}
return "../../tests/datasets/";
return "tests/datasets/";
}
const std::string data_path = set_data_path();
class TestBinDisc3U : public BinDisc, public testing::Test {
@@ -37,12 +54,14 @@ namespace mdlp {
TEST_F(TestBinDisc3U, Easy3BinsUniform)
{
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 };
fit(X);
auto y = labels_t();
fit(X, y);
auto cuts = getCutPoints();
EXPECT_NEAR(3.66667, cuts[0], margin);
EXPECT_NEAR(6.33333, cuts[1], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(3.66667, cuts.at(1), margin);
EXPECT_NEAR(6.33333, cuts.at(2), margin);
EXPECT_NEAR(9.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -52,10 +71,11 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_NEAR(3.666667, cuts[0], margin);
EXPECT_NEAR(6.333333, cuts[1], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts[0], margin);
EXPECT_NEAR(3.666667, cuts[1], margin);
EXPECT_NEAR(6.333333, cuts[2], margin);
EXPECT_NEAR(9, cuts[3], margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -65,10 +85,11 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.0, cuts[0]);
EXPECT_EQ(7.0, cuts[1]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(4.0, cuts.at(1), margin);
EXPECT_NEAR(7.0, cuts.at(2), margin);
EXPECT_NEAR(10.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -78,10 +99,11 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4, cuts[0]);
EXPECT_EQ(7, cuts[1]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(4.0, cuts.at(1), margin);
EXPECT_NEAR(7.0, cuts.at(2), margin);
EXPECT_NEAR(10.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -91,10 +113,11 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_NEAR(4.33333, cuts[0], margin);
EXPECT_NEAR(7.66667, cuts[1], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(4.33333, cuts.at(1), margin);
EXPECT_NEAR(7.66667, cuts.at(2), margin);
EXPECT_NEAR(11.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -104,10 +127,11 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_NEAR(4.33333, cuts[0], margin);
EXPECT_NEAR(7.66667, cuts[1], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(4.33333, cuts.at(1), margin);
EXPECT_NEAR(7.66667, cuts.at(2), margin);
EXPECT_NEAR(11.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2 };
EXPECT_EQ(expected, labels);
@@ -117,8 +141,9 @@ namespace mdlp {
samples_t X = { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(numeric_limits<float>::max(), cuts[0]);
EXPECT_EQ(1, cuts.size());
ASSERT_EQ(2, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(1, cuts.at(1), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 0, 0 };
EXPECT_EQ(expected, labels);
@@ -128,8 +153,9 @@ namespace mdlp {
samples_t X = { 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(numeric_limits<float>::max(), cuts[0]);
EXPECT_EQ(1, cuts.size());
ASSERT_EQ(2, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(1, cuts.at(1), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 0, 0 };
EXPECT_EQ(expected, labels);
@@ -137,18 +163,12 @@ namespace mdlp {
TEST_F(TestBinDisc3U, EmptyUniform)
{
samples_t X = {};
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(numeric_limits<float>::max(), cuts[0]);
EXPECT_EQ(1, cuts.size());
EXPECT_THROW(fit(X), std::invalid_argument);
}
TEST_F(TestBinDisc3Q, EmptyQuantile)
{
samples_t X = {};
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(numeric_limits<float>::max(), cuts[0]);
EXPECT_EQ(1, cuts.size());
EXPECT_THROW(fit(X), std::invalid_argument);
}
TEST(TestBinDisc3, ExceptionNumberBins)
{
@@ -159,44 +179,41 @@ namespace mdlp {
samples_t X = { 3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_NEAR(1.66667, cuts[0], margin);
EXPECT_NEAR(2.33333, cuts[1], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(4, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(1.66667, cuts.at(1), margin);
EXPECT_NEAR(2.33333, cuts.at(2), margin);
EXPECT_NEAR(3.0, cuts.at(3), margin);
auto labels = transform(X);
labels_t expected = { 2, 0, 0, 2, 0, 0, 2, 0, 0 };
EXPECT_EQ(expected, labels);
EXPECT_EQ(3.0, X[0]); // X is not modified
ASSERT_EQ(3.0, X[0]); // X is not modified
}
TEST_F(TestBinDisc3Q, EasyRepeated)
{
samples_t X = { 3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0 };
fit(X);
auto cuts = getCutPoints();
std::cout << "cuts: ";
for (auto cut : cuts) {
std::cout << cut << " ";
}
std::cout << std::endl;
std::cout << std::string(80, '-') << std::endl;
EXPECT_NEAR(1.66667, cuts[0], margin);
EXPECT_EQ(numeric_limits<float>::max(), cuts[1]);
EXPECT_EQ(2, cuts.size());
ASSERT_EQ(3, cuts.size());
EXPECT_NEAR(1, cuts.at(0), margin);
EXPECT_NEAR(1.66667, cuts.at(1), margin);
EXPECT_NEAR(3.0, cuts.at(2), margin);
auto labels = transform(X);
labels_t expected = { 1, 0, 0, 1, 0, 0, 1, 0, 0 };
EXPECT_EQ(expected, labels);
EXPECT_EQ(3.0, X[0]); // X is not modified
ASSERT_EQ(3.0, X[0]); // X is not modified
}
TEST_F(TestBinDisc4U, Easy4BinsUniform)
{
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(3.75, cuts[0]);
EXPECT_EQ(6.5, cuts[1]);
EXPECT_EQ(9.25, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(3.75, cuts.at(1), margin);
EXPECT_NEAR(6.5, cuts.at(2), margin);
EXPECT_NEAR(9.25, cuts.at(3), margin);
EXPECT_NEAR(12.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -206,11 +223,12 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(3.75, cuts[0]);
EXPECT_EQ(6.5, cuts[1]);
EXPECT_EQ(9.25, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(3.75, cuts.at(1), margin);
EXPECT_NEAR(6.5, cuts.at(2), margin);
EXPECT_NEAR(9.25, cuts.at(3), margin);
EXPECT_NEAR(12.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -220,11 +238,12 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.0, cuts[0]);
EXPECT_EQ(7.0, cuts[1]);
EXPECT_EQ(10.0, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.0, cuts.at(1), margin);
EXPECT_NEAR(7.0, cuts.at(2), margin);
EXPECT_NEAR(10.0, cuts.at(3), margin);
EXPECT_NEAR(13.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -234,11 +253,12 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.0, cuts[0]);
EXPECT_EQ(7.0, cuts[1]);
EXPECT_EQ(10.0, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.0, cuts.at(1), margin);
EXPECT_NEAR(7.0, cuts.at(2), margin);
EXPECT_NEAR(10.0, cuts.at(3), margin);
EXPECT_NEAR(13.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -248,11 +268,12 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.25, cuts[0]);
EXPECT_EQ(7.5, cuts[1]);
EXPECT_EQ(10.75, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.25, cuts.at(1), margin);
EXPECT_NEAR(7.5, cuts.at(2), margin);
EXPECT_NEAR(10.75, cuts.at(3), margin);
EXPECT_NEAR(14.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -262,11 +283,12 @@ namespace mdlp {
samples_t X = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.25, cuts[0]);
EXPECT_EQ(7.5, cuts[1]);
EXPECT_EQ(10.75, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.25, cuts.at(1), margin);
EXPECT_NEAR(7.5, cuts.at(2), margin);
EXPECT_NEAR(10.75, cuts.at(3), margin);
EXPECT_NEAR(14.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -276,11 +298,12 @@ namespace mdlp {
samples_t X = { 15.0, 8.0, 12.0, 14.0, 6.0, 1.0, 13.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.5, cuts[0]);
EXPECT_EQ(8, cuts[1]);
EXPECT_EQ(11.5, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.5, cuts.at(1), margin);
EXPECT_NEAR(8, cuts.at(2), margin);
EXPECT_NEAR(11.5, cuts.at(3), margin);
EXPECT_NEAR(15.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 3, 2, 3, 3, 1, 0, 3, 2, 2, 2, 1, 0, 0, 1, 0 };
EXPECT_EQ(expected, labels);
@@ -290,11 +313,12 @@ namespace mdlp {
samples_t X = { 15.0, 13.0, 12.0, 14.0, 6.0, 1.0, 8.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0 };
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(4.5, cuts[0]);
EXPECT_EQ(8, cuts[1]);
EXPECT_EQ(11.5, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(1.0, cuts.at(0), margin);
EXPECT_NEAR(4.5, cuts.at(1), margin);
EXPECT_NEAR(8, cuts.at(2), margin);
EXPECT_NEAR(11.5, cuts.at(3), margin);
EXPECT_NEAR(15.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 3, 3, 3, 3, 1, 0, 2, 2, 2, 2, 1, 0, 0, 1, 0 };
EXPECT_EQ(expected, labels);
@@ -305,11 +329,12 @@ namespace mdlp {
// 0 1 2 3 4 5 6 7 8 9
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(1.0, cuts[0]);
EXPECT_EQ(2.0, cuts[1]);
EXPECT_EQ(3.0, cuts[2]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[3]);
EXPECT_EQ(4, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(0.0, cuts.at(0), margin);
EXPECT_NEAR(1.0, cuts.at(1), margin);
EXPECT_NEAR(2.0, cuts.at(2), margin);
EXPECT_NEAR(3.0, cuts.at(3), margin);
EXPECT_NEAR(4.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 1, 1, 1, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
@@ -320,32 +345,129 @@ namespace mdlp {
// 0 1 2 3 4 5 6 7 8 9
fit(X);
auto cuts = getCutPoints();
EXPECT_EQ(2.0, cuts[0]);
EXPECT_EQ(3.0, cuts[1]);
EXPECT_EQ(numeric_limits<float>::max(), cuts[2]);
EXPECT_EQ(3, cuts.size());
ASSERT_EQ(5, cuts.size());
EXPECT_NEAR(0.0, cuts.at(0), margin);
EXPECT_NEAR(1.0, cuts.at(1), margin);
EXPECT_NEAR(2.0, cuts.at(2), margin);
EXPECT_NEAR(3.0, cuts.at(3), margin);
EXPECT_NEAR(4.0, cuts.at(4), margin);
auto labels = transform(X);
labels_t expected = { 0, 0, 0, 0, 1, 1, 2, 2, 2, 2 };
labels_t expected = { 0, 1, 1, 1, 2, 2, 3, 3, 3, 3 };
EXPECT_EQ(expected, labels);
}
TEST_F(TestBinDisc4U, irisUniform)
TEST(TestBinDiscGeneric, Fileset)
{
ArffFiles file;
file.load(data_path + "iris.arff", true);
vector<samples_t>& X = file.getX();
fit(X[0]);
auto Xt = transform(X[0]);
labels_t expected = { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 2, 1, 2, 1, 2, 0, 2, 0, 0, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 0, 1, 1, 1, 2, 0, 1, 2, 1, 3, 2, 2, 3, 0, 3, 2, 3, 2, 2, 2, 1, 1, 2, 2, 3, 3, 1, 2, 1, 3, 2, 2, 3, 2, 1, 2, 3, 3, 3, 2, 2, 1, 3, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1 };
EXPECT_EQ(expected, Xt);
Experiments exps(data_path + "tests.txt");
int num = 0;
while (exps.is_next()) {
++num;
Experiment exp = exps.next();
BinDisc disc(exp.n_bins_, exp.strategy_[0] == 'Q' ? strategy_t::QUANTILE : strategy_t::UNIFORM);
std::vector<precision_t> test;
if (exp.type_ == experiment_t::RANGE) {
for (float i = exp.from_; i < exp.to_; i += exp.step_) {
test.push_back(i);
}
} else {
test = exp.dataset_;
}
// show_vector(test, "Test");
auto empty = std::vector<int>();
auto Xt = disc.fit_transform(test, empty);
auto cuts = disc.getCutPoints();
EXPECT_EQ(exp.discretized_data_.size(), Xt.size());
auto flag = false;
size_t n_errors = 0;
if (num < 40) {
//
// Only check discretization for the first 40 tests; beyond that the same encoding cannot be guaranteed due to floating-point precision issues
//
for (int i = 0; i < exp.discretized_data_.size(); ++i) {
if (exp.discretized_data_.at(i) != Xt.at(i)) {
if (!flag) {
if (exp.type_ == experiment_t::RANGE)
std::cout << "+Exp #: " << num << " From: " << exp.from_ << " To: " << exp.to_ << " Step: " << exp.step_ << " Bins: " << exp.n_bins_ << " Strategy: " << exp.strategy_ << std::endl;
else {
std::cout << "+Exp #: " << num << " strategy: " << exp.strategy_ << " " << " n_bins: " << exp.n_bins_ << " ";
show_vector(exp.dataset_, "Dataset");
}
show_vector(cuts, "Cuts");
std::cout << "Error at " << i << " test[i]=" << test.at(i) << " Expected: " << exp.discretized_data_.at(i) << " Got: " << Xt.at(i) << std::endl;
flag = true;
EXPECT_EQ(exp.discretized_data_.at(i), Xt.at(i));
}
n_errors++;
}
}
if (flag) {
std::cout << "*** Found " << n_errors << " mistakes in this experiment dataset" << std::endl;
}
}
EXPECT_EQ(exp.cutpoints_.size(), cuts.size());
for (int i = 0; i < exp.cutpoints_.size(); ++i) {
EXPECT_NEAR(exp.cutpoints_.at(i), cuts.at(i), margin);
}
}
// std::cout << "* Number of experiments tested: " << num << std::endl;
}
TEST_F(TestBinDisc4Q, irisQuantile)
TEST_F(TestBinDisc3U, FitDataSizeTooSmall)
{
ArffFiles file;
file.load(data_path + "iris.arff", true);
vector<samples_t>& X = file.getX();
fit(X[0]);
auto Xt = transform(X[0]);
labels_t expected = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2 };
EXPECT_EQ(expected, Xt);
// Test when data size is smaller than n_bins
samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
}
TEST_F(TestBinDisc3Q, FitDataSizeTooSmall)
{
// Test when data size is smaller than n_bins
samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
}
TEST_F(TestBinDisc3U, FitWithYEmptyX)
{
// Test fit(X, y) with empty X
samples_t X = {};
labels_t y = { 1, 2, 3 };
EXPECT_THROW_WITH_MESSAGE(fit(X, y), std::invalid_argument, "X cannot be empty");
}
TEST_F(TestBinDisc3U, LinspaceInvalidNumPoints)
{
// Test linspace with num < 2
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, 1.0f, 1), std::invalid_argument, "Number of points must be at least 2 for linspace");
}
TEST_F(TestBinDisc3U, LinspaceNaNValues)
{
// Test linspace with NaN values
float nan_val = std::numeric_limits<float>::quiet_NaN();
EXPECT_THROW_WITH_MESSAGE(linspace(nan_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be NaN");
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, nan_val, 3), std::invalid_argument, "Start and end values cannot be NaN");
}
TEST_F(TestBinDisc3U, LinspaceInfiniteValues)
{
// Test linspace with infinite values
float inf_val = std::numeric_limits<float>::infinity();
EXPECT_THROW_WITH_MESSAGE(linspace(inf_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be infinite");
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, inf_val, 3), std::invalid_argument, "Start and end values cannot be infinite");
}
TEST_F(TestBinDisc3U, PercentileEmptyData)
{
// Test percentile with empty data
samples_t empty_data = {};
std::vector<precision_t> percentiles = { 25.0f, 50.0f, 75.0f };
EXPECT_THROW_WITH_MESSAGE(percentile(empty_data, percentiles), std::invalid_argument, "Data cannot be empty for percentile calculation");
}
TEST_F(TestBinDisc3U, PercentileEmptyPercentiles)
{
// Test percentile with empty percentiles
samples_t data = { 1.0f, 2.0f, 3.0f };
std::vector<precision_t> empty_percentiles = {};
EXPECT_THROW_WITH_MESSAGE(percentile(data, empty_percentiles), std::invalid_argument, "Percentiles cannot be empty");
}
}

View File

@@ -1,34 +1,40 @@
cmake_minimum_required(VERSION 3.20)
set(CMAKE_CXX_STANDARD 11)
include(FetchContent)
include_directories(${GTEST_INCLUDE_DIRS})
find_package(arff-files REQUIRED)
find_package(GTest REQUIRED)
find_package(Torch CONFIG REQUIRED)
FetchContent_Declare(
googletest
URL https://github.com/google/googletest/archive/03597a01ee50ed33e9dfd640b249b4be3799d395.zip
include_directories(
${libtorch_INCLUDE_DIRS_DEBUG}
${fimdlp_SOURCE_DIR}/src
${arff-files_INCLUDE_DIRS}
${CMAKE_BINARY_DIR}/configured_files/include
)
# For Windows: Prevent overriding the parent project's compiler/linker settings
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googletest)
enable_testing()
add_executable(Metrics_unittest ../Metrics.cpp Metrics_unittest.cpp)
add_executable(FImdlp_unittest ../CPPFImdlp.cpp ArffFiles.cpp ../Metrics.cpp FImdlp_unittest.cpp)
add_executable(BinDisc_unittest ../BinDisc.cpp ArffFiles.cpp BinDisc_unittest.cpp)
add_executable(Metrics_unittest ${fimdlp_SOURCE_DIR}/src/Metrics.cpp Metrics_unittest.cpp)
target_link_libraries(Metrics_unittest GTest::gtest_main)
target_link_libraries(FImdlp_unittest GTest::gtest_main)
target_link_libraries(BinDisc_unittest GTest::gtest_main)
target_compile_options(Metrics_unittest PRIVATE --coverage)
target_compile_options(FImdlp_unittest PRIVATE --coverage)
target_compile_options(BinDisc_unittest PRIVATE --coverage)
target_link_options(Metrics_unittest PRIVATE --coverage)
add_executable(FImdlp_unittest FImdlp_unittest.cpp
${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp)
target_link_libraries(FImdlp_unittest GTest::gtest_main torch::torch)
target_compile_options(FImdlp_unittest PRIVATE --coverage)
target_link_options(FImdlp_unittest PRIVATE --coverage)
add_executable(BinDisc_unittest BinDisc_unittest.cpp ${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp)
target_link_libraries(BinDisc_unittest GTest::gtest_main torch::torch)
target_compile_options(BinDisc_unittest PRIVATE --coverage)
target_link_options(BinDisc_unittest PRIVATE --coverage)
add_executable(Discretizer_unittest Discretizer_unittest.cpp
${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp )
target_link_libraries(Discretizer_unittest GTest::gtest_main torch::torch)
target_compile_options(Discretizer_unittest PRIVATE --coverage)
target_link_options(Discretizer_unittest PRIVATE --coverage)
include(GoogleTest)
gtest_discover_tests(Metrics_unittest)
gtest_discover_tests(FImdlp_unittest)
gtest_discover_tests(BinDisc_unittest)
gtest_discover_tests(BinDisc_unittest)
gtest_discover_tests(Discretizer_unittest)

View File

@@ -0,0 +1,388 @@
// ****************************************************************
// SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX-FileType: SOURCE
// SPDX-License-Identifier: MIT
// ****************************************************************
#include <fstream>
#include <string>
#include <iostream>
#include <ArffFiles.hpp>
#include "gtest/gtest.h"
#include "Discretizer.h"
#include "BinDisc.h"
#include "CPPFImdlp.h"
#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
try { \
stmt; \
} catch (const etype& ex) { \
EXPECT_EQ(whatstring, std::string(ex.what())); \
throw; \
} \
, etype)
namespace mdlp {
const float margin = 1e-4;
static std::string set_data_path()
{
std::string path = "tests/datasets/";
std::ifstream file(path + "iris.arff");
if (file.is_open()) {
file.close();
return path;
}
return "datasets/";
}
const std::string data_path = set_data_path();
const labels_t iris_quantile = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2 };
TEST(Discretizer, Version)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto version = disc->version();
delete disc;
EXPECT_EQ("2.1.1", version);
}
TEST(Discretizer, BinIrisUniform)
{
ArffFiles file;
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
file.load(data_path + "iris.arff", true);
vector<samples_t>& X = file.getX();
auto y = labels_t();
disc->fit(X[0], y);
auto Xt = disc->transform(X[0]);
labels_t expected = { 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 2, 1, 2, 1, 2, 0, 2, 0, 0, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 0, 1, 1, 1, 2, 0, 1, 2, 1, 3, 2, 2, 3, 0, 3, 2, 3, 2, 2, 2, 1, 1, 2, 2, 3, 3, 1, 2, 1, 3, 2, 2, 3, 2, 1, 2, 3, 3, 3, 2, 2, 1, 3, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1 };
delete disc;
EXPECT_EQ(expected, Xt);
}
TEST(Discretizer, BinIrisQuantile)
{
ArffFiles file;
Discretizer* disc = new BinDisc(4, strategy_t::QUANTILE);
file.load(data_path + "iris.arff", true);
vector<samples_t>& X = file.getX();
auto y = labels_t();
disc->fit(X[0], y);
auto Xt = disc->transform(X[0]);
delete disc;
EXPECT_EQ(iris_quantile, Xt);
}
TEST(Discretizer, BinIrisQuantileTorch)
{
ArffFiles file;
Discretizer* disc = new BinDisc(4, strategy_t::QUANTILE);
file.load(data_path + "iris.arff", true);
auto X = file.getX();
auto y = file.getY();
auto X_torch = torch::tensor(X[0], torch::kFloat32);
auto yt = torch::tensor(y, torch::kInt32);
disc->fit_t(X_torch, yt);
torch::Tensor Xt = disc->transform_t(X_torch);
delete disc;
EXPECT_EQ(iris_quantile.size(), Xt.size(0));
for (int i = 0; i < iris_quantile.size(); ++i) {
EXPECT_EQ(iris_quantile.at(i), Xt[i].item<int>());
}
}
TEST(Discretizer, BinIrisQuantileTorchFit_transform)
{
ArffFiles file;
Discretizer* disc = new BinDisc(4, strategy_t::QUANTILE);
file.load(data_path + "iris.arff", true);
auto X = file.getX();
auto y = file.getY();
auto X_torch = torch::tensor(X[0], torch::kFloat32);
auto yt = torch::tensor(y, torch::kInt32);
torch::Tensor Xt = disc->fit_transform_t(X_torch, yt);
delete disc;
EXPECT_EQ(iris_quantile.size(), Xt.size(0));
for (unsigned long i = 0; i < iris_quantile.size(); ++i) {
EXPECT_EQ(iris_quantile.at(i), Xt[i].item<int>());
}
}
TEST(Discretizer, FImdlpIris)
{
auto labelsq = {
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0,
0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0,
3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3,
3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1,
2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3,
3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2,
};
labels_t expected = {
5, 3, 4, 4, 5, 5, 5, 5, 2, 4, 5, 5, 3, 3, 5, 5, 5, 5, 5, 5, 5, 5,
5, 4, 5, 3, 5, 5, 5, 4, 4, 5, 5, 5, 4, 4, 5, 4, 3, 5, 5, 0, 4, 5,
5, 3, 5, 4, 5, 4, 4, 4, 4, 0, 1, 1, 4, 0, 2, 0, 0, 3, 0, 2, 2, 4,
3, 0, 0, 0, 4, 1, 0, 1, 2, 3, 1, 3, 2, 0, 0, 0, 0, 0, 3, 5, 4, 0,
3, 0, 0, 3, 0, 0, 0, 3, 2, 2, 0, 1, 4, 0, 3, 2, 3, 3, 0, 2, 0, 5,
4, 0, 3, 0, 1, 4, 3, 5, 0, 0, 4, 1, 1, 0, 4, 4, 1, 3, 1, 3, 1, 5,
1, 1, 0, 3, 5, 4, 3, 4, 4, 4, 0, 4, 4, 3, 0, 3, 5, 3
};
ArffFiles file;
Discretizer* disc = new CPPFImdlp();
file.load(data_path + "iris.arff", true);
vector<samples_t>& X = file.getX();
labels_t& y = file.getY();
disc->fit(X[1], y);
auto computed = disc->transform(X[1]);
delete disc;
EXPECT_EQ(computed.size(), expected.size());
for (unsigned long i = 0; i < computed.size(); i++) {
EXPECT_EQ(computed[i], expected[i]);
}
}
TEST(Discretizer, TransformEmptyData)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
samples_t empty_data = {};
EXPECT_THROW_WITH_MESSAGE(disc->transform(empty_data), std::invalid_argument, "Data for transformation cannot be empty");
delete disc;
}
TEST(Discretizer, TransformNotFitted)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
samples_t data = { 1.0f, 2.0f, 3.0f };
EXPECT_THROW_WITH_MESSAGE(disc->transform(data), std::runtime_error, "Discretizer not fitted yet or no valid cut points found");
delete disc;
}
TEST(Discretizer, TensorValidationFit)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
// Test non-1D tensors
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor types
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
// Test mismatched sizes
auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
// Test empty tensors
auto X_empty = torch::tensor({}, torch::kFloat32);
auto y_empty = torch::tensor({}, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
delete disc;
}
TEST(Discretizer, TensorValidationTransform)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
// First fit with valid data
auto X_fit = torch::tensor({ 1.0f, 2.0f, 3.0f, 4.0f }, torch::kFloat32);
auto y_fit = torch::tensor({ 1, 2, 3, 4 }, torch::kInt32);
disc->fit_t(X_fit, y_fit);
// Test non-1D tensor
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor type
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_int), std::invalid_argument, "X tensor must be Float32 type");
// Test empty tensor
auto X_empty = torch::tensor({}, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_empty), std::invalid_argument, "Tensor cannot be empty");
delete disc;
}
TEST(Discretizer, TensorValidationFitTransform)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
// Test non-1D tensors
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor types
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
// Test mismatched sizes
auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
// Test empty tensors
auto X_empty = torch::tensor({}, torch::kFloat32);
auto y_empty = torch::tensor({}, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
delete disc;
}
}

tests/Experiments.hpp Normal file

@@ -0,0 +1,139 @@
// ****************************************************************
// SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX-FileType: SOURCE
// SPDX-License-Identifier: MIT
// ****************************************************************
#ifndef EXPERIMENTS_HPP
#define EXPERIMENTS_HPP
#include <sstream>
#include <iostream>
#include <string>
#include <fstream>
#include <vector>
#include <tuple>
#include "typesFImdlp.h"
template <typename T>
void show_vector(const std::vector<T>& data, std::string title)
{
std::cout << title << ": ";
std::string sep = "";
for (const auto& d : data) {
std::cout << sep << d;
sep = ", ";
}
std::cout << std::endl;
}
enum class experiment_t {
RANGE,
VECTOR
};
class Experiment {
public:
Experiment(float from_, float to_, float step_, int n_bins, std::string strategy, std::vector<int> data_discretized, std::vector<mdlp::precision_t> cutpoints) :
from_{ from_ }, to_{ to_ }, step_{ step_ }, n_bins_{ n_bins }, strategy_{ strategy }, discretized_data_{ data_discretized }, cutpoints_{ cutpoints }, type_{ experiment_t::RANGE }
{
validate_strategy();
}
Experiment(std::vector<mdlp::precision_t> dataset, int n_bins, std::string strategy, std::vector<int> data_discretized, std::vector<mdlp::precision_t> cutpoints) :
n_bins_{ n_bins }, strategy_{ strategy }, dataset_{ dataset }, discretized_data_{ data_discretized }, cutpoints_{ cutpoints }, type_{ experiment_t::VECTOR }
{
validate_strategy();
}
void validate_strategy()
{
if (strategy_ != "Q" && strategy_ != "U") {
throw std::invalid_argument("Invalid strategy " + strategy_);
}
}
float from_;
float to_;
float step_;
int n_bins_;
std::string strategy_;
std::vector<mdlp::precision_t> dataset_;
std::vector<int> discretized_data_;
std::vector<mdlp::precision_t> cutpoints_;
experiment_t type_;
};
class Experiments {
public:
Experiments(const std::string filename) : filename{ filename }
{
test_file.open(filename);
if (!test_file.is_open()) {
throw std::runtime_error("File " + filename + " not found");
}
exp_end = false;
}
~Experiments()
{
test_file.close();
}
bool end() const
{
return exp_end;
}
bool is_next()
{
while (std::getline(test_file, line) && line[0] == '#');
if (test_file.eof()) {
exp_end = true;
return false;
}
return true;
}
Experiment next()
{
return parse_experiment(line);
}
private:
std::tuple<float, float, float, int, std::string> parse_header(const std::string& line)
{
std::istringstream iss(line);
std::string from_, to_, step_, n_bins, strategy;
iss >> from_ >> to_ >> step_ >> n_bins >> strategy;
return { std::stof(from_), std::stof(to_), std::stof(step_), std::stoi(n_bins), strategy };
}
template <typename T>
std::vector<T> parse_vector(const std::string& line)
{
std::istringstream iss(line);
std::vector<T> data;
std::string d;
while (iss >> d) {
data.push_back(std::is_same<T, float>::value ? std::stof(d) : std::stoi(d));
}
return data;
}
Experiment parse_experiment(std::string& line)
{
// Read experiment lines
std::string experiment, data, cuts, strategy;
std::getline(test_file, experiment);
std::getline(test_file, data);
std::getline(test_file, cuts);
// split data into variables
float from_, to_, step_;
int n_bins;
std::vector<mdlp::precision_t> dataset;
auto data_discretized = parse_vector<int>(data);
auto cutpoints = parse_vector<mdlp::precision_t>(cuts);
if (line == "RANGE") {
tie(from_, to_, step_, n_bins, strategy) = parse_header(experiment);
return Experiment{ from_, to_, step_, n_bins, strategy, data_discretized, cutpoints };
}
strategy = experiment.substr(0, 1);
n_bins = std::stoi(experiment.substr(1, 1));
data = experiment.substr(3, experiment.size() - 4);
dataset = parse_vector<mdlp::precision_t>(data);
return Experiment(dataset, n_bins, strategy, data_discretized, cutpoints);
}
std::ifstream test_file;
std::string filename;
std::string line;
bool exp_end;
};
#endif


@@ -1,9 +1,15 @@
#include "gtest/gtest.h"
#include "../Metrics.h"
#include "../CPPFImdlp.h"
// ****************************************************************
// SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX-FileType: SOURCE
// SPDX-License-Identifier: MIT
// ****************************************************************
#include <fstream>
#include <iostream>
#include "ArffFiles.h"
#include <ArffFiles.hpp>
#include "gtest/gtest.h"
#include "Metrics.h"
#include "CPPFImdlp.h"
#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
try { \
@@ -34,13 +40,13 @@ namespace mdlp {
static string set_data_path()
{
string path = "../datasets/";
string path = "datasets/";
ifstream file(path + "iris.arff");
if (file.is_open()) {
file.close();
return path;
}
return "../../tests/datasets/";
return "tests/datasets/";
}
void checkSortedVector()
@@ -58,7 +64,7 @@ namespace mdlp {
{
EXPECT_EQ(computed.size(), expected.size());
for (unsigned long i = 0; i < computed.size(); i++) {
cout << "(" << computed[i] << ", " << expected[i] << ") ";
// cout << "(" << computed[i] << ", " << expected[i] << ") ";
EXPECT_NEAR(computed[i], expected[i], precision);
}
}
@@ -70,7 +76,7 @@ namespace mdlp {
X = X_;
y = y_;
indices = sortIndices(X, y);
cout << "* " << title << endl;
// cout << "* " << title << endl;
result = valueCutPoint(0, cut, 10);
EXPECT_NEAR(result.first, midPoint, precision);
EXPECT_EQ(result.second, limit);
@@ -89,9 +95,9 @@ namespace mdlp {
test.fit(X[feature], y);
EXPECT_EQ(test.get_depth(), depths[feature]);
auto computed = test.getCutPoints();
cout << "Feature " << feature << ": ";
// cout << "Feature " << feature << ": ";
checkCutPoints(computed, expected[feature]);
cout << endl;
// cout << endl;
}
}
};
@@ -107,38 +113,39 @@ namespace mdlp {
{
X = { 1, 2, 3 };
y = { 1, 2 };
EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size");
EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
}
TEST_F(TestFImdlp, FitErrorMinLengtMaxDepth)
TEST_F(TestFImdlp, FitErrorMinLength)
{
auto testLength = CPPFImdlp(2, 10, 0);
auto testDepth = CPPFImdlp(3, 0, 0);
X = { 1, 2, 3 };
y = { 1, 2, 3 };
EXPECT_THROW_WITH_MESSAGE(testLength.fit(X, y), invalid_argument, "min_length must be greater than 2");
EXPECT_THROW_WITH_MESSAGE(testDepth.fit(X, y), invalid_argument, "max_depth must be greater than 0");
EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(2, 10, 0), invalid_argument, "min_length must be greater than 2");
}
TEST_F(TestFImdlp, FitErrorMaxDepth)
{
EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 0, 0), invalid_argument, "max_depth must be greater than 0");
}
TEST_F(TestFImdlp, JoinFit)
{
samples_t X_ = { 1, 2, 2, 3, 4, 2, 3 };
labels_t y_ = { 0, 0, 1, 2, 3, 4, 5 };
cutPoints_t expected = { 1.5f, 2.5f };
cutPoints_t expected = { 1.0, 1.5f, 2.5f, 4.0 };
fit(X_, y_);
auto computed = getCutPoints();
EXPECT_EQ(computed.size(), expected.size());
checkCutPoints(computed, expected);
}
TEST_F(TestFImdlp, FitErrorMinCutPoints)
{
EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 10, -1), invalid_argument, "proposed_cuts must be non-negative");
}
TEST_F(TestFImdlp, FitErrorMaxCutPoints)
{
auto testmin = CPPFImdlp(2, 10, -1);
auto testmax = CPPFImdlp(3, 0, 200);
X = { 1, 2, 3 };
y = { 1, 2, 3 };
EXPECT_THROW_WITH_MESSAGE(testmin.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
EXPECT_THROW_WITH_MESSAGE(testmax.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
auto test = CPPFImdlp(3, 1, 8);
samples_t X_ = { 1, 2, 2, 3, 4, 2, 3 };
labels_t y_ = { 0, 0, 1, 2, 3, 4, 5 };
EXPECT_THROW_WITH_MESSAGE(test.fit(X_, y_), invalid_argument, "wrong proposed num_cuts value");
}
TEST_F(TestFImdlp, SortIndices)
@@ -160,6 +167,15 @@ namespace mdlp {
indices = { 1, 2, 0 };
}
TEST_F(TestFImdlp, SortIndicesOutOfBounds)
{
// Test for out of bounds exception in sortIndices
samples_t X_long = { 1.0f, 2.0f, 3.0f };
labels_t y_short = { 1, 2 };
EXPECT_THROW_WITH_MESSAGE(sortIndices(X_long, y_short), std::out_of_range, "Index out of bounds in sort comparison");
}
TEST_F(TestFImdlp, TestShortDatasets)
{
vector<precision_t> computed;
@@ -167,29 +183,31 @@ namespace mdlp {
y = { 1 };
fit(X, y);
computed = getCutPoints();
EXPECT_EQ(computed.size(), 0);
EXPECT_EQ(computed.size(), 2);
X = { 1, 3 };
y = { 1, 2 };
fit(X, y);
computed = getCutPoints();
EXPECT_EQ(computed.size(), 0);
EXPECT_EQ(computed.size(), 2);
X = { 2, 4 };
y = { 1, 2 };
fit(X, y);
computed = getCutPoints();
EXPECT_EQ(computed.size(), 0);
EXPECT_EQ(computed.size(), 2);
X = { 1, 2, 3 };
y = { 1, 2, 2 };
fit(X, y);
computed = getCutPoints();
EXPECT_EQ(computed.size(), 1);
EXPECT_NEAR(computed[0], 1.5, precision);
EXPECT_EQ(computed.size(), 3);
EXPECT_NEAR(computed[0], 1, precision);
EXPECT_NEAR(computed[1], 1.5, precision);
EXPECT_NEAR(computed[2], 3, precision);
}
TEST_F(TestFImdlp, TestArtificialDataset)
{
fit(X, y);
cutPoints_t expected = { 5.05f };
cutPoints_t expected = { 4.7, 5.05, 6.0 };
vector<precision_t> computed = getCutPoints();
EXPECT_EQ(computed.size(), expected.size());
for (unsigned long i = 0; i < computed.size(); i++) {
@@ -200,10 +218,10 @@ namespace mdlp {
TEST_F(TestFImdlp, TestIris)
{
vector<cutPoints_t> expected = {
{5.45f, 5.75f},
{2.75f, 2.85f, 2.95f, 3.05f, 3.35f},
{2.45f, 4.75f, 5.05f},
{0.8f, 1.75f}
{4.3, 5.45f, 5.75f, 7.9},
{2, 2.75f, 2.85f, 2.95f, 3.05f, 3.35f, 4.4},
{1, 2.45f, 4.75f, 5.05f, 6.9},
{0.1, 0.8f, 1.75f, 2.5}
};
vector<int> depths = { 3, 5, 4, 3 };
auto test = CPPFImdlp();
@@ -213,7 +231,7 @@ namespace mdlp {
TEST_F(TestFImdlp, ComputeCutPointsGCase)
{
cutPoints_t expected;
expected = { 1.5 };
expected = { 0, 1.5, 2 };
samples_t X_ = { 0, 1, 2, 2, 2 };
labels_t y_ = { 1, 1, 1, 2, 2 };
fit(X_, y_);
@@ -247,10 +265,10 @@ namespace mdlp {
// Set max_depth to 1
auto test = CPPFImdlp(3, 1, 0);
vector<cutPoints_t> expected = {
{5.45f},
{3.35f},
{2.45f},
{0.8f}
{4.3, 5.45f, 7.9},
{2, 3.35f, 4.4},
{1, 2.45f, 6.9},
{0.1, 0.8f, 2.5}
};
vector<int> depths = { 1, 1, 1, 1 };
test_dataset(test, "iris", expected, depths);
@@ -261,10 +279,10 @@ namespace mdlp {
auto test = CPPFImdlp(75, 100, 0);
// Set min_length to 75
vector<cutPoints_t> expected = {
{5.45f, 5.75f},
{2.85f, 3.35f},
{2.45f, 4.75f},
{0.8f, 1.75f}
{4.3, 5.45f, 5.75f, 7.9},
{2, 2.85f, 3.35f, 4.4},
{1, 2.45f, 4.75f, 6.9},
{0.1, 0.8f, 1.75f, 2.5}
};
vector<int> depths = { 3, 2, 2, 2 };
test_dataset(test, "iris", expected, depths);
@@ -275,10 +293,10 @@ namespace mdlp {
// Set min_length to 75
auto test = CPPFImdlp(75, 2, 0);
vector<cutPoints_t> expected = {
{5.45f, 5.75f},
{2.85f, 3.35f},
{2.45f, 4.75f},
{0.8f, 1.75f}
{4.3, 5.45f, 5.75f, 7.9},
{2, 2.85f, 3.35f, 4.4},
{1, 2.45f, 4.75f, 6.9},
{0.1, 0.8f, 1.75f, 2.5}
};
vector<int> depths = { 2, 2, 2, 2 };
test_dataset(test, "iris", expected, depths);
@@ -289,10 +307,10 @@ namespace mdlp {
// Set min_length to 75
auto test = CPPFImdlp(75, 2, 1);
vector<cutPoints_t> expected = {
{5.45f},
{2.85f},
{2.45f},
{0.8f}
{4.3, 5.45f, 7.9},
{2, 2.85f, 4.4},
{1, 2.45f, 6.9},
{0.1, 0.8f, 2.5}
};
vector<int> depths = { 2, 2, 2, 2 };
test_dataset(test, "iris", expected, depths);
@@ -304,10 +322,10 @@ namespace mdlp {
// Set min_length to 75
auto test = CPPFImdlp(75, 2, 0.2f);
vector<cutPoints_t> expected = {
{5.45f, 5.75f},
{2.85f, 3.35f},
{2.45f, 4.75f},
{0.8f, 1.75f}
{4.3, 5.45f, 5.75f, 7.9},
{2, 2.85f, 3.35f, 4.4},
{1, 2.45f, 4.75f, 6.9},
{0.1, 0.8f, 1.75f, 2.5}
};
vector<int> depths = { 2, 2, 2, 2 };
test_dataset(test, "iris", expected, depths);
@@ -327,7 +345,6 @@ namespace mdlp {
computed = compute_max_num_cut_points();
ASSERT_EQ(expected, computed);
}
}
TEST_F(TestFImdlp, TransformTest)
{
@@ -350,5 +367,61 @@ namespace mdlp {
for (unsigned long i = 0; i < computed.size(); i++) {
EXPECT_EQ(computed[i], expected[i]);
}
auto computed_ft = fit_transform(X[1], y);
EXPECT_EQ(computed_ft.size(), expected.size());
for (unsigned long i = 0; i < computed_ft.size(); i++) {
EXPECT_EQ(computed_ft[i], expected[i]);
}
}
TEST_F(TestFImdlp, SafeXAccessIndexOutOfBounds)
{
// Test safe_X_access with index out of bounds for indices array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2, 3 };
indices = { 0, 1 }; // shorter than expected
// This should trigger the first exception in safe_X_access (idx >= indices.size())
EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for indices array");
}
TEST_F(TestFImdlp, SafeXAccessXOutOfBounds)
{
// Test safe_X_access with real_idx out of bounds for X array
X = { 1.0f, 2.0f }; // shorter array
y = { 1, 2, 3 };
indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for X
// This should trigger the second exception in safe_X_access (real_idx >= X.size())
EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for X array");
}
TEST_F(TestFImdlp, SafeYAccessIndexOutOfBounds)
{
// Test safe_y_access with index out of bounds for indices array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2, 3 };
indices = { 0, 1 }; // shorter than expected
// This should trigger the first exception in safe_y_access (idx >= indices.size())
EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for indices array");
}
TEST_F(TestFImdlp, SafeYAccessYOutOfBounds)
{
// Test safe_y_access with real_idx out of bounds for y array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2 }; // shorter array
indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for y
// This should trigger the second exception in safe_y_access (real_idx >= y.size())
EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for y array");
}
TEST_F(TestFImdlp, SafeSubtractUnderflow)
{
// Test safe_subtract with underflow condition (b > a)
EXPECT_THROW_WITH_MESSAGE(safe_subtract(3, 5), std::underflow_error, "Subtraction would cause underflow");
}
}


@@ -1,14 +1,20 @@
// ****************************************************************
// SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
// SPDX-FileType: SOURCE
// SPDX-License-Identifier: MIT
// ****************************************************************
#include "gtest/gtest.h"
#include "../Metrics.h"
#include "Metrics.h"
namespace mdlp {
class TestMetrics: public Metrics, public testing::Test {
class TestMetrics : public Metrics, public testing::Test {
public:
labels_t y_ = { 1, 1, 1, 1, 1, 2, 2, 2, 2, 2 };
indices_t indices_ = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
precision_t precision = 0.000001f;
precision_t precision = 1e-6;
TestMetrics(): Metrics(y_, indices_) {};
TestMetrics() : Metrics(y_, indices_) {};
void SetUp() override
{

tests/datasets/tests.txt Normal file

@@ -0,0 +1,222 @@
#
# from, to, step, #bins, Q/U
# discretized data
# cut points
#
#
# Range experiments
#
RANGE
0, 100, 1, 4, Q
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
0.0, 24.75, 49.5, 74.25, 99.0
RANGE
0, 50, 1, 4, Q
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
0.0, 12.25, 24.5, 36.75, 49.0
RANGE
0, 100, 1, 3, Q
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.0, 33.0, 66.0, 99.0
RANGE
0, 50, 1, 3, Q
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.0, 16.33333, 32.66667, 49.0
RANGE
0, 10, 1, 3, Q
0, 0, 0, 1, 1, 1, 2, 2, 2, 2
0.0, 3.0, 6.0, 9.0
RANGE
0, 100, 1, 4, U
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
0.0, 24.75, 49.5, 74.25, 99.0
RANGE
0, 50, 1, 4, U
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
0.0, 12.25, 24.5, 36.75, 49.0
RANGE
0, 100, 1, 3, U
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.0, 33.0, 66.0, 99.0
RANGE
0, 50, 1, 3, U
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.0, 16.33333, 32.66667, 49.0
RANGE
0, 10, 1, 3, U
0, 0, 0, 1, 1, 1, 2, 2, 2, 2
0.0, 3.0, 6.0, 9.0
RANGE
1, 10, 1, 3, Q
0, 0, 0, 1, 1, 1, 2, 2, 2
1.0, 3.66667, 6.33333, 9.0
RANGE
1, 10, 1, 3, U
0, 0, 0, 1, 1, 1, 2, 2, 2
1.0, 3.66667, 6.33333, 9.0
RANGE
1, 11, 1, 3, Q
0, 0, 0, 1, 1, 1, 2, 2, 2, 2
1.0, 4.0, 7.0, 10.0
RANGE
1, 11, 1, 3, U
0, 0, 0, 1, 1, 1, 2, 2, 2, 2
1.0, 4.0, 7.0, 10.0
RANGE
1, 12, 1, 3, Q
0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2
1.0, 4.33333, 7.66667, 11.0
RANGE
1, 12, 1, 3, U
0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2
1.0, 4.33333, 7.66667, 11.0
RANGE
1, 13, 1, 3, Q
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2
1.0, 4.66667, 8.33333, 12.0
RANGE
1, 13, 1, 3, U
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2
1.0, 4.66667, 8.33333, 12.0
RANGE
1, 14, 1, 3, Q
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.0, 9.0, 13.0
RANGE
1, 14, 1, 3, U
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.0, 9.0, 13.0
RANGE
1, 15, 1, 3, Q
0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.33333, 9.66667, 14.0
RANGE
1, 15, 1, 3, U
0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.33333, 9.66667, 14.0
#
# Vector experiments
#
VECTOR
Q3[3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0]
1, 0, 0, 1, 0, 0, 1, 0, 0
1.0, 1.66667, 3.0
VECTOR
U3[3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0]
2, 0, 0, 2, 0, 0, 2, 0, 0
1.0, 1.66667, 2.33333, 3.0
VECTOR
Q3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0]
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2
1.0, 4.66667, 8.33333, 12.0
VECTOR
U3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0]
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2
1.0, 4.66667, 8.33333, 12.0
VECTOR
Q3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.0, 9.0, 13.0
VECTOR
U3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]
0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.0, 9.0, 13.0
VECTOR
Q3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]
0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.33333, 9.66667, 14.0
VECTOR
U3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]
0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.33333, 9.66667, 14.0
VECTOR
Q3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.66667, 10.33333, 15.0
VECTOR
U3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2
1.0, 5.66667, 10.33333, 15.0
VECTOR
Q3[15.0, 8.0, 12.0, 14.0, 6.0, 1.0, 13.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0]
2, 1, 2, 2, 1, 0, 2, 2, 1, 1, 1, 0, 0, 0, 0
1.0, 5.66667, 10.33333, 15.0
VECTOR
U3[15.0, 8.0, 12.0, 14.0, 6.0, 1.0, 13.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0]
2, 1, 2, 2, 1, 0, 2, 2, 1, 1, 1, 0, 0, 0, 0
1.0, 5.66667, 10.33333, 15.0
VECTOR
Q3[0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0]
0, 1, 1, 1, 1, 1, 2, 2, 2, 2
0.0, 1.0, 3.0, 4.0
VECTOR
U3[0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0]
0, 0, 0, 0, 1, 1, 2, 2, 2, 2
0.0, 1.33333, 2.66667, 4.0
#
# Vector experiments with iris
#
VECTOR
Q3[5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5.0, 5.0, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5.0, 5.5, 4.9, 4.4, 5.1, 5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5.0, 5.9, 6.0, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6.0, 5.7, 5.5, 5.5, 5.8, 6.0, 5.4, 6.0, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 1, 2, 1, 2, 0, 2, 0, 0, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 2, 1, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1
4.3, 5.4, 6.3, 7.9
VECTOR
U3[5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5.0, 5.0, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5.0, 5.5, 4.9, 4.4, 5.1, 5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5.0, 5.9, 6.0, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6.0, 5.7, 5.5, 5.5, 5.8, 6.0, 5.4, 6.0, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 2, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 0, 1, 2, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 2, 1, 1, 2, 0, 2, 2, 2, 1, 1, 2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 1, 1, 1, 2, 2, 2, 1, 2, 2, 2, 1, 1, 1, 1
4.3, 5.5, 6.7, 7.9
VECTOR
Q4[5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5.0, 5.0, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5.0, 5.5, 4.9, 4.4, 5.1, 5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5.0, 5.9, 6.0, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6.0, 5.7, 5.5, 5.5, 5.8, 6.0, 5.4, 6.0, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2
4.3, 5.1, 5.8, 6.4, 7.9
VECTOR
U4[5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5.0, 5.0, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5.0, 5.5, 4.9, 4.4, 5.1, 5.0, 4.5, 4.4, 5.0, 5.1, 4.8, 5.1, 4.6, 5.3, 5.0, 7.0, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5.0, 5.9, 6.0, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6.0, 5.7, 5.5, 5.5, 5.8, 6.0, 5.4, 6.0, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5.0, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6.0, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6.0, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9]
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 2, 1, 2, 1, 2, 0, 2, 1, 0, 1, 1, 2, 1, 2, 1, 1, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 1, 1, 1, 2, 0, 1, 2, 1, 3, 2, 2, 3, 0, 3, 2, 3, 2, 2, 2, 1, 1, 2, 2, 3, 3, 1, 2, 1, 3, 2, 2, 3, 2, 2, 2, 3, 3, 3, 2, 2, 2, 3, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1
4.3, 5.2, 6.1, 7.0, 7.9
VECTOR
Q3[3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0, 3.0, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3.0, 2.5, 2.8, 3.2, 3.0, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3.0, 2.8, 3.0, 2.8, 3.8, 2.8, 2.8, 2.6, 3.0, 3.4, 3.1, 3.0, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]
2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 0, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 0, 0, 0, 2, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 2, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 2, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 2, 0, 1, 1, 1, 1, 0, 1, 0, 2, 2, 0, 1, 0, 0, 2, 1, 2, 0, 0, 2, 0, 0, 0, 2, 2, 0, 1, 0, 1, 0, 2, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 0, 2, 2, 1, 0, 1, 2, 1
2.0, 2.9, 3.2, 4.4
VECTOR
U3[3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0, 3.0, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3.0, 2.5, 2.8, 3.2, 3.0, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3.0, 2.8, 3.0, 2.8, 3.8, 2.8, 2.8, 2.6, 3.0, 3.4, 3.1, 3.0, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]
1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2, 2, 1, 2, 2, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 2, 1, 1, 1, 0, 1, 1, 2, 1, 2, 1, 2, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 2, 1, 0, 1, 0, 1, 1, 1, 2, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1
2.0, 2.8, 3.6, 4.4
VECTOR
Q4[3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0, 3.0, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3.0, 2.5, 2.8, 3.2, 3.0, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3.0, 2.8, 3.0, 2.8, 3.8, 2.8, 2.8, 2.6, 3.0, 3.4, 3.1, 3.0, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]
3, 2, 2, 2, 3, 3, 3, 3, 1, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3, 3, 3, 2, 2, 3, 3, 2, 3, 3, 0, 2, 3, 3, 2, 3, 2, 3, 3, 2, 2, 2, 0, 1, 1, 3, 0, 1, 0, 0, 2, 0, 1, 1, 2, 2, 0, 0, 0, 2, 1, 0, 1, 1, 2, 1, 2, 1, 0, 0, 0, 0, 0, 2, 3, 2, 0, 2, 0, 0, 2, 0, 0, 0, 2, 1, 1, 0, 1, 3, 0, 2, 1, 2, 2, 0, 1, 0, 3, 2, 0, 2, 0, 1, 2, 2, 3, 0, 0, 2, 1, 1, 0, 3, 2, 1, 2, 1, 2, 1, 3, 1, 1, 0, 2, 3, 2, 2, 2, 2, 2, 0, 2, 3, 2, 0, 2, 3, 2
2.0, 2.8, 3.0, 3.3, 4.4
VECTOR
U4[3.5, 3.0, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3.0, 3.0, 4.0, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3.0, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3.0, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3.0, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2.0, 3.0, 2.2, 2.9, 2.9, 3.1, 3.0, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3.0, 2.8, 3.0, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3.0, 3.4, 3.1, 2.3, 3.0, 2.5, 2.6, 3.0, 2.6, 2.3, 2.7, 3.0, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3.0, 2.9, 3.0, 3.0, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3.0, 2.5, 2.8, 3.2, 3.0, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3.0, 2.8, 3.0, 2.8, 3.8, 2.8, 2.8, 2.6, 3.0, 3.4, 3.1, 3.0, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3.0, 2.5, 3.0, 3.4, 3.0]
2, 1, 2, 1, 2, 3, 2, 2, 1, 1, 2, 2, 1, 1, 3, 3, 3, 2, 3, 3, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 3, 3, 1, 2, 2, 2, 1, 2, 2, 0, 2, 2, 3, 1, 3, 2, 2, 2, 2, 2, 1, 0, 1, 1, 2, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 2, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 2, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 2, 1, 1, 1, 1, 1, 0, 1, 0, 2, 2, 1, 1, 0, 1, 2, 1, 3, 1, 0, 2, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 1, 2, 1
2.0, 2.6, 3.2, 3.8, 4.4
VECTOR
Q3[1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1.0, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1, 6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
1.0, 2.63333, 4.9, 6.9
VECTOR
U3[1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1.0, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1, 6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
1.0, 2.96667, 4.93333, 6.9
VECTOR
Q4[1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1.0, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1, 6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 2, 2, 2, 1, 2, 2, 2, 1, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 3, 2, 2, 2, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 2, 3, 2, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3
1.0, 1.6, 4.35, 5.1, 6.9
VECTOR
U4[1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1.0, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4.0, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4.0, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4.0, 4.9, 4.7, 4.3, 4.4, 4.8, 5.0, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4.0, 4.4, 4.6, 4.0, 3.3, 4.2, 4.2, 4.2, 4.3, 3.0, 4.1, 6.0, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5.0, 5.1, 5.3, 5.5, 6.7, 6.9, 5.0, 5.7, 4.9, 6.7, 4.9, 5.7, 6.0, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5.0, 5.2, 5.4, 5.1]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, 3, 2, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3, 2, 2, 2, 3, 3, 3, 2, 3, 2, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 2, 2, 3, 2, 2, 3, 3, 2, 2, 2, 2, 2
1.0, 2.475, 3.95, 5.425, 6.9
VECTOR
Q3[0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2.0, 1.9, 2.1, 2.0, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2.0, 2.0, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2.0, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2.0, 2.3, 1.8]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.1, 0.86667, 1.6, 2.5
VECTOR
U3[0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2.0, 1.9, 2.1, 2.0, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2.0, 2.0, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2.0, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2.0, 2.3, 1.8]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
0.1, 0.9, 1.7, 2.5
VECTOR
Q4[0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2.0, 1.9, 2.1, 2.0, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2.0, 2.0, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2.0, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2.0, 2.3, 1.8]
0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 1, 2, 2, 2, 2, 1, 2, 1, 3, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 2, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3
0.1, 0.3, 1.3, 1.8, 2.5
VECTOR
U4[0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1.0, 1.3, 1.4, 1.0, 1.5, 1.0, 1.4, 1.3, 1.4, 1.5, 1.0, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1.0, 1.1, 1.0, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1.0, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2.0, 1.9, 2.1, 2.0, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2.0, 2.0, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2.0, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2.0, 2.3, 1.8]
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 1, 2, 2, 2, 2, 1, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 2, 3, 3, 3, 2, 3, 3, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 2, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2, 2, 3, 2, 3, 3, 3, 2, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2
0.1, 0.7, 1.3, 1.9, 2.5


@@ -1,18 +0,0 @@
#!/bin/bash
if [ -d build ] ; then
rm -fr build
fi
if [ -d gcovr-report ] ; then
rm -fr gcovr-report
fi
cmake -S . -B build -Wno-dev
cmake --build build
cd build
ctest --output-on-failure
cd ..
mkdir gcovr-report
#lcov --capture --directory ./ --output-file lcoverage/main_coverage.info
#lcov --remove lcoverage/main_coverage.info 'v1/*' '/Applications/*' '*/tests/*' --output-file lcoverage/main_coverage.info -q
#lcov --list lcoverage/main_coverage.info
cd ..
gcovr --gcov-filter "CPPFImdlp.cpp" --gcov-filter "Metrics.cpp" --gcov-filter "BinDisc.cpp" --txt --sonarqube=tests/gcovr-report/coverage.xml --exclude-noncode-lines


@@ -1,404 +0,0 @@
from scipy.io.arff import loadarff
from sklearn.preprocessing import KBinsDiscretizer
def test(clf, X, expected, title):
X = [[x] for x in X]
clf.fit(X)
computed = [int(x[0]) for x in clf.transform(X)]
print(f"{title}")
print(f"{computed=}")
print(f"{expected=}")
assert computed == expected
print("-" * 80)
# Test Uniform Strategy
clf3u = KBinsDiscretizer(
n_bins=3, encode="ordinal", strategy="uniform", subsample=200_000
)
clf3q = KBinsDiscretizer(
n_bins=3, encode="ordinal", strategy="quantile", subsample=200_000
)
clf4u = KBinsDiscretizer(
n_bins=4, encode="ordinal", strategy="uniform", subsample=200_000
)
clf4q = KBinsDiscretizer(
n_bins=4, encode="ordinal", strategy="quantile", subsample=200_000
)
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
test(clf3u, X, labels, title="Easy3BinsUniform")
test(clf3q, X, labels, title="Easy3BinsQuantile")
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
# In C++ both strategies yield the same result, unlike here
labels2 = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
test(clf3u, X, labels, title="X10BinsUniform")
test(clf3q, X, labels2, title="X10BinsQuantile")
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
labels = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
# In C++ both strategies yield the same result, unlike here
# labels2 = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2]
test(clf3u, X, labels, title="X11BinsUniform")
test(clf3q, X, labels, title="X11BinsQuantile")
#
X = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
labels = [0, 0, 0, 0, 0, 0]
test(clf3u, X, labels, title="ConstantUniform")
test(clf3q, X, labels, title="ConstantQuantile")
#
X = [3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0]
labels = [2, 0, 0, 2, 0, 0, 2, 0, 0]
labels2 = [1, 0, 0, 1, 0, 0, 1, 0, 0]  # same as in C++
test(clf3u, X, labels, title="EasyRepeatedUniform")
test(clf3q, X, labels2, title="EasyRepeatedQuantile")
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0]
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
test(clf4u, X, labels, title="Easy4BinsUniform")
test(clf4q, X, labels, title="Easy4BinsQuantile")
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]
labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
test(clf4u, X, labels, title="X13BinsUniform")
test(clf4q, X, labels, title="X13BinsQuantile")
#
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]
labels = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
test(clf4u, X, labels, title="X14BinsUniform")
test(clf4q, X, labels, title="X14BinsQuantile")
#
X1 = [15.0, 8.0, 12.0, 14.0, 6.0, 1.0, 13.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0]
X2 = [15.0, 13.0, 12.0, 14.0, 6.0, 1.0, 8.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0]
labels1 = [3, 2, 3, 3, 1, 0, 3, 2, 2, 2, 1, 0, 0, 1, 0]
labels2 = [3, 3, 3, 3, 1, 0, 2, 2, 2, 2, 1, 0, 0, 1, 0]
test(clf4u, X1, labels1, title="X15BinsUniform")
test(clf4q, X2, labels2, title="X15BinsQuantile")
#
X = [0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0]
labels = [0, 1, 1, 1, 2, 2, 3, 3, 3, 3]
test(clf4u, X, labels, title="RepeatedValuesUniform")
test(clf4q, X, labels, title="RepeatedValuesQuantile")
print(f"Uniform {clf4u.bin_edges_=}")
print(f"Quantile {clf4q.bin_edges_=}")
print("-" * 80)
#
data, meta = loadarff("tests/datasets/iris.arff")
labelsu = [
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
1,
1,
1,
0,
1,
0,
1,
0,
0,
0,
0,
0,
0,
1,
1,
0,
0,
1,
1,
1,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0,
3,
2,
2,
1,
2,
1,
2,
0,
2,
1,
0,
1,
1,
2,
1,
2,
1,
1,
2,
1,
1,
2,
2,
2,
2,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
2,
2,
1,
1,
1,
2,
1,
0,
1,
1,
1,
2,
0,
1,
2,
1,
3,
2,
2,
3,
0,
3,
2,
3,
2,
2,
2,
1,
1,
2,
2,
3,
3,
1,
2,
1,
3,
2,
2,
3,
2,
2,
2,
3,
3,
3,
2,
2,
2,
3,
2,
2,
1,
2,
2,
2,
1,
2,
2,
2,
2,
2,
2,
1,
]
labelsq = [
1,
0,
0,
0,
0,
1,
0,
0,
0,
0,
1,
0,
0,
0,
2,
1,
1,
1,
1,
1,
1,
1,
0,
1,
0,
0,
0,
1,
1,
0,
0,
1,
1,
1,
0,
0,
1,
0,
0,
1,
0,
0,
0,
0,
1,
0,
1,
0,
1,
0,
3,
3,
3,
1,
3,
1,
2,
0,
3,
1,
0,
2,
2,
2,
1,
3,
1,
2,
2,
1,
2,
2,
2,
2,
3,
3,
3,
3,
2,
1,
1,
1,
2,
2,
1,
2,
3,
2,
1,
1,
1,
2,
2,
0,
1,
1,
1,
2,
1,
1,
2,
2,
3,
2,
3,
3,
0,
3,
3,
3,
3,
3,
3,
1,
2,
3,
3,
3,
3,
2,
3,
1,
3,
2,
3,
3,
2,
2,
3,
3,
3,
3,
3,
2,
2,
3,
2,
3,
2,
3,
3,
3,
2,
3,
3,
3,
2,
3,
2,
2,
]
test(clf4u, data["sepallength"], labelsu, title="IrisUniform")
test(clf4q, data["sepallength"], labelsq, title="IrisQuantile")
# print("Labels")
# print(labels)
# print("Expected")
# print(expected)
# for i in range(len(labels)):
# if labels[i] != expected[i]:
# print(f"Error at {i} {labels[i]} != {expected[i]}")

tests/tests_do.py Normal file

@@ -0,0 +1,71 @@
# ***************************************************************
# SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
# SPDX-FileType: SOURCE
# SPDX-License-Identifier: MIT
# ***************************************************************
import json
from sklearn.preprocessing import KBinsDiscretizer
with open("datasets/tests.txt") as f:
data = f.readlines()
data = [x.strip() for x in data if x[0] != "#"]
errors = False
for i in range(0, len(data), 4):
experiment_type = data[i]
print("Experiment:", data[i + 1])
if experiment_type == "RANGE":
range_data = data[i + 1]
from_, to_, step_, n_bins_, strategy_ = range_data.split(",")
X = [[float(x)] for x in range(int(from_), int(to_), int(step_))]
else:
strategy_ = data[i + 1][0]
n_bins_ = data[i + 1][1]
vector = data[i + 1][2:]
X = [[float(x)] for x in json.loads(vector)]
strategy = "quantile" if strategy_.strip() == "Q" else "uniform"
disc = KBinsDiscretizer(
n_bins=int(n_bins_),
encode="ordinal",
strategy=strategy,
)
expected_data = data[i + 2]
cuts_data = data[i + 3]
disc.fit(X)
#
# Normalize the cutpoints to remove numerical errors such as 33.0000000001
# instead of 33
#
for j in range(len(disc.bin_edges_[0])):
disc.bin_edges_[0][j] = round(disc.bin_edges_[0][j], 5)
result = disc.transform(X)
result = [int(x) for x in result.flatten()]
expected = [int(x) for x in expected_data.split(",")]
#
# Check the Results
#
assert len(result) == len(expected)
for j in range(len(result)):
if result[j] != expected[j]:
print("* Error at", j, "Expected=", expected[j], "Result=", result[j])
errors = True
expected_cuts = disc.bin_edges_[0]
computed_cuts = [float(x) for x in cuts_data.split(",")]
assert len(expected_cuts) == len(computed_cuts)
for j in range(len(expected_cuts)):
if round(expected_cuts[j], 5) != computed_cuts[j]:
print(
"* Error at",
j,
"Expected=",
expected_cuts[j],
"Result=",
computed_cuts[j],
)
errors = True
if errors:
raise Exception("There were errors!")
print("*** All tests run successfully! ***")
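The four-line record layout that tests_do.py reads from datasets/tests.txt can be illustrated with a tiny hand-made record; a minimal sketch (the sample values below are illustrative, not taken from the real file):

```python
import json

# Each record in tests.txt spans four lines:
#   1. experiment type ("RANGE" or "VECTOR")
#   2. the experiment spec (for VECTOR: strategy letter, bin count, JSON list)
#   3. the expected discretized labels, comma-separated
#   4. the expected cut points, comma-separated
record = [
    "VECTOR",
    "Q3[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]",
    "0, 0, 1, 1, 2, 2",
    "1.0, 2.66667, 4.33333, 6.0",
]
strategy_, n_bins_, vector = record[1][0], record[1][1], record[1][2:]
strategy = "quantile" if strategy_ == "Q" else "uniform"
X = [[float(x)] for x in json.loads(vector)]
expected = [int(x) for x in record[2].split(",")]
cuts = [float(x) for x in record[3].split(",")]
assert strategy == "quantile" and int(n_bins_) == 3
assert len(X) == len(expected) == 6 and len(cuts) == 4
```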

tests/tests_generate.ipynb Normal file

@@ -0,0 +1,209 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import KBinsDiscretizer\n",
"from sklearn.datasets import load_iris"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"experiments_range = [\n",
" [0, 100, 1, 4, \"Q\"],\n",
" [0, 50, 1, 4, \"Q\"],\n",
" [0, 100, 1, 3, \"Q\"],\n",
" [0, 50, 1, 3, \"Q\"],\n",
" [0, 10, 1, 3, \"Q\"],\n",
" [0, 100, 1, 4, \"U\"],\n",
" [0, 50, 1, 4, \"U\"],\n",
" [0, 100, 1, 3, \"U\"],\n",
" [0, 50, 1, 3, \"U\"],\n",
"# \n",
" [0, 10, 1, 3, \"U\"],\n",
" [1, 10, 1, 3, \"Q\"],\n",
" [1, 10, 1, 3, \"U\"],\n",
" [1, 11, 1, 3, \"Q\"],\n",
" [1, 11, 1, 3, \"U\"],\n",
" [1, 12, 1, 3, \"Q\"],\n",
" [1, 12, 1, 3, \"U\"],\n",
" [1, 13, 1, 3, \"Q\"],\n",
" [1, 13, 1, 3, \"U\"],\n",
" [1, 14, 1, 3, \"Q\"],\n",
" [1, 14, 1, 3, \"U\"],\n",
" [1, 15, 1, 3, \"Q\"],\n",
" [1, 15, 1, 3, \"U\"]\n",
"]\n",
"experiments_vectors = [\n",
" (3, [3.0, 1.0, 1.0, 3.0, 1.0, 1.0, 3.0, 1.0, 1.0]),\n",
" (3, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0]),\n",
" (3, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]),\n",
" (3, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0]),\n",
" (3, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0]),\n",
" (3, [15.0, 8.0, 12.0, 14.0, 6.0, 1.0, 13.0, 11.0, 10.0, 9.0, 7.0, 4.0, 3.0, 5.0, 2.0]),\n",
" (3, [0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0])\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/rmontanana/miniconda3/lib/python3.11/site-packages/sklearn/preprocessing/_discretization.py:307: UserWarning: Bins whose width are too small (i.e., <= 1e-8) in feature 0 are removed. Consider decreasing the number of bins.\n",
" warnings.warn(\n"
]
}
],
"source": [
"def write_lists(file, data, cuts):\n",
" sep = \"\"\n",
" for res in data:\n",
" file.write(f\"{sep}{int(res):d}\")\n",
" sep= \", \"\n",
" file.write(\"\\n\")\n",
" sep = \"\"\n",
" for res in cuts:\n",
" file.write(sep + str(round(res,5)))\n",
" sep = \", \"\n",
" file.write(\"\\n\")\n",
"\n",
"def normalize_cuts(cuts):\n",
" #\n",
" # Normalize the cutpoints to remove numerical errors such as 33.0000000001\n",
" # instead of 33\n",
" #\n",
" for k in range(cuts.shape[0]):\n",
" for i in range(len(cuts[k])):\n",
" cuts[k][i] = round(cuts[k][i], 5)\n",
"\n",
"with open(\"datasets/tests.txt\", \"w\") as file:\n",
" file.write(\"#\\n\")\n",
" file.write(\"# from, to, step, #bins, Q/U\\n\")\n",
" file.write(\"# discretized data\\n\")\n",
" file.write(\"# cut points\\n\")\n",
" file.write(\"#\\n\")\n",
" #\n",
" # Range experiments\n",
" #\n",
" file.write(\"#\\n\")\n",
" file.write(\"# Range experiments\\n\")\n",
" file.write(\"#\\n\")\n",
" for experiment in experiments_range:\n",
" file.write(\"RANGE\\n\")\n",
" (from_, to_, step_, bins_, strategy) = experiment\n",
" disc = KBinsDiscretizer(n_bins=bins_, encode='ordinal', strategy='quantile' if strategy.strip() == \"Q\" else 'uniform')\n",
" data = [[x] for x in range(from_, to_, step_)]\n",
" disc.fit(data)\n",
" normalize_cuts(disc.bin_edges_)\n",
" result = disc.transform(data)\n",
" file.write(f\"{from_}, {to_}, {step_}, {bins_}, {strategy}\\n\")\n",
" write_lists(file, result, disc.bin_edges_[0])\n",
" #\n",
" # Vector experiments\n",
" #\n",
" file.write(\"#\\n\")\n",
" file.write(\"# Vector experiments\\n\")\n",
" file.write(\"#\\n\")\n",
" for n_bins, experiment in experiments_vectors:\n",
" for strategy in [\"Q\", \"U\"]:\n",
" file.write(\"VECTOR\\n\")\n",
" file.write(f\"{strategy}{n_bins}{experiment}\\n\")\n",
" disc = KBinsDiscretizer(\n",
" n_bins=n_bins,\n",
" encode=\"ordinal\",\n",
" \n",
" strategy=\"quantile\" if strategy.strip() == \"Q\" else \"uniform\",\n",
" )\n",
" data = [[x] for x in experiment]\n",
" disc.fit(data)\n",
" normalize_cuts(disc.bin_edges_)\n",
" result = disc.transform(data)\n",
" write_lists(file, result, disc.bin_edges_[0])\n",
" #\n",
" # Vector experiments iris\n",
" #\n",
" file.write(\"#\\n\");\n",
" file.write(\"# Vector experiments with iris\\n\");\n",
" file.write(\"#\\n\");\n",
" X, y = load_iris(return_X_y=True)\n",
" for i in range(X.shape[1]):\n",
" for n_bins in [3, 4]:\n",
" for strategy in [\"Q\", \"U\"]:\n",
" file.write(\"VECTOR\\n\")\n",
" experiment = X[:, i]\n",
" file.write(f\"{strategy}{n_bins}{experiment.tolist()}\\n\")\n",
" disc = KBinsDiscretizer(\n",
" n_bins=n_bins,\n",
" encode=\"ordinal\",\n",
" strategy=\"quantile\" if strategy.strip() == \"Q\" else \"uniform\")\n",
" data = [[x] for x in experiment]\n",
" disc.fit(data)\n",
" normalize_cuts(disc.bin_edges_)\n",
" result = disc.transform(data)\n",
" write_lists(file, result, disc.bin_edges_[0])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cut points: [array([ 0., 33., 66., 99.])]\n",
"Mistaken transformed data disc.transform([[33]]) = [[0.]]\n",
"Reason of the mistake the cutpoint has decimals (double): 33.00000000000001\n"
]
}
],
"source": [
"#\n",
"# Proving the mistakes due to floating point precision\n",
"#\n",
"from sklearn.preprocessing import KBinsDiscretizer\n",
"\n",
"data = [[x] for x in range(100)]\n",
"disc = KBinsDiscretizer(n_bins=3, encode=\"ordinal\", strategy=\"quantile\")\n",
"disc.fit(data)\n",
"print(\"Cut points: \", disc.bin_edges_)\n",
"print(\"Mistaken transformed data disc.transform([[33]]) =\", disc.transform([[33]]))\n",
"print(\"Reason of the mistake the cutpoint has decimals (double): \", disc.bin_edges_[0][1])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.1.undefined"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
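The precision issue shown in the notebook's last cell also suggests the fix the test scripts apply: rounding the learned edges before transforming. A minimal sketch (assuming scikit-learn and NumPy are available):

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

data = [[x] for x in range(100)]
disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
disc.fit(data)
# The middle quantile edge should be exactly 33, but it may carry
# floating-point noise (e.g. 33.00000000000001), pushing the value 33
# into bin 0. Rounding the edges in place, as tests_do.py does,
# removes the noise so 33 lands in bin 1.
disc.bin_edges_[0] = np.round(disc.bin_edges_[0], 5)
assert disc.bin_edges_[0][1] == 33.0
assert int(disc.transform([[33]])[0][0]) == 1
```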

update_coverage.py Normal file

@@ -0,0 +1,38 @@
# ***************************************************************
# SPDX-FileCopyrightText: Copyright 2024 Ricardo Montañana Gómez
# SPDX-FileType: SOURCE
# SPDX-License-Identifier: MIT
# ***************************************************************
import subprocess
import sys
readme_file = "README.md"
print("Updating coverage...")
# Generate badge line
output = subprocess.check_output(
"lcov --summary " + sys.argv[1] + "/coverage.info",
shell=True,
)
value = output.decode("utf-8").strip()
percentage = 0
for line in value.splitlines():
if "lines" in line:
percentage = float(line.split(":")[1].split("%")[0])
break
print(f"Coverage: {percentage}%")
if percentage < 90:
print("⛔Coverage is less than 90%. I won't update the badge.")
sys.exit(1)
percentage_label = str(percentage).replace(".", ",")
coverage_line = f"[![Coverage Badge](https://img.shields.io/badge/Coverage-{percentage_label}%25-green)](html/index.html)"
# Update README.md
with open(readme_file, "r") as f:
lines = f.readlines()
with open(readme_file, "w") as f:
for line in lines:
if "img.shields.io/badge/Coverage" in line:
f.write(coverage_line + "\n")
else:
f.write(line)
print(f"✅Coverage updated with value: {percentage}")
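The lcov summary parsing above can be exercised in isolation without running lcov; a minimal sketch using a hypothetical summary excerpt (the exact lcov output format can vary between versions):

```python
def parse_lines_percentage(summary: str) -> float:
    """Extract the 'lines' coverage percentage from `lcov --summary` output."""
    for line in summary.splitlines():
        if "lines" in line:
            return float(line.split(":")[1].split("%")[0])
    raise ValueError("no 'lines' entry found in lcov summary")

# Hypothetical excerpt of `lcov --summary` output.
example = """Summary coverage rate:
  lines......: 97.5% (195 of 200 lines)
  functions..: 100.0% (20 of 20 functions)
"""
assert parse_lines_percentage(example) == 97.5
# The badge label swaps the decimal point for a comma,
# mirroring what update_coverage.py writes into the shields.io URL.
label = str(parse_lines_percentage(example)).replace(".", ",")
assert label == "97,5"
```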