9 Commits

Author: Ricardo Montañana Gómez

SHA1 Message Date
08d8910b34 Add version 2.7.1 2025-07-16 16:11:16 +02:00
6d8b55a808 Fix conan (#10)
* Fix debug conan build target

* Add viewcoverage and fix coverage generation

* Add more tests to cover new integrity checks

* Add tests to accomplish 100%

* Fix conan-create makefile target
2025-07-02 20:09:34 +02:00
c1759ba1ce Fix conan build 2025-06-28 19:17:44 +02:00
f1dae498ac Fix tests 2025-06-28 18:41:33 +02:00
4418ea8a6f Compiling right 2025-06-28 17:18:57 +02:00
159e24b5cb Remove submodule 2025-06-28 16:38:43 +02:00
77e28e728e Remove submodule 2025-06-28 16:38:19 +02:00
18db982dec Update build method 2025-06-28 13:55:04 +02:00
99b751a4d4 Claude enhancement proposal 2025-06-28 13:17:31 +02:00
30 changed files with 1169 additions and 167 deletions

.conan/profiles/default Normal file

@@ -0,0 +1,11 @@
[settings]
os=Linux
arch=x86_64
compiler=gcc
compiler.version=11
compiler.libcxx=libstdc++11
build_type=Release
[conf]
tools.system.package_manager:mode=install
tools.system.package_manager:sudo=True

.gitignore vendored

@@ -39,4 +39,5 @@ build_release
 .idea
 cmake-*
 **/CMakeFiles
 **/gcovr-report
+CMakeUserPresets.json

.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "tests/lib/Files"]
path = tests/lib/Files
url = https://github.com/rmontanana/ArffFiles.git


@@ -104,6 +104,10 @@
         "stop_token": "cpp",
         "text_encoding": "cpp",
         "typeindex": "cpp",
-        "valarray": "cpp"
+        "valarray": "cpp",
+        "csignal": "cpp",
+        "regex": "cpp",
+        "future": "cpp",
+        "shared_mutex": "cpp"
     }
 }

CHANGELOG.md Normal file

@@ -0,0 +1,214 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.1.0] - 2025-06-28
### Added
- Conan dependency manager support
- Technical analysis report
### Changed
- Updated README.md
- Refactored library version and installation system
- Updated config variable names
### Fixed
- Removed unneeded semicolon
## [2.0.1] - 2024-07-22
### Added
- CMake install target and make install command
- Flag to control sample building in Makefile
### Changed
- Library name changed to `fimdlp`
- Updated version numbers across test files
### Fixed
- Version number consistency in tests
## [2.0.0] - 2024-07-04
### Added
- Makefile with build & test actions for easier development
- PyTorch (libtorch) integration for tensor operations
### Changed
- Major refactoring of build system
- Updated build workflows and CI configuration
### Fixed
- BinDisc quantile calculation errors (#9)
- Error in percentile method calculation
- Integer type issues in calculations
- Multiple GitHub Actions configuration fixes
## [1.2.1] - 2024-06-08
### Added
- PyTorch tensor methods for discretization
- Improved library build system
### Changed
- Refactored sample build process
### Fixed
- Library creation and linking issues
- Multiple GitHub Actions workflow fixes
## [1.2.0] - 2024-06-05
### Added
- **Discretizer** - Abstract base class for all discretization algorithms (#8)
- **BinDisc** - K-bins discretization with quantile and uniform strategies (#7)
- Transform method to discretize values using existing cut points
- Support for multiple datasets in sample program
- Docker development container configuration
### Changed
- Refactored system types throughout the library
- Improved sample program with better dataset handling
- Enhanced build system with debug options
### Fixed
- Transform method initialization issues
- ARFF file attribute name extraction
- Sample program library binary separation
## [1.1.3] - 2024-06-05
### Added
- `max_cutpoints` hyperparameter for controlling algorithm complexity
- `max_depth` and `min_length` as configurable hyperparameters
- Enhanced sample program with hyperparameter support
- Additional datasets for testing
### Changed
- Improved constructor design and parameter handling
- Enhanced test coverage and reporting
- Refactored build system configuration
### Fixed
- Depth initialization in fit method
- Code quality improvements and smell fixes
- Exception handling in value cut point calculations
## [1.1.2] - 2023-04-01
### Added
- Comprehensive test suite with GitHub Actions CI
- SonarCloud integration for code quality analysis
- Enhanced build system with automated testing
### Changed
- Improved GitHub Actions workflow configuration
- Updated project structure for better maintainability
### Fixed
- Build system configuration issues
- Test execution and coverage reporting
## [1.1.1] - 2023-02-22
### Added
- Limits header for proper compilation
- Enhanced build system support
### Changed
- Updated version numbering system
- Improved SonarCloud configuration
### Fixed
- ValueCutPoint exception handling (removed unnecessary exception)
- Build system compatibility issues
- GitHub Actions token configuration
## [1.1.0] - 2023-02-21
### Added
- Classic algorithm implementation for performance comparison
- Enhanced ValueCutPoint logic with same_values detection
- Glass dataset support in sample program
- Debug configuration for development
### Changed
- Refactored ValueCutPoint algorithm for better accuracy
- Improved candidate selection logic
- Enhanced sample program with multiple datasets
### Fixed
- Sign error in valueCutPoint calculation
- Final cut value computation
- Duplicate dataset handling in sample
## [1.0.0.0] - 2022-12-21
### Added
- Initial release of MDLP (Minimum Description Length Principle) discretization library
- Core CPPFImdlp algorithm implementation based on Fayyad & Irani's paper
- Entropy and information gain calculation methods
- Sample program demonstrating library usage
- CMake build system
- Basic test suite
- ARFF file format support for datasets
### Features
- Recursive discretization using entropy-based criteria
- Stable sorting with tie-breaking for identical values
- Configurable algorithm parameters
- Cross-platform C++ implementation
---
## Release Notes
### Version 2.x
- **Breaking Changes**: Library renamed to `fimdlp`
- **Major Enhancement**: PyTorch integration for improved performance
- **New Features**: Comprehensive discretization framework with multiple algorithms
### Version 1.x
- **Core Algorithm**: MDLP discretization implementation
- **Extensibility**: Hyperparameter support and algorithm variants
- **Quality**: Comprehensive testing and CI/CD pipeline
### Version 1.0.x
- **Foundation**: Initial stable implementation
- **Algorithm**: Core MDLP discretization functionality


@@ -4,17 +4,17 @@ project(fimdlp
     LANGUAGES CXX
     DESCRIPTION "Discretization algorithm based on the paper by Fayyad & Irani Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning."
     HOMEPAGE_URL "https://github.com/rmontanana/mdlp"
-    VERSION 2.0.1
+    VERSION 2.1.0
 )
 set(CMAKE_CXX_STANDARD 17)
 cmake_policy(SET CMP0135 NEW)
-find_package(Torch REQUIRED)
+# Find dependencies
+find_package(Torch CONFIG REQUIRED)
 # Options
 # -------
 option(ENABLE_TESTING OFF)
-option(ENABLE_SAMPLE OFF)
 option(COVERAGE OFF)
 add_subdirectory(config)
@@ -25,20 +25,24 @@ if (NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
     set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -fno-default-inline")
 endif()
+if (CMAKE_BUILD_TYPE STREQUAL "Debug")
+    message(STATUS "Debug mode")
+else()
+    message(STATUS "Release mode")
+endif()
 if (ENABLE_TESTING)
-    message("Debug mode")
+    message(STATUS "Testing is enabled")
     enable_testing()
     set(CODE_COVERAGE ON)
     set(GCC_COVERAGE_LINK_FLAGS "${GCC_COVERAGE_LINK_FLAGS} -lgcov --coverage")
     add_subdirectory(tests)
 else()
-    message("Release mode")
+    message(STATUS "Testing is disabled")
 endif()
-if (ENABLE_SAMPLE)
-    message("Building sample")
-    add_subdirectory(sample)
-endif()
+message(STATUS "Building sample")
+add_subdirectory(sample)
 include_directories(
     ${fimdlp_SOURCE_DIR}/src
@@ -46,7 +50,7 @@ include_directories(
 )
 add_library(fimdlp src/CPPFImdlp.cpp src/Metrics.cpp src/BinDisc.cpp src/Discretizer.cpp)
-target_link_libraries(fimdlp torch::torch)
+target_link_libraries(fimdlp PRIVATE torch::torch)
 # Installation
 # ------------
@@ -60,11 +64,10 @@ write_basic_package_version_file(
 install(TARGETS fimdlp
     EXPORT fimdlpTargets
     ARCHIVE DESTINATION lib
-    LIBRARY DESTINATION lib
-    CONFIGURATIONS Release)
+    LIBRARY DESTINATION lib)
-install(DIRECTORY src/ DESTINATION include/fimdlp FILES_MATCHING CONFIGURATIONS Release PATTERN "*.h")
+install(DIRECTORY src/ DESTINATION include/fimdlp FILES_MATCHING PATTERN "*.h")
-install(FILES ${CMAKE_BINARY_DIR}/configured_files/include/config.h DESTINATION include/fimdlp CONFIGURATIONS Release)
+install(FILES ${CMAKE_BINARY_DIR}/configured_files/include/config.h DESTINATION include/fimdlp)
 install(EXPORT fimdlpTargets
     FILE fimdlpTargets.cmake


@@ -1,9 +0,0 @@
{
"version": 4,
"vendor": {
"conan": {}
},
"include": [
"build/Release/generators/CMakePresets.json"
]
}

CONAN_README.md Normal file

@@ -0,0 +1,153 @@
# Conan Package for fimdlp
This directory contains the Conan package configuration for the fimdlp library.
## Dependencies
The package manages the following dependencies:
### Build Requirements
- **libtorch/2.4.1** - PyTorch C++ library for tensor operations
### Test Requirements (when testing enabled)
- **catch2/3.8.1** - Modern C++ testing framework
- **arff-files** - ARFF file format support (included locally in tests/lib/Files/)
## Building with Conan
### 1. Install Dependencies and Build
```bash
# Install dependencies
conan install . --output-folder=build --build=missing
# Build the project
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build .
```
### 2. Using the Build Script
```bash
# Build release version
./scripts/build_conan.sh
# Build with tests
./scripts/build_conan.sh --test
```
## Creating a Package
### 1. Create Package Locally
```bash
conan create . --profile:build=default --profile:host=default
```
### 2. Create Package with Options
```bash
# Create with testing enabled
conan create . -o enable_testing=True --profile:build=default --profile:host=default
# Create shared library version
conan create . -o shared=True --profile:build=default --profile:host=default
```
### 3. Using the Package Creation Script
```bash
./scripts/create_package.sh
```
## Uploading to Cimmeria
### 1. Configure Remote
```bash
# Add Cimmeria remote
conan remote add cimmeria <cimmeria-server-url>
# Login to Cimmeria
conan remote login cimmeria <username>
```
### 2. Upload Package
```bash
# Upload the package
conan upload fimdlp/2.1.0 --remote=cimmeria --all
# Or use the script (will configure remote instructions if not set up)
./scripts/create_package.sh
```
## Using the Package
### In conanfile.txt
```ini
[requires]
fimdlp/2.1.0
[generators]
CMakeDeps
CMakeToolchain
```
### In conanfile.py
```python
def requirements(self):
self.requires("fimdlp/2.1.0")
```
### In CMakeLists.txt
```cmake
find_package(fimdlp REQUIRED)
target_link_libraries(your_target fimdlp::fimdlp)
```
## Package Options
| Option | Values | Default | Description |
|--------|--------|---------|-------------|
| shared | True/False | False | Build shared library |
| fPIC | True/False | True | Position independent code |
| enable_testing | True/False | False | Enable test suite |
| enable_sample | True/False | False | Build sample program |
## Example Usage
```cpp
#include <fimdlp/CPPFImdlp.h>
#include <fimdlp/Metrics.h>
int main() {
// Create MDLP discretizer
CPPFImdlp discretizer;
// Calculate entropy
Metrics metrics;
std::vector<int> labels = {0, 1, 0, 1, 1};
double entropy = metrics.entropy(labels);
return 0;
}
```
## Testing
The package includes comprehensive tests that can be enabled with:
```bash
conan create . -o enable_testing=True
```
## Requirements
- C++17 compatible compiler
- CMake 3.20 or later
- Conan 2.0 or later


@@ -1,35 +1,70 @@
 SHELL := /bin/bash
-.DEFAULT_GOAL := build
-.PHONY: build test
+.DEFAULT_GOAL := release
+.PHONY: debug release install test conan-create viewcoverage
 lcov := lcov
-build:
-	@if [ -d build_release ]; then rm -fr build_release; fi
-	@mkdir build_release
-	@cmake -B build_release -S . -DCMAKE_BUILD_TYPE=Release -DENABLE_TESTING=OFF -DENABLE_SAMPLE=ON
-	@cmake --build build_release -j 8
+f_debug = build_debug
+f_release = build_release
+genhtml = genhtml
+docscdir = docs
-install:
-	@cmake --build build_release --target install -j 8
+define build_target
+	@echo ">>> Building the project for $(1)..."
+	@if [ -d $(2) ]; then rm -fr $(2); fi
+	@conan install . --build=missing -of $(2) -s build_type=$(1)
+	@cmake -S . -B $(2) -DCMAKE_TOOLCHAIN_FILE=$(2)/build/$(1)/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=$(1) -D$(3)
+	@cmake --build $(2) --config $(1) -j 8
+endef
-test:
-	@if [ -d build_debug ]; then rm -fr build_debug; fi
-	@mkdir build_debug
-	@cmake -B build_debug -S . -DCMAKE_BUILD_TYPE=Debug -DENABLE_TESTING=ON -DENABLE_SAMPLE=ON
-	@cmake --build build_debug -j 8
-	@cd build_debug/tests && ctest --output-on-failure -j 8
-	@cd build_debug/tests && $(lcov) --capture --directory ../ --demangle-cpp --ignore-errors source,source --ignore-errors mismatch --output-file coverage.info >/dev/null 2>&1; \
+debug: ## Build Debug version of the library
+	@$(call build_target,"Debug","$(f_debug)", "ENABLE_TESTING=ON")
+release: ## Build Release version of the library
+	@$(call build_target,"Release","$(f_release)", "ENABLE_TESTING=OFF")
+install: ## Install the project
+	@echo ">>> Installing the project..."
+	@cmake --build $(f_release) --target install -j 8
+test: ## Build Debug version and run tests
+	@echo ">>> Building Debug version and running tests..."
+	@$(MAKE) debug;
+	@cp -r tests/datasets $(f_debug)/tests/datasets
+	@cd $(f_debug)/tests && ctest --output-on-failure -j 8
+	@cd $(f_debug)/tests && $(lcov) --capture --directory ../ --demangle-cpp --ignore-errors source,source --ignore-errors mismatch --output-file coverage.info >/dev/null 2>&1; \
 	$(lcov) --remove coverage.info '/usr/*' --output-file coverage.info >/dev/null 2>&1; \
 	$(lcov) --remove coverage.info 'lib/*' --output-file coverage.info >/dev/null 2>&1; \
 	$(lcov) --remove coverage.info 'libtorch/*' --output-file coverage.info >/dev/null 2>&1; \
 	$(lcov) --remove coverage.info 'tests/*' --output-file coverage.info >/dev/null 2>&1; \
-	$(lcov) --remove coverage.info 'gtest/*' --output-file coverage.info >/dev/null 2>&1;
-	@genhtml build_debug/tests/coverage.info --demangle-cpp --output-directory build_debug/tests/coverage --title "Discretizer mdlp Coverage Report" -s -k -f --legend
-	@echo "* Coverage report is generated at build_debug/tests/coverage/index.html"
+	$(lcov) --remove coverage.info 'gtest/*' --output-file coverage.info >/dev/null 2>&1; \
+	$(lcov) --remove coverage.info '*/.conan2/*' --ignore-errors unused --output-file coverage.info >/dev/null 2>&1;
+	@genhtml $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(f_debug)/tests/coverage --title "Discretizer mdlp Coverage Report" -s -k -f --legend
+	@echo "* Coverage report is generated at $(f_debug)/tests/coverage/index.html"
 	@which python || (echo ">>> Please install python"; exit 1)
-	@if [ ! -f build_debug/tests/coverage.info ]; then \
+	@if [ ! -f $(f_debug)/tests/coverage.info ]; then \
 		echo ">>> No coverage.info file found!"; \
 		exit 1; \
 	fi
 	@echo ">>> Updating coverage badge..."
-	@env python update_coverage.py build_debug/tests
+	@env python update_coverage.py $(f_debug)/tests
+	@echo ">>> Done"
+viewcoverage: ## View the html coverage report
+	@which $(genhtml) >/dev/null || (echo ">>> Please install lcov (genhtml not found)"; exit 1)
+	@if [ ! -d $(docscdir)/coverage ]; then mkdir -p $(docscdir)/coverage; fi
+	@if [ ! -f $(f_debug)/tests/coverage.info ]; then \
+		echo ">>> No coverage.info file found. Run make coverage first!"; \
+		exit 1; \
+	fi
+	@$(genhtml) $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(docscdir)/coverage --title "FImdlp Coverage Report" -s -k -f --legend >/dev/null 2>&1;
+	@xdg-open $(docscdir)/coverage/index.html || open $(docscdir)/coverage/index.html 2>/dev/null
+	@echo ">>> Done";
+conan-create: ## Create the conan package
+	@echo ">>> Creating the conan package..."
+	conan create . --build=missing -tf "" -s:a build_type=Release
+	conan create . --build=missing -tf "" -s:a build_type=Debug -o "&:enable_testing=False"
+	@echo ">>> Done"

conandata.yml Normal file

@@ -0,0 +1,16 @@
sources:
"2.1.0":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.1.0.tar.gz"
sha256: "placeholder_sha256_hash"
"2.0.1":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.0.1.tar.gz"
sha256: "placeholder_sha256_hash"
"2.0.0":
url: "https://github.com/rmontanana/mdlp/archive/refs/tags/v2.0.0.tar.gz"
sha256: "placeholder_sha256_hash"
patches:
"2.1.0":
- patch_file: "patches/001-cmake-fix.patch"
patch_description: "Fix CMake configuration for Conan compatibility"
patch_type: "portability"
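The `placeholder_sha256_hash` entries above must be replaced with real digests before Conan can verify the downloaded tarballs. A minimal sketch of how such a digest could be computed, using only the Python standard library (the file path is hypothetical):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the hex SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Same algorithm applied to in-memory bytes, for illustration
print(hashlib.sha256(b"release tarball contents").hexdigest())
```

The resulting hex string is what belongs in the `sha256:` fields of `conandata.yml`.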


@@ -1,8 +1,9 @@
-import re
 import os
+import re
 from conan import ConanFile
-from conan.tools.cmake import CMake, CMakeToolchain, cmake_layout, CMakeDeps
-from conan.tools.files import save, load
+from conan.tools.cmake import CMakeToolchain, CMake, cmake_layout, CMakeDeps
+from conan.tools.files import load, copy
 class FimdlpConan(ConanFile):
     name = "fimdlp"
@@ -10,46 +11,101 @@ class FimdlpConan(ConanFile):
     license = "MIT"
     author = "Ricardo Montañana <rmontanana@gmail.com>"
     url = "https://github.com/rmontanana/mdlp"
-    description = "Discretization algorithm based on the paper by Fayyad & Irani."
-    topics = ("discretization", "classification", "machine learning")
+    description = "Discretization algorithm based on the paper by Fayyad & Irani Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning."
+    topics = ("machine-learning", "discretization", "mdlp", "classification")
+    # Package configuration
     settings = "os", "compiler", "build_type", "arch"
-    exports_sources = "src/*", "CMakeLists.txt", "README.md", "config/*", "fimdlpConfig.cmake.in"
+    options = {
+        "shared": [True, False],
+        "fPIC": [True, False],
+        "enable_testing": [True, False],
+        "enable_sample": [True, False],
+    }
+    default_options = {
+        "shared": False,
+        "fPIC": True,
+        "enable_testing": False,
+        "enable_sample": False,
+    }
+    # Sources are located in the same place as this recipe, copy them to the recipe
+    exports_sources = "CMakeLists.txt", "src/*", "sample/*", "tests/*", "config/*", "fimdlpConfig.cmake.in"
     def set_version(self):
-        # Read the CMakeLists.txt file to get the version
-        try:
-            content = load(self, "CMakeLists.txt")
-            match = re.search(r"VERSION\s+(\d+\.\d+\.\d+)", content)
-            if match:
-                self.version = match.group(1)
-        except Exception:
-            self.version = "2.0.1"  # fallback version
+        content = load(self, "CMakeLists.txt")
+        version_pattern = re.compile(r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)', re.IGNORECASE | re.DOTALL)
+        match = version_pattern.search(content)
+        if match:
+            self.version = match.group(1)
+        else:
+            raise Exception("Version not found in CMakeLists.txt")
+    def config_options(self):
+        if self.settings.os == "Windows":
+            self.options.rm_safe("fPIC")
+    def configure(self):
+        if self.options.shared:
+            self.options.rm_safe("fPIC")
     def requirements(self):
-        self.requires("libtorch/2.7.0")
+        # PyTorch dependency for tensor operations
+        self.requires("libtorch/2.7.1")
+    def build_requirements(self):
+        self.requires("arff-files/1.2.0")  # for tests and sample
+        if self.options.enable_testing:
+            self.test_requires("gtest/1.16.0")
     def layout(self):
         cmake_layout(self)
     def generate(self):
+        # Generate CMake configuration files
         deps = CMakeDeps(self)
         deps.generate()
         tc = CMakeToolchain(self)
+        # Set CMake variables based on options
+        tc.variables["ENABLE_TESTING"] = self.options.enable_testing
+        tc.variables["ENABLE_SAMPLE"] = self.options.enable_sample
+        tc.variables["BUILD_SHARED_LIBS"] = self.options.shared
        tc.generate()
     def build(self):
         cmake = CMake(self)
         cmake.configure()
         cmake.build()
+        # Run tests if enabled
+        if self.options.enable_testing:
+            cmake.test()
     def package(self):
+        # Install using CMake
         cmake = CMake(self)
         cmake.install()
+        # Copy license file
+        copy(self, "LICENSE", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
     def package_info(self):
+        # Library configuration
         self.cpp_info.libs = ["fimdlp"]
         self.cpp_info.includedirs = ["include"]
+        self.cpp_info.libdirs = ["lib"]
-        self.cpp_info.set_property("cmake_find_mode", "both")
+        # CMake package configuration
+        self.cpp_info.set_property("cmake_file_name", "fimdlp")
         self.cpp_info.set_property("cmake_target_name", "fimdlp::fimdlp")
-        self.cpp_info.set_property("cmake_file_name", "fimdlp")
+        # Compiler features
+        self.cpp_info.cppstd = "17"
+        # System libraries (if needed)
+        if self.settings.os in ["Linux", "FreeBSD"]:
+            self.cpp_info.system_libs.append("m")  # Math library
+            self.cpp_info.system_libs.append("pthread")  # Threading
+        # Build information for consumers
+        self.cpp_info.builddirs = ["lib/cmake/fimdlp"]
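The rewritten `set_version` above reads the version straight from `CMakeLists.txt` instead of silently falling back to a hard-coded default. That regex logic can be exercised on its own, without Conan; a standalone sketch (the sample text is illustrative):

```python
import re

def version_from_cmakelists(content: str) -> str:
    # Mirrors the recipe: find "project(... VERSION x.y.z" anywhere in the file
    pattern = re.compile(r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)',
                         re.IGNORECASE | re.DOTALL)
    match = pattern.search(content)
    if match:
        return match.group(1)
    raise ValueError("Version not found in CMakeLists.txt")

sample = """cmake_minimum_required(VERSION 3.20)
project(fimdlp
    LANGUAGES CXX
    VERSION 2.1.0
)"""
print(version_from_cmakelists(sample))  # → 2.1.0
```

Raising on a missing version (rather than defaulting) means a malformed `CMakeLists.txt` fails the recipe loudly instead of producing a mislabeled package.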


@@ -1,12 +1,12 @@
 set(CMAKE_CXX_STANDARD 17)
-set(CMAKE_BUILD_TYPE Debug)
+find_package(arff-files REQUIRED)
 include_directories(
     ${fimdlp_SOURCE_DIR}/src
-    ${fimdlp_SOURCE_DIR}/tests/lib/Files
     ${CMAKE_BINARY_DIR}/configured_files/include
+    ${arff-files_INCLUDE_DIRS}
 )
-add_executable(sample sample.cpp )
+add_executable(sample sample.cpp)
-target_link_libraries(sample fimdlp "${TORCH_LIBRARIES}")
+target_link_libraries(sample PRIVATE fimdlp torch::torch arff-files::arff-files)

scripts/build_conan.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
# Build script for fimdlp using Conan
set -e
echo "Building fimdlp with Conan..."
# Clean previous builds
rm -rf build_conan
# Install dependencies and build
conan install . --output-folder=build_conan --build=missing --profile:build=default --profile:host=default
# Build the project
cd build_conan
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release
cmake --build .
echo "Build completed successfully!"
# Run tests if requested
if [ "$1" = "--test" ]; then
echo "Running tests..."
ctest --output-on-failure
fi

scripts/create_package.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
# Script to create and upload fimdlp Conan package
set -e
PACKAGE_NAME="fimdlp"
PACKAGE_VERSION="2.1.0"
REMOTE_NAME="cimmeria"
echo "Creating Conan package for $PACKAGE_NAME/$PACKAGE_VERSION..."
# Create the package
conan create . --profile:build=default --profile:host=default
echo "Package created successfully!"
# Test the package
echo "Testing package..."
conan test test_package $PACKAGE_NAME/$PACKAGE_VERSION@ --profile:build=default --profile:host=default
echo "Package tested successfully!"
# Upload to Cimmeria (if remote is configured)
if conan remote list | grep -q "$REMOTE_NAME"; then
echo "Uploading package to $REMOTE_NAME..."
conan upload $PACKAGE_NAME/$PACKAGE_VERSION --remote=$REMOTE_NAME --all
echo "Package uploaded to $REMOTE_NAME successfully!"
else
echo "Remote '$REMOTE_NAME' not configured. To upload the package:"
echo "1. Add the remote: conan remote add $REMOTE_NAME <cimmeria-url>"
echo "2. Login: conan remote login $REMOTE_NAME <username>"
echo "3. Upload: conan upload $PACKAGE_NAME/$PACKAGE_VERSION --remote=$REMOTE_NAME --all"
fi


@@ -22,13 +22,15 @@ namespace mdlp {
     BinDisc::~BinDisc() = default;
     void BinDisc::fit(samples_t& X)
     {
-        // y is included for compatibility with the Discretizer interface
-        cutPoints.clear();
+        // Input validation
         if (X.empty()) {
-            cutPoints.push_back(0.0);
-            cutPoints.push_back(0.0);
-            return;
+            throw std::invalid_argument("Input data X cannot be empty");
         }
+        if (X.size() < static_cast<size_t>(n_bins)) {
+            throw std::invalid_argument("Input data size must be at least equal to n_bins");
+        }
+        cutPoints.clear();
         if (strategy == strategy_t::QUANTILE) {
             direction = bound_dir_t::RIGHT;
             fit_quantile(X);
@@ -39,10 +41,27 @@ namespace mdlp {
     }
     void BinDisc::fit(samples_t& X, labels_t& y)
     {
+        if (X.empty()) {
+            throw std::invalid_argument("X cannot be empty");
+        }
+        // BinDisc is inherently unsupervised, but we validate inputs for consistency
+        // Note: y parameter is validated but not used in binning strategy
         fit(X);
     }
-    std::vector<precision_t> linspace(precision_t start, precision_t end, int num)
+    std::vector<precision_t> BinDisc::linspace(precision_t start, precision_t end, int num)
     {
+        // Input validation
+        if (num < 2) {
+            throw std::invalid_argument("Number of points must be at least 2 for linspace");
+        }
+        if (std::isnan(start) || std::isnan(end)) {
+            throw std::invalid_argument("Start and end values cannot be NaN");
+        }
+        if (std::isinf(start) || std::isinf(end)) {
+            throw std::invalid_argument("Start and end values cannot be infinite");
+        }
         if (start == end) {
             return { start, end };
         }
@@ -58,8 +77,16 @@ namespace mdlp {
     {
         return std::max(lower, std::min(n, upper));
     }
-    std::vector<precision_t> percentile(samples_t& data, const std::vector<precision_t>& percentiles)
+    std::vector<precision_t> BinDisc::percentile(samples_t& data, const std::vector<precision_t>& percentiles)
     {
+        // Input validation
+        if (data.empty()) {
+            throw std::invalid_argument("Data cannot be empty for percentile calculation");
+        }
+        if (percentiles.empty()) {
+            throw std::invalid_argument("Percentiles cannot be empty");
+        }
         // Implementation taken from https://dpilger26.github.io/NumCpp/doxygen/html/percentile_8hpp_source.html
         std::vector<precision_t> results;
         bool first = true;
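The new guards in `linspace` reject degenerate inputs before generating evenly spaced points. A rough Python model of the validated behavior, as an illustration of the checks rather than the library's exact numerics:

```python
import math

def linspace(start: float, end: float, num: int) -> list[float]:
    # Same guards the patched C++ applies before generating points
    if num < 2:
        raise ValueError("Number of points must be at least 2 for linspace")
    if math.isnan(start) or math.isnan(end):
        raise ValueError("Start and end values cannot be NaN")
    if math.isinf(start) or math.isinf(end):
        raise ValueError("Start and end values cannot be infinite")
    if start == end:
        return [start, end]
    step = (end - start) / (num - 1)
    return [start + i * step for i in range(num)]

print(linspace(0.0, 1.0, 5))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
```

Note the special case kept from the diff: equal endpoints return a two-element list regardless of `num`.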


@@ -23,6 +23,9 @@ namespace mdlp {
         // y is included for compatibility with the Discretizer interface
         void fit(samples_t& X_, labels_t& y) override;
         void fit(samples_t& X);
+    protected:
+        std::vector<precision_t> linspace(precision_t start, precision_t end, int num);
+        std::vector<precision_t> percentile(samples_t& data, const std::vector<precision_t>& percentiles);
     private:
         void fit_uniform(const samples_t&);
         void fit_quantile(const samples_t&);


@@ -8,6 +8,7 @@
#include <algorithm> #include <algorithm>
#include <set> #include <set>
#include <cmath> #include <cmath>
#include <stdexcept>
#include "CPPFImdlp.h" #include "CPPFImdlp.h"
namespace mdlp { namespace mdlp {
@@ -18,6 +19,17 @@ namespace mdlp {
max_depth(max_depth_), max_depth(max_depth_),
proposed_cuts(proposed) proposed_cuts(proposed)
{ {
// Input validation for constructor parameters
if (min_length_ < 3) {
throw std::invalid_argument("min_length must be greater than 2");
}
if (max_depth_ < 1) {
throw std::invalid_argument("max_depth must be greater than 0");
}
if (proposed < 0.0f) {
throw std::invalid_argument("proposed_cuts must be non-negative");
}
direction = bound_dir_t::RIGHT; direction = bound_dir_t::RIGHT;
} }
@@ -27,7 +39,7 @@ namespace mdlp {
if (proposed_cuts == 0) { if (proposed_cuts == 0) {
return numeric_limits<size_t>::max(); return numeric_limits<size_t>::max();
} }
if (proposed_cuts < 0 || proposed_cuts > static_cast<precision_t>(X.size())) { if (proposed_cuts > static_cast<precision_t>(X.size())) {
throw invalid_argument("wrong proposed num_cuts value"); throw invalid_argument("wrong proposed num_cuts value");
} }
if (proposed_cuts < 1) if (proposed_cuts < 1)
@@ -44,17 +56,11 @@ namespace mdlp {
discretizedData.clear(); discretizedData.clear();
cutPoints.clear(); cutPoints.clear();
if (X.size() != y.size()) { if (X.size() != y.size()) {
throw invalid_argument("X and y must have the same size"); throw std::invalid_argument("X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
} }
if (X.empty() || y.empty()) { if (X.empty() || y.empty()) {
throw invalid_argument("X and y must have at least one element"); throw invalid_argument("X and y must have at least one element");
} }
if (min_length < 3) {
throw invalid_argument("min_length must be greater than 2");
}
if (max_depth < 1) {
throw invalid_argument("max_depth must be greater than 0");
}
indices = sortIndices(X_, y_); indices = sortIndices(X_, y_);
metrics.setData(y, indices); metrics.setData(y, indices);
computeCutPoints(0, X.size(), 1); computeCutPoints(0, X.size(), 1);
@@ -81,26 +87,33 @@ namespace mdlp {
precision_t previous; precision_t previous;
precision_t actual; precision_t actual;
precision_t next; precision_t next;
previous = X[indices[idxPrev]]; previous = safe_X_access(idxPrev);
actual = X[indices[cut]]; actual = safe_X_access(cut);
next = X[indices[idxNext]]; next = safe_X_access(idxNext);
// definition 2 of the paper => X[t-1] < X[t] // definition 2 of the paper => X[t-1] < X[t]
// get the first equal value of X in the interval // get the first equal value of X in the interval
while (idxPrev > start && actual == previous) { while (idxPrev > start && actual == previous) {
previous = X[indices[--idxPrev]]; --idxPrev;
previous = safe_X_access(idxPrev);
} }
backWall = idxPrev == start && actual == previous; backWall = idxPrev == start && actual == previous;
// get the last equal value of X in the interval // get the last equal value of X in the interval
while (idxNext < end - 1 && actual == next) { while (idxNext < end - 1 && actual == next) {
next = X[indices[++idxNext]]; ++idxNext;
next = safe_X_access(idxNext);
} }
// # of duplicates before cutpoint // # of duplicates before cutpoint
n = cut - 1 - idxPrev; n = safe_subtract(safe_subtract(cut, 1), idxPrev);
// # of duplicates after cutpoint // # of duplicates after cutpoint
m = idxNext - cut - 1; m = idxNext - cut - 1;
// Decide which values to use // Decide which values to use
-            cut = cut + (backWall ? m + 1 : -n);
-            actual = X[indices[cut]];
+            if (backWall) {
+                m = int(idxNext - cut - 1) < 0 ? 0 : m; // Ensure m is not negative
+                cut = cut + m + 1;
+            } else {
+                cut = safe_subtract(cut, n);
+            }
+            actual = safe_X_access(cut);
return { (actual + previous) / 2, cut }; return { (actual + previous) / 2, cut };
} }
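The duplicate counting above replaces raw `size_t` arithmetic with `safe_subtract`; a minimal standalone sketch of why (the diff defines it as a const member function, here shown as a free function):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

// Standalone sketch of the safe_subtract helper used above. size_t is
// unsigned, so a - b silently wraps to a huge value whenever b > a; the
// guard turns that silent wrap-around into a catchable exception.
inline std::size_t safe_subtract(std::size_t a, std::size_t b) {
    if (b > a) {
        throw std::underflow_error("Subtraction would cause underflow");
    }
    return a - b;
}
```

Without the guard, an expression like `cut - 1 - idxPrev` with `idxPrev > cut - 1` would produce an enormous duplicate count instead of failing fast.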
@@ -109,7 +122,7 @@ namespace mdlp {
size_t cut; size_t cut;
pair<precision_t, size_t> result; pair<precision_t, size_t> result;
// Check if the interval length and the depth are Ok // Check if the interval length and the depth are Ok
if (end - start < min_length || depth_ > max_depth) if (end < start || safe_subtract(end, start) < min_length || depth_ > max_depth)
return; return;
depth = depth_ > depth ? depth_ : depth; depth = depth_ > depth ? depth_ : depth;
cut = getCandidate(start, end); cut = getCandidate(start, end);
@@ -129,14 +142,14 @@ namespace mdlp {
/* Definition 1: A binary discretization for A is determined by selecting the cut point TA for which /* Definition 1: A binary discretization for A is determined by selecting the cut point TA for which
E(A, TA; S) is minimal amongst all the candidate cut points. */ E(A, TA; S) is minimal amongst all the candidate cut points. */
size_t candidate = numeric_limits<size_t>::max(); size_t candidate = numeric_limits<size_t>::max();
size_t elements = end - start; size_t elements = safe_subtract(end, start);
bool sameValues = true; bool sameValues = true;
precision_t entropy_left; precision_t entropy_left;
precision_t entropy_right; precision_t entropy_right;
precision_t minEntropy; precision_t minEntropy;
// Check if all the values of the variable in the interval are the same // Check if all the values of the variable in the interval are the same
for (size_t idx = start + 1; idx < end; idx++) { for (size_t idx = start + 1; idx < end; idx++) {
if (X[indices[idx]] != X[indices[start]]) { if (safe_X_access(idx) != safe_X_access(start)) {
sameValues = false; sameValues = false;
break; break;
} }
@@ -146,7 +159,7 @@ namespace mdlp {
minEntropy = metrics.entropy(start, end); minEntropy = metrics.entropy(start, end);
for (size_t idx = start + 1; idx < end; idx++) { for (size_t idx = start + 1; idx < end; idx++) {
// Cutpoints are always on boundaries (definition 2) // Cutpoints are always on boundaries (definition 2)
if (y[indices[idx]] == y[indices[idx - 1]]) if (safe_y_access(idx) == safe_y_access(idx - 1))
continue; continue;
entropy_left = precision_t(idx - start) / static_cast<precision_t>(elements) * metrics.entropy(start, idx); entropy_left = precision_t(idx - start) / static_cast<precision_t>(elements) * metrics.entropy(start, idx);
entropy_right = precision_t(end - idx) / static_cast<precision_t>(elements) * metrics.entropy(idx, end); entropy_right = precision_t(end - idx) / static_cast<precision_t>(elements) * metrics.entropy(idx, end);
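The loop above implements Definition 1: among boundary points, pick the cut T minimizing the class-weighted split entropy E(A, T; S) = |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2). An illustrative standalone sketch of those two quantities (not the library's `Metrics` class):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

// Shannon entropy of the label slice y[start, end), in bits.
double entropy(const std::vector<int>& y, std::size_t start, std::size_t end) {
    std::map<int, std::size_t> counts;
    for (std::size_t i = start; i < end; ++i) counts[y[i]]++;
    double h = 0.0;
    double n = static_cast<double>(end - start);
    for (const auto& kv : counts) {
        double p = kv.second / n;
        h -= p * std::log2(p);
    }
    return h;
}

// E(A, T; S): the weighted sum minimized over candidate cuts (Definition 1).
double splitEntropy(const std::vector<int>& y, std::size_t start, std::size_t cut, std::size_t end) {
    double n = static_cast<double>(end - start);
    return (cut - start) / n * entropy(y, start, cut)
         + (end - cut) / n * entropy(y, cut, end);
}
```

For labels {0,0,0,1,1,1} the whole interval has entropy 1 bit, while the cut at position 3 yields a split entropy of 0, so that boundary wins.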
@@ -168,7 +181,7 @@ namespace mdlp {
precision_t ent; precision_t ent;
precision_t ent1; precision_t ent1;
precision_t ent2; precision_t ent2;
auto N = precision_t(end - start); auto N = precision_t(safe_subtract(end, start));
k = metrics.computeNumClasses(start, end); k = metrics.computeNumClasses(start, end);
k1 = metrics.computeNumClasses(start, cut); k1 = metrics.computeNumClasses(start, cut);
k2 = metrics.computeNumClasses(cut, end); k2 = metrics.computeNumClasses(cut, end);
@@ -188,6 +201,9 @@ namespace mdlp {
indices_t idx(X_.size()); indices_t idx(X_.size());
std::iota(idx.begin(), idx.end(), 0); std::iota(idx.begin(), idx.end(), 0);
stable_sort(idx.begin(), idx.end(), [&X_, &y_](size_t i1, size_t i2) { stable_sort(idx.begin(), idx.end(), [&X_, &y_](size_t i1, size_t i2) {
if (i1 >= X_.size() || i2 >= X_.size() || i1 >= y_.size() || i2 >= y_.size()) {
throw std::out_of_range("Index out of bounds in sort comparison");
}
if (X_[i1] == X_[i2]) if (X_[i1] == X_[i2])
return y_[i1] < y_[i2]; return y_[i1] < y_[i2];
else else
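The comparator change above adds a bounds guard to the argsort; a self-contained sketch of the same pattern, sorting positions by X ascending with ties broken by y:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <stdexcept>
#include <vector>

// Sketch of the argsort pattern used by sortIndices(): sort index positions
// rather than the data itself, and fail fast inside the comparator if the
// two input vectors disagree in length.
std::vector<std::size_t> sortIndices(const std::vector<float>& X, const std::vector<int>& y) {
    std::vector<std::size_t> idx(X.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::stable_sort(idx.begin(), idx.end(), [&X, &y](std::size_t i1, std::size_t i2) {
        if (i1 >= X.size() || i2 >= X.size() || i1 >= y.size() || i2 >= y.size()) {
            throw std::out_of_range("Index out of bounds in sort comparison");
        }
        if (X[i1] == X[i2])
            return y[i1] < y[i2];
        return X[i1] < X[i2];
    });
    return idx;
}
```

`stable_sort` keeps the original relative order of equal keys, which makes the resulting cut points deterministic for duplicated feature values.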
@@ -206,7 +222,7 @@ namespace mdlp {
size_t end; size_t end;
for (size_t idx = 0; idx < cutPoints.size(); idx++) { for (size_t idx = 0; idx < cutPoints.size(); idx++) {
end = begin; end = begin;
while (X[indices[end]] < cutPoints[idx] && end < X.size()) while (end < indices.size() && safe_X_access(end) < cutPoints[idx] && end < X.size())
end++; end++;
entropy = metrics.entropy(begin, end); entropy = metrics.entropy(begin, end);
if (entropy > maxEntropy) { if (entropy > maxEntropy) {


@@ -39,6 +39,35 @@ namespace mdlp {
size_t getCandidate(size_t, size_t); size_t getCandidate(size_t, size_t);
size_t compute_max_num_cut_points() const; size_t compute_max_num_cut_points() const;
pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t); pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t);
inline precision_t safe_X_access(size_t idx) const
{
if (idx >= indices.size()) {
throw std::out_of_range("Index out of bounds for indices array");
}
size_t real_idx = indices[idx];
if (real_idx >= X.size()) {
throw std::out_of_range("Index out of bounds for X array");
}
return X[real_idx];
}
inline label_t safe_y_access(size_t idx) const
{
if (idx >= indices.size()) {
throw std::out_of_range("Index out of bounds for indices array");
}
size_t real_idx = indices[idx];
if (real_idx >= y.size()) {
throw std::out_of_range("Index out of bounds for y array");
}
return y[real_idx];
}
inline size_t safe_subtract(size_t a, size_t b) const
{
if (b > a) {
throw std::underflow_error("Subtraction would cause underflow");
}
return a - b;
}
}; };
} }
#endif #endif
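`safe_X_access` and `safe_y_access` above share one bounds-checked double-indirection pattern: the data is read through a permutation, so both the position into `indices` and the dereferenced index into the data must be validated. A standalone sketch with generic names (not the library's API):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Generic form of the safe accessors above: data[indices[idx]] with both
// levels of the indirection checked before any array is touched.
float safe_access(const std::vector<float>& data,
                  const std::vector<std::size_t>& indices,
                  std::size_t idx) {
    if (idx >= indices.size()) {
        throw std::out_of_range("Index out of bounds for indices array");
    }
    std::size_t real_idx = indices[idx];
    if (real_idx >= data.size()) {
        throw std::out_of_range("Index out of bounds for data array");
    }
    return data[real_idx];
}
```

The second check matters even when `idx` is valid: a corrupted or stale permutation can still point past the end of the data array.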


@@ -10,6 +10,14 @@ namespace mdlp {
labels_t& Discretizer::transform(const samples_t& data) labels_t& Discretizer::transform(const samples_t& data)
{ {
// Input validation
if (data.empty()) {
throw std::invalid_argument("Data for transformation cannot be empty");
}
if (cutPoints.size() < 2) {
throw std::runtime_error("Discretizer not fitted yet or no valid cut points found");
}
discretizedData.clear(); discretizedData.clear();
discretizedData.reserve(data.size()); discretizedData.reserve(data.size());
// CutPoints always have at least two items // CutPoints always have at least two items
@@ -31,6 +39,23 @@ namespace mdlp {
} }
void Discretizer::fit_t(const torch::Tensor& X_, const torch::Tensor& y_) void Discretizer::fit_t(const torch::Tensor& X_, const torch::Tensor& y_)
{ {
// Validate tensor properties for security
if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (y_.dtype() != torch::kInt32) {
throw std::invalid_argument("y tensor must be Int32 type");
}
if (X_.numel() != y_.numel()) {
throw std::invalid_argument("X and y tensors must have same number of elements");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensors cannot be empty");
}
auto num_elements = X_.numel(); auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements); samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements); labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements);
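The tensor checks above gate raw `data_ptr` access; an illustrative torch-free sketch of the same validation rules (`TensorInfo` is a made-up stand-in for tensor metadata, not part of libtorch):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical metadata struct mirroring what the torch checks above inspect.
enum class DType { Float32, Int32 };
struct TensorInfo {
    std::vector<int64_t> sizes;  // one entry per dimension
    DType dtype;
    int64_t numel;
};

// Same rule set as fit_t(): 1D only, fixed dtypes, matching non-zero sizes.
void validate_fit_inputs(const TensorInfo& X, const TensorInfo& y) {
    if (X.sizes.size() != 1 || y.sizes.size() != 1)
        throw std::invalid_argument("Only 1D tensors supported");
    if (X.dtype != DType::Float32)
        throw std::invalid_argument("X tensor must be Float32 type");
    if (y.dtype != DType::Int32)
        throw std::invalid_argument("y tensor must be Int32 type");
    if (X.numel != y.numel)
        throw std::invalid_argument("X and y tensors must have same number of elements");
    if (X.numel == 0)
        throw std::invalid_argument("Tensors cannot be empty");
}
```

Validating before the `data_ptr` copies is the point: a dtype mismatch would otherwise reinterpret raw bytes silently instead of failing with a clear message.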
@@ -38,6 +63,17 @@ namespace mdlp {
} }
torch::Tensor Discretizer::transform_t(const torch::Tensor& X_) torch::Tensor Discretizer::transform_t(const torch::Tensor& X_)
{ {
// Validate tensor properties for security
if (X_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensor cannot be empty");
}
auto num_elements = X_.numel(); auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements); samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
auto result = transform(X); auto result = transform(X);
@@ -45,6 +81,23 @@ namespace mdlp {
} }
torch::Tensor Discretizer::fit_transform_t(const torch::Tensor& X_, const torch::Tensor& y_) torch::Tensor Discretizer::fit_transform_t(const torch::Tensor& X_, const torch::Tensor& y_)
{ {
// Validate tensor properties for security
if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
throw std::invalid_argument("Only 1D tensors supported");
}
if (X_.dtype() != torch::kFloat32) {
throw std::invalid_argument("X tensor must be Float32 type");
}
if (y_.dtype() != torch::kInt32) {
throw std::invalid_argument("y tensor must be Int32 type");
}
if (X_.numel() != y_.numel()) {
throw std::invalid_argument("X and y tensors must have same number of elements");
}
if (X_.numel() == 0) {
throw std::invalid_argument("Tensors cannot be empty");
}
auto num_elements = X_.numel(); auto num_elements = X_.numel();
samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements); samples_t X(X_.data_ptr<precision_t>(), X_.data_ptr<precision_t>() + num_elements);
labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements); labels_t y(y_.data_ptr<int>(), y_.data_ptr<int>() + num_elements);


@@ -26,6 +26,7 @@ namespace mdlp {
void Metrics::setData(const labels_t& y_, const indices_t& indices_) void Metrics::setData(const labels_t& y_, const indices_t& indices_)
{ {
std::lock_guard<std::mutex> lock(cache_mutex);
indices = indices_; indices = indices_;
y = y_; y = y_;
numClasses = computeNumClasses(0, indices.size()); numClasses = computeNumClasses(0, indices.size());
@@ -35,15 +36,23 @@ namespace mdlp {
precision_t Metrics::entropy(size_t start, size_t end) precision_t Metrics::entropy(size_t start, size_t end)
{ {
if (end - start < 2)
return 0;
// Check cache first with read lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
if (entropyCache.find({ start, end }) != entropyCache.end()) {
return entropyCache[{start, end}];
}
}
// Compute entropy outside of lock
precision_t p; precision_t p;
precision_t ventropy = 0; precision_t ventropy = 0;
int nElements = 0; int nElements = 0;
labels_t counts(numClasses + 1, 0); labels_t counts(numClasses + 1, 0);
if (end - start < 2)
return 0;
if (entropyCache.find({ start, end }) != entropyCache.end()) {
return entropyCache[{start, end}];
}
for (auto i = &indices[start]; i != &indices[end]; ++i) { for (auto i = &indices[start]; i != &indices[end]; ++i) {
counts[y[*i]]++; counts[y[*i]]++;
nElements++; nElements++;
@@ -54,12 +63,27 @@ namespace mdlp {
ventropy -= p * log2(p); ventropy -= p * log2(p);
} }
} }
entropyCache[{start, end}] = ventropy;
// Update cache with write lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
entropyCache[{start, end}] = ventropy;
}
return ventropy; return ventropy;
} }
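The caching above follows a check–compute–store pattern: lookups and insertions hold the mutex, while the computation itself runs unlocked. Two threads may occasionally compute the same value, but the map is never mutated concurrently. A minimal standalone sketch of the pattern:

```cpp
#include <cassert>
#include <map>
#include <mutex>

// Minimal sketch of the locking discipline used by Metrics::entropy above:
// read the cache under the lock, compute outside the lock on a miss, then
// store the result under the lock again.
class CachedSquare {
    mutable std::mutex cache_mutex;
    std::map<int, long> cache;
public:
    long get(int x) {
        {
            std::lock_guard<std::mutex> lock(cache_mutex);
            auto it = cache.find(x);
            if (it != cache.end()) return it->second;
        }
        // Expensive work stands in here; no lock held, so other keys proceed.
        long value = static_cast<long>(x) * x;
        {
            std::lock_guard<std::mutex> lock(cache_mutex);
            cache[x] = value;
        }
        return value;
    }
};
```

Duplicated work on a rare race is an acceptable trade here because entropy is deterministic: both threads store the same value, so correctness is unaffected.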
precision_t Metrics::informationGain(size_t start, size_t cut, size_t end) precision_t Metrics::informationGain(size_t start, size_t cut, size_t end)
{ {
// Check cache first with read lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
if (igCache.find(make_tuple(start, cut, end)) != igCache.end()) {
return igCache[make_tuple(start, cut, end)];
}
}
// Compute information gain outside of lock
precision_t iGain; precision_t iGain;
precision_t entropyInterval; precision_t entropyInterval;
precision_t entropyLeft; precision_t entropyLeft;
@@ -67,9 +91,7 @@ namespace mdlp {
size_t nElementsLeft = cut - start; size_t nElementsLeft = cut - start;
size_t nElementsRight = end - cut; size_t nElementsRight = end - cut;
size_t nElements = end - start; size_t nElements = end - start;
if (igCache.find(make_tuple(start, cut, end)) != igCache.end()) {
return igCache[make_tuple(start, cut, end)];
}
entropyInterval = entropy(start, end); entropyInterval = entropy(start, end);
entropyLeft = entropy(start, cut); entropyLeft = entropy(start, cut);
entropyRight = entropy(cut, end); entropyRight = entropy(cut, end);
@@ -77,7 +99,13 @@ namespace mdlp {
(static_cast<precision_t>(nElementsLeft) * entropyLeft + (static_cast<precision_t>(nElementsLeft) * entropyLeft +
static_cast<precision_t>(nElementsRight) * entropyRight) / static_cast<precision_t>(nElementsRight) * entropyRight) /
static_cast<precision_t>(nElements); static_cast<precision_t>(nElements);
igCache[make_tuple(start, cut, end)] = iGain;
// Update cache with write lock
{
std::lock_guard<std::mutex> lock(cache_mutex);
igCache[make_tuple(start, cut, end)] = iGain;
}
return iGain; return iGain;
} }


@@ -8,6 +8,7 @@
#define CCMETRICS_H #define CCMETRICS_H
#include "typesFImdlp.h" #include "typesFImdlp.h"
#include <mutex>
namespace mdlp { namespace mdlp {
class Metrics { class Metrics {
@@ -15,6 +16,7 @@ namespace mdlp {
labels_t& y; labels_t& y;
indices_t& indices; indices_t& indices;
int numClasses; int numClasses;
mutable std::mutex cache_mutex;
cacheEnt_t entropyCache = cacheEnt_t(); cacheEnt_t entropyCache = cacheEnt_t();
cacheIg_t igCache = cacheIg_t(); cacheIg_t igCache = cacheIg_t();
public: public:


@@ -0,0 +1,9 @@
cmake_minimum_required(VERSION 3.20)
project(test_fimdlp)
find_package(fimdlp REQUIRED)
find_package(Torch REQUIRED)
add_executable(test_fimdlp src/test_fimdlp.cpp)
target_link_libraries(test_fimdlp fimdlp::fimdlp torch::torch)
target_compile_features(test_fimdlp PRIVATE cxx_std_17)


@@ -0,0 +1,10 @@
{
"version": 4,
"vendor": {
"conan": {}
},
"include": [
"build/gcc-14-x86_64-gnu17-release/generators/CMakePresets.json",
"build/gcc-14-x86_64-gnu17-debug/generators/CMakePresets.json"
]
}

test_package/conanfile.py Normal file

@@ -0,0 +1,28 @@
import os
from conan import ConanFile
from conan.tools.cmake import CMake, cmake_layout
from conan.tools.build import can_run
class FimdlpTestConan(ConanFile):
settings = "os", "compiler", "build_type", "arch"
# VirtualBuildEnv and VirtualRunEnv can be avoided if "tools.env:CONAN_RUN_TESTS" is false
generators = "CMakeDeps", "CMakeToolchain", "VirtualRunEnv"
apply_env = False # avoid the default VirtualBuildEnv from the base class
test_type = "explicit"
def requirements(self):
self.requires(self.tested_reference_str)
def layout(self):
cmake_layout(self)
def build(self):
cmake = CMake(self)
cmake.configure()
cmake.build()
def test(self):
if can_run(self):
cmd = os.path.join(self.cpp.build.bindir, "test_fimdlp")
self.run(cmd, env="conanrun")


@@ -0,0 +1,27 @@
#include <iostream>
#include <vector>
#include <fimdlp/CPPFImdlp.h>
#include <fimdlp/Metrics.h>
int main() {
std::cout << "Testing fimdlp library..." << std::endl;
// Simple test of the library
try {
// Test Metrics class
Metrics metrics;
std::vector<int> labels = {0, 0, 1, 1, 0, 1};
double entropy = metrics.entropy(labels);
std::cout << "Entropy calculated: " << entropy << std::endl;
// Test CPPFImdlp creation
CPPFImdlp discretizer;
std::cout << "CPPFImdlp instance created successfully" << std::endl;
std::cout << "fimdlp library test completed successfully!" << std::endl;
return 0;
} catch (const std::exception& e) {
std::cerr << "Error testing fimdlp library: " << e.what() << std::endl;
return 1;
}
}


@@ -11,18 +11,28 @@
#include <ArffFiles.hpp> #include <ArffFiles.hpp>
#include "BinDisc.h" #include "BinDisc.h"
#include "Experiments.hpp" #include "Experiments.hpp"
#include <cmath>
#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
try { \
stmt; \
} catch (const etype& ex) { \
EXPECT_EQ(whatstring, std::string(ex.what())); \
throw; \
} \
, etype)
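The `EXPECT_THROW_WITH_MESSAGE` macro above asserts both the exception type and its `what()` text; a gtest-free sketch of the same check, returning a boolean instead of recording a test failure:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// gtest-free equivalent of what the macro above verifies: stmt must throw
// exactly E (not a different type, not nothing) and carry the expected what().
template <typename E, typename F>
bool throws_with_message(F&& stmt, const std::string& expected) {
    try {
        stmt();
    } catch (const E& ex) {
        return expected == ex.what();
    } catch (...) {
        return false; // threw, but the wrong exception type
    }
    return false; // did not throw at all
}
```

Checking the message, not just the type, is what lets the tests below distinguish the many `std::invalid_argument` paths added in this commit.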
namespace mdlp { namespace mdlp {
const float margin = 1e-4; const float margin = 1e-4;
static std::string set_data_path() static std::string set_data_path()
{ {
std::string path = "../datasets/"; std::string path = "datasets/";
std::ifstream file(path + "iris.arff"); std::ifstream file(path + "iris.arff");
if (file.is_open()) { if (file.is_open()) {
file.close(); file.close();
return path; return path;
} }
return "../../tests/datasets/"; return "tests/datasets/";
} }
const std::string data_path = set_data_path(); const std::string data_path = set_data_path();
class TestBinDisc3U : public BinDisc, public testing::Test { class TestBinDisc3U : public BinDisc, public testing::Test {
@@ -153,20 +163,12 @@ namespace mdlp {
TEST_F(TestBinDisc3U, EmptyUniform) TEST_F(TestBinDisc3U, EmptyUniform)
{ {
samples_t X = {}; samples_t X = {};
fit(X); EXPECT_THROW(fit(X), std::invalid_argument);
auto cuts = getCutPoints();
ASSERT_EQ(2, cuts.size());
EXPECT_NEAR(0, cuts.at(0), margin);
EXPECT_NEAR(0, cuts.at(1), margin);
} }
TEST_F(TestBinDisc3Q, EmptyQuantile) TEST_F(TestBinDisc3Q, EmptyQuantile)
{ {
samples_t X = {}; samples_t X = {};
fit(X); EXPECT_THROW(fit(X), std::invalid_argument);
auto cuts = getCutPoints();
ASSERT_EQ(2, cuts.size());
EXPECT_NEAR(0, cuts.at(0), margin);
EXPECT_NEAR(0, cuts.at(1), margin);
} }
TEST(TestBinDisc3, ExceptionNumberBins) TEST(TestBinDisc3, ExceptionNumberBins)
{ {
@@ -406,6 +408,66 @@ namespace mdlp {
EXPECT_NEAR(exp.cutpoints_.at(i), cuts.at(i), margin); EXPECT_NEAR(exp.cutpoints_.at(i), cuts.at(i), margin);
} }
} }
std::cout << "* Number of experiments tested: " << num << std::endl; // std::cout << "* Number of experiments tested: " << num << std::endl;
}
TEST_F(TestBinDisc3U, FitDataSizeTooSmall)
{
// Test when data size is smaller than n_bins
samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
}
TEST_F(TestBinDisc3Q, FitDataSizeTooSmall)
{
// Test when data size is smaller than n_bins
samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
}
TEST_F(TestBinDisc3U, FitWithYEmptyX)
{
// Test fit(X, y) with empty X
samples_t X = {};
labels_t y = { 1, 2, 3 };
EXPECT_THROW_WITH_MESSAGE(fit(X, y), std::invalid_argument, "X cannot be empty");
}
TEST_F(TestBinDisc3U, LinspaceInvalidNumPoints)
{
// Test linspace with num < 2
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, 1.0f, 1), std::invalid_argument, "Number of points must be at least 2 for linspace");
}
TEST_F(TestBinDisc3U, LinspaceNaNValues)
{
// Test linspace with NaN values
float nan_val = std::numeric_limits<float>::quiet_NaN();
EXPECT_THROW_WITH_MESSAGE(linspace(nan_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be NaN");
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, nan_val, 3), std::invalid_argument, "Start and end values cannot be NaN");
}
TEST_F(TestBinDisc3U, LinspaceInfiniteValues)
{
// Test linspace with infinite values
float inf_val = std::numeric_limits<float>::infinity();
EXPECT_THROW_WITH_MESSAGE(linspace(inf_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be infinite");
EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, inf_val, 3), std::invalid_argument, "Start and end values cannot be infinite");
}
TEST_F(TestBinDisc3U, PercentileEmptyData)
{
// Test percentile with empty data
samples_t empty_data = {};
std::vector<precision_t> percentiles = { 25.0f, 50.0f, 75.0f };
EXPECT_THROW_WITH_MESSAGE(percentile(empty_data, percentiles), std::invalid_argument, "Data cannot be empty for percentile calculation");
}
TEST_F(TestBinDisc3U, PercentileEmptyPercentiles)
{
// Test percentile with empty percentiles
samples_t data = { 1.0f, 2.0f, 3.0f };
std::vector<precision_t> empty_percentiles = {};
EXPECT_THROW_WITH_MESSAGE(percentile(data, empty_percentiles), std::invalid_argument, "Percentiles cannot be empty");
} }
} }
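The linspace tests above pin down the guard behaviour (at least 2 points, finite non-NaN bounds); a plausible standalone sketch consistent with those error messages (the library's actual implementation may differ):

```cpp
#include <cassert>
#include <cmath>
#include <stdexcept>
#include <vector>

// Guarded linspace matching the messages exercised by the tests above:
// num evenly spaced points including both endpoints.
std::vector<float> linspace(float start, float end, int num) {
    if (num < 2)
        throw std::invalid_argument("Number of points must be at least 2 for linspace");
    if (std::isnan(start) || std::isnan(end))
        throw std::invalid_argument("Start and end values cannot be NaN");
    if (std::isinf(start) || std::isinf(end))
        throw std::invalid_argument("Start and end values cannot be infinite");
    std::vector<float> out(static_cast<std::size_t>(num));
    float step = (end - start) / static_cast<float>(num - 1);
    for (int i = 0; i < num; ++i)
        out[static_cast<std::size_t>(i)] = start + step * static_cast<float>(i);
    out.back() = end; // pin the last point against floating-point drift
    return out;
}
```

Rejecting NaN and infinity up front matters because either would otherwise propagate through every cut point and only surface later as nonsense bin edges.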


@@ -1,17 +1,12 @@
include(FetchContent)
include_directories(${GTEST_INCLUDE_DIRS}) find_package(arff-files REQUIRED)
FetchContent_Declare( find_package(GTest REQUIRED)
googletest find_package(Torch CONFIG REQUIRED)
URL https://github.com/google/googletest/archive/03597a01ee50ed33e9dfd640b249b4be3799d395.zip
)
# For Windows: Prevent overriding the parent project's compiler/linker settings
set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googletest)
include_directories( include_directories(
${TORCH_INCLUDE_DIRS} ${libtorch_INCLUDE_DIRS_DEBUG}
${fimdlp_SOURCE_DIR}/src ${fimdlp_SOURCE_DIR}/src
${fimdlp_SOURCE_DIR}/tests/lib/Files ${arff-files_INCLUDE_DIRS}
${CMAKE_BINARY_DIR}/configured_files/include ${CMAKE_BINARY_DIR}/configured_files/include
) )
@@ -22,18 +17,18 @@ target_link_options(Metrics_unittest PRIVATE --coverage)
add_executable(FImdlp_unittest FImdlp_unittest.cpp add_executable(FImdlp_unittest FImdlp_unittest.cpp
${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp) ${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp)
target_link_libraries(FImdlp_unittest GTest::gtest_main "${TORCH_LIBRARIES}") target_link_libraries(FImdlp_unittest GTest::gtest_main torch::torch)
target_compile_options(FImdlp_unittest PRIVATE --coverage) target_compile_options(FImdlp_unittest PRIVATE --coverage)
target_link_options(FImdlp_unittest PRIVATE --coverage) target_link_options(FImdlp_unittest PRIVATE --coverage)
add_executable(BinDisc_unittest BinDisc_unittest.cpp ${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp) add_executable(BinDisc_unittest BinDisc_unittest.cpp ${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp)
target_link_libraries(BinDisc_unittest GTest::gtest_main "${TORCH_LIBRARIES}") target_link_libraries(BinDisc_unittest GTest::gtest_main torch::torch)
target_compile_options(BinDisc_unittest PRIVATE --coverage) target_compile_options(BinDisc_unittest PRIVATE --coverage)
target_link_options(BinDisc_unittest PRIVATE --coverage) target_link_options(BinDisc_unittest PRIVATE --coverage)
add_executable(Discretizer_unittest Discretizer_unittest.cpp add_executable(Discretizer_unittest Discretizer_unittest.cpp
${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp ) ${fimdlp_SOURCE_DIR}/src/BinDisc.cpp ${fimdlp_SOURCE_DIR}/src/CPPFImdlp.cpp ${fimdlp_SOURCE_DIR}/src/Metrics.cpp ${fimdlp_SOURCE_DIR}/src/Discretizer.cpp )
target_link_libraries(Discretizer_unittest GTest::gtest_main "${TORCH_LIBRARIES}") target_link_libraries(Discretizer_unittest GTest::gtest_main torch::torch)
target_compile_options(Discretizer_unittest PRIVATE --coverage) target_compile_options(Discretizer_unittest PRIVATE --coverage)
target_link_options(Discretizer_unittest PRIVATE --coverage) target_link_options(Discretizer_unittest PRIVATE --coverage)


@@ -13,17 +13,26 @@
#include "BinDisc.h" #include "BinDisc.h"
#include "CPPFImdlp.h" #include "CPPFImdlp.h"
#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
try { \
stmt; \
} catch (const etype& ex) { \
EXPECT_EQ(whatstring, std::string(ex.what())); \
throw; \
} \
, etype)
namespace mdlp { namespace mdlp {
const float margin = 1e-4; const float margin = 1e-4;
static std::string set_data_path() static std::string set_data_path()
{ {
std::string path = "../datasets/"; std::string path = "tests/datasets/";
std::ifstream file(path + "iris.arff"); std::ifstream file(path + "iris.arff");
if (file.is_open()) { if (file.is_open()) {
file.close(); file.close();
return path; return path;
} }
return "../../tests/datasets/"; return "datasets/";
} }
const std::string data_path = set_data_path(); const std::string data_path = set_data_path();
const labels_t iris_quantile = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2 }; const labels_t iris_quantile = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2 };
@@ -32,8 +41,7 @@ namespace mdlp {
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM); Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto version = disc->version(); auto version = disc->version();
delete disc; delete disc;
std::cout << "Version computed: " << version; EXPECT_EQ("2.1.0", version);
EXPECT_EQ("2.0.1", version);
} }
TEST(Discretizer, BinIrisUniform) TEST(Discretizer, BinIrisUniform)
{ {
@@ -271,4 +279,110 @@ namespace mdlp {
EXPECT_EQ(computed[i], expected[i]); EXPECT_EQ(computed[i], expected[i]);
} }
} }
TEST(Discretizer, TransformEmptyData)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
samples_t empty_data = {};
EXPECT_THROW_WITH_MESSAGE(disc->transform(empty_data), std::invalid_argument, "Data for transformation cannot be empty");
delete disc;
}
TEST(Discretizer, TransformNotFitted)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
samples_t data = { 1.0f, 2.0f, 3.0f };
EXPECT_THROW_WITH_MESSAGE(disc->transform(data), std::runtime_error, "Discretizer not fitted yet or no valid cut points found");
delete disc;
}
TEST(Discretizer, TensorValidationFit)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
// Test non-1D tensors
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor types
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
// Test mismatched sizes
auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
// Test empty tensors
auto X_empty = torch::tensor({}, torch::kFloat32);
auto y_empty = torch::tensor({}, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
delete disc;
}
TEST(Discretizer, TensorValidationTransform)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
// First fit with valid data
auto X_fit = torch::tensor({ 1.0f, 2.0f, 3.0f, 4.0f }, torch::kFloat32);
auto y_fit = torch::tensor({ 1, 2, 3, 4 }, torch::kInt32);
disc->fit_t(X_fit, y_fit);
// Test non-1D tensor
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor type
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_int), std::invalid_argument, "X tensor must be Float32 type");
// Test empty tensor
auto X_empty = torch::tensor({}, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_empty), std::invalid_argument, "Tensor cannot be empty");
delete disc;
}
TEST(Discretizer, TensorValidationFitTransform)
{
Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
// Test non-1D tensors
auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
// Test wrong tensor types
auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
// Test mismatched sizes
auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
// Test empty tensors
auto X_empty = torch::tensor({}, torch::kFloat32);
auto y_empty = torch::tensor({}, torch::kInt32);
EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
delete disc;
}
} }


@@ -40,13 +40,13 @@ namespace mdlp {
static string set_data_path() static string set_data_path()
{ {
string path = "../datasets/"; string path = "datasets/";
ifstream file(path + "iris.arff"); ifstream file(path + "iris.arff");
if (file.is_open()) { if (file.is_open()) {
file.close(); file.close();
return path; return path;
} }
return "../../tests/datasets/"; return "tests/datasets/";
} }
void checkSortedVector() void checkSortedVector()
@@ -64,7 +64,7 @@ namespace mdlp {
{ {
EXPECT_EQ(computed.size(), expected.size()); EXPECT_EQ(computed.size(), expected.size());
for (unsigned long i = 0; i < computed.size(); i++) { for (unsigned long i = 0; i < computed.size(); i++) {
cout << "(" << computed[i] << ", " << expected[i] << ") "; // cout << "(" << computed[i] << ", " << expected[i] << ") ";
EXPECT_NEAR(computed[i], expected[i], precision); EXPECT_NEAR(computed[i], expected[i], precision);
} }
} }
@@ -76,7 +76,7 @@ namespace mdlp {
X = X_; X = X_;
y = y_; y = y_;
indices = sortIndices(X, y); indices = sortIndices(X, y);
cout << "* " << title << endl; // cout << "* " << title << endl;
result = valueCutPoint(0, cut, 10); result = valueCutPoint(0, cut, 10);
EXPECT_NEAR(result.first, midPoint, precision); EXPECT_NEAR(result.first, midPoint, precision);
EXPECT_EQ(result.second, limit); EXPECT_EQ(result.second, limit);
@@ -95,9 +95,9 @@ namespace mdlp {
test.fit(X[feature], y); test.fit(X[feature], y);
EXPECT_EQ(test.get_depth(), depths[feature]); EXPECT_EQ(test.get_depth(), depths[feature]);
auto computed = test.getCutPoints(); auto computed = test.getCutPoints();
cout << "Feature " << feature << ": "; // cout << "Feature " << feature << ": ";
checkCutPoints(computed, expected[feature]); checkCutPoints(computed, expected[feature]);
cout << endl; // cout << endl;
} }
} }
}; };
@@ -113,17 +113,16 @@ namespace mdlp {
{ {
X = { 1, 2, 3 }; X = { 1, 2, 3 };
y = { 1, 2 }; y = { 1, 2 };
EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size"); EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
} }
-    TEST_F(TestFImdlp, FitErrorMinLengtMaxDepth)
-    {
-        auto testLength = CPPFImdlp(2, 10, 0);
-        auto testDepth = CPPFImdlp(3, 0, 0);
-        X = { 1, 2, 3 };
-        y = { 1, 2, 3 };
-        EXPECT_THROW_WITH_MESSAGE(testLength.fit(X, y), invalid_argument, "min_length must be greater than 2");
-        EXPECT_THROW_WITH_MESSAGE(testDepth.fit(X, y), invalid_argument, "max_depth must be greater than 0");
-    }
+    TEST_F(TestFImdlp, FitErrorMinLength)
+    {
+        EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(2, 10, 0), invalid_argument, "min_length must be greater than 2");
+    }
+    TEST_F(TestFImdlp, FitErrorMaxDepth)
+    {
+        EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 0, 0), invalid_argument, "max_depth must be greater than 0");
+    }
TEST_F(TestFImdlp, JoinFit) TEST_F(TestFImdlp, JoinFit)
@@ -137,14 +136,16 @@ namespace mdlp {
checkCutPoints(computed, expected); checkCutPoints(computed, expected);
} }
TEST_F(TestFImdlp, FitErrorMinCutPoints)
{
EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 10, -1), invalid_argument, "proposed_cuts must be non-negative");
}
TEST_F(TestFImdlp, FitErrorMaxCutPoints) TEST_F(TestFImdlp, FitErrorMaxCutPoints)
{ {
auto testmin = CPPFImdlp(2, 10, -1); auto test = CPPFImdlp(3, 1, 8);
auto testmax = CPPFImdlp(3, 0, 200); samples_t X_ = { 1, 2, 2, 3, 4, 2, 3 };
X = { 1, 2, 3 }; labels_t y_ = { 0, 0, 1, 2, 3, 4, 5 };
y = { 1, 2, 3 }; EXPECT_THROW_WITH_MESSAGE(test.fit(X_, y_), invalid_argument, "wrong proposed num_cuts value");
EXPECT_THROW_WITH_MESSAGE(testmin.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
EXPECT_THROW_WITH_MESSAGE(testmax.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
} }
TEST_F(TestFImdlp, SortIndices) TEST_F(TestFImdlp, SortIndices)
@@ -166,6 +167,15 @@ namespace mdlp {
indices = { 1, 2, 0 }; indices = { 1, 2, 0 };
} }
TEST_F(TestFImdlp, SortIndicesOutOfBounds)
{
// Test for out of bounds exception in sortIndices
samples_t X_long = { 1.0f, 2.0f, 3.0f };
labels_t y_short = { 1, 2 };
EXPECT_THROW_WITH_MESSAGE(sortIndices(X_long, y_short), std::out_of_range, "Index out of bounds in sort comparison");
}
TEST_F(TestFImdlp, TestShortDatasets) TEST_F(TestFImdlp, TestShortDatasets)
{ {
vector<precision_t> computed; vector<precision_t> computed;
@@ -363,4 +373,55 @@ namespace mdlp {
EXPECT_EQ(computed_ft[i], expected[i]); EXPECT_EQ(computed_ft[i], expected[i]);
} }
} }
TEST_F(TestFImdlp, SafeXAccessIndexOutOfBounds)
{
// Test safe_X_access with index out of bounds for indices array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2, 3 };
indices = { 0, 1 }; // shorter than expected
// This should trigger the first exception in safe_X_access (idx >= indices.size())
EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for indices array");
}
TEST_F(TestFImdlp, SafeXAccessXOutOfBounds)
{
// Test safe_X_access with real_idx out of bounds for X array
X = { 1.0f, 2.0f }; // shorter array
y = { 1, 2, 3 };
indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for X
// This should trigger the second exception in safe_X_access (real_idx >= X.size())
EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for X array");
}
TEST_F(TestFImdlp, SafeYAccessIndexOutOfBounds)
{
// Test safe_y_access with index out of bounds for indices array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2, 3 };
indices = { 0, 1 }; // shorter than expected
// This should trigger the first exception in safe_y_access (idx >= indices.size())
EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for indices array");
}
TEST_F(TestFImdlp, SafeYAccessYOutOfBounds)
{
// Test safe_y_access with real_idx out of bounds for y array
X = { 1.0f, 2.0f, 3.0f };
y = { 1, 2 }; // shorter array
indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for y
// This should trigger the second exception in safe_y_access (real_idx >= y.size())
EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for y array");
}
TEST_F(TestFImdlp, SafeSubtractUnderflow)
{
// Test safe_subtract with underflow condition (b > a)
EXPECT_THROW_WITH_MESSAGE(safe_subtract(3, 5), std::underflow_error, "Subtraction would cause underflow");
}
} }
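These tests lean on `EXPECT_THROW_WITH_MESSAGE`, which is not a stock GoogleTest assertion (`EXPECT_THROW` checks only the exception type, not its text), so the suite presumably defines a helper of its own. A framework-free sketch of the logic such a macro would wrap — the function name and signature below are an assumption for illustration, not the project's actual definition:

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Returns true iff fn() throws an Exc whose what() text matches expected.
// Mirrors what an EXPECT_THROW_WITH_MESSAGE(stmt, Exc, msg) macro must verify.
template <typename Exc, typename Fn>
bool throws_with_message(Fn&& fn, const std::string& expected) {
    try {
        std::forward<Fn>(fn)();
    } catch (const Exc& e) {
        return e.what() == expected;   // right type: compare the exact message
    } catch (...) {
        return false;                  // wrong exception type
    }
    return false;                      // no exception thrown at all
}
```

In a gtest macro the same check is usually expressed by catching the exception, asserting on `e.what()`, and rethrowing inside an `EXPECT_THROW` block.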

Submodule tests/lib/Files deleted from a5316928d4
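The new `SafeXAccess*`, `SafeYAccess*`, and `SafeSubtract*` tests pin down the exact exception messages of the integrity checks added in this commit. A minimal standalone sketch consistent with those messages — in the library these are members of `CPPFImdlp` operating on its `indices`, `X`, and `y` fields, so the free-function signatures here are an assumption for illustration only:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

using precision_t = float;
using samples_t = std::vector<precision_t>;
using labels_t = std::vector<int>;
using indices_t = std::vector<std::size_t>;

// Bounds-checked indirect read of X through the sorted-order indices array.
precision_t safe_X_access(const indices_t& indices, const samples_t& X, std::size_t idx) {
    if (idx >= indices.size())
        throw std::out_of_range("Index out of bounds for indices array");
    std::size_t real_idx = indices[idx];
    if (real_idx >= X.size())
        throw std::out_of_range("Index out of bounds for X array");
    return X[real_idx];
}

// Same pattern for the labels array.
int safe_y_access(const indices_t& indices, const labels_t& y, std::size_t idx) {
    if (idx >= indices.size())
        throw std::out_of_range("Index out of bounds for indices array");
    std::size_t real_idx = indices[idx];
    if (real_idx >= y.size())
        throw std::out_of_range("Index out of bounds for y array");
    return y[real_idx];
}

// Unsigned subtraction guarded against wrap-around.
std::size_t safe_subtract(std::size_t a, std::size_t b) {
    if (b > a)
        throw std::underflow_error("Subtraction would cause underflow");
    return a - b;
}
```

Each helper validates the index twice — once against the indirection array, once against the target array — which is exactly why the tests above come in `IndexOutOfBounds`/`XOutOfBounds` (and `YOutOfBounds`) pairs.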