Mirror of https://github.com/rmontanana/mdlp.git (synced 2025-08-17 16:35:57 +00:00)

Compare commits: 4418ea8a6f...v2.1.0 (4 commits)

Author | SHA1 | Date
---|---|---
 | 08d8910b34 |
 | 6d8b55a808 |
 | c1759ba1ce |
 | f1dae498ac |
.gitignore (vendored, 1 line changed)

```diff
@@ -40,3 +40,4 @@ build_release
 cmake-*
 **/CMakeFiles
 **/gcovr-report
+CMakeUserPresets.json
```
CHANGELOG.md (34 lines changed)

```diff
@@ -5,44 +5,53 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
-## [Unreleased]
+## [2.1.0] - 2025-06-28
 
 ### Added
 
 - Conan dependency manager support
 - Technical analysis report
 
 ### Changed
 
 - Updated README.md
 - Refactored library version and installation system
 - Updated config variable names
 
 ### Fixed
 
 - Removed unneeded semicolon
 
 ## [2.0.1] - 2024-07-22
 
 ### Added
 
 - CMake install target and make install command
 - Flag to control sample building in Makefile
 
 ### Changed
 
 - Library name changed to `fimdlp`
 - Updated version numbers across test files
 
 ### Fixed
 
 - Version number consistency in tests
 
 ## [2.0.0] - 2024-07-04
 
 ### Added
 
 - Makefile with build & test actions for easier development
 - PyTorch (libtorch) integration for tensor operations
 
 ### Changed
 
 - Major refactoring of build system
 - Updated build workflows and CI configuration
 
 ### Fixed
 
 - BinDisc quantile calculation errors (#9)
 - Error in percentile method calculation
 - Integer type issues in calculations
@@ -51,19 +60,23 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [1.2.1] - 2024-06-08
 
 ### Added
 
 - PyTorch tensor methods for discretization
 - Improved library build system
 
 ### Changed
 
 - Refactored sample build process
 
 ### Fixed
 
 - Library creation and linking issues
 - Multiple GitHub Actions workflow fixes
 
 ## [1.2.0] - 2024-06-05
 
 ### Added
 
 - **Discretizer** - Abstract base class for all discretization algorithms (#8)
 - **BinDisc** - K-bins discretization with quantile and uniform strategies (#7)
 - Transform method to discretize values using existing cut points
@@ -71,11 +84,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Docker development container configuration
 
 ### Changed
 
 - Refactored system types throughout the library
 - Improved sample program with better dataset handling
 - Enhanced build system with debug options
 
 ### Fixed
 
 - Transform method initialization issues
 - ARFF file attribute name extraction
 - Sample program library binary separation
@@ -83,17 +98,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [1.1.3] - 2024-06-05
 
 ### Added
 
 - `max_cutpoints` hyperparameter for controlling algorithm complexity
 - `max_depth` and `min_length` as configurable hyperparameters
 - Enhanced sample program with hyperparameter support
 - Additional datasets for testing
 
 ### Changed
 
 - Improved constructor design and parameter handling
 - Enhanced test coverage and reporting
 - Refactored build system configuration
 
 ### Fixed
 
 - Depth initialization in fit method
 - Code quality improvements and smell fixes
 - Exception handling in value cut point calculations
@@ -101,29 +119,35 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [1.1.2] - 2023-04-01
 
 ### Added
 
 - Comprehensive test suite with GitHub Actions CI
 - SonarCloud integration for code quality analysis
 - Enhanced build system with automated testing
 
 ### Changed
 
 - Improved GitHub Actions workflow configuration
 - Updated project structure for better maintainability
 
 ### Fixed
 
 - Build system configuration issues
 - Test execution and coverage reporting
 
 ## [1.1.1] - 2023-02-22
 
 ### Added
 
 - Limits header for proper compilation
 - Enhanced build system support
 
 ### Changed
 
 - Updated version numbering system
 - Improved SonarCloud configuration
 
 ### Fixed
 
 - ValueCutPoint exception handling (removed unnecessary exception)
 - Build system compatibility issues
 - GitHub Actions token configuration
@@ -131,17 +155,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [1.1.0] - 2023-02-21
 
 ### Added
 
 - Classic algorithm implementation for performance comparison
 - Enhanced ValueCutPoint logic with same_values detection
 - Glass dataset support in sample program
 - Debug configuration for development
 
 ### Changed
 
 - Refactored ValueCutPoint algorithm for better accuracy
 - Improved candidate selection logic
 - Enhanced sample program with multiple datasets
 
 ### Fixed
 
 - Sign error in valueCutPoint calculation
 - Final cut value computation
 - Duplicate dataset handling in sample
@@ -149,6 +176,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [1.0.0.0] - 2022-12-21
 
 ### Added
 
 - Initial release of MDLP (Minimum Description Length Principle) discretization library
 - Core CPPFImdlp algorithm implementation based on Fayyad & Irani's paper
 - Entropy and information gain calculation methods
@@ -158,6 +186,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - ARFF file format support for datasets
 
 ### Features
 
 - Recursive discretization using entropy-based criteria
 - Stable sorting with tie-breaking for identical values
 - Configurable algorithm parameters
@@ -168,15 +197,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## Release Notes
 
 ### Version 2.x
 
 - **Breaking Changes**: Library renamed to `fimdlp`
 - **Major Enhancement**: PyTorch integration for improved performance
 - **New Features**: Comprehensive discretization framework with multiple algorithms
 
 ### Version 1.x
 
 - **Core Algorithm**: MDLP discretization implementation
 - **Extensibility**: Hyperparameter support and algorithm variants
 - **Quality**: Comprehensive testing and CI/CD pipeline
 
 ### Version 1.0.x
 
 - **Foundation**: Initial stable implementation
 - **Algorithm**: Core MDLP discretization functionality
```
Top-level CMake configuration:

```diff
@@ -10,12 +10,11 @@ set(CMAKE_CXX_STANDARD 17)
 cmake_policy(SET CMP0135 NEW)
 
 # Find dependencies
-find_package(Torch REQUIRED)
+find_package(Torch CONFIG REQUIRED)
 
 # Options
 # -------
 option(ENABLE_TESTING OFF)
-option(ENABLE_SAMPLE OFF)
 option(COVERAGE OFF)
 
 add_subdirectory(config)
@@ -26,21 +25,24 @@ if (NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
 set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -fno-default-inline")
 endif()
 
-if (ENABLE_TESTING)
-    message("Debug mode")
+if (CMAKE_BUILD_TYPE STREQUAL "Debug")
+    message(STATUS "Debug mode")
+else()
+    message(STATUS "Release mode")
+endif()
+
+if (ENABLE_TESTING)
+    message(STATUS "Testing is enabled")
     enable_testing()
     set(CODE_COVERAGE ON)
     set(GCC_COVERAGE_LINK_FLAGS "${GCC_COVERAGE_LINK_FLAGS} -lgcov --coverage")
     add_subdirectory(tests)
 else()
-    message("Release mode")
+    message(STATUS "Testing is disabled")
 endif()
 
-if (ENABLE_SAMPLE)
-    message("Building sample")
-    add_subdirectory(sample)
-endif()
+message(STATUS "Building sample")
+add_subdirectory(sample)
 
 include_directories(
     ${fimdlp_SOURCE_DIR}/src
@@ -62,11 +64,10 @@ write_basic_package_version_file(
 install(TARGETS fimdlp
     EXPORT fimdlpTargets
     ARCHIVE DESTINATION lib
-    LIBRARY DESTINATION lib
-    CONFIGURATIONS Release)
+    LIBRARY DESTINATION lib)
 
-install(DIRECTORY src/ DESTINATION include/fimdlp FILES_MATCHING CONFIGURATIONS Release PATTERN "*.h")
-install(FILES ${CMAKE_BINARY_DIR}/configured_files/include/config.h DESTINATION include/fimdlp CONFIGURATIONS Release)
+install(DIRECTORY src/ DESTINATION include/fimdlp FILES_MATCHING PATTERN "*.h")
+install(FILES ${CMAKE_BINARY_DIR}/configured_files/include/config.h DESTINATION include/fimdlp)
 
 install(EXPORT fimdlpTargets
     FILE fimdlpTargets.cmake
```
CMakeUserPresets.json (deleted):

```diff
@@ -1,10 +0,0 @@
-{
-    "version": 4,
-    "vendor": {
-        "conan": {}
-    },
-    "include": [
-        "build_release/build/Release/generators/CMakePresets.json",
-        "build_debug/build/Debug/generators/CMakePresets.json"
-    ]
-}
```
Makefile (57 lines changed)

```diff
@@ -1,35 +1,43 @@
 SHELL := /bin/bash
-.DEFAULT_GOAL := build
-.PHONY: build install test
+.DEFAULT_GOAL := release
+.PHONY: debug release install test conan-create viewcoverage
 lcov := lcov
 
 f_debug = build_debug
 f_release = build_release
+genhtml = genhtml
+docscdir = docs
 
-build: ## Build the project for Release
-    @echo ">>> Building the project for Release..."
-    @if [ -d $(f_release) ]; then rm -fr $(f_release); fi
-    @conan install . --build=missing -of $(f_release) -s build_type=Release --profile:build=default --profile:host=default
-    cmake -S . -B $(f_release) -DCMAKE_TOOLCHAIN_FILE=$(f_release)/build/Release/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_TESTING=OFF -DENABLE_SAMPLE=ON
-    @cmake --build $(f_release) -j 8
+define build_target
+    @echo ">>> Building the project for $(1)..."
+    @if [ -d $(2) ]; then rm -fr $(2); fi
+    @conan install . --build=missing -of $(2) -s build_type=$(1)
+    @cmake -S . -B $(2) -DCMAKE_TOOLCHAIN_FILE=$(2)/build/$(1)/generators/conan_toolchain.cmake -DCMAKE_BUILD_TYPE=$(1) -D$(3)
+    @cmake --build $(2) --config $(1) -j 8
+endef
+
+debug: ## Build Debug version of the library
+    @$(call build_target,"Debug","$(f_debug)", "ENABLE_TESTING=ON")
+
+release: ## Build Release version of the library
+    @$(call build_target,"Release","$(f_release)", "ENABLE_TESTING=OFF")
 
 install: ## Install the project
     @echo ">>> Installing the project..."
-    @cmake --build build_release --target install -j 8
+    @cmake --build $(f_release) --target install -j 8
 
 test: ## Build Debug version and run tests
     @echo ">>> Building Debug version and running tests..."
-    @if [ -d $(f_debug) ]; then rm -fr $(f_debug); fi
-    @conan install . --build=missing -of $(f_debug) -s build_type=Debug
-    @cmake -B $(f_debug) -S . -DCMAKE_BUILD_TYPE=Debug -DCMAKE_TOOLCHAIN_FILE=$(f_debug)/build/Debug/generators/conan_toolchain.cmake -DENABLE_TESTING=ON -DENABLE_SAMPLE=ON
-    @cmake --build $(f_debug) -j 8
+    @$(MAKE) debug;
+    @cp -r tests/datasets $(f_debug)/tests/datasets
     @cd $(f_debug)/tests && ctest --output-on-failure -j 8
     @cd $(f_debug)/tests && $(lcov) --capture --directory ../ --demangle-cpp --ignore-errors source,source --ignore-errors mismatch --output-file coverage.info >/dev/null 2>&1; \
     $(lcov) --remove coverage.info '/usr/*' --output-file coverage.info >/dev/null 2>&1; \
     $(lcov) --remove coverage.info 'lib/*' --output-file coverage.info >/dev/null 2>&1; \
     $(lcov) --remove coverage.info 'libtorch/*' --output-file coverage.info >/dev/null 2>&1; \
     $(lcov) --remove coverage.info 'tests/*' --output-file coverage.info >/dev/null 2>&1; \
-    $(lcov) --remove coverage.info 'gtest/*' --output-file coverage.info >/dev/null 2>&1;
+    $(lcov) --remove coverage.info 'gtest/*' --output-file coverage.info >/dev/null 2>&1; \
+    $(lcov) --remove coverage.info '*/.conan2/*' --ignore-errors unused --output-file coverage.info >/dev/null 2>&1;
     @genhtml $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(f_debug)/tests/coverage --title "Discretizer mdlp Coverage Report" -s -k -f --legend
     @echo "* Coverage report is generated at $(f_debug)/tests/coverage/index.html"
     @which python || (echo ">>> Please install python"; exit 1)
@@ -39,3 +47,24 @@ test: ## Build Debug version and run tests
     fi
     @echo ">>> Updating coverage badge..."
     @env python update_coverage.py $(f_debug)/tests
+    @echo ">>> Done"
+
+viewcoverage: ## View the html coverage report
+    @which $(genhtml) >/dev/null || (echo ">>> Please install lcov (genhtml not found)"; exit 1)
+    @if [ ! -d $(docscdir)/coverage ]; then mkdir -p $(docscdir)/coverage; fi
+    @if [ ! -f $(f_debug)/tests/coverage.info ]; then \
+        echo ">>> No coverage.info file found. Run make coverage first!"; \
+        exit 1; \
+    fi
+    @$(genhtml) $(f_debug)/tests/coverage.info --demangle-cpp --output-directory $(docscdir)/coverage --title "FImdlp Coverage Report" -s -k -f --legend >/dev/null 2>&1;
+    @xdg-open $(docscdir)/coverage/index.html || open $(docscdir)/coverage/index.html 2>/dev/null
+    @echo ">>> Done";
+
+conan-create: ## Create the conan package
+    @echo ">>> Creating the conan package..."
+    conan create . --build=missing -tf "" -s:a build_type=Release
+    conan create . --build=missing -tf "" -s:a build_type=Debug -o "&:enable_testing=False"
+    @echo ">>> Done"
```
CMakeCache.txt (deleted, 101 lines)

A stale CMake cache, generated in /home/rmontanana/Code/mdlp/build_conan by /usr/bin/cmake, was removed from the repository. It recorded a Release configuration of project fimdlp, version 2.1.0 (generator "Unix Makefiles", toolchain file conan_toolchain.cmake, CMake 3.30.8).
conanfile.py (22 lines changed)

```diff
@@ -1,7 +1,8 @@
+import os
+import re
 from conan import ConanFile
 from conan.tools.cmake import CMakeToolchain, CMake, cmake_layout, CMakeDeps
-from conan.tools.files import copy
-import os
+from conan.tools.files import load, copy
 
 
 class FimdlpConan(ConanFile):
@@ -32,14 +33,13 @@ class FimdlpConan(ConanFile):
     exports_sources = "CMakeLists.txt", "src/*", "sample/*", "tests/*", "config/*", "fimdlpConfig.cmake.in"
 
     def set_version(self):
-        # Read the CMakeLists.txt file to get the version
-        try:
-            content = load(self, "CMakeLists.txt")
-            match = re.search(r"VERSION\s+(\d+\.\d+\.\d+)", content)
-            if match:
-                self.version = match.group(1)
-        except Exception:
-            self.version = "0.0.1"  # fallback version
+        content = load(self, "CMakeLists.txt")
+        version_pattern = re.compile(r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)', re.IGNORECASE | re.DOTALL)
+        match = version_pattern.search(content)
+        if match:
+            self.version = match.group(1)
+        else:
+            raise Exception("Version not found in CMakeLists.txt")
 
     def config_options(self):
         if self.settings.os == "Windows":
@@ -51,7 +51,7 @@ class FimdlpConan(ConanFile):
 
     def requirements(self):
         # PyTorch dependency for tensor operations
-        self.requires("libtorch/2.7.0")
+        self.requires("libtorch/2.7.1")
 
     def build_requirements(self):
         self.requires("arff-files/1.2.0")  # for tests and sample
```
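The refactored `set_version` logic can be exercised in isolation. The sketch below applies the same regex to a sample `project()` declaration; the sample content and the `extract_version` helper are illustrative, not part of the repository.

```python
import re

# Same pattern as the refactored set_version(): find the VERSION argument
# of the project() call, case-insensitively, even across line breaks.
VERSION_PATTERN = re.compile(
    r'project\s*\([^\)]*VERSION\s+([0-9]+\.[0-9]+\.[0-9]+)',
    re.IGNORECASE | re.DOTALL,
)

def extract_version(cmakelists_content: str) -> str:
    match = VERSION_PATTERN.search(cmakelists_content)
    if not match:
        raise Exception("Version not found in CMakeLists.txt")
    return match.group(1)

# Hypothetical file content for illustration only.
sample = """cmake_minimum_required(VERSION 3.20)
project(fimdlp
    VERSION 2.1.0
    LANGUAGES CXX)
"""
print(extract_version(sample))  # -> 2.1.0
```

Note that the old pattern, `VERSION\s+(\d+\.\d+\.\d+)`, would also have matched the `cmake_minimum_required(VERSION 3.20)` line above; anchoring on `project(` avoids that.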
Tests CMake configuration:

```diff
@@ -1,14 +1,10 @@
 set(CMAKE_CXX_STANDARD 17)
 
-set(CMAKE_BUILD_TYPE Debug)
-
 find_package(arff-files REQUIRED)
 
 include_directories(
     ${fimdlp_SOURCE_DIR}/src
-    ${fimdlp_SOURCE_DIR}/tests/lib/Files
     ${CMAKE_BINARY_DIR}/configured_files/include
-    ${libtorch_INCLUDE_DIRS_RELEASE}
     ${arff-files_INCLUDE_DIRS}
 )
```
BinDisc implementation:

```diff
@@ -41,19 +41,15 @@ namespace mdlp {
 }
 void BinDisc::fit(samples_t& X, labels_t& y)
 {
-    // Input validation for supervised interface
-    if (X.size() != y.size()) {
-        throw std::invalid_argument("X and y must have the same size");
-    }
-    if (X.empty() || y.empty()) {
-        throw std::invalid_argument("X and y cannot be empty");
+    if (X.empty()) {
+        throw std::invalid_argument("X cannot be empty");
     }
     // BinDisc is inherently unsupervised, but we validate inputs for consistency
     // Note: y parameter is validated but not used in binning strategy
     fit(X);
 }
-std::vector<precision_t> linspace(precision_t start, precision_t end, int num)
+std::vector<precision_t> BinDisc::linspace(precision_t start, precision_t end, int num)
 {
     // Input validation
     if (num < 2) {
@@ -81,7 +77,7 @@ namespace mdlp {
 {
     return std::max(lower, std::min(n, upper));
 }
-std::vector<precision_t> percentile(samples_t& data, const std::vector<precision_t>& percentiles)
+std::vector<precision_t> BinDisc::percentile(samples_t& data, const std::vector<precision_t>& percentiles)
 {
     // Input validation
     if (data.empty()) {
```
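The `linspace` and `percentile` helpers that moved into the `BinDisc` class are the building blocks of the quantile binning strategy: evenly spaced quantile levels, then the data values at those levels. A rough Python sketch of that idea (linear-interpolation percentiles; simplified, not the library's exact C++ implementation):

```python
def linspace(start, end, num):
    # num evenly spaced values from start to end, inclusive (num >= 2)
    if num < 2:
        raise ValueError("num must be >= 2")
    step = (end - start) / (num - 1)
    return [start + i * step for i in range(num)]

def percentile(data, percentiles):
    # Linear-interpolation percentiles over the sorted data
    if not data:
        raise ValueError("data cannot be empty")
    s = sorted(data)
    result = []
    for p in percentiles:
        pos = (len(s) - 1) * p / 100.0   # fractional rank
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        frac = pos - lo
        result.append(s[lo] * (1 - frac) + s[hi] * frac)
    return result

# 4-bin quantile discretization: inner cut points at the 25/50/75 percentiles
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
levels = linspace(0.0, 100.0, 5)[1:-1]   # [25.0, 50.0, 75.0]
print(percentile(data, levels))          # -> [2.75, 4.5, 6.25]
```

Values falling between two cut points are then assigned the index of that bin; the uniform strategy differs only in taking `linspace` over the data range instead of over percentile levels.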
BinDisc header:

```diff
@@ -23,6 +23,9 @@ namespace mdlp {
     // y is included for compatibility with the Discretizer interface
     void fit(samples_t& X_, labels_t& y) override;
     void fit(samples_t& X);
+protected:
+    std::vector<precision_t> linspace(precision_t start, precision_t end, int num);
+    std::vector<precision_t> percentile(samples_t& data, const std::vector<precision_t>& percentiles);
 private:
     void fit_uniform(const samples_t&);
     void fit_quantile(const samples_t&);
```
@@ -39,7 +39,7 @@ namespace mdlp {
|
|||||||
if (proposed_cuts == 0) {
|
if (proposed_cuts == 0) {
|
||||||
return numeric_limits<size_t>::max();
|
return numeric_limits<size_t>::max();
|
||||||
}
|
}
|
||||||
if (proposed_cuts < 0 || proposed_cuts > static_cast<precision_t>(X.size())) {
|
if (proposed_cuts > static_cast<precision_t>(X.size())) {
|
||||||
throw invalid_argument("wrong proposed num_cuts value");
|
throw invalid_argument("wrong proposed num_cuts value");
|
||||||
}
|
}
|
||||||
if (proposed_cuts < 1)
|
if (proposed_cuts < 1)
|
||||||
@@ -56,7 +56,7 @@ namespace mdlp {
|
|||||||
discretizedData.clear();
|
discretizedData.clear();
|
||||||
cutPoints.clear();
|
cutPoints.clear();
|
||||||
if (X.size() != y.size()) {
|
if (X.size() != y.size()) {
|
||||||
throw invalid_argument("X and y must have the same size");
|
throw std::invalid_argument("X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
|
||||||
}
|
}
|
||||||
if (X.empty() || y.empty()) {
|
if (X.empty() || y.empty()) {
|
||||||
throw invalid_argument("X and y must have at least one element");
|
throw invalid_argument("X and y must have at least one element");
|
||||||
@@ -105,9 +105,10 @@ namespace mdlp {
|
|||||||
// # of duplicates before cutpoint
|
// # of duplicates before cutpoint
|
||||||
n = safe_subtract(safe_subtract(cut, 1), idxPrev);
|
n = safe_subtract(safe_subtract(cut, 1), idxPrev);
|
||||||
// # of duplicates after cutpoint
|
// # of duplicates after cutpoint
|
||||||
m = safe_subtract(safe_subtract(idxNext, cut), 1);
|
m = idxNext - cut - 1;
|
||||||
// Decide which values to use
|
// Decide which values to use
|
||||||
if (backWall) {
|
if (backWall) {
|
||||||
|
m = int(idxNext - cut - 1) < 0 ? 0 : m; // Ensure m right
|
||||||
cut = cut + m + 1;
|
cut = cut + m + 1;
|
||||||
} else {
|
} else {
|
||||||
cut = safe_subtract(cut, n);
|
cut = safe_subtract(cut, n);
|
||||||
|
@@ -39,8 +39,8 @@ namespace mdlp {
|
|||||||
size_t getCandidate(size_t, size_t);
|
size_t getCandidate(size_t, size_t);
|
||||||
size_t compute_max_num_cut_points() const;
|
size_t compute_max_num_cut_points() const;
|
||||||
pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t);
|
pair<precision_t, size_t> valueCutPoint(size_t, size_t, size_t);
|
||||||
private:
|
inline precision_t safe_X_access(size_t idx) const
|
||||||
inline precision_t safe_X_access(size_t idx) const {
|
{
|
||||||
if (idx >= indices.size()) {
|
if (idx >= indices.size()) {
|
||||||
throw std::out_of_range("Index out of bounds for indices array");
|
throw std::out_of_range("Index out of bounds for indices array");
|
||||||
}
|
}
|
||||||
@@ -50,7 +50,8 @@ namespace mdlp {
|
|||||||
}
|
}
|
||||||
return X[real_idx];
|
return X[real_idx];
|
||||||
}
|
}
|
||||||
inline label_t safe_y_access(size_t idx) const {
|
inline label_t safe_y_access(size_t idx) const
|
||||||
|
{
|
||||||
if (idx >= indices.size()) {
|
if (idx >= indices.size()) {
|
||||||
throw std::out_of_range("Index out of bounds for indices array");
|
throw std::out_of_range("Index out of bounds for indices array");
|
||||||
}
|
}
|
||||||
@@ -60,7 +61,8 @@ namespace mdlp {
|
|||||||
}
|
}
|
||||||
return y[real_idx];
|
return y[real_idx];
|
||||||
}
|
}
|
||||||
inline size_t safe_subtract(size_t a, size_t b) const {
|
inline size_t safe_subtract(size_t a, size_t b) const
|
||||||
|
{
|
||||||
if (b > a) {
|
if (b > a) {
|
||||||
throw std::underflow_error("Subtraction would cause underflow");
|
throw std::underflow_error("Subtraction would cause underflow");
|
||||||
}
|
}
|
||||||
|
@@ -40,9 +40,6 @@ namespace mdlp {
     void Discretizer::fit_t(const torch::Tensor& X_, const torch::Tensor& y_)
     {
         // Validate tensor properties for security
-        if (!X_.is_contiguous() || !y_.is_contiguous()) {
-            throw std::invalid_argument("Tensors must be contiguous");
-        }
         if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
             throw std::invalid_argument("Only 1D tensors supported");
         }
@@ -67,9 +64,6 @@ namespace mdlp {
     torch::Tensor Discretizer::transform_t(const torch::Tensor& X_)
     {
         // Validate tensor properties for security
-        if (!X_.is_contiguous()) {
-            throw std::invalid_argument("Tensor must be contiguous");
-        }
         if (X_.sizes().size() != 1) {
             throw std::invalid_argument("Only 1D tensors supported");
         }
@@ -88,9 +82,6 @@ namespace mdlp {
     torch::Tensor Discretizer::fit_transform_t(const torch::Tensor& X_, const torch::Tensor& y_)
     {
         // Validate tensor properties for security
-        if (!X_.is_contiguous() || !y_.is_contiguous()) {
-            throw std::invalid_argument("Tensors must be contiguous");
-        }
         if (X_.sizes().size() != 1 || y_.sizes().size() != 1) {
             throw std::invalid_argument("Only 1D tensors supported");
         }
@@ -2,7 +2,8 @@ cmake_minimum_required(VERSION 3.20)
 project(test_fimdlp)
 
 find_package(fimdlp REQUIRED)
+find_package(Torch REQUIRED)
 
 add_executable(test_fimdlp src/test_fimdlp.cpp)
-target_link_libraries(test_fimdlp fimdlp::fimdlp)
+target_link_libraries(test_fimdlp fimdlp::fimdlp torch::torch)
 target_compile_features(test_fimdlp PRIVATE cxx_std_17)

test_package/CMakeUserPresets.json (new file, 10 lines)
@@ -0,0 +1,10 @@
+{
+    "version": 4,
+    "vendor": {
+        "conan": {}
+    },
+    "include": [
+        "build/gcc-14-x86_64-gnu17-release/generators/CMakePresets.json",
+        "build/gcc-14-x86_64-gnu17-debug/generators/CMakePresets.json"
+    ]
+}
@@ -11,6 +11,16 @@
 #include <ArffFiles.hpp>
 #include "BinDisc.h"
 #include "Experiments.hpp"
+#include <cmath>
+
+#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
+    try { \
+        stmt; \
+    } catch (const etype& ex) { \
+        EXPECT_EQ(whatstring, std::string(ex.what())); \
+        throw; \
+    } \
+    , etype)
 
 namespace mdlp {
     const float margin = 1e-4;
@@ -153,20 +163,12 @@ namespace mdlp {
     TEST_F(TestBinDisc3U, EmptyUniform)
     {
         samples_t X = {};
-        fit(X);
-        auto cuts = getCutPoints();
-        ASSERT_EQ(2, cuts.size());
-        EXPECT_NEAR(0, cuts.at(0), margin);
-        EXPECT_NEAR(0, cuts.at(1), margin);
+        EXPECT_THROW(fit(X), std::invalid_argument);
     }
     TEST_F(TestBinDisc3Q, EmptyQuantile)
     {
         samples_t X = {};
-        fit(X);
-        auto cuts = getCutPoints();
-        ASSERT_EQ(2, cuts.size());
-        EXPECT_NEAR(0, cuts.at(0), margin);
-        EXPECT_NEAR(0, cuts.at(1), margin);
+        EXPECT_THROW(fit(X), std::invalid_argument);
     }
     TEST(TestBinDisc3, ExceptionNumberBins)
     {
@@ -406,6 +408,66 @@ namespace mdlp {
             EXPECT_NEAR(exp.cutpoints_.at(i), cuts.at(i), margin);
         }
     }
-        std::cout << "* Number of experiments tested: " << num << std::endl;
+        // std::cout << "* Number of experiments tested: " << num << std::endl;
+    }
+
+    TEST_F(TestBinDisc3U, FitDataSizeTooSmall)
+    {
+        // Test when data size is smaller than n_bins
+        samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
+        EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
+    }
+
+    TEST_F(TestBinDisc3Q, FitDataSizeTooSmall)
+    {
+        // Test when data size is smaller than n_bins
+        samples_t X = { 1.0, 2.0 }; // Only 2 elements for 3 bins
+        EXPECT_THROW_WITH_MESSAGE(fit(X), std::invalid_argument, "Input data size must be at least equal to n_bins");
+    }
+
+    TEST_F(TestBinDisc3U, FitWithYEmptyX)
+    {
+        // Test fit(X, y) with empty X
+        samples_t X = {};
+        labels_t y = { 1, 2, 3 };
+        EXPECT_THROW_WITH_MESSAGE(fit(X, y), std::invalid_argument, "X cannot be empty");
+    }
+
+    TEST_F(TestBinDisc3U, LinspaceInvalidNumPoints)
+    {
+        // Test linspace with num < 2
+        EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, 1.0f, 1), std::invalid_argument, "Number of points must be at least 2 for linspace");
+    }
+
+    TEST_F(TestBinDisc3U, LinspaceNaNValues)
+    {
+        // Test linspace with NaN values
+        float nan_val = std::numeric_limits<float>::quiet_NaN();
+        EXPECT_THROW_WITH_MESSAGE(linspace(nan_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be NaN");
+        EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, nan_val, 3), std::invalid_argument, "Start and end values cannot be NaN");
+    }
+
+    TEST_F(TestBinDisc3U, LinspaceInfiniteValues)
+    {
+        // Test linspace with infinite values
+        float inf_val = std::numeric_limits<float>::infinity();
+        EXPECT_THROW_WITH_MESSAGE(linspace(inf_val, 1.0f, 3), std::invalid_argument, "Start and end values cannot be infinite");
+        EXPECT_THROW_WITH_MESSAGE(linspace(0.0f, inf_val, 3), std::invalid_argument, "Start and end values cannot be infinite");
+    }
+
+    TEST_F(TestBinDisc3U, PercentileEmptyData)
+    {
+        // Test percentile with empty data
+        samples_t empty_data = {};
+        std::vector<precision_t> percentiles = { 25.0f, 50.0f, 75.0f };
+        EXPECT_THROW_WITH_MESSAGE(percentile(empty_data, percentiles), std::invalid_argument, "Data cannot be empty for percentile calculation");
+    }
+
+    TEST_F(TestBinDisc3U, PercentileEmptyPercentiles)
+    {
+        // Test percentile with empty percentiles
+        samples_t data = { 1.0f, 2.0f, 3.0f };
+        std::vector<precision_t> empty_percentiles = {};
+        EXPECT_THROW_WITH_MESSAGE(percentile(data, empty_percentiles), std::invalid_argument, "Percentiles cannot be empty");
     }
 }
@@ -1,6 +1,7 @@
 
 find_package(arff-files REQUIRED)
 find_package(GTest REQUIRED)
+find_package(Torch CONFIG REQUIRED)
 
 include_directories(
     ${libtorch_INCLUDE_DIRS_DEBUG}
@@ -13,17 +13,26 @@
 #include "BinDisc.h"
 #include "CPPFImdlp.h"
 
+#define EXPECT_THROW_WITH_MESSAGE(stmt, etype, whatstring) EXPECT_THROW( \
+    try { \
+        stmt; \
+    } catch (const etype& ex) { \
+        EXPECT_EQ(whatstring, std::string(ex.what())); \
+        throw; \
+    } \
+    , etype)
+
 namespace mdlp {
     const float margin = 1e-4;
     static std::string set_data_path()
     {
-        std::string path = "datasets/";
+        std::string path = "tests/datasets/";
         std::ifstream file(path + "iris.arff");
         if (file.is_open()) {
             file.close();
             return path;
         }
-        return "tests/datasets/";
+        return "datasets/";
     }
     const std::string data_path = set_data_path();
     const labels_t iris_quantile = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 3, 3, 3, 1, 3, 1, 2, 0, 3, 1, 0, 2, 2, 2, 1, 3, 1, 2, 2, 1, 2, 2, 2, 2, 3, 3, 3, 3, 2, 1, 1, 1, 2, 2, 1, 2, 3, 2, 1, 1, 1, 2, 2, 0, 1, 1, 1, 2, 1, 1, 2, 2, 3, 2, 3, 3, 0, 3, 3, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 3, 2, 2 };
@@ -32,7 +41,6 @@ namespace mdlp {
     Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
     auto version = disc->version();
     delete disc;
-    std::cout << "Version computed: " << version;
     EXPECT_EQ("2.1.0", version);
 }
 TEST(Discretizer, BinIrisUniform)
@@ -271,4 +279,110 @@ namespace mdlp {
         EXPECT_EQ(computed[i], expected[i]);
     }
 }
+
+    TEST(Discretizer, TransformEmptyData)
+    {
+        Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
+        samples_t empty_data = {};
+        EXPECT_THROW_WITH_MESSAGE(disc->transform(empty_data), std::invalid_argument, "Data for transformation cannot be empty");
+        delete disc;
+    }
+
+    TEST(Discretizer, TransformNotFitted)
+    {
+        Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
+        samples_t data = { 1.0f, 2.0f, 3.0f };
+        EXPECT_THROW_WITH_MESSAGE(disc->transform(data), std::runtime_error, "Discretizer not fitted yet or no valid cut points found");
+        delete disc;
+    }
+
+    TEST(Discretizer, TensorValidationFit)
+    {
+        Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
+
+        auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
+        auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
+
+        // Test non-1D tensors
+        auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
+
+        auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
+
+        // Test wrong tensor types
+        auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
+
+        auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
+
+        // Test mismatched sizes
+        auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
+
+        // Test empty tensors
+        auto X_empty = torch::tensor({}, torch::kFloat32);
+        auto y_empty = torch::tensor({}, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
+
+        delete disc;
+    }
+
+    TEST(Discretizer, TensorValidationTransform)
+    {
+        Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
+
+        // First fit with valid data
+        auto X_fit = torch::tensor({ 1.0f, 2.0f, 3.0f, 4.0f }, torch::kFloat32);
+        auto y_fit = torch::tensor({ 1, 2, 3, 4 }, torch::kInt32);
+        disc->fit_t(X_fit, y_fit);
+
+        // Test non-1D tensor
+        auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_2d), std::invalid_argument, "Only 1D tensors supported");
+
+        // Test wrong tensor type
+        auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_int), std::invalid_argument, "X tensor must be Float32 type");
+
+        // Test empty tensor
+        auto X_empty = torch::tensor({}, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->transform_t(X_empty), std::invalid_argument, "Tensor cannot be empty");
+
+        delete disc;
+    }
+
+    TEST(Discretizer, TensorValidationFitTransform)
+    {
+        Discretizer* disc = new BinDisc(4, strategy_t::UNIFORM);
+
+        auto X = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
+        auto y = torch::tensor({ 1, 2, 3 }, torch::kInt32);
+
+        // Test non-1D tensors
+        auto X_2d = torch::tensor({ {1.0f, 2.0f}, {3.0f, 4.0f} }, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_2d, y), std::invalid_argument, "Only 1D tensors supported");
+
+        auto y_2d = torch::tensor({ {1, 2}, {3, 4} }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_2d), std::invalid_argument, "Only 1D tensors supported");
+
+        // Test wrong tensor types
+        auto X_int = torch::tensor({ 1, 2, 3 }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_int, y), std::invalid_argument, "X tensor must be Float32 type");
+
+        auto y_float = torch::tensor({ 1.0f, 2.0f, 3.0f }, torch::kFloat32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_float), std::invalid_argument, "y tensor must be Int32 type");
+
+        // Test mismatched sizes
+        auto y_short = torch::tensor({ 1, 2 }, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X, y_short), std::invalid_argument, "X and y tensors must have same number of elements");
+
+        // Test empty tensors
+        auto X_empty = torch::tensor({}, torch::kFloat32);
+        auto y_empty = torch::tensor({}, torch::kInt32);
+        EXPECT_THROW_WITH_MESSAGE(disc->fit_transform_t(X_empty, y_empty), std::invalid_argument, "Tensors cannot be empty");
+
+        delete disc;
+    }
 }
@@ -64,7 +64,7 @@ namespace mdlp {
 {
     EXPECT_EQ(computed.size(), expected.size());
     for (unsigned long i = 0; i < computed.size(); i++) {
-        cout << "(" << computed[i] << ", " << expected[i] << ") ";
+        // cout << "(" << computed[i] << ", " << expected[i] << ") ";
         EXPECT_NEAR(computed[i], expected[i], precision);
     }
 }
@@ -76,7 +76,7 @@ namespace mdlp {
     X = X_;
     y = y_;
     indices = sortIndices(X, y);
-    cout << "* " << title << endl;
+    // cout << "* " << title << endl;
     result = valueCutPoint(0, cut, 10);
     EXPECT_NEAR(result.first, midPoint, precision);
     EXPECT_EQ(result.second, limit);
@@ -95,9 +95,9 @@ namespace mdlp {
     test.fit(X[feature], y);
     EXPECT_EQ(test.get_depth(), depths[feature]);
     auto computed = test.getCutPoints();
-    cout << "Feature " << feature << ": ";
+    // cout << "Feature " << feature << ": ";
     checkCutPoints(computed, expected[feature]);
-    cout << endl;
+    // cout << endl;
     }
 }
 };
@@ -113,17 +113,16 @@ namespace mdlp {
 {
     X = { 1, 2, 3 };
     y = { 1, 2 };
-    EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size");
+    EXPECT_THROW_WITH_MESSAGE(fit(X, y), invalid_argument, "X and y must have the same size: " + std::to_string(X.size()) + " != " + std::to_string(y.size()));
 }
 
-TEST_F(TestFImdlp, FitErrorMinLengtMaxDepth)
+TEST_F(TestFImdlp, FitErrorMinLength)
 {
-    auto testLength = CPPFImdlp(2, 10, 0);
-    auto testDepth = CPPFImdlp(3, 0, 0);
-    X = { 1, 2, 3 };
-    y = { 1, 2, 3 };
-    EXPECT_THROW_WITH_MESSAGE(testLength.fit(X, y), invalid_argument, "min_length must be greater than 2");
-    EXPECT_THROW_WITH_MESSAGE(testDepth.fit(X, y), invalid_argument, "max_depth must be greater than 0");
+    EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(2, 10, 0), invalid_argument, "min_length must be greater than 2");
+}
+TEST_F(TestFImdlp, FitErrorMaxDepth)
+{
+    EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 0, 0), invalid_argument, "max_depth must be greater than 0");
 }
 
 TEST_F(TestFImdlp, JoinFit)
@@ -137,14 +136,16 @@ namespace mdlp {
     checkCutPoints(computed, expected);
 }
 
+TEST_F(TestFImdlp, FitErrorMinCutPoints)
+{
+    EXPECT_THROW_WITH_MESSAGE(CPPFImdlp(3, 10, -1), invalid_argument, "proposed_cuts must be non-negative");
+}
 TEST_F(TestFImdlp, FitErrorMaxCutPoints)
 {
-    auto testmin = CPPFImdlp(2, 10, -1);
-    auto testmax = CPPFImdlp(3, 0, 200);
-    X = { 1, 2, 3 };
-    y = { 1, 2, 3 };
-    EXPECT_THROW_WITH_MESSAGE(testmin.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
-    EXPECT_THROW_WITH_MESSAGE(testmax.fit(X, y), invalid_argument, "wrong proposed num_cuts value");
+    auto test = CPPFImdlp(3, 1, 8);
+    samples_t X_ = { 1, 2, 2, 3, 4, 2, 3 };
+    labels_t y_ = { 0, 0, 1, 2, 3, 4, 5 };
+    EXPECT_THROW_WITH_MESSAGE(test.fit(X_, y_), invalid_argument, "wrong proposed num_cuts value");
 }
 
 TEST_F(TestFImdlp, SortIndices)
@@ -166,6 +167,15 @@ namespace mdlp {
     indices = { 1, 2, 0 };
 }
+
+TEST_F(TestFImdlp, SortIndicesOutOfBounds)
+{
+    // Test for out of bounds exception in sortIndices
+    samples_t X_long = { 1.0f, 2.0f, 3.0f };
+    labels_t y_short = { 1, 2 };
+    EXPECT_THROW_WITH_MESSAGE(sortIndices(X_long, y_short), std::out_of_range, "Index out of bounds in sort comparison");
+}
+
+
 TEST_F(TestFImdlp, TestShortDatasets)
 {
     vector<precision_t> computed;
@@ -363,4 +373,55 @@ namespace mdlp {
         EXPECT_EQ(computed_ft[i], expected[i]);
     }
 }
+    TEST_F(TestFImdlp, SafeXAccessIndexOutOfBounds)
+    {
+        // Test safe_X_access with index out of bounds for indices array
+        X = { 1.0f, 2.0f, 3.0f };
+        y = { 1, 2, 3 };
+        indices = { 0, 1 }; // shorter than expected
+
+        // This should trigger the first exception in safe_X_access (idx >= indices.size())
+        EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for indices array");
+    }
+
+    TEST_F(TestFImdlp, SafeXAccessXOutOfBounds)
+    {
+        // Test safe_X_access with real_idx out of bounds for X array
+        X = { 1.0f, 2.0f }; // shorter array
+        y = { 1, 2, 3 };
+        indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for X
+
+        // This should trigger the second exception in safe_X_access (real_idx >= X.size())
+        EXPECT_THROW_WITH_MESSAGE(safe_X_access(2), std::out_of_range, "Index out of bounds for X array");
+    }
+
+    TEST_F(TestFImdlp, SafeYAccessIndexOutOfBounds)
+    {
+        // Test safe_y_access with index out of bounds for indices array
+        X = { 1.0f, 2.0f, 3.0f };
+        y = { 1, 2, 3 };
+        indices = { 0, 1 }; // shorter than expected
+
+        // This should trigger the first exception in safe_y_access (idx >= indices.size())
+        EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for indices array");
+    }
+
+    TEST_F(TestFImdlp, SafeYAccessYOutOfBounds)
+    {
+        // Test safe_y_access with real_idx out of bounds for y array
+        X = { 1.0f, 2.0f, 3.0f };
+        y = { 1, 2 }; // shorter array
+        indices = { 0, 1, 5 }; // indices[2] = 5 is out of bounds for y
+
+        // This should trigger the second exception in safe_y_access (real_idx >= y.size())
+        EXPECT_THROW_WITH_MESSAGE(safe_y_access(2), std::out_of_range, "Index out of bounds for y array");
+    }
+
+    TEST_F(TestFImdlp, SafeSubtractUnderflow)
+    {
+        // Test safe_subtract with underflow condition (b > a)
+        EXPECT_THROW_WITH_MESSAGE(safe_subtract(3, 5), std::underflow_error, "Subtraction would cause underflow");
+    }
+
+
 }