Analysis of the feasibility level of IT device using K-Means cluster and C4.5 classification
DOI: https://doi.org/10.35134/komtekinfo.v13i1.673

Keywords: feasibility level, IT device, K-Means clustering, C4.5 classification

Abstract
The availability of reliable laptops is essential for ensuring smooth business operations; however, decisions regarding device upgrades and replacements in many organizations still rely primarily on device age and subjective user perceptions. This practice often leads to inconsistent IT asset lifecycle decisions, increased security risks, and inefficient cost management. This study proposes a classification model to recommend laptop feasibility levels, namely usable, requires upgrade, and requires replacement, based on a combination of technical specifications and operating system characteristics. K-Means clustering is applied to group laptops into three feasibility categories using processor type, release year, RAM capacity, storage type, and operating system attributes that have undergone performance score–based ordinal encoding and Min–Max normalization. Subsequently, the C4.5 algorithm is employed to construct a decision tree using the K-Means cluster labels as target classes, producing interpretable if–then rules that describe device feasibility patterns. The dataset is obtained from the IT device inventory of PT Semen Indonesia and consists of 1,905 laptop records, which, after data cleaning, yield 85 unique specification combinations for analysis. The clustering process classifies 47 laptops as usable, 22 as requiring upgrades, and 16 as requiring replacement. The C4.5 model achieves accuracy, precision, recall, and F1-score values of 100% on the test data, indicating its ability to effectively replicate the feasibility patterns generated by the K-Means algorithm. These findings demonstrate that the proposed approach provides a data-driven framework for supporting upgrade and replacement decisions, contributing to more efficient and measurable IT asset lifecycle management.
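The pipeline described in the abstract (ordinal encoding, Min–Max normalization, K-Means with three clusters, then a decision tree trained on the cluster labels) can be sketched as follows. This is a minimal illustration on hypothetical toy data, not the paper's dataset or code; scikit-learn's CART-based `DecisionTreeClassifier` is used here as a stand-in for C4.5, and the feature names and ordinal scores are assumptions.

```python
# Sketch of the cluster-then-classify pipeline: ordinal-encoded laptop
# specifications are Min-Max normalized, grouped into three clusters with
# K-Means, and the cluster labels are then learned by a decision tree,
# from which if-then rules can be read off.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical ordinal scores: [processor, release_year, ram_gb, storage, os]
X = np.array([
    [4, 2023, 16, 2, 3],   # recent, well-specified laptop
    [3, 2020, 8,  2, 2],
    [2, 2017, 8,  1, 2],
    [1, 2014, 4,  1, 1],   # old, low-spec laptop
    [4, 2022, 32, 2, 3],
    [1, 2013, 4,  1, 1],
], dtype=float)

# Scale every attribute to [0, 1] so no single feature dominates the
# Euclidean distances used by K-Means
X_norm = MinMaxScaler().fit_transform(X)

# Group the devices into three feasibility clusters
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_norm)

# Fit an interpretable tree on the cluster labels and print its rules
tree = DecisionTreeClassifier(random_state=0).fit(X_norm, labels)
print(export_text(tree, feature_names=["cpu", "year", "ram", "storage", "os"]))
```

On such small, well-separated data the tree reproduces the cluster labels exactly on its training set, which mirrors the paper's observation that the classifier can replicate the K-Means feasibility patterns.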
References
. M. Jasiulewicz-Kaczmarek, A. Saniuk, and T. Nowicki, “Asset lifecycle management and sustainability-oriented decision-making in industrial systems,” Sustainability, vol. 15, no. 4, pp. 1–18, 2023.
. B. Bhurtel and D. Rawat, “IT asset lifecycle management and risk-based decision support,” Journal of Information Systems Management, vol. 40, no. 2, pp. 123–137, 2023.
. J. Kim, H. Lee, and S. Park, “Data-driven asset management using machine learning techniques,” IEEE Access, vol. 12, pp. 45678–45690, 2024.
. C. Parra, L. Martínez, and J. Ruiz, “Lifecycle-based decision models for information technology assets,” Decision Support Systems, vol. 176, pp. 113915, 2024.
. A. Darmawan, R. Pratama, and M. Hidayat, “Application of data mining for IT asset feasibility analysis,” Journal of Information Technology Systems, vol. 9, no. 1, pp. 45–56, 2024.
. A. Rindu, S. Wahyuni, and F. Nugroho, “Decision support systems for hardware replacement planning,” Indonesian Journal of Computing, vol. 8, no. 2, pp. 89–101, 2023.
. T. Bold and G. Urschel, “Managing IT assets through analytics and predictive models,” Information Systems Management, vol. 40, no. 3, pp. 201–214, 2023.
. J. Ortega-Guzmán, M. López, and A. Torres, “Knowledge discovery frameworks for asset evaluation,” Expert Systems with Applications, vol. 237, pp. 121393, 2024.
. A. H. Pitafi, M. Kanwal, and A. Khan, “Performance analysis of K-Means clustering in applied data mining,” Applied Soft Computing, vol. 136, pp. 110087, 2023.
. H. Blockeel, J. Vanschoren, and A. Bifet, “Decision tree learning: Recent advances and practical considerations,” Machine Learning, vol. 112, no. 7, pp. 2761–2791, 2023.
. M. Jusia, R. Siregar, and D. Putra, “Optimization of C4.5 decision tree using metaheuristic approaches,” Journal of Computer Science and Applications, vol. 15, no. 3, pp. 210–221, 2024.
. M. Golazad, A. Rahimi, and H. Karimi, “A comprehensive review of data preprocessing techniques for machine learning,” Data Science and Analytics, vol. 9, no. 1, pp. 1–20, 2024.
. M. Ibrahimi, A. Bouali, and L. Boussaid, “Feature scaling and encoding strategies in machine learning models,” Journal of Big Data, vol. 10, no. 1, pp. 1–22, 2023.
. X. Gong, Y. Li, and Z. Wang, “Impact of data quality and preprocessing on machine learning performance,” Knowledge-Based Systems, vol. 268, pp. 110407, 2023.
. A. Pinheiro, R. Silva, and P. Costa, “Evaluation of feature normalization techniques in classification tasks,” Pattern Recognition Letters, vol. 176, pp. 1–9, 2025.
. M. Côté, J. M. Bouchard, and F. Saubion, “A systematic survey on data cleaning for machine learning,” ACM Computing Surveys, vol. 56, no. 2, pp. 1–36, 2024.
. A. Borrohou, K. El Yassini, and M. Azizi, “Outlier detection and handling in data preprocessing,” Procedia Computer Science, vol. 219, pp. 328–335, 2023.
. N. Wongoutong, S. Thongkam, and P. Meesad, “Effect of feature scaling on distance-based clustering algorithms,” International Journal of Data Mining & Knowledge Management Process, vol. 14, no. 1, pp. 15–27, 2024.
. K. Rashmi, R. Kumar, and S. Patel, “Comparative study of normalization techniques for clustering algorithms,” Journal of Intelligent Systems, vol. 33, no. 1, pp. 245–258, 2024.
. J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, 3rd ed. San Francisco, CA, USA: Morgan Kaufmann, 2012.
. S. Lee, J. Park, and H. Choi, “Z-score normalization effects on K-Means clustering for time-series data,” IEEE Access, vol. 12, pp. 99871–99883, 2024.
. D. Ulqinaku and S. Ktona, “Stability and interpretability of gain ratio–based decision trees,” Artificial Intelligence Review, vol. 57, no. 1, pp. 1–29, 2024.
License
Copyright (c) 2026 Jurnal KomtekInfo

This work is licensed under a Creative Commons Attribution 4.0 International License.