Comparative Analysis of Deep Learning Architectures for Multi-Disease Classification of Single-Label Chest X-rays
DOI: https://doi.org/10.31530/cjnst.2026.2.1.2

Keywords: Deep Learning, Chest X-ray, Transfer Learning, Medical Imaging, Convolutional Neural Networks, Disease Classification

Abstract
Background: Chest X-ray imaging is the most widely used diagnostic technique for pulmonary and cardiac disorders in healthcare systems worldwide, owing to its low cost and ease of use. However, diagnostic accuracy is severely hampered by a shortage of radiologists and by inter-observer variability, problems that are exacerbated in resource-constrained settings. Although deep learning methods have shown great promise for disease classification, rigorous comparative evaluations of contemporary architectures on balanced multi-class chest disease data remain scarce.
Aims: This work examines seven popular deep learning models for multi-class chest X-ray classification, with an emphasis on trade-offs between performance metrics and computational efficiency. The findings are intended to inform deployment decisions in healthcare settings with diverse resource availability.
Methodology: The architectures studied were ConvNeXt-Tiny, DenseNet121, DenseNet201, ResNet50, Vision Transformer (ViT-B/16), EfficientNetV2-M, and MobileNetV2. A comprehensive dataset was assembled from existing repositories, consisting of 13,108 training images, 1,455 validation images, and 3,517 test images covering five conditions: Cardiomegaly, COVID-19, Normal, Pneumonia, and Tuberculosis. All models were initialized with ImageNet-pretrained weights and trained under consistent settings, including standardized preprocessing, data augmentation, and optimization hyperparameters. Model performance was evaluated using AUROC, overall accuracy, precision, recall, F1-score, and computational efficiency.
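The multi-class metrics listed above can be computed directly from a confusion matrix. The sketch below shows a minimal, dependency-free implementation of macro-averaged precision, recall, and F1 alongside overall accuracy; the 3x3 confusion matrix is an invented toy example, not data from the study.

```python
# Macro-averaged precision, recall, and F1 from a confusion matrix.
# cm[i][j] = number of samples with true class i predicted as class j.
# The matrix values below are illustrative only.

def macro_metrics(cm):
    n = len(cm)
    precisions, recalls, f1s = [], [], []
    for k in range(n):
        tp = cm[k][k]                                   # true positives for class k
        fp = sum(cm[i][k] for i in range(n)) - tp       # column sum minus diagonal
        fn = sum(cm[k][j] for j in range(n)) - tp       # row sum minus diagonal
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p); recalls.append(r); f1s.append(f1)
    # Macro averaging weights every class equally, which suits balanced data.
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 2, 47]]

p, r, f1 = macro_metrics(cm)
accuracy = sum(cm[k][k] for k in range(3)) / sum(sum(row) for row in cm)
print(f"macro precision={p:.3f}, recall={r:.3f}, F1={f1:.3f}, accuracy={accuracy:.3f}")
```

Macro averaging (rather than micro or weighted averaging) is a natural choice here because the dataset is balanced across the five conditions, so each class contributes equally to the summary score.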
Results: All seven evaluated architectures achieved test accuracies above 90%. ConvNeXt-Tiny performed best, with a validation AUROC of 98.64% and a test accuracy of 92.31%. ResNet50 followed closely at 92% test accuracy, while ViT-B/16 reached 91.87%. Notably, MobileNetV2 emerged as the most parameter-efficient alternative, with only 3.50 million parameters. Despite its small size, it achieved a test AUROC of 94.10% and obtained the highest efficiency rating in our study. This lightweight architecture reached around 98.3% of the accuracy of the best-performing model while using 87.5% fewer parameters, which has important implications for deployment in resource-constrained contexts.
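The efficiency claim can be sanity-checked with simple arithmetic. In the sketch below, the ~28M parameter count for the best-performing model is an assumption inferred from the stated "87.5% fewer parameters" figure, not a number taken from the paper:

```python
# Back-of-envelope check of the parameter-efficiency claim.
# The 28M figure for the best model is assumed (implied by "87.5% fewer");
# MobileNetV2's 3.50M and the 92.31% best accuracy are from the abstract.

mobilenet_params = 3.50e6
best_model_params = 28.0e6          # assumed, inferred from the 87.5% figure
best_test_acc = 92.31               # ConvNeXt-Tiny test accuracy (%)

param_reduction = 1 - mobilenet_params / best_model_params
implied_mobilenet_acc = 0.983 * best_test_acc   # "around 98.3% of the accuracy"

print(f"parameter reduction: {param_reduction:.1%}")            # 87.5%
print(f"implied MobileNetV2 accuracy: {implied_mobilenet_acc:.2f}%")
```

The implied MobileNetV2 test accuracy (~90.7%) is consistent with the abstract's statement that all seven models exceeded 90%.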
Conclusion: These findings show that excellent accuracy in multi-disease classification of chest X-ray images is achievable without substantial computational resources. This has important implications for the practical integration of deep learning as a diagnostic aid across healthcare settings, both resource-rich and resource-constrained. Architecture selection should therefore consider the available infrastructure as well as the specific characteristics of each deployment scenario.
Copyright (c) 2026 Charmo Journal of Natural Sciences and Technologies

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


