Abstract
Background
- Intracranial pressure (ICP) waveform analysis provides critical insights into brain compliance and can aid in the early detection of neurological deterioration. Deep learning (DL) has recently emerged as an effective approach for analyzing complex medical signals and imaging data. The aim of the present research was to develop a DL-based model for detecting ICP waveforms indicative of poor brain compliance.
Methods
- A retrospective cohort study was conducted using ICP wave images collected from postoperative hydrocephalus (HCP) patients who underwent ventriculostomy. The images were categorized into normal and poor compliance waveforms. Precision, recall, mean average precision at the 0.5 intersection over union (mAP_0.5), and the area under the receiver operating characteristic curve (AUC) were used to evaluate model performance.
Results
- The dataset consisted of 2,744 ICP wave images from 21 HCP patients. The best-performing model achieved a precision of 0.97, a recall of 0.96, and an mAP_0.5 of 0.989. The confusion matrix for poor brain compliance waveform detection using the test dataset also demonstrated high classification accuracy, with true positive and true negative rates of 48.5% and 47.8%, respectively. Additionally, in the detection of poor compliance waveforms on the test dataset, the model achieved an mAP_0.5 of 0.994, a sensitivity of 0.956, a specificity of 0.970, and an AUC of 0.96.
Conclusions
- The DL-based model successfully detected pathological ICP waveforms, thereby enhancing clinical decision-making. As DL advances, its significance in neurocritical care will help to pave the way for more individualized and data-driven approaches to brain monitoring and management.
Key Words: deep learning; detection algorithms; hydrocephalus; intracranial hypertension; intracranial pressure
INTRODUCTION
Intracranial pressure (ICP) monitoring is a crucial tool in the management of patients with neurological conditions, including traumatic brain injury, hydrocephalus (HCP), and intracranial hemorrhage [1,2]. High ICP is defined as an ICP of more than 20–22 mm Hg [3], and numerous treatments are available to lower it and thereby prevent additional brain damage. Furthermore, analysis of ICP waveforms can reveal cerebral compliance [4]. The ICP waveform consists of three characteristic components: P1 (percussion wave) is caused by arterial pulsations, reflects blood ejection from the heart, and is normally the highest peak in a healthy brain; P2 (tidal wave) reflects cerebral compliance, the brain's ability to adjust to volume changes [4,5]; and P3 (dicrotic wave) reflects venous outflow and corresponds to the dicrotic notch of the arterial pulse [4-6]. However, when brain compliance is diminished due to increased ICP, P2 rises and may exceed P1, indicating poor compensatory mechanisms and probable neurological impairment [5,6].
Although early detection of pathological brain dysfunction has been accomplished through recognition of pathological ICP waveforms, ICP waveform interpretation can be limited by interobserver variability and real-time clinical requirements. In recent years, deep learning (DL) has enabled the development of powerful models capable of detecting complex patterns in medical signals and images. Ramesh et al. [7] utilized a YOLOv5-based DL model to identify microsurgical equipment in microscopic videos, achieving a mean average precision (mAP) of 0.932. Jaruenpunyasak et al. [8] used a DL-based model to differentiate between glioblastoma and primary central nervous system lymphoma in corpus callosal tumors; the area under the receiver operating characteristic (ROC) curve (AUC) of these DL models ranged from 0.77 to 0.83. Moreover, Tunthanathip et al. [9] compared the diagnostic performance of various DL architectures for the different types of pineal region tumors. The standard convolutional neural network architecture achieved the highest AUC of 0.96, the LeNet model recorded an AUC of 0.95, and the Densely Connected Convolutional Network and Vision Transformer architectures attained AUC values of 0.87 and 0.80, respectively.
In addition, recent studies have used DL approaches to predict ICP values. Nair et al. [10] estimated ICP values using arterial blood pressure (ABP), photoplethysmography, and electrocardiography, with a mean absolute error of 1.34 (±0.59) mm Hg for single-patient models and 5.10 (±0.11) mm Hg for multi-patient models. Lei et al. [11] reconstructed ICP waveforms from ABP and ICP signals using the Wave-U-Net DL model, with a mean absolute error of 0.42±0.18 mm Hg for the reconstructed ICP. Nevertheless, the literature review revealed a lack of evidence regarding DL-based models applied for detecting pathological ICP waveforms.
Because ICP waveform analysis can be combined with ICP values to aid in the early diagnosis and warning of neurological degeneration in the future, the aim of the present research was to develop a DL-based model for detecting ICP waveforms indicative of poor brain compliance.
MATERIALS AND METHODS
This research was conducted in accordance with the Declaration of Helsinki and was approved by the Human Research Ethics Committee of the Faculty of Medicine, Prince of Songkla University (REC.65-249-10-1). As a retrospective analysis, the study did not require informed consent from patients. To ensure confidentiality, all patient identity numbers were encoded before the training process.
Study Design and Patient Selection
This retrospective cohort study examined HCP patients who underwent ventriculostomy and had video footage of postoperative ICP monitoring recorded at our institution between January 2021 and December 2023. Patients were excluded if their medical records were unavailable.
Object Detection Model Training
ICP wave images were collected from the video footage and then categorized into two classes, normal waves and poor compliance waves, using the Roboflow application (Roboflow Inc.). The total dataset consisted of 2,744 ICP wave images from 21 HCP patients, as illustrated in Figure 1. The dataset was randomly partitioned into a training set (70% of the images), a validation set (20%), and a test set (the remaining 10%). The YOLOv5 model (Ultralytics Inc.) was employed to train the model for the detection of ICP waveforms with poor brain compliance. Images from the training dataset were resized to 416x416 pixels. A variety of parameters were employed to train and fine-tune the model.
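The 70/20/10 partition described above can be sketched as follows. This is an illustrative sketch only: the function name, random seed, and use of `round` are assumptions, as the study does not report its actual partitioning code.

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.20, seed=42):
    """Randomly partition items into training, validation, and test subsets.

    Hypothetical sketch of a 70/20/10 split; the study's actual
    partitioning procedure and random seed are not reported.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(train_frac * n)
    n_val = round(val_frac * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # the remaining ~10%
    return train, val, test

# With 2,744 images this yields 1,921 / 549 / 274 images,
# matching the validation and test set sizes reported in Table 2.
train, val, test = split_dataset(range(2744))
```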
Operational Definitions
Poor brain compliance waveforms were labeled based on visual characteristics, specifically when the P2 waveform exceeded P1 in amplitude, as confirmed by two independent neurosurgeons. In detail, the two neurosurgeons independently reviewed and labeled the waveform images based on predefined criteria for poor brain compliance. The labeling process was blinded between reviewers to prevent bias. In cases of disagreement, the images were subsequently reviewed in consensus meetings to reach an agreement. This consensus-based approach ensured consistent and clinically valid labeling. To quantify inter-rater reliability before consensus, Cohen's kappa coefficient was calculated. According to Cohen's guidelines, kappa values of 0 or less indicate no agreement, 0.01–0.20 none to slight, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement [12].
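The inter-rater statistic above can be computed directly from two raters' label lists. The labels in this sketch are hypothetical toy data, not the study's annotations:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal label frequencies.
    """
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from marginal frequencies of each label.
    labels = set(rater1) | set(rater2)
    p_e = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: "P" = poor compliance, "N" = normal.
a = ["P", "P", "N", "N"]
b = ["P", "P", "N", "P"]
kappa = cohens_kappa(a, b)  # 0.5 for this toy example
```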
The model development process involved the evaluation of numerous parameters, which were defined as follows. The loss function for optimizing object detection models comprises box loss, objectness loss, and class loss. Box loss measures the discrepancy between the predicted and ground truth bounding boxes; it typically combines loss components such as generalized intersection over union (IoU) or Complete IoU to ensure precise object localization while penalizing incorrect placements and size mismatches. Objectness loss measures the confidence score error of the detection model in distinguishing between object and non-object regions within the input image; it ensures that the model assigns higher confidence scores to regions containing objects while minimizing false detections in background areas. Additionally, class loss evaluates the errors in predicted class probabilities for detected objects, which allows for the estimation of recall and precision (positive predictive value [PPV]) [13].
mAP at the 0.5 IoU (mAP_0.5) is defined as the average precision (AP) across all object classes at a fixed IoU threshold of 0.5. In detail, IoU is a measure of the overlap between the predicted and the ground truth bounding boxes. The AP is calculated as the area under the precision-recall curve for each class, and the mAP_0.5 is obtained by averaging the APs across all classes [13]. mAP at the 0.5–0.95 IoU (mAP_0.5:0.95) is calculated for the APs at multiple IoU thresholds ranging from 0.5 to 0.95. The final mAP score is obtained by averaging the AP values across all IoU thresholds and object classes [13].
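The IoU overlap measure defined above can be written directly. Boxes are given here as (x1, y1, x2, y2) corner coordinates, a common convention assumed for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# At the mAP_0.5 threshold, a prediction counts as a match
# when its IoU with the ground truth box is at least 0.5.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))  # 1/3: intersection 50, union 150
```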
For the evaluation of the model's prediction performance, the confusion matrix and related metrics were estimated using the test dataset. The confusion matrix assessed the model's classification performance by comparing the predicted labels with the actual classes and consisted of four key components: true positives, false positives, false negatives, and true negatives. Moreover, the related metrics were evaluated, including sensitivity (recall), specificity, PPV, negative predictive value (NPV), accuracy, and F1 score [14]. In addition, the ROC curve with AUC was created to evaluate the detection model. Better classification performance is indicated by an AUC value closer to 1; AUC values above 0.8 are typically considered acceptable, whereas values below 0.7 may indicate the need for model refinement [14,15].
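The related metrics above can all be derived from the four confusion-matrix cell counts. The counts in this sketch are hypothetical; the study reports only rates, not the raw cell counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                    # precision
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, accuracy=accuracy, f1=f1)

# Hypothetical counts for a 274-image test set, chosen for illustration.
m = classification_metrics(tp=133, fp=4, fn=6, tn=131)
```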
Statistical Analysis
The baseline clinical characteristics were evaluated using descriptive statistics. Continuous variables were summarized using the mean and standard deviation (SD) or the median and interquartile range (IQR), whereas percentages were used to describe categorical data. The analyses were conducted using the R program version 4.4.0 (R Foundation). The detection model for poor brain compliance waveforms was trained and tested in Python (Python Software Foundation) with YOLO version 5 (Ultralytics Inc.) through Google Colab (Google), and the performance measures obtained from training, validation, and testing were likewise examined in Python.
Model Deployment as an Online Application
The best-performing model from the development procedures was deployed as a web application for clinical practice using Gradio software (Hugging Face Inc.).
RESULTS
Patient Characteristics
Twenty-one HCP patients who underwent ventriculostomy were included in the present study, and their characteristics are listed in Table 1. Males were slightly more common than females, and the median age was 49 years (interquartile range, 36–59 years). Common causes of HCP were intracranial tumors, subarachnoid hemorrhage, and brain abscess. The intracranial tumors were located in the cerebellum, pineal region, and cerebellopontine angle. In total, 2,744 images were collected from the postoperative ICP monitoring footage: 1,302 normal (N) waveform images and 1,442 poor brain compliance (P) waveform images. Inter-rater reliability following annotation of the P waveform images by the two neurosurgeons, assessed with Cohen's kappa, was almost perfect (κ=0.89).
The performance metrics of the model's refinement using the validation dataset during training are shown in Figure 2. As the number of epochs increased, the box loss, objectness loss, and class loss decreased steadily, indicating that the model was learning to predict bounding-box locations, detect objects, and classify waveforms with improving accuracy. Additionally, the plots of precision, recall, and mAP_0.5 all rose rapidly and then stabilized at high values by the final epoch. As shown in Table 2, the mAP_0.5 and mAP_0.5:0.95 values for the P waveform in the validation dataset were 0.989 and 0.849, respectively.
The confusion matrix for P waveform detection using the test dataset is illustrated in Figure 3; the true positive and true negative rates were 48.5% and 47.8%, respectively. As shown in Table 3, the sensitivity, specificity, PPV, NPV, accuracy, and F1 score were 0.956, 0.970, 0.970, 0.956, 0.963, and 0.963, respectively. Moreover, Figure 4 illustrates the ROC curve for P waveform detection, which had an AUC of 0.96.
Online Application
The Gradio application was employed to deploy the best model. The web application implementation instructions can be accessed at: https://huggingface.co/spaces/thara7640/icpwaveform. This web application may be accessed from both laptops and smartphones via a URL created with Python scripts, as illustrated in Figure 5.
DISCUSSION
The findings of this study demonstrate that a DL-based model can effectively detect ICP waveforms indicative of poor brain compliance with high accuracy and PPV. These results are in concordance with prior studies. Mataczynski et al. [16] utilized DL models to differentiate between normal and abnormal ICP waveforms, achieving accuracy rates ranging from 0.82 to 0.93. Furthermore, a review of the literature has highlighted the challenges associated with integrating DL into clinical practice for ICP waveform analysis. Fong et al. [17] used several clinical characteristics and the ICP of intensive care unit patients to predict ICP values, and the mean square error of the predicted ICP was 3.56–4.51 mm Hg. Hu et al. [18] devised an algorithm to distinguish pre-intracranial hypertension patterns, and their model achieved high specificity. This development is especially significant in critical care settings, where rapid and precise monitoring of intracranial dynamics can be lifesaving. The application of this approach may result in early recognition of declining brain compliance, prompt intervention, and subsequently improved patient outcomes [19].
Despite these favorable results, numerous challenges must be addressed prior to the implementation of this tool in clinical practice. The wide variety of ICP waveforms across patient populations, underlying diseases, and monitoring settings challenges model generalization [9]. Signal artifacts and noise from invasive monitoring systems can also impact the accuracy of waveform interpretation. Shen et al. [20] employed a DL model to identify underwater objects but encountered poor imaging quality, harsh underwater environments, and concealed underwater targets. Additionally, DL-based detection is influenced by a variety of factors, including an object's deformation, its change over time, its small size, and the presence of overlapping objects [21]. Future research should therefore focus on enhancing image preprocessing techniques and adding more unseen datasets to refine the model before its implementation in clinical practice [22].
Although the DL-based model's detection performance was remarkable, the limitations of the present study should be acknowledged. The dataset comprised 2,744 ICP waveform images obtained from 21 patients, which is considered sufficient for a proof-of-concept investigation. While there is no absolute minimum number of images required for DL-based object detection, the requirement depends on several factors, including the number of classes, image diversity, and model complexity. Prior studies in medical imaging have successfully demonstrated model feasibility with datasets of comparable size (2,000–5,000 images). Mataczynski et al. [16] used 2,165–5,499 images to classify five ICP waveform morphologies (normal, possibly pathological, likely pathological, pathological, and artifact/error waveforms) using a DL model. In addition, Ramesh et al. [7] used 240–4,756 images from microscopic neurosurgical videos to train a YOLOv5 model for tool detection. Therefore, the use of 2,744 ICP waveform images was considered satisfactory for the proof-of-concept in the present study.
Overfitting may be a concern, since randomly dividing frames from the same footage into training, validation, and test datasets may have placed frames from the same condition in several datasets. Because the sample size of the study population was limited, patient-wise splitting, which confines all images from a single patient to one dataset, could not be performed in the present investigation [23,24]. A patient-wise split has been proposed to reduce data leakage and improve generalizability [25]. However, when the total number of patients is small, patient-wise splitting may leave very few patients in the test or validation sets, which can destabilize performance measurements and reduce statistical power. Hence, we decided to utilize traditional data splitting; a multi-center study with a larger number of patients and patient-wise splitting should be conducted in the future to establish generalizability [26,27].
The open-source platform used for the web application limits real-time detection, and automated, real-time online applications remain a challenge for convenient clinical use. Real-time inference capability should nonetheless be addressable; YOLOv5 has shown inference speeds as low as 7–10 ms per image on GPU-enabled systems [28,29]. Furthermore, because the DL model in the current investigation was trained on postoperative HCP cases, it may not generalize to other etiologies without additional training and validation. Future research should explore real-time monitoring with YOLOv5 in real-world intensive care unit settings, as well as the model's validity for P waveforms of other etiologies. Additionally, a comparative study between the YOLOv5-based model and other DL architectures, such as EfficientNet or Transformer-based models, may be conducted to evaluate performance differences in ICP waveform detection.
Regarding the implications, the integration of DL-based ICP waveform analysis into bedside monitoring systems has the potential to revolutionize neurocritical care. A real-time DL-driven system that detects the early warning signs of intracranial hypertension and poor brain compliance should therefore be developed and tested in the future to validate clinical relevance, such as preventing delayed cerebral ischemia and guiding timely intervention [19,30,31]. Additionally, combining ICP waveform analysis with other physiological parameters, such as cerebral perfusion pressure and brain oxygenation, may further improve the prediction potential of artificial intelligence tools such as time series data analysis with personalized therapy [31,32].
The DL-based model demonstrated the ability of artificial intelligence to recognize pathological ICP waveforms, thereby supporting clinical decision-making. As DL advances, its significance in neurocritical care will help to lay the foundation for more individualized and data-driven approaches to brain monitoring and management.
KEY MESSAGES
▪ Analysis of intracranial pressure (ICP) waveforms can facilitate early detection and identification of neurological deterioration.
▪ The deep learning-based model effectively detects the ICP waveforms that are indicative of poor brain compliance with high accuracy.
▪ By shifting from clinician-dependent assessment to artificial intelligence-based waveform analysis, this approach decreases subjectivity and enhances diagnostic consistency.
NOTES
CONFLICT OF INTEREST
No potential conflict of interest relevant to this article was reported.
FUNDING
None.
ACKNOWLEDGMENTS
None.
AUTHOR CONTRIBUTIONS
Conceptualization: TT. Methodology: TT, AT. Formal analysis: TT, AT. Data curation: TT. Visualization: TT, AT. Project administration: TT. Writing - original draft: TT, AT. Writing - review & editing: TT, AT. All authors read and agreed to the published version of the manuscript.
Figure 1. Workflow of intracranial waveform detection. mAP: mean average precision; mAP_0.5: mAP at the 0.5 intersection over union; mAP_0.5:0.95: mAP at the 0.5–0.95 intersection over union; ROC: receiver operating characteristic; AUC: area under the ROC curve.
Figure 2. Performance metrics of the validation processes. (A) Box loss of validation. (B) Objectness loss of validation. (C) Class loss of validation. (D) Precision of validation. (E) Recall of validation. (F) Mean average precision at the 0.5 intersection over union (mAP_0.5) of validation.
Figure 3. Confusion matrix of poor compliance wave predictions using the test dataset.
Figure 4. Receiver operating characteristic (ROC) curve with area under the curve (AUC) of poor compliance wave predictions using the test dataset.
Figure 5. Gradio web application using Hugging Face. (A) The graphical user interface of the application on a laptop. (B) The graphical user interface on a smartphone. ICP: intracranial pressure.
Table 1. Baseline clinical characteristics (n=21)

Variable                      Value
Sex
  Male                        11 (52.4)
  Female                      10 (47.6)
Age (yr)                      49 (36–59)
Cause of hydrocephalus
  Tumor                       13 (61.9)
  Subarachnoid hemorrhage     3 (14.3)
  Intracranial abscess        3 (14.3)
  Intracerebral hemorrhage    1 (4.8)
  Intracranial cyst           1 (4.8)
Location of tumor (n=13)
  Cerebellopontine angle      6 (46.2)
  Pineal region               4 (30.8)
  Cerebellum                  1 (4.8)
  Thalamus                    1 (4.8)
  Suprasellar region          1 (4.8)

Values are presented as number (%) or median (interquartile range).
Table 2. mAP of the validation and test datasets

Class                         mAP_0.5    mAP_0.5:0.95
Validation dataset (n=549)
  Normal waveform             0.991      0.829
  Poor compliance waveform    0.989      0.849
Test dataset (n=274)
  Normal waveform             0.991      0.801
  Poor compliance waveform    0.994      0.842
Table 3. Prediction performance for the poor compliance waveform using the test dataset

Sensitivity    Specificity    PPV     NPV     Accuracy    F1 score
0.96           0.97           0.97    0.96    0.96        0.96
REFERENCES
- 1. Jitchanvichai J, Tunthanathip T. Effect of intracranial pressure monitoring on mortality following severe traumatic brain injury in Thailand: propensity score matching methods. J Emerg Crit Care Med 2024;8:1.
- 2. Trakulpanitkit A, Tunthanathip T. Comparison of intracranial pressure prediction in hydrocephalus patients among linear, non-linear, and machine learning regression models in Thailand. Acute Crit Care 2023;38:362-70.
- 3. Hawryluk GW, Aguilera S, Buki A, Bulger E, Citerio G, Cooper DJ, et al. A management algorithm for patients with intracranial pressure monitoring: the Seattle International Severe Traumatic Brain Injury Consensus Conference (SIBICC). Intensive Care Med 2019;45:1783-94.
- 4. Cucciolini G, Motroni V, Czosnyka M. Intracranial pressure for clinicians: it is not just a number. J Anesth Analg Crit Care 2023;3:31.
- 5. Czosnyka M, Pickard JD. Monitoring and interpretation of intracranial pressure. J Neurol Neurosurg Psychiatry 2004;75:813-21.
- 6. March K. Intracranial pressure monitoring and assessing intracranial compliance in brain injury. Crit Care Nurs Clin North Am 2000;12:429-36.
- 7. Ramesh A, Beniwal M, Uppar AM, Rao M. Microsurgical tool detection and characterization in intra-operative neurosurgical videos. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:2676-81.
- 8. Jaruenpunyasak J, Duangsoithong R, Tunthanathip T. Deep learning for image classification between primary central nervous system lymphoma and glioblastoma in corpus callosal tumors. J Neurosci Rural Pract 2023;14:470-6.
- 9. Tunthanathip T, Kaewborisutsakul T, Supbumrung S. Comparative analysis of deep learning architectures for performance of image classification in pineal region tumors. J Med Artif Intell 2025;8:13.
- 10. Nair SS, Guo A, Boen J, Aggarwal A, Chahal O, Tandon A, et al. A deep learning approach for generating intracranial pressure waveforms from extracranial signals routinely measured in the intensive care unit. Comput Biol Med 2024;177:108677.
- 11. Lei X, Pan F, Liu H, He P, Zheng D, Feng J. An end-to-end deep learning framework for accurate estimation of intracranial pressure waveform characteristics. Eng Appl Artif Intell 2024;130:107686.
- 12. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb) 2012;22:276-82.
- 13. Mercaldo F, Brunese L, Martinelli F, Santone A, Cesarelli M. Object detection for brain cancer detection and localization. Appl Sci 2023;13:9158.
- 14. Tunthanathip T, Phuenpathom N, Jongjit A. Web-based calculator using machine learning to predict intracranial hematoma in geriatric traumatic brain injury. J Hosp Manag Health Policy 2023;7:16.
- 15. Supbumrung S, Kaewborisutsakul A, Tunthanathip T. Machine learning-based classification of pineal germinoma from magnetic resonance imaging. World Neurosurg X 2023;20:100231.
- 16. Mataczynski C, Kazimierska A, Uryga A, Burzynska M, Rusiecki A, Kasprowicz M. End-to-end automatic morphological classification of intracranial pressure pulse waveforms using deep learning. IEEE J Biomed Health Inform 2022;26:494-504.
- 17. Fong N, Feng J, Hubbard A, Dang LE, Pirracchio R. Intracranial pressure prediction AlgoRithm using machinE learning (I-CARE): training and validation study. Crit Care Explor 2023;6:e1024.
- 18. Hu X, Xu P, Asgari S, Vespa P, Bergsneider M. Forecasting ICP elevation based on prescient changes of intracranial pressure waveform morphology. IEEE Trans Biomed Eng 2010;57:1070-8.
- 19. Güiza F, Depreitere B, Piper I, Citerio G, Jorens PG, Maas A, et al. Early detection of increased intracranial pressure episodes in traumatic brain injury: external validation in an adult and in a pediatric cohort. Crit Care Med 2017;45:e316-20.
- 20. Shen X, Sun X, Wang H, Fu X. Multi-dimensional, multi-functional and multi-level attention in YOLO for underwater object detection. Neural Comput Appl 2023;35:19935-60.
- 21. Tarasiewicz J. 7 Problems you can't ignore when working on object detection [Internet]. ATL; 2023 [cited 2025 Jun 7]. Available from: https://www.atltranslate.com/ai/blog/7-problems-in-object-detection-you-cant-ignore
- 22. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024;385:q902.
- 23. Pachetti E, Colantonio S. 3D-vision-transformer stacking ensemble for assessing prostate cancer aggressiveness from T2w images. Bioengineering (Basel) 2023;10:1015.
- 24. Parsarad S, Saeedizadeh N, Soufi GJ, Shafieyoon S, Hekmatnia F, Zarei AP, et al. Biased deep learning methods in detection of COVID-19 using CT images: a challenge mounted by subject-wise-split ISFCT dataset. J Imaging 2023;9:159.
- 25. Veetil IK, Chowdary DE, Chowdary PN, V S, Gopalakrishnan EA. An analysis of data leakage and generalizability in MRI based classification of Parkinson's Disease using explainable 2D convolutional neural networks. Digit Signal Process 2024;147:104407.
- 26. Tunthanathip T, Sae-Heng S, Oearsakul T, Kaewborisutsakul A, Taweesomboonyat C. Economic impact of a machine learning-based strategy for preparation of blood products in brain tumor surgery. PLoS One 2022;17:e0270916.
- 27. Tunthanathip T, Oearsakul T. Comparison of predicted survival curves and personalized prognosis among cox regression and machine learning approaches in glioblastoma. J Med Artif Intell 2023;6:10.
- 28. Khanam R, Asghar T, Hussain M. Comparative performance evaluation of YOLOv5, YOLOv8, and YOLOv11 for solar panel defect detection. Solar 2025;5:6.
- 29. Vina A. Real-time inferences in vision AI solutions are making an impact [Internet]. Ultralytics; 2023 [cited 2025 Jun 7]. Available from: https://www.ultralytics.com/blog/real-time-inferences-in-vision-ai-solutions-are-making-an-impact
- 30. Kaewborisutsakul A, Tunthanathip T. Development and internal validation of a nomogram for predicting outcomes in children with traumatic subdural hematoma. Acute Crit Care 2022;37:429-37.
- 31. Jitchanvichai J, Tunthanathip T. Cost-effectiveness of intracranial pressure monitoring in severe traumatic brain injury in Southern Thailand. Acute Crit Care 2025;40:69-78.
- 32. Kaewborisutsakul A, Sae-Heng S, Kitsiripant C, Benjhawaleemas P. The first awake craniotomy for eloquent glioblastoma in southern Thailand. J Health Sci Med Res 2020;38:61-5.