Explainable Stroke Detection using Transfer Learning and Stacking Technique
1Amrita Ticku; 2Anu Rathee; 3Dhruv Mathur; 4Deepika Yadav; 5Ayush Srivastav

Stroke remains one of the world's leading causes of mortality and disability, affecting millions of people each year with severe medical outcomes. Rapid and accurate stroke detection is critical in medical diagnostics, because delayed or incorrect diagnosis can lead to severe neurological disability or death. In clinical environments, beyond achieving high diagnostic precision, it is imperative that models offer interpretability to foster clinician trust, support informed decision-making, and uphold accountability in AI-assisted healthcare interventions. AI-driven stroke detection systems must therefore balance predictive performance with transparency to ensure safe and reliable deployment. This study proposes an Explainable Stacked System for Stroke Detection (EXS3D), which combines a stacking ensemble with transfer learning: multiple deep learning models (ResNet, EfficientNet, DenseNet) serve as base classifiers, and their outputs are combined by a meta-level Logistic Regression model. To enhance transparency, the system employs Grad-CAM for visual explanation of image-based features in the base models, and the SHAP and LIME frameworks to interpret the decision-making of the final meta-model. The EXS3D system achieved an accuracy of 97.37%, with the meta-model outperforming the individual base models in predictive performance. EXS3D exemplifies how explainable AI can be integrated into ensemble learning for high-stakes domains such as stroke detection.
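The meta-level stacking step described above can be sketched as follows. This is a minimal, illustrative example only, not the paper's implementation: it assumes each base CNN (ResNet, EfficientNet, DenseNet) has already produced a stroke probability for every image, and uses simulated probabilities in place of real model outputs so the snippet is self-contained.

```python
# Minimal sketch of meta-level stacking with Logistic Regression.
# The three feature columns stand in for stroke probabilities emitted
# by the base classifiers (ResNet, EfficientNet, DenseNet); real use
# would substitute actual held-out predictions from those networks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples = 200

# Simulated ground-truth labels (0 = no stroke, 1 = stroke).
y = rng.integers(0, 2, size=n_samples)

def simulated_probs(labels, noise):
    """Probabilities noisily correlated with the true label."""
    p = 0.8 * labels + 0.1 + noise * rng.standard_normal(len(labels))
    return np.clip(p, 0.0, 1.0)

# One column per base model's predicted stroke probability.
base_outputs = np.column_stack([
    simulated_probs(y, 0.15),  # stand-in for ResNet
    simulated_probs(y, 0.20),  # stand-in for EfficientNet
    simulated_probs(y, 0.18),  # stand-in for DenseNet
])

# The meta-level Logistic Regression learns how to weigh the base
# predictions into a single final decision.
meta_model = LogisticRegression()
meta_model.fit(base_outputs, y)
acc = meta_model.score(base_outputs, y)
print(f"meta-model accuracy on simulated data: {acc:.2f}")
```

In practice the base-model predictions fed to the meta-learner would come from out-of-fold or held-out data to avoid leaking training labels into the stacking stage.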