Multimodal Deep Learning Framework for Early Parkinson's Disease Detection Through Gait Pattern Analysis Using Wearable Sensors and Computer Vision
DOI: https://doi.org/10.63575/
Keywords: Parkinson's disease, multimodal deep learning, gait analysis, wearable sensors
Abstract
This study presents a novel multimodal deep learning framework that integrates wearable sensor data and computer vision techniques for early-stage Parkinson's disease detection through comprehensive gait pattern analysis. The proposed system combines inertial measurement units, accelerometers, and computer vision-based pose estimation to capture multidimensional gait characteristics. A hybrid CNN-LSTM architecture with attention mechanisms processes temporal and spatial features from heterogeneous data sources. Experimental validation on a dataset of 184 participants (92 early-stage PD patients, 92 healthy controls) demonstrates superior performance with 94.2% accuracy, 93.8% sensitivity, and 94.6% specificity. The multimodal fusion approach outperforms unimodal methods by 8.3% in overall classification accuracy. Feature importance analysis reveals stride variability, postural sway metrics, and temporal gait parameters as the most discriminative biomarkers for early PD detection. The system provides clinically interpretable results and demonstrates potential for real-world deployment in healthcare settings.
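To make the abstract's architectural description concrete, the following is a minimal, hypothetical sketch of a hybrid CNN-LSTM model with an attention-based temporal pooling step operating on two gait modalities (wearable IMU channels and pose-estimation keypoints). The class name, layer widths, kernel sizes, and feature-concatenation fusion strategy are illustrative assumptions, not the configuration reported in the study.

```python
import torch
import torch.nn as nn

class GaitCNNLSTMAttention(nn.Module):
    """Hypothetical sketch: hybrid CNN-LSTM with attention for multimodal
    gait sequences (IMU signals + pose keypoints). Sizes are assumptions."""

    def __init__(self, imu_channels=6, pose_features=34, hidden=128, num_classes=2):
        super().__init__()
        # 1-D CNN extracts local patterns from the wearable-sensor signals
        self.imu_cnn = nn.Sequential(
            nn.Conv1d(imu_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Project pose-estimation keypoints to the same feature width
        self.pose_proj = nn.Linear(pose_features, 64)
        # LSTM models the temporal dynamics of the fused feature sequence
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden, batch_first=True)
        # Additive attention pools the LSTM outputs over time
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, imu, pose):
        # imu: (batch, time, imu_channels); pose: (batch, time, pose_features)
        imu_feat = self.imu_cnn(imu.transpose(1, 2)).transpose(1, 2)  # (B, T, 64)
        pose_feat = self.pose_proj(pose)                              # (B, T, 64)
        fused = torch.cat([imu_feat, pose_feat], dim=-1)              # (B, T, 128)
        seq, _ = self.lstm(fused)                                     # (B, T, hidden)
        weights = torch.softmax(self.attn(seq), dim=1)                # (B, T, 1)
        pooled = (weights * seq).sum(dim=1)                           # (B, hidden)
        return self.classifier(pooled)                                # (B, num_classes)

if __name__ == "__main__":
    model = GaitCNLSTMAttention() if False else GaitCNNLSTMAttention()
    logits = model(torch.randn(4, 200, 6), torch.randn(4, 200, 34))
    print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the two modalities are fused by simple feature concatenation before the LSTM; the paper's attention-based multimodal fusion may differ, and the attention layer here serves only as temporal pooling prior to classification.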