A Hybrid Architecture Approach for Emotion-Aware Multimodal Content Personalization
DOI: https://doi.org/10.63575/CIA.2025.30105

Keywords: multimodal content personalization, emotion recognition, hybrid architecture, machine learning, user experience

Abstract
This paper presents a novel hybrid architecture for emotion-aware multimodal content personalization that addresses the critical challenges of computational efficiency and content relevance in digital media recommendation systems. Our approach introduces an emotion-aware dimension to content evaluation, leveraging a split offline-online processing model to minimize latency while maximizing the emotional coherence between primary content, supplemental content, and user preferences. The proposed system generates multimodal embeddings that capture emotional attributes across visual, audio, and textual modalities during an offline phase, enabling rapid online matching and ranking during content delivery. Experimental results demonstrate that our emotion-aware hybrid architecture achieves a 37% improvement in user engagement metrics while reducing computational overhead by 42% compared to traditional real-time recommendation approaches. Through comprehensive ablation studies, we validate the contribution of each component to the overall system performance, highlighting the particular importance of emotional context in personalized content delivery. This work advances the state of the art in multimodal content personalization by effectively integrating emotional awareness into the recommendation pipeline while maintaining practical computational efficiency for real-world applications.
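The split offline-online model described above can be illustrated with a minimal sketch: embeddings for content items are precomputed and normalized offline, so the online phase reduces to fast cosine-similarity ranking against a query embedding. All names here (`build_content_index`, `rank_supplemental`, the embedding dimension, and the random stand-in features) are hypothetical illustrations, not the paper's implementation, which fuses visual, audio, and textual emotional attributes.

```python
import numpy as np

rng = np.random.default_rng(42)
EMBED_DIM = 8  # illustrative dimension; the paper's embeddings would be larger

# --- Offline phase (sketch): precompute emotion-aware embeddings ---
def build_content_index(content_ids):
    """Precompute and L2-normalize one embedding per content item.

    Random vectors stand in for the multimodal (visual/audio/text)
    emotional features the paper's offline phase would extract.
    """
    embeddings = rng.normal(size=(len(content_ids), EMBED_DIM))
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
    return dict(zip(content_ids, embeddings))

# --- Online phase (sketch): rapid matching at content-delivery time ---
def rank_supplemental(index, query_embedding, top_k=3):
    """Rank stored items by cosine similarity to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = {cid: float(vec @ q) for cid, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

index = build_content_index([f"clip_{i}" for i in range(10)])
query = rng.normal(size=EMBED_DIM)  # e.g. the user's current emotional context
print(rank_supplemental(index, query))
```

Because the expensive embedding work happens offline, the online step is a dot product per candidate, which is the source of the latency reduction the abstract reports.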