Abstract:
Early and accurate detection of retinal pathologies is critical for preventing vision loss and enabling timely clinical intervention. Traditional computer vision techniques, such as thresholding, edge detection, morphological filtering, and Hough transforms, have long been used to extract features from retinal fundus images, yet their performance is often constrained by image variability and complex pathological presentations. This study presents a hybrid deep learning architecture that integrates Convolutional Neural Networks (CNNs) for image-based classification with Recurrent Neural Networks (RNNs), specifically Long Short-Term Memory (LSTM) units, to model geometric and anatomical features derived from classical methods. The architecture fuses pixel-level deep features with clinically interpretable descriptors, including optic disc-fovea distance, lesion spatial distribution, and vessel curvature sequences. Comparative analysis demonstrates that the proposed hybrid model achieves superior diagnostic accuracy, reaching 97%, and significantly outperforms both conventional image processing approaches and CNN-only baselines. The results indicate that incorporating structured domain knowledge into neural models improves both performance and interpretability, offering a robust framework for real-world retinal disease screening applications.
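The two-branch fusion described above can be sketched as follows. This is a minimal illustrative PyTorch skeleton, not the paper's actual model: all layer sizes, the class name `HybridRetinaNet`, the descriptor dimensionality, and the input resolutions are assumptions chosen only to show how a CNN branch over fundus pixels and an LSTM branch over classical geometric descriptors (e.g. sampled vessel-curvature sequences) can be concatenated before a shared classification head.

```python
import torch
import torch.nn as nn


class HybridRetinaNet(nn.Module):
    """Hypothetical sketch of the hybrid CNN+LSTM fusion idea.

    All layer widths and shapes are illustrative assumptions,
    not values taken from the study itself.
    """

    def __init__(self, n_classes=2, descriptor_dim=4, lstm_hidden=32):
        super().__init__()
        # CNN branch: pixel-level deep features from the fundus image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # RNN branch: LSTM over a sequence of per-point geometric
        # descriptors (e.g. vessel curvature samples along a path).
        self.lstm = nn.LSTM(descriptor_dim, lstm_hidden, batch_first=True)
        # Fusion head: concatenate deep and structured features.
        self.head = nn.Linear(32 + lstm_hidden, n_classes)

    def forward(self, image, descriptors):
        img_feat = self.cnn(image)            # (B, 32)
        _, (h_n, _) = self.lstm(descriptors)  # h_n: (1, B, lstm_hidden)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return self.head(fused)


model = HybridRetinaNet()
# Dummy batch: 2 RGB fundus crops (64x64) and 2 descriptor sequences
# of length 20 with 4 features per step.
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 20, 4))
print(logits.shape)  # torch.Size([2, 2])
```

The design choice being illustrated is late fusion: each modality is encoded independently, so the interpretable descriptor branch can be inspected or ablated without retraining the image branch.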