E2DSR: Edge-Enhanced Representation for Deep Super-Resolution in Machine Vision Applications
Abstract
While deep learning-based super-resolution (SR) has achieved remarkable progress, state-of-the-art models such as EDSR rely solely on pixel-level information, producing overly smooth outputs that fail to reconstruct the fine-grained edge details essential for downstream machine vision tasks. To address this challenge, we propose the Edge-Enhanced Deep Super-Resolution (E2DSR) model, a task-aware framework that leverages explicit edge guidance to enrich the reconstruction process with high-frequency edge information. E2DSR integrates a novel Edge Feature Enhancement Block (EFE) into a deep residual architecture, which learns to extract and fuse salient edge features from the low-resolution input. We demonstrate the effectiveness of our approach within a gesture recognition pipeline, where E2DSR significantly improves input quality for a state-of-the-art YOLOv10 detector. Experimental results show that our method substantially outperforms the original EDSR and other approaches, improving the mean average precision (mAP) from 0.776 to 0.822 on average across four representative gesture action types. Our work demonstrates that explicit edge guidance is a crucial component for developing super-resolution models that excel in practical machine vision applications.
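The abstract does not detail the internal design of the EFE block. As a rough illustration of the idea of fusing explicit edge responses with learned features inside a residual SR backbone, the following minimal PyTorch sketch uses fixed Sobel filters and a 1x1 fusion convolution; all module names, filter choices, and hyperparameters are assumptions for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of an edge-enhanced residual block for SR (illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeFeatureEnhancement(nn.Module):
    """Extracts per-channel edge responses with fixed Sobel kernels and fuses
    them with the learned feature map via a 1x1 convolution and a residual add."""

    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # One (sobel_x, sobel_y) pair per input channel, applied depthwise.
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1).repeat(channels, 1, 1, 1)
        self.register_buffer("edge_kernel", kernel)  # shape: (2*channels, 1, 3, 3)
        self.fuse = nn.Conv2d(channels + 2 * channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Depthwise convolution: each feature channel yields two edge maps.
        edges = F.conv2d(feat, self.edge_kernel, padding=1, groups=feat.shape[1])
        # Concatenate features with edge maps, fuse, and add the residual.
        return self.fuse(torch.cat([feat, edges], dim=1)) + feat


if __name__ == "__main__":
    block = EdgeFeatureEnhancement(channels=64)
    x = torch.randn(1, 64, 32, 32)   # low-resolution feature map
    print(block(x).shape)            # torch.Size([1, 64, 32, 32])
```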
DOI: http://dx.doi.org/10.21553/rev-jec.427
Copyright (c) 2025 REV Journal on Electronics and Communications
ISSN: 1859-378X