All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear authors, we are pleased to confirm that you have addressed the reviewers' valuable feedback and improved your research.
Although only one of the previous reviewers provided feedback in the last round, after my own verification I consider the manuscript ready for acceptance.
Thank you for considering PeerJ Computer Science and submitting your work.
Kind regards,
PCoelho
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
The manuscript has improved considerably.
I recommend it for publication.
No need
Dear authors,
You are advised to respond critically, point by point, to all comments when preparing the updated version of the manuscript and the rebuttal letter. Please address all comments and suggestions provided by the reviewers, incorporating them into the new version of the manuscript.
Kind regards,
PCoelho
[# PeerJ Staff Note: PeerJ can provide language editing services if you wish - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Your revision deadline is always extended while you undergo language editing. #]
Clarity and Language:
Overall, the language is clear and precise, though there are some areas where clarity could be improved. For instance, in the abstract and introduction, the description of the problem could benefit from more concise phrasing. Some sentences are long and can obscure the key message.
Examples: Sentences like "Due to the limited computing resources typically available in such application environments, there is an urgent need to develop efficient super-resolution algorithms" could be more concise.
Minor grammatical improvements could enhance readability in sections like "The GPDN leverages multi-level stacked Feature Distillation Hybrid Units..."
The authors must carefully review the draft again to address any spelling errors and ensure the clarity and professionalism of the manuscript.
Introduction and Background:
The introduction provides a solid background and justification for the study. However, it could benefit from including a more thorough comparison of how the proposed method improves on existing approaches.
The literature cited is relevant, but including more recent advancements in SR models could provide a stronger context.
For instance, citing and discussing more recent work on SR methods with attention mechanisms and transformer-based models could help better frame the work's novelty.
Figures and Tables:
The figures are relevant and adequate. However, Figure 3, depicting the Channel Attention Module, could benefit from a more detailed explanation in the text, as it seems somewhat disconnected from the overall flow of the discussion.
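To aid the requested explanation of Figure 3, a minimal squeeze-and-excitation-style channel attention block is sketched below in NumPy. This is illustrative only: the paper's Channel Attention Module may differ in pooling choice, reduction ratio, and activations, and the weights here are random rather than learned.

```python
import numpy as np

def channel_attention(x, reduction=4):
    """SE-style channel attention on a (C, H, W) feature map.

    Illustrative sketch: squeeze (global average pool), excitation
    (two small fully connected layers), then per-channel rescaling.
    Weights are random placeholders, not trained parameters.
    """
    c = x.shape[0]
    s = x.mean(axis=(1, 2))                      # squeeze -> (C,)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    h = np.maximum(w1 @ s, 0.0)                  # ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # sigmoid gate in (0, 1)
    return x * a[:, None, None]                  # rescale each channel

feat = np.ones((8, 4, 4))
out = channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

The key point worth stating in the text is that each channel is multiplied by a scalar in (0, 1) derived from its own global statistics, which is what connects the module to the rest of the pipeline.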
The parameters and performance comparisons with state-of-the-art methods in Table 6 are valuable. However, it might be helpful to provide a brief explanation or context for the large disparities in parameter counts and FLOPs in the text.
Adding reference numbers in all tables for the state-of-the-art (SOTA) models and datasets used for comparison would improve the clarity and traceability of the information presented, enabling readers to cross-reference them easily with the bibliography and enhancing the overall readability and credibility of the study.
Novelty and Scope:
The research presents a novel approach with the Gradient Pooling Distillation Network (GPDN), which seems relevant within lightweight super-resolution methods. The model's architecture and use of hierarchical pooling are novel, though the impact of this design on practical applications could be discussed further.
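To make the discussion of hierarchical pooling concrete, one plausible reading is sketched below: the same feature map is average-pooled at several scales to form a pyramid. This is purely illustrative; the GPDN may fuse scales differently (e.g. with learned weights rather than a simple list).

```python
import numpy as np

def avg_pool2d(x, k):
    """Non-overlapping k x k average pooling on an (H, W) array
    (H and W assumed divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def hierarchical_pool(x, scales=(1, 2, 4)):
    """Pool a feature map at several scales and return the pyramid.
    Scale choices are placeholders, not the paper's settings."""
    return [avg_pool2d(x, s) for s in scales]

x = np.arange(16, dtype=float).reshape(4, 4)
pyramid = hierarchical_pool(x)
print([p.shape for p in pyramid])  # [(4, 4), (2, 2), (1, 1)]
```

The coarser levels summarize larger spatial contexts cheaply, which is presumably where the efficiency claim comes from; discussing that link explicitly would strengthen the practical-applications argument.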
Methodology:
The method section is well-detailed, but some areas would benefit from greater clarity for replicability. For instance, the specifics of the GPD module and its advantages over previous pooling methods could be supported with more quantitative evidence.
The description of the learning rate scheduler (KneeLR) and how it affects training is helpful, but providing additional intuition or rationale for the choice of this scheduler could strengthen the methodology section.
Comparisons with SOTA:
The comparisons with state-of-the-art methods are adequate, though it would be beneficial to explain in more detail why GPDN outperforms in specific datasets but not others. For example, the discussion in Table 6 shows that GPDN performs comparably to or better than models like CARN in terms of parameters. Still, a deeper discussion on practical trade-offs (e.g., quality versus speed) would provide further insight.
Data Presentation:
The data from the experiments are generally well-presented. However, it might be helpful to include more detailed visual examples comparing the output quality of the super-resolved images from different models, particularly where GPDN offers improvements. This would help reinforce the findings.
Computing PSNR and SSIM for the visual patches would provide quantitative measures to support the visual comparisons, enhancing the robustness and credibility of the presented results.
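The per-patch metrics suggested above can be computed as follows. PSNR is standard; the SSIM here is a simplified global variant (no sliding Gaussian window), for illustration only — library implementations such as scikit-image use windowed local statistics and should be preferred in the paper.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped patches."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified global SSIM (single window over the whole patch).
    Illustrative only; windowed SSIM is the standard choice."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Reporting these numbers directly under each cropped patch would tie the qualitative figures to the quantitative claims.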
Results and Performance:
The results section adequately demonstrates the model's performance across various datasets. However, the section could benefit from further discussion on the limitations of the GPDN, such as potential drawbacks in terms of robustness or overfitting to specific dataset characteristics (e.g., texture complexity).
Adding an analysis of the computational requirements, including a discussion of time and space complexity alongside the parameter comparisons, would strengthen the paper's contribution by providing insights into the efficiency gains achieved.
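The parameter and FLOP figures in such comparisons come from simple analytic counts; a sketch for a single convolution layer is given below (stride 1, "same" padding assumed). FLOPs are often reported as twice the multiply-accumulate count, a convention the paper should state explicitly.

```python
def conv2d_cost(c_in, c_out, k, h, w, bias=True):
    """Parameter count and multiply-accumulate count (MACs) for one
    k x k convolution producing an h x w output map.
    Rough analytic estimate; frameworks' profilers may differ slightly."""
    params = c_out * (c_in * k * k + (1 if bias else 0))
    macs = c_out * c_in * k * k * h * w
    return params, macs

# Example: a 3x3 conv, 64 -> 64 channels, on a 128 x 128 feature map.
p, m = conv2d_cost(64, 64, 3, 128, 128)
print(p, m)  # 36928 params, 603979776 MACs (~604M)
```

Summing these per-layer counts over the network explains the large disparities across models and would let readers verify the reported totals.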
Conclusions:
The conclusions are well-stated and aligned with the results. However, providing more specific recommendations for future work based on the identified gaps or limitations would strengthen the impact of the conclusions.
The proposed GPDN is promising in balancing lightweight design with high-performance super-resolution reconstruction. The hierarchical pooling mechanism is a strong point of the model, though the explanation of its distinct advantages could be more robust.
The paper's overall structure is solid, and the experimental results are convincing, though deeper analysis of the comparative performance with SOTA models would enhance the discussion.
GPDN addresses the challenge of enhancing low-resolution images to high resolution while maintaining computational efficiency, making it suitable for resource-constrained scenarios such as autonomous driving and mobile devices. However, the paper needs some improvements:
1. Provide a clearer discussion on the trade-offs between accuracy, computational efficiency, and model size, including specific cases where GPDN outperforms others.
2. Explain architectural choices, such as hierarchical pooling or specific residual connection types, with theoretical backing or related works.
3. Alongside PSNR and SSIM, include perceptual quality metrics such as LPIPS (Learned Perceptual Image Patch Similarity) or human visual perception metrics to evaluate the quality more comprehensively.
4. Highlight potential challenges, such as performance in very high magnification factors or images with extreme noise.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.