Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on May 5th, 2025 and was peer-reviewed by 4 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 7th, 2025.
  • The first revision was submitted on July 24th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on August 14th, 2025.

Version 0.2 (accepted)

· Aug 14, 2025 · Academic Editor

Accept

Reviewers' concerns have been addressed.

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

Reviewer 3 ·

Basic reporting

The paper's cross-disciplinary interest aligns well with the scope of the journal. The authors have made a commendable effort to introduce the subject.

Experimental design

The authors presented and explained the subject very neatly, and all the referenced papers have been cited properly.

Validity of the findings

The authors have made a sincere effort to address disease detection using automated machine learning. However, the study does not focus on any specific type of disease. Since input data characteristics vary across different diseases, the paper would have been more impactful if it had highlighted specific techniques tailored to each disease.


Version 0.1 (original submission)

· Jul 7, 2025 · Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.

**Language Note:** When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff

Reviewer 1 ·

Basic reporting

This study presents a comprehensive evaluation of AutoML for disease detection.

Experimental design

-

Validity of the findings

-

Additional comments

I recommend several points for the authors to consider to further deepen their current study.

AutoML was a popular approach previously, but in the era of foundation models, across image, text, and multi-modal data, its role and utility deserve further discussion. The computational resources required for AutoML, especially on language models, may pose a significant barrier for many healthcare institutions, and it’s worth investigating whether AutoML‑generated models outperform foundation models in a way that justifies these additional costs (https://doi.org/10.1002/med4.70001).

This consideration ties directly to Question 10 raised by the authors. Foundation models can leverage vast amounts of publicly available data, which can enable strong performance, especially in text modalities (like large language models). However, in the context of medical imaging, their performance may not be as robust, underscoring the continued value and significance of AutoML in this era of large models (https://doi.org/10.1002/hcs2.70009).


**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.


Reviewer 2 ·

Basic reporting

The manuscript has significant methodological flaws, inconsistent referencing, and superficial analysis. It contains critical technical errors (e.g., broken figures, incomplete data), and the authors have not performed any comparative analysis of the significant aspects of the field of study. Apart from Figure 2 and Table 7, the manuscript does not contain any substantial technical content, and it is not truly a PRISMA analysis.

Experimental design

-

Validity of the findings

-


Reviewer 3 ·

Basic reporting

I do not feel that the paper is suitable for the scope of the journal; it mainly focuses on existing AI/ML tools for detecting disease. I did not see the novelty.

Experimental design

The survey has been done well, but it fails to identify the gaps in each of the surveyed works.

Validity of the findings

Instead of providing general steps to follow for disease detection, try to identify suitable machine-learning models for specific disease identification or classification.

Additional comments

Work on specific diseases and try to find pros and cons.


Reviewer 4 ·

Basic reporting

1) The review done by the authors is good and decent, but there are many existing papers on this topic. The main motivation for selecting such a broad topic should be explained.

2) Disease detection is a broad topic, and this paper does not narrow its focus.

3) The paper does not address any particular disease, but a combination of AutoML methods only. This doesn't justify the title.

Experimental design

The paper's design is good and addresses the key aspects.

Validity of the findings

The performance results of the study are not appropriately reported. The authors do not specify which datasets, which diseases, and which AutoML methods achieve the best accuracies.


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.