All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear Authors,
I appreciate the time and dedication you have devoted to this paper, and I confirm that it is now accepted for publication.
Best regards,
Marta Lovino
[# PeerJ Staff Note - this decision was reviewed and approved by Mike Climstein, a PeerJ Section Editor covering this Section #]
Dear Authors,
While the reviewer comments have been addressed, the manuscript requires minor revisions concerning the figures and tables. The figures appear to have been stretched, distorting the fonts and potentially misleading the reader. Additionally, they currently appear only in the response to reviewers and are missing from the main text (if this is not the case, please let me know).
The authors must address the figure issues by ensuring they are not stretched and are included in the main text. They should also consider incorporating tables from their response to Reviewers into the main manuscript.
Best regards,
Marta Lovino
All the comments have been incorporated
All the comments have been incorporated
All the comments have been incorporated
All the comments have been addressed
Dear Authors,
Both reviewers identified areas requiring significant improvement. Reviewer 1, while acknowledging the detailed experimental analysis, emphasized the marginal novelty of the work and suggested expanding the introduction with background on kidney cancer, medical imaging techniques (MRI/CT), and existing ML algorithms in this field.
Reviewer 2 raised substantial concerns regarding the lack of detail in the abstract, including missing information on dataset size, model architecture, and evaluation metrics (precision, recall). Like Reviewer 1, this reviewer also highlighted the need for a clearer problem statement and a stronger justification of the work's contribution in the introduction, recommending the inclusion of recent studies. Please proceed with a more rigorous evaluation, including varying batch sizes, learning rates, optimizers, and a comparative analysis with SOTA models on original and augmented data. Please also address all other points raised by both reviewers carefully.
Best regards,
Marta Lovino
The authors examine how pre-trained architectures can be used to predict kidney cancer. The experimental part is detailed; however, the novelty is marginal. The authors should address the attached comments to significantly enhance the quality of their manuscript.
no comment
no comment
1) Overall, the experimental analysis is detailed. However, the novelty of the paper is marginal! The authors should write more about the article’s novelty at the end of the introduction section.
2) I think that the introduction should begin with a paragraph referring to the characteristics of kidney cancer and the specificities of supervised and unsupervised learning in the field of medical imaging.
3) Moreover, since the introduction is small, a paragraph referring to MRI images and CT scans should be added.
4) Are there any ML algorithms used for kidney cancer detection in the literature?
5) I believe that more references must be added.
6) Line 110: add a “space” before the word “presents”.
7) Avoid extra titles in the manuscript, such as “Our Contributions”, “Computing Infrastructure”, “Data Preprocessing”, “Nadam”, etc. Remove ALL of them.
8) It is not necessary to use commas after “TABLE X” and “FIG X”.
9) Line 250: “aswell” is 2 words.
10) How many fully connected layers did the authors use?
11) How did the authors select the learning rate or batch-size?
12) How did the authors select the number of epochs? More comments should be added in the manuscript.
13) The authors should significantly enhance the quality of their figures. Use greater font sizes for axis and labels.
14) The limitations section is not informative enough. What about more architectures? Low explainability? Non-trivial feature extraction techniques?
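As an illustration of how points 11 and 12 above could be answered in the manuscript, a principled way to select the number of epochs is validation-based early stopping. The sketch below is my own (not taken from the manuscript); the loss values are toy numbers.

```python
# Sketch: pick the number of epochs by early stopping on validation loss,
# instead of fixing an arbitrary epoch count. Hypothetical values throughout.
def early_stopping_epochs(val_losses, patience=3):
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            # Validation loss improved: remember this epoch, reset patience.
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:  # no improvement for `patience` epochs
                break
    return best_epoch

# Toy validation-loss curve: improves until epoch 4, then degrades.
losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59]
print(early_stopping_epochs(losses))  # → 4
```

Reporting the patience value and the resulting stopping epoch would directly address the reviewer's question about how the number of epochs was selected.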
See the list below; I have mentioned all the details.
See the list below; I have mentioned all the details.
See the list below; I have mentioned all the details.
The authors have proposed ‘Fine-tuned deep transfer learning: An effective strategy for the accurate chronic kidney disease classification’. I have the following comments:
- Abstract:
- From the abstract, it can be seen that details of the dataset are missing; the authors must provide details such as the number of images in the dataset and the number of classes.
- Details of the proposed model are also not mentioned: its architecture, the new techniques used, and so on.
- Regarding the results in the abstract, few details are presented beyond accuracy; the authors must add further metrics such as precision and recall. Comparative results are also missing and must be added.
- Introduction:
- The introduction is very brief, and the background and problem statement are not clearly stated; the authors must fix this.
- In the last paragraph of the introduction, the authors provide a summary of the proposed work, and it can be seen that there is hardly any contribution in this work. It reads more like a comparative study than a proposed-model study.
- Instead of providing only the objective of the study, the authors should provide a list of the real contributions.
- From the literature, it can be seen that the authors have not cited recent studies in this domain. It is suggested to enhance the literature review.
- It is very uncommon to state the contributions of the study in the literature section. The authors must move this to the introduction.
- Material and methods:
- It is not clear which dataset the authors have used. They cite a paper [31]; upon checking, that paper itself uses multiple datasets, so it remains unclear which dataset was used.
- There is also an issue with the dataset: the authors have not mentioned the number of images per class, or how many were used for training, validation, and testing. The authors must first provide the correct link to the dataset, plus the exact distribution of the dataset in a table.
- Table 4 is vague. The authors talk about preprocessing, then mention that they used data augmentation due to dataset imbalance; as mentioned before, it is not clear in the first place how many images there are for each class.
- Regarding data augmentation (DA), it is again vague how many images were created, what the total dataset size is after augmentation, and which DA techniques were used.
- Details of the evaluation metrics are missing.
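The per-class metrics the reviewer asks for can be stated compactly from raw prediction counts. The sketch below is illustrative only (the counts are hypothetical, not the manuscript's results):

```python
# Sketch: precision, recall, and F1 from true-positive, false-positive,
# and false-negative counts. Counts below are hypothetical examples.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Example: 90 true positives, 10 false positives, 30 false negatives.
p, r, f = precision_recall_f1(tp=90, fp=10, fn=30)
print(round(p, 2), round(r, 2), round(f, 3))  # → 0.9 0.75 0.818
```

Reporting these per class, alongside accuracy, would address the missing evaluation-metric details.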
- Results:
- The results evaluation is poorly done. The authors should read papers from quality journals to see how a results section is written; the results and discussion section is not structured in a proper way.
- The authors should rewrite the whole results section, starting with training the SOTA models on the original data and presenting their training and validation details for accuracy, loss, precision, F1 score, and inference time.
- Then train them on the augmented data, reporting the same details as above.
- Then train them using transfer learning and other tuning parameters.
- The authors must show evaluations for different batch sizes: 8, 16, 32, 64.
- Learning rates from 0.01, 0.001, 0.0001 ... 0.00001.
- State which optimizer was used (e.g., Adam) and try two or three optimizers.
- Then present the comparative results.
- Finally, show the optimal model's training, validation, and testing results.
- Without the above details, this paper has no contribution.
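The checklist above amounts to a full experiment matrix. As a sketch (my own framing, not the reviewers' or authors'; optimizer names are illustrative), the set of required runs can be enumerated explicitly:

```python
from itertools import product

# Hypothetical experiment matrix following the reviewer's checklist:
# every combination of data variant, optimizer, batch size, and learning rate.
data_variants = ["original", "augmented"]
optimizers = ["Adam", "SGD", "RMSprop"]     # "two or three optimizers"
batch_sizes = [8, 16, 32, 64]
learning_rates = [0.01, 0.001, 0.0001, 0.00001]

experiments = [
    {"data": d, "optimizer": o, "batch_size": b, "lr": lr}
    for d, o, b, lr in product(data_variants, optimizers,
                               batch_sizes, learning_rates)
]
print(len(experiments))  # → 96 configurations to report
```

Tabulating accuracy, loss, precision, F1, and inference time for each row of this matrix would give the comparative analysis the reviewer is requesting.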
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.