Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on November 21st, 2019 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 7th, 2020.
  • The first revision was submitted on February 8th, 2020 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on February 17th, 2020.

Version 0.2 (accepted)

· Feb 17, 2020 · Academic Editor

Accept

The revised manuscript has improved a great deal and can be accepted as it is.

[# PeerJ Staff Note - this decision was reviewed and approved by Keith Crandall, a PeerJ Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

No comment.

Experimental design

No comment.

Validity of the findings

No comment.

Additional comments

The authors have addressed the reviewers' comments.

Version 0.1 (original submission)

· Jan 7, 2020 · Academic Editor

Minor Revisions

Two specialists in the field evaluated this submission. They see merit in the manuscript and suggest minor revisions. Please ensure that the English language in this submission meets our standards: uses clear and unambiguous text, is grammatically correct, and conforms to professional standards of courtesy and expression. In light of these reviewers' evaluations, I recommend minor revisions to this paper.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in an appropriate rebuttal letter, and please ensure that any edits or clarifications mentioned in the rebuttal letter are also inserted into the revised manuscript (where appropriate). Direction on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

In this reviewer's opinion, the manuscript is well organized and easy to read. The use of English is acceptable. The literature review is also sufficient and relevant.

Experimental design

This study fits the journal well. The research question is well defined. The data collection process needs further explanation.

Validity of the findings

The proposed model may be overfitting; this issue should be addressed.

Additional comments

This study is interesting in that it adopts machine learning approaches to predict leg weakness and lameness in pigs. Its findings may have important practical implications. Several suggestions are provided for the authors.

1. ‘Materials and Methods’ section: How the predictors were selected for building the model should be explained.
2. ‘Materials and Methods’ section: It is suggested that the authors present the operational definitions and measurement scales of the predictors used in this study. Further, how these predictors were collected should also be delineated. If data collection involved human judgment, what were the raters' qualifications, and how was the reliability of the collected data ensured?
3. ‘Materials and Methods’ section: Were there any particular considerations for choosing these nine machine learning algorithms?
4. ‘Data Preparation’ section: Regarding the class imbalance issue, it is suggested that the authors explicitly state which method (under- or over-sampling) was used to address it, rather than simply saying ‘…using the ROSE package…’
5. ‘Model training’ section: This study randomly chose 10% of the sample for the final assessment; nevertheless, no assessment results were presented.
6. ‘Results’ section: The proposed model suffers from ‘overfitting’, because the prediction accuracy on the training data is higher than that on the test data (lines 238-239). This issue should be resolved; otherwise, the proposed model will be unable to predict new data accurately.
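Regarding comment 4, the original study balanced its classes with R's ROSE package, whose exact settings are not stated in the review. As a hedged illustration of what "over-sampling" means (the function name and toy data below are hypothetical, not from the manuscript), the following minimal Python sketch duplicates minority-class rows until the classes are balanced:

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until every class
    reaches the size of the largest class (naive over-sampling).

    Illustrative only; R's ROSE package generates synthetic examples
    rather than exact duplicates.
    """
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        X_out.extend(rows)
        y_out.extend([label] * len(rows))
        for _ in range(target - len(rows)):  # top up the minority class
            X_out.append(rng.choice(rows))
            y_out.append(label)
    return X_out, y_out

# Toy data: 90 "sound" pigs vs 10 "lame" pigs -> balanced 90/90
X = [[i] for i in range(100)]
y = ["sound"] * 90 + ["lame"] * 10
Xb, yb = random_oversample(X, y)
```

Note that, to avoid inflating test accuracy (relevant to comment 6), any such resampling should be applied only to the training split, never before the train/test split.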

Reviewer 2 ·

Basic reporting

See below

Experimental design

See below

Validity of the findings

See below

Additional comments

The paper can be accepted, but it needs to be corrected and updated. The comments are:

a) There are many grammatical mistakes, such as:

"Pork is the most widely consumed meats in the world"

b) A description of the data balancing techniques used is not given

c) There also exist some classifiers whose performance is least affected by class imbalance; comments on this are also required

d) The reason for using a particular method to fill in the missing values needs to be stated, and it also needs to be mentioned how effective that approach was

e) The significance of the Kappa value (Table 1) and the Friedman rank (Table 2), along with the other metrics, needs to be mentioned, as does the relevance of these tests to the results

f) The description of the data used needs more detail

g) Figure 1 is confusing with respect to the RF and KNN classifiers

h) Figures 2 and 3 are not readable; the quality of these figures needs to be improved

i) More emphasis could be placed on the proposed methodology and the conclusions of this work
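On comment e): the Kappa value in question is presumably Cohen's kappa, which measures agreement between predictions and true labels corrected for the agreement expected by chance (0 means chance-level agreement, 1 means perfect agreement). As a minimal, self-contained sketch of the computation (the function name and data below are illustrative, not taken from the manuscript):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement)
    / (1 - chance agreement)."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    # Fraction of samples where prediction matches the label
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Agreement expected by chance from the two label distributions
    expected = sum(
        (y_true.count(lab) / n) * (y_pred.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Perfect agreement gives kappa = 1.0; chance-level agreement gives ~0.
print(cohens_kappa(["a", "b", "a", "b"], ["a", "b", "a", "b"]))  # 1.0
print(cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0]))                  # 0.0
```

A kappa near the raw accuracy on a balanced test set, but much lower on an imbalanced one, is exactly why the reviewer asks for its significance to be spelled out alongside the other metrics.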

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.