Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on November 15th, 2024 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on January 14th, 2025.
  • The first revision was submitted on February 21st, 2025 and was reviewed by 3 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on March 3rd, 2025.

Version 0.2 (accepted)

· Mar 3, 2025 · Academic Editor

Accept

All the reviewers' comments have been addressed carefully and sufficiently. The revisions are sound from my point of view, and I think the current version of the paper can be accepted.

[# PeerJ Staff Note - this decision was reviewed and approved by Daniel S. Katz, a PeerJ Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

No more comments.

Experimental design

No more comments.

Validity of the findings

No more comments.

Additional comments

No more comments.


Reviewer 2 ·

Basic reporting

No comments.

Experimental design

No comments.

Validity of the findings

No comments.

Additional comments

The authors addressed all comments; the paper can be accepted now.


Reviewer 3 ·

Basic reporting

The authors have revised this paper according to the given suggestions.
This paper should be accepted.

Experimental design

This paper is well-written and well-organized.

Validity of the findings

The findings and results are correct and valid.

Additional comments

The authors have revised this paper carefully, and it meets the level of acceptance.


Version 0.1 (original submission)

· Jan 14, 2025 · Academic Editor

Major Revisions

Dear authors,

Reviewers have now commented on your paper. You will see that they advise you to make major revisions to your manuscript. If you are prepared to undertake the work required, I would be pleased to reconsider my decision.

If you decide to revise the work, please submit a list of changes or a rebuttal against each point that is being raised when you submit the revised manuscript.

Best wishes,
D. Pamucar

Reviewer 1 ·

Basic reporting

1. English Language and Readability
• The manuscript is generally readable and written in professional, academic language. However, there are areas where phrasing becomes overly long or convoluted, potentially obscuring meaning for an international audience. For example, sentences in Sections 3.2 and 5 are sometimes too dense and would benefit from rephrasing for clarity.
• Some minor grammar inconsistencies appear (e.g., occasional omission of articles, minor punctuation issues). Although not pervasive, careful proofreading would further polish the manuscript. Below are a few illustrative examples (verbatim excerpts) from the manuscript where minor grammar and/or punctuation issues appear. While these do not severely impact comprehension, they do show where additional proofreading would help:

o Omission or Misuse of Articles
“It is based on the concept that, given a set of alternatives and two ideal solutions, one positive (PIS) and one negative (NIS), the best alternative is the one that has the shortest Euclidean distance from the positive ideal solution and the longest distance from the negative ideal solution.” (Section 2.2.2)

Here, there is no critical error, but the sentence is lengthy and would read more clearly as something like:
“It is based on the concept that, given a set of alternatives and two ideal solutions—a positive (PIS) and a negative (NIS)—the best alternative is the one closest to the positive ideal solution and farthest from the negative ideal solution.”

o Inconsistent or Missing Hyphenation
“… as it is necessary to support the decision making. This is an open problem that has not been analysed in the context of SPL and cannot be directly applied from other domains.” (lines 53–54)

A minor stylistic point is that “decision making” here would typically be hyphenated as “decision-making” in formal writing.

o Awkward/Run-On Sentence Construction
“It should also be noted that FM allows features to have multiple values due to the flexibility of the OR operator.” (Section 3.2)

While not grammatically wrong, it can be tightened to something like:
“Notably, the FM may allow features to have multiple values via the OR operator, introducing additional complexity.”

o Pronoun/Antecedent Shifts
“… we need to help him/her decide which option is better in our case and why, so that he/she can make an informed choice.” (Section 3.4)

A more concise construction would avoid bundled pronouns (e.g., “him/her” and “he/she”) or rephrase these references neutrally, like “the user” or “they.”

In general, these issues appear sporadically—mostly involving missing articles (“the,” “an”) or small style inconsistencies—rather than repeated fundamental grammatical errors. A meticulous review (or a copyedit pass) to standardize article use, punctuation, and sentence flow should resolve these points.

2. Literature Context and References
• The related work section covers diverse contributions on software product lines (SPL) and multi-criteria decision-making techniques, referencing well-known methods such as AHP, TOPSIS, and VIKOR.
• Relevant works on test prioritization (e.g., Section 8) and multi-objective optimization approaches (e.g., genetic algorithms for SPL) are properly mentioned. This provides a decent context for the research. However, in some places, the review of existing MCDM methods remains slightly superficial. For instance, the discussion focuses primarily on AHP, TOPSIS, and VIKOR and briefly mentions others (PROMETHEE, ELECTRE) but does not deeply compare them.
• The background on variability models and SPL is adequately introduced (Sections 1 and 2.1). While the manuscript references relevant works, some more recent studies (beyond 2021) in MCDM or SPLs might be included to strengthen currency.
3. Structure, Figures, and Tables
• The manuscript is logically organized into well-defined sections matching the authors’ stated methodology: introduction, foundations, methodology, detailed application, decision support, evaluation, discussion, and conclusion.
• Figures (e.g., Figures 4, 5, 6) and tables (e.g., Tables 1, 2, 5, etc.) are clearly labeled and generally aligned with the text. They are useful in clarifying the methods and examples.
• The arrangement of subheadings (particularly in Sections 4 and 5) is coherent; however, some crucial subsections (notably 3.4 and 5) could benefit from more explicit statements summarizing key points.

4. Raw Data and Supplementary Material
• The manuscript references the AMADEUS framework, which apparently provides a repository for code. The mention of GitHub is helpful; however, it would be beneficial to confirm whether the raw data, pairwise matrices, and all numeric scale assignments are fully available. Offering more direct links or additional appendices with step-by-step examples (especially the preference matrices) would enhance reproducibility.
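
To make the requested step-by-step material concrete, here is a minimal sketch of how a pairwise comparison matrix can be reduced to criteria weights in AHP-style prioritization. It uses the geometric-mean row method and a hypothetical 3×3 matrix on Saaty's 1–9 scale; the values are invented for illustration and are not taken from the manuscript.

```python
import math

def ahp_weights(pairwise):
    """Derive criteria weights from a reciprocal pairwise-comparison
    matrix via the geometric-mean row method, a common approximation
    of the principal-eigenvector weights used in AHP."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical matrix: criterion A is 3x as important as B
# and 5x as important as C (reciprocals fill the lower triangle).
matrix = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(matrix)  # weights sum to 1, ordered A > B > C
```

Publishing exactly this kind of worked matrix alongside the raw data would let readers re-derive every weight used in the examples.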

Experimental design

1. Research Question and Scope
• The manuscript states a clear research question: “How to resolve discrepancies among different multicriteria decision-making methods when prioritizing configurations in an SPL?” This question is relevant and meaningful for the SPL community.
• The study is firmly within the scope of software product lines and multi-criteria decision-making, aligning with typical journal criteria.

2. Methodological Rigor
• The paper introduces a phased methodology: (1) define input/model, (2) set preferences, (3) apply MCDM methods, and (4) handle discrepancies. This structure is thorough in concept.
• The authors provide two application examples (the Apache feature model and the SSL/TLS feature model), showing the process in detail. These examples help illustrate the approach’s practicality.

3. Detail for Replication
• While the conceptual details of each MCDM method (AHP, TOPSIS, VIKOR) are well summarized, certain numerical matrices (especially for TOPSIS and VIKOR) rely on partially described preference scales. The authors do give examples (like Tables 8 and 16), but occasionally, the reasoning behind specific numeric assignments is not sufficiently explained. Making explicit how the scale from “false/want/true” or “v1, v1.1, v1.2 & v1.3” was settled upon based on user preferences would bolster the paper.
• The authors claim the approach can handle large feature models, but actual performance/scalability constraints are not discussed in detail. More metrics (e.g., time complexity or a rough estimate of feasible model size) would be useful.
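
For reference, the TOPSIS step that these preference scales feed into can be sketched in a few lines; the decision matrix, weights, and benefit/cost flags below are invented placeholders, not the manuscript's actual Tables 8 or 16.

```python
import math

def topsis(decision, weights, benefit):
    """Score alternatives by relative closeness to the positive ideal
    solution (PIS) versus the negative ideal solution (NIS)."""
    m, n = len(decision), len(decision[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(decision[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    v = [[weights[j] * decision[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # PIS takes the best value per criterion, NIS the worst.
    pis = [max(col) if benefit[j] else min(col)
           for j, col in enumerate(zip(*v))]
    nis = [min(col) if benefit[j] else max(col)
           for j, col in enumerate(zip(*v))]
    # Relative closeness in [0, 1]; higher is better.
    return [math.dist(row, nis) / (math.dist(row, pis) + math.dist(row, nis))
            for row in v]

# Hypothetical 3 configurations x 2 criteria (the second is a cost):
scores = topsis([[7, 2], [8, 4], [9, 6]], [0.6, 0.4], [True, False])
ranking = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```

Spelling out, for each criterion, how qualitative labels ("false/want/true", version numbers) map onto the numeric columns of such a matrix is exactly the information a replicator needs.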

4. Ethical Standards
• Not directly applicable, as this is not human-subjects research. There are no apparent ethical concerns.

Validity of the findings

1. Data Soundness and Statistical Support
• The authors demonstrate how each MCDM method yields a ranking of the same set of configurations, sometimes with discrepancies. This is consistent with established knowledge of MCDM methods, where differences in distance metrics or weighting can shift the ordering of alternatives.

• The authors’ informed decision-making phase (Phase 4) is a useful addition, highlighting differences and “overall deviation.” However, one concern is that the authors rely largely on example-based validation. A broader or more formal proof of how robust these merges or “overall deviation” measures are would strengthen the claims.
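
Since this review does not reproduce the paper's exact "overall deviation" formula, the discrepancy between two methods' rankings can at least be illustrated with a standard proxy, the normalized Kendall distance (the fraction of alternative pairs the two rankings order differently); the rank vectors below are invented for illustration.

```python
from itertools import combinations

def kendall_distance(rank_a, rank_b):
    """Fraction of alternative pairs ordered differently by two rankings
    (0.0 = identical order, 1.0 = fully reversed).
    rank_a[i] is the rank position of alternative i under method A."""
    pairs = list(combinations(range(len(rank_a)), 2))
    disagreements = sum(
        1 for i, j in pairs
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0
    )
    return disagreements / len(pairs)

# Hypothetical rank positions of four configurations under two methods:
ahp_ranks = [1, 2, 3, 4]
vikor_ranks = [2, 1, 3, 4]  # top two configurations swapped
deviation = kendall_distance(ahp_ranks, vikor_ranks)  # 1 of 6 pairs disagrees
```

Reporting such a measure across all method pairs would be one way to supply the more formal robustness evidence the claims currently lack.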

2. Conclusions and Justification
• The conclusions (Section 9) align with the core results. They identify that the methodology helps interpret different permutations of MCDM-based results and provide some guidelines for future work.
• The novelty claimed is that this is one of the first attempts to systematically integrate multiple MCDM methods in feature model prioritization and then reconcile their discrepancies. This is a valid and interesting proposition, though it could be further enhanced by a more extensive comparison with existing tools or frameworks that attempt partial prioritization.

3. Limitations
• The authors acknowledge that the approach depends on expert-driven weighting, which can be subjective. They also note that large-scale FMs require substantial effort in preference specification. A deeper discussion on how to mitigate or manage these limitations (e.g., advanced partial weighting strategies or automated preference learning) would be helpful.
• There is no specific measure of the method’s computational efficiency in significantly large industrial contexts. This remains a gap that might be addressed in future studies.

Additional comments

GENERAL COMMENTS
1. Most Important Issue
• Clarification on Example Scales and Preferences: The paper would benefit from more transparent justification for the numeric scales assigned to each feature instance. Currently, the paper presents these scales and pairwise matrices but does not always detail the practical rationale behind the assigned values. Providing a deeper explanation increases confidence in reproducibility and correctness.

2. Next Important Item
• Handling Large Feature Models: The authors mention the approach can scale to large models, but evidence or a short proof-of-concept on time or memory costs (even approximate) is missing. This is particularly important given that an FM could contain thousands of features.

3. Additional Points
• The writing in a few sections can be tightened to remove redundancy (e.g., repeating method steps). For instance, the authors restate how AHP or TOPSIS works in multiple places. Streamlining the text would improve flow without sacrificing clarity.
• The suggestions for future work in Section 9 are appropriate, particularly for investigating other MCDM methods and further automation of the discrepancy-handling phase. This could further extend the paper’s impact.

4. Least Important Points
• A minor formatting detail: Some tables (e.g., Table 21) appear quite large, and the scale used (e.g., “0%, 50%, 100%”) might be clarified with abbreviations or footnotes for quick reference. Additionally, the styles used for tables vary; some have borders while others do not. Consistency in style would enhance the overall visual appeal of the paper.

Commendation: The authors present a comprehensive framework and thorough examples, demonstrating both practicality and clarity in the methodology. Their approach to bridging the gap when multiple MCDM methods yield different outcomes is particularly strong. The “informed decision-making” phase is nicely articulated, helping users see the rationale behind discrepancies.


Reviewer 2 ·

Basic reporting

No comments.

Experimental design

No comments.

Validity of the findings

No comments.

Additional comments

1. The abstract should be improved. The present abstract is not up to the mark.
2. The contribution and the motivation of the proposed theory are missing. The authors must describe the new contributions and the motivation of the proposed work in the introduction.
3. The authors should improve the problem description. The current problem description is unclear and hard to understand.
4. As the authors discuss a decision-making approach, they should discuss and add recent work on decision-making approaches.
5. The broader significance and practical implications of the work should be added to the manuscript.
6. The authors need to compare the proposed approach with existing methods.
7. The conclusion is written like a contribution; there is no deep analysis of the obtained knowledge, no discussion, and no wider application.
8. How can the authors expand their work to other mathematical frameworks, such as the hesitant bipolar complex fuzzy set devised by Tahir Mahmood?
9. Add some merits and limitations of the proposed work.


Reviewer 3 ·

Basic reporting

The paper focuses on resolving discrepancies that arise when different multi-criteria decision-making (MCDM) methods, such as AHP, TOPSIS, and VIKOR, produce varied rankings for the same feature model and user preference criteria.
The authors proposed a systematic framework to guide users in selecting the most suitable configuration. The framework integrates decision support mechanisms to prioritize configurations methodically, even when different rankings are produced. This solution is validated through a real-world case study, demonstrating its practical applicability and effectiveness.
The paper is well-organized, well-written, and mathematically correct to the best of my knowledge.
I recommend that this article be published. However, I have some suggestions, given below:
1. Rewrite the abstract and introduction sections of the paper; they should provide the problem statement along with the contributions.
2. The study could benefit from a deeper comparative analysis of AHP, TOPSIS, and VIKOR. Specifically, discussing scenarios where one method outperforms others or is more suitable based on certain criteria would add significant value.
3. Each new definition should be followed by an example. Add a comparative study of fuzzy sets, AHP, TOPSIS, to show advantages of this study.
4. The paper could introduce quantitative metrics or benchmarks to evaluate the framework’s performance. Metrics such as computational efficiency, accuracy in reflecting user preferences, or resolution time for discrepancies would be useful.
5. Include some related references recently published in this journal. Some comments on the literature are not up to date. Please read and update the related literature on TOPSIS, VIKOR, linear Diophantine fuzzy sets, etc.
6. The proposed framework is innovative and practical, with a strong foundation in established decision-making techniques. While there are opportunities to improve clarity and expand the scope of the analysis, the work provides a significant step forward in systematic and informed decision support for SPL configuration prioritization.
7. The authors should add future work in the light of these related topics. The future research scope is not clearly mentioned in the conclusion section. More specific future research directions are welcome.
8. This paper is recommended for practitioners and researchers interested in SPLs, variability modeling, and decision-making frameworks. Further exploration into scalability and domain-specific applications could lead to broader adoption and impact.

Experimental design

This paper is well-organized, well-written, and mathematically correct to the best of my knowledge.

Validity of the findings

The results are mathematically correct to the best of my knowledge.

Additional comments

I recommend that this article be published. However, I have given some suggestions to improve this paper.


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.