Leveraging multimodal learning to address oral health inequities: A public health policy perspective


Abstract

Oral health disparities persist as a critical public health challenge, particularly within marginalized communities where barriers to care and preventive interventions are prevalent. Conventional methods in public health policy have exhibited limitations in effectively addressing these disparities, largely due to fragmented data systems and insufficient integration of multifaceted determinants of oral health. This study introduces an innovative multimodal learning framework designed to enhance policy-making and oral health outcomes by unifying diverse data sources—including clinical, socioeconomic, behavioral, and environmental factors—into a comprehensive analytical model. The framework incorporates the Cross-Modal Coherence Encoder (CMCE), leveraging structure-preserving attention mechanisms to align and integrate heterogeneous data modalities, thereby capturing intricate intra- and inter-modal relationships. Additionally, the Semantic Anchor Matching (SAM) mechanism is employed to refine the learning process by introducing latent semantic anchors, ensuring robust and semantically consistent representations even in the presence of incomplete or noisy data. Experimental evaluations indicate that the proposed framework achieves substantial improvements over traditional unimodal approaches in predicting oral health outcomes and identifying vulnerable populations. By revealing complex interdependencies among diverse determinants, this integrative methodology provides actionable insights for formulating targeted and evidence-based public health policies. The results highlight the transformative potential of advanced machine learning techniques to advance oral health equity and reduce systemic disparities in underserved populations.
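The abstract describes two mechanisms only at a high level: a Cross-Modal Coherence Encoder (CMCE) that aligns heterogeneous modalities with attention, and Semantic Anchor Matching (SAM) that pulls fused representations toward latent semantic anchors. The sketch below is one plausible, minimal reading of those ideas, not the authors' actual implementation; all dimensions, weight matrices, and the anchor count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy record for one population unit, split across two modalities
# (feature counts are arbitrary placeholders).
clinical = rng.normal(size=(4, 8))   # 4 clinical tokens, 8 raw features each
socio = rng.normal(size=(3, 5))      # 3 socioeconomic tokens, 5 raw features each

d = 16  # shared embedding dimension (assumed)
W_clin = rng.normal(scale=0.1, size=(8, d))
W_socio = rng.normal(scale=0.1, size=(5, d))

h_clin = clinical @ W_clin           # project clinical tokens -> (4, d)
h_socio = socio @ W_socio            # project socioeconomic tokens -> (3, d)

# Cross-modal attention: each clinical token attends over socioeconomic
# tokens, yielding an aligned joint representation (a CMCE-style step).
attn = softmax(h_clin @ h_socio.T / np.sqrt(d), axis=-1)  # (4, 3)
fused = h_clin + attn @ h_socio                           # (4, d)

# Semantic anchors: K latent prototypes; each fused token is softly
# assigned to nearby anchors, which regularizes representations when
# inputs are noisy or incomplete (a SAM-style step).
K = 5
anchors = rng.normal(scale=0.1, size=(K, d))
assign = softmax(fused @ anchors.T / np.sqrt(d), axis=-1)  # (4, K)
anchored = assign @ anchors                                # (4, d)

print(fused.shape, assign.shape, anchored.shape)
```

In a trained model the projections and anchors would be learned jointly with the outcome-prediction objective; here they are random matrices used only to show the data flow.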