AI-assisted assessment of the IFSO consensus on obesity management medications in the context of metabolic bariatric surgery
Authors: Kermansaravi, Mohammad; Salminen, Paulina; Prager, Gerhard; Cohen, Ricardo V.
Publisher: Public Library of Science
Year: 2025
Journal: PLoS Digital Health
Article number: e0001132
Volume: 4
Issue: 12
ISSN: 2767-3170
DOI: https://doi.org/10.1371/journal.pdig.0001132
Link: https://research.utu.fi/converis/portal/detail/Publication/508658198
Artificial intelligence (AI) and large language models (LLMs), when combined with human expertise in collaborative intelligence (CI), can enhance medical decision-making, reduce bias in guideline development, and support precision care. New obesity management medications (OMMs) such as GLP-1 receptor agonists and dual incretin mimetics complement metabolic bariatric surgery (MBS) but currently lack clear integration strategies. To address this gap, IFSO released consensus guidelines in 2024. This study evaluates their robustness by comparing expert recommendations with LLM outputs, highlighting the role of AI in assessing and strengthening clinical consensus. Thirty-one IFSO consensus statements were tested across eleven advanced LLMs on June 1, 2025. Models received standardized prompts that required binary “AGREE” or “DISAGREE” outputs, supported by brief, evidence-based rationales. Individual responses were aggregated to form an overall “LLM consensus,” and mean percentage agreement was calculated against the original IFSO expert grades; Fleiss’ kappa (κ) quantified inter-model reliability beyond chance. Incorporating the AI responses led to shifts in the consensus grade for 2 of the 31 statements. One statement originally rated A+ was downgraded to A after some LLMs’ outputs indicated disagreement, citing nuanced evidence on pre- and post-MBS OMM use and comparative effectiveness. One statement on combining OMMs with endoscopic therapies was upgraded from C to B due to unanimous support from the LLMs. The remaining 29 statements maintained their original grades, demonstrating strong overall alignment between LLM outputs and expert consensus. Overall concordance between LLMs and experts was 93%, with substantial inter-model agreement (κ = 0.81 [95% CI 0.74–0.87]). Integrating AI, especially LLMs, into collaborative intelligence frameworks can strengthen clinical consensus when evidence is limited.
This study also shows that concordance between LLM outputs and expert consensus should not be taken as evidence of objectivity; rather, it may simply reflect overlap between the published evidence base and the models’ training data or retrieval sources.
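The inter-model reliability statistic reported in the abstract, Fleiss’ kappa for multiple raters over binary AGREE/DISAGREE votes, can be illustrated with a short sketch. The vote counts below are invented toy data, not the study’s actual ratings; the setup mirrors the paper’s design of eleven model raters per statement.

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa. counts[i][j] = number of raters assigning
    category j to item i; each item must have the same rater count."""
    N = len(counts)                      # number of items (statements)
    n = sum(counts[0])                   # raters per item
    # Observed agreement per item, averaged over items
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Expected chance agreement from marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(len(counts[0]))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Toy data: 5 statements, 11 model raters, columns = [AGREE, DISAGREE]
votes = [[11, 0], [10, 1], [11, 0], [2, 9], [11, 0]]
print(round(fleiss_kappa(votes), 3))  # → 0.658
```

Note that near-unanimous agreement on most items can still yield a moderate κ, because chance agreement P_e is high when one category dominates the marginals.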
Funding:
The author(s) received no specific funding for this work.