A1 Refereed original research article in a scientific journal
AI-assisted assessment of the IFSO consensus on obesity management medications in the context of metabolic bariatric surgery
Authors: Kermansaravi, Mohammad; Salminen, Paulina; Prager, Gerhard; Cohen, Ricardo V.
Publisher: Public Library of Science
Publication year: 2025
Journal: PLoS Digital Health
Article number: e0001132
Volume: 4
Issue: 12
eISSN: 2767-3170
DOI: https://doi.org/10.1371/journal.pdig.0001132
Publication's open availability at the time of reporting: Open Access
Publication channel's open availability: Open Access publication channel
Web address: https://doi.org/10.1371/journal.pdig.0001132
Self-archived copy’s web address: https://research.utu.fi/converis/portal/detail/Publication/508658198
Self-archived copy's licence: CC BY
Self-archived copy's version: Publisher's PDF
Artificial intelligence (AI) and large language models (LLMs), when combined with human expertise in collaborative intelligence (CI), can enhance medical decision-making, reduce bias in guideline development, and support precision care. New obesity management medications (OMMs) such as GLP-1 receptor agonists and dual incretin mimetics complement metabolic bariatric surgery but currently lack clear integration strategies. To address this gap, IFSO released consensus guidelines in 2024. This study evaluates their robustness by comparing expert recommendations with LLM outputs, highlighting the role of AI in assessing and strengthening clinical consensus. Thirty-one IFSO consensus statements were tested across eleven advanced LLMs on June 1, 2025. Models received standardized prompts that required binary “AGREE” or “DISAGREE” outputs, supported by brief, evidence-based rationales. Individual responses were aggregated to form an overall “LLM consensus,” and mean percentage agreement was calculated against the original IFSO expert grades; Fleiss’ kappa quantified inter-model reliability beyond chance. Incorporating the AI responses led to shifts in the consensus grade for 2 of the 31 statements. One statement originally rated A+ was downgraded to A after some LLMs’ outputs indicated disagreement, citing nuanced evidence on pre- and post-MBS OMM use and comparative effectiveness. One statement on combining OMMs with endoscopic therapies was upgraded from C to B due to unanimous support from the LLMs. The remaining 29 statements maintained their original grades, demonstrating strong overall alignment between LLM outputs and expert consensus. Overall concordance between LLMs and experts was 93%, with substantial inter-model agreement (κ = 0.81 [95% CI 0.74–0.87]). Integrating AI, especially LLMs, into collaborative intelligence frameworks strengthens clinical consensus when evidence is limited.
This study shows that concordance between LLM outputs and expert consensus should not be taken as evidence of objectivity; rather, it may simply reflect overlap between the published evidence base and the models’ training data or retrieval sources.
Downloadable publication: This is an electronic reprint of the original article.
Funding information in the publication:
The author(s) received no specific funding for this work.