Information Processing Society of Japan, 86th National Convention (March 15–17, 2024)

7B-01
A Language-Model-Based Multi-Agent Fact-Checking System for Online Discussions
○Yihan Dong, Takayuki Ito (Kyoto University)
This paper presents the design and implementation of a multi-agent fact-checking framework that judges how credible posts in online discussions are. Rumours and misinformation spread through social networking services (SNS) and mislead the direction of online discussions; detecting them therefore remains an important problem in the SNS domain.
Previous research on rumour detection has several issues: 1) most fact-checking work relies on a single information source that is assumed to be authoritative; 2) judgements made by a large language model (LLM) from the provided information are often treated as credible without question; 3) binary classification is unsuitable for posts in online discussions, since participants may not spread rumours or misinformation intentionally. To address these challenges, we propose an LLM-based multi-agent fact-checking framework that verifies whether posts in online discussions are trustworthy.
Specifically, to address issues 1) and 2), the framework employs multiple fact-checking agents, each of which gathers evidence from a distinct information source and judges the credibility of a claim together with a confidence score. For issue 3), we also design a mechanism that aggregates the agents' judgements and confidence scores into a credibility score for each post and classifies the original posts with multiple labels.
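The paper does not spell out the aggregation formula here; as one plausible illustration, a confidence-weighted voting scheme with threshold-based multi-label mapping could look like the following Python sketch (the verdict scale, thresholds, label names, and example source types are all assumptions, not the paper's actual design):

```python
from dataclasses import dataclass

@dataclass
class AgentJudgement:
    # verdict in [-1.0, 1.0]: -1 = claim refuted, +1 = claim supported
    # (a hypothetical scale, not necessarily the one used in the paper)
    verdict: float
    # confidence in [0.0, 1.0] reported by the agent for its own judgement
    confidence: float

def aggregate_credibility(judgements):
    """Confidence-weighted average of agent verdicts.

    Agents that are more confident contribute more to the final score;
    this is one reasonable scheme, not the paper's exact mechanism.
    """
    total_conf = sum(j.confidence for j in judgements)
    if total_conf == 0:
        return 0.0  # no usable evidence from any agent
    return sum(j.verdict * j.confidence for j in judgements) / total_conf

def to_label(score):
    """Map the aggregated score to one of several labels rather than a
    binary true/false decision (label names are illustrative only)."""
    if score >= 0.5:
        return "credible"
    if score >= 0.1:
        return "likely credible"
    if score > -0.1:
        return "unverifiable"
    if score > -0.5:
        return "likely misinformation"
    return "misinformation"

# Hypothetical judgements from three agents, each tied to a different source
judgements = [
    AgentJudgement(verdict=0.8, confidence=0.9),   # e.g. news-archive agent
    AgentJudgement(verdict=0.4, confidence=0.5),   # e.g. web-search agent
    AgentJudgement(verdict=-0.2, confidence=0.3),  # e.g. knowledge-base agent
]
score = aggregate_credibility(judgements)
print(to_label(score))  # → credible (weighted score ≈ 0.506)
```

A weighted scheme like this directly uses point 2) above: an LLM agent's verdict is never taken at face value, but discounted by its own stated confidence before the post-level label is assigned.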
Finally, the system was evaluated on several related datasets. The results, reported in this paper, demonstrate that the framework is feasible and practical.