Information Processing Society of Japan (IPSJ), 87th National Convention

6B-01
Enhancing interpretability of large language models for contextually dissimilar tasks
○Prasan Yapa, Ashala Lakmini Senanayake, Zilu Liang (Kyoto University of Advanced Science)
Prompt-based engineering methods, including instruction prompt tuning (IPT), are gaining traction as a way to reformulate downstream tasks into formats that large language models (LLMs) can process. While most studies focus on in-domain knowledge transfer, IPT remains largely unexplored for out-of-domain (OOD) transfer, where the target tasks are contextually dissimilar from those seen during tuning. To address this gap, a novel LLM-based framework is proposed that enhances interpretability and improves OOD transferability to unseen tasks. The framework applies computational intelligence techniques to refine IPT, enabling LLMs to detect contextually dissimilar depression cases in text-based conversational data more reliably and broadening their effectiveness in diverse and challenging scenarios.
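
As an illustrative sketch of the instruction prompt tuning component, the snippet below prepends trainable soft-prompt tokens, initialised from a natural-language instruction, to a frozen backbone and updates only those embeddings for binary depression detection on conversational text. The backbone (roberta-base), the PEFT-based implementation, the instruction wording, and the label scheme are assumptions made for illustration and are not taken from this work.

```python
# Minimal sketch of instruction prompt tuning (IPT) for depression detection.
# All concrete choices here (roberta-base backbone, PEFT soft-prompt setup,
# instruction wording, label scheme) are illustrative assumptions, not the
# configuration used in this work.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "roberta-base"  # assumed backbone; the actual LLM is not specified here
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Soft prompt initialised from a natural-language instruction; the backbone
# stays frozen, and only the virtual-token embeddings (plus the classifier
# head) are updated during training.
config = PromptTuningConfig(
    task_type=TaskType.SEQ_CLS,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Decide whether the speaker shows signs of depression.",
    num_virtual_tokens=20,
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()

# One illustrative optimisation step on a toy conversational utterance.
batch = tokenizer(
    ["I haven't felt like getting out of bed for weeks."],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([1])  # 1 = depression indicated (assumed label scheme)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)

model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```

In this sketch, OOD transfer would correspond to reusing the tuned soft prompt on conversational data whose context differs from the tuning distribution; how the prompt is refined for such cases is the subject of the proposed framework.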