gxceed

ESG-Bench: Benchmarking Long-Context ESG Reports for Hallucination Mitigation


Siqi Sun, Ben Wu, Mali Jin, Peizhen Bai, Han Zhang, Xingyi Song

AAAI Conference on Artificial Intelligence · 📚 Peer-reviewed / Journal · 2026-03-13 · #AI×ESG
DOI: 10.1609/aaai.v40i46.41281
Source: https://doi.org/10.1609/aaai.v40i46.41281

🤖 gxceed AI Summary

Japanese (translated)

To improve the reliability of ESG report interpretation and automated analysis, this paper proposes the ESG-Bench dataset. Human-annotated QA pairs are used to evaluate hallucination in large language models (LLMs). A task-specific Chain-of-Thought approach is shown to reduce hallucinations substantially more than standard methods.

English

This paper introduces ESG-Bench, a benchmark dataset for evaluating LLMs on ESG report understanding and hallucination mitigation. The dataset includes human-annotated QA pairs with factual support labels. Chain-of-Thought prompting and fine-tuning methods outperform standard approaches in reducing hallucinations, with transfer gains to other QA benchmarks.

Unofficial AI-generated summary based on the public title and abstract. Not an official translation.

📝 gxceed Editorial Commentary: Why this matters

In the Japanese GX context

ESG reporting is also becoming mandatory in Japan, and this benchmark can help evaluate the accuracy of disclosed content. Japanese-language support, however, remains future work.

In the global GX context

With ESG reporting becoming mandatory in many jurisdictions (e.g., the EU's CSRD and SEC climate-disclosure rules), this benchmark provides a tool to verify LLM reliability for ESG analysis, which is crucial for ensuring trust in automated disclosure processing.

👥 Implications by Reader

🔬 Researchers: Researchers can use ESG-Bench to evaluate and improve LLMs for ESG-specific tasks.

🏢 Practitioners: Corporate sustainability teams can leverage this benchmark to assess AI tools for ESG report analysis.

🏛 Policymakers: Policymakers can consider this benchmark when setting standards for AI-assisted disclosure verification.

📄 Abstract (original)

As corporate responsibility increasingly incorporates environmental, social, and governance (ESG) criteria, ESG reporting is becoming a legal requirement in many regions and a key channel for documenting sustainability practices and assessing firms’ long-term and ethical performance. However, the length and complexity of ESG disclosures make them difficult to interpret and automate the analysis reliably. To support scalable and trustworthy analysis, this paper introduces ESG-Bench, a benchmark dataset for ESG report understanding and hallucination mitigation in large language models (LLMs). ESG-Bench contains human-annotated question–answer (QA) pairs grounded in real-world ESG report contexts, with fine-grained labels indicating whether model outputs are factually supported or hallucinated. Framing ESG report analysis as a QA task with verifiability constraints enables systematic evaluation of LLMs’ ability to extract and reason over ESG content and provides a new use case: mitigating hallucinations in socially sensitive, compliance-critical settings. We design task-specific Chain-of-Thought (CoT) prompting strategies and fine-tune multiple state-of-the-art LLMs on ESG-Bench using CoT-annotated rationales. Our experiments show that these CoT-based methods substantially outperform standard prompting and direct fine-tuning in reducing hallucinations, and that the gains transfer to existing QA benchmarks beyond the ESG domain.
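The abstract frames ESG report analysis as a QA task with verifiability labels, evaluated under task-specific Chain-of-Thought prompting. The sketch below illustrates what that setup might look like in practice; the dataset schema, prompt wording, and label names (`supported` / `hallucinated`) are assumptions for illustration, not the authors' actual release.

```python
# Hypothetical sketch of an ESG-Bench-style evaluation loop.
# Schema, prompt wording, and label names are assumptions, not the paper's release.
from dataclasses import dataclass


@dataclass
class QAItem:
    context: str     # excerpt from an ESG report
    question: str
    gold_label: str  # "supported" or "hallucinated"


def build_cot_prompt(item: QAItem) -> str:
    """Task-specific Chain-of-Thought prompt (wording is illustrative)."""
    return (
        "You are analysing an ESG report excerpt.\n"
        f"Context: {item.context}\n"
        f"Question: {item.question}\n"
        "Think step by step: (1) locate the relevant passage, "
        "(2) check whether it states the answer explicitly, "
        "(3) answer only if the context supports it; otherwise say 'not stated'."
    )


def hallucination_rate(predictions: list[str], items: list[QAItem]) -> float:
    """Fraction of items whose predicted support label disagrees with gold."""
    wrong = sum(p != it.gold_label for p, it in zip(predictions, items))
    return wrong / len(items)


items = [
    QAItem("Scope 1 emissions fell 12% in 2023.",
           "Did Scope 1 emissions fall?", "supported"),
    QAItem("The firm publishes a diversity policy.",
           "What were 2023 emissions?", "hallucinated"),
]
preds = ["supported", "supported"]  # model claimed support it did not have
print(round(hallucination_rate(preds, items), 2))  # → 0.5
```

In a real pipeline, `preds` would come from an LLM given `build_cot_prompt(...)`; the metric simply measures how often the model asserts factual support the context does not provide.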


gxceed is a research-support dataset based on public metadata. Summaries, translations, and commentary are generated with AI assistance. Final interpretation and verification should be performed by users against the original sources.