EXPLAINABLE AI (XAI) FOR DETECTING GREENWASHING: A HYBRID NLP-GOVERNANCE MODEL FOR TRANSPARENT ESG REPORTING
Sayali Girish Patankar
🤖 gxceed AI Summary
Japanese (translated)
This paper proposes a hybrid governance framework combining Explainable AI and Natural Language Processing to detect greenwashing in ESG reports. By providing transparent, interpretable analysis, it overcomes the limitations of conventional opaque AI auditing. Based on a survey of 101 stakeholders, the study identifies sectors where trust has declined and presents a roadmap for improvement through explainable AI.
English
This paper proposes a hybrid governance framework combining Explainable AI (XAI) and Natural Language Processing (NLP) to detect greenwashing in ESG reports. By providing interpretable reasoning for each detection, it aims to enhance transparency in ESG auditing. Based on a survey of 101 stakeholders, the study identifies sectors with declining public trust and suggests a technological roadmap for using XAI to improve credibility.
Unofficial AI-generated summary based on the public title and abstract. Not an official translation.
📝 gxceed Editorial Commentary — Why this matters
In the Japanese GX context
In Japan, as sustainability disclosure advances under the SSBJ standards and in annual securities reports, preventing greenwashing is a key challenge. This paper proposes a method for improving audit transparency using explainable AI, offering insights for Japanese disclosure practice as well.
In the global GX context
Globally, as ISSB and CSRD standards mandate more rigorous ESG disclosures, greenwashing detection is critical. This paper offers a practical AI-based approach to enhance auditing transparency, relevant for regulators and firms navigating increasing scrutiny.
👥 Implications for Different Readers
🔬 Researchers: This paper provides a framework for applying XAI to ESG auditing, highlighting the need for interpretability in AI-driven sustainability assessments.
🏢 Practitioners: The proposed hybrid model can help corporate sustainability teams strengthen the credibility of their ESG reports by using explainable AI to detect potential greenwashing.
🏛 Policymakers: Regulators can consider this framework when developing guidelines for AI-assisted ESG verification to ensure transparency and trust.
📄 Abstract (original text)
In the current phase of technological development, often described as Industry 5.0, the direction of innovation is no longer limited to automation alone. Earlier industrial transitions primarily emphasized efficiency, digital connectivity, and the integration of systems such as the Internet of Things (IoT). However, the emerging Industry 5.0 perspective introduces a stronger emphasis on the relationship between technology, human wellbeing, and environmental sustainability. In simple terms, technology is now expected to support social goals rather than operate purely as an efficiency tool. Because of this shift, the idea of corporate accountability has become increasingly important. Organizations are now expected to demonstrate responsible practices not only in terms of profit generation but also in terms of environmental and social impact. As a result, many corporations publish Environmental, Social, and Governance (ESG) reports. These disclosures are intended to show investors, regulators, and consumers how the company manages sustainability-related responsibilities. The idea of Explainable Artificial Intelligence has gained attention during the last few years as organizations attempt to improve transparency in machine learning systems. Scholars working in the field of governance technology have pointed out that interpretability becomes extremely important when artificial intelligence is used in decision-making environments that affect public trust. If an AI system produces results without explanation, users may struggle to understand or verify its conclusions. At the same time, increased pressure to appear sustainable has unintentionally created a new challenge known as greenwashing. Greenwashing occurs when organizations invest heavily in marketing themselves as environmentally responsible while making relatively small changes to their actual environmental performance. 
Instead of focusing on genuine sustainability improvements, companies may emphasize promotional communication designed to create a positive image. This phenomenon has broader consequences than simple misleading advertising. When companies exaggerate environmental achievements or selectively present sustainability data, they distort the global sustainability landscape. Investors may unknowingly support organizations that appear responsible but are not implementing meaningful environmental strategies. As a result, progress toward global sustainability objectives—such as those outlined by the United Nations Sustainable Development Goals—can slow down. A major cause of this problem lies in information asymmetry between corporations and stakeholders. ESG reports are often extremely long and filled with technical terminology. For many readers, verifying the claims contained in these documents is difficult. Even when artificial intelligence is used to analyze these reports, the algorithms themselves often operate as opaque systems. These systems may identify suspicious patterns but do not clearly explain how the conclusion was reached. Because of this situation, a second trust gap emerges. Stakeholders are told that artificial intelligence has verified the information contained in a report, yet the reasoning behind the verification remains hidden. This lack of clarity reduces the credibility of the auditing process. To address this issue, the present study proposes a hybrid governance framework that combines Natural Language Processing (NLP) with Explainable Artificial Intelligence (XAI). Instead of relying on hidden computational logic, the system is designed to present understandable reasoning for every detection it produces. By moving from opaque decision-making toward interpretable analysis, ESG auditing can become a more transparent and participatory process. 
The research presented in this paper includes findings from a survey involving 101 stakeholders drawn from diverse professional and academic backgrounds. Based on these insights, the study identifies sectors where public trust has declined most significantly and proposes a technological roadmap for improving transparency through explainable AI methods.
🔗 Provenance — Source where this record was discovered
- openalex: https://doi.org/10.5281/zenodo.20003789 (first seen 2026-05-05 21:32:52)
gxceed is a research-support dataset based on public metadata. Summaries, translations, and commentary are generated with AI assistance. Final interpretation and verification should be carried out by users based on the original source materials.