gxceed

Dual-Agent Deep Reinforcement Learning for Low-Carbon Economic Dispatch in Wind-Integrated Microgrids Based on Carbon Emission Flow


W. Qiu, Hebin Ruan, Xiaoxiao Yu, Yuhang Li, Yicheng Liu, Zhiyi He

Energies 📚 Peer-reviewed / Journal · 2026-01-22 · #carbon-pricing · Origin: US
DOI: 10.3390/en19020551
Source: https://doi.org/10.3390/en19020551

🤖 gxceed AI Summary

Japanese

This study targets low-carbon economic operation of microgrids with a high share of renewable energy and proposes a dual-agent deep reinforcement learning framework. A PPO agent handles operating-cost minimization while a SAC agent handles carbon emission reduction, and the two are integrated through adaptive weighting. Carbon emission flow theory and a stepped carbon pricing mechanism are introduced, and demand response is also exploited. Validation on a PJM 5-bus system achieved reductions of 16.8% in cost, 11.3% in emissions, and 15.2% in wind curtailment compared with a DDPG baseline.

English

This study proposes a dual-agent deep reinforcement learning framework for low-carbon economic dispatch in wind-integrated microgrids. A PPO agent minimizes operating costs while a SAC agent targets carbon emission reduction, combined via adaptive weighting. Carbon emission flow theory and stepped carbon pricing are incorporated, along with demand response. Case studies on a modified PJM 5-bus system show 16.8% cost reduction, 11.3% emission reduction, and 15.2% wind curtailment reduction compared to a DDPG baseline.
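The abstract does not specify how the adaptive weighting merges the two agents' dispatch actions. A minimal sketch, assuming a convex combination whose weight drifts toward whichever objective is currently further from its target (the function names, the weight-update rule, and the learning rate are illustrative assumptions, not the paper's method):

```python
import numpy as np

def combine_actions(act_ppo: np.ndarray, act_sac: np.ndarray, w: float) -> np.ndarray:
    """Convex combination of the cost-oriented (PPO) and
    emission-oriented (SAC) dispatch action vectors."""
    return w * act_ppo + (1.0 - w) * act_sac

def update_weight(w: float, cost_gap: float, emis_gap: float, lr: float = 0.05) -> float:
    """Shift weight toward the objective further from its target
    (positive gap = above target): a larger cost gap raises the
    PPO weight, a larger emission gap lowers it."""
    w = w + lr * (cost_gap - emis_gap)
    return float(np.clip(w, 0.0, 1.0))

# toy usage: blending two 3-generator setpoint vectors at w = 0.5
a = combine_actions(np.array([1.0, 0.5, 0.2]), np.array([0.6, 0.7, 0.4]), w=0.5)
```

A convex weight keeps the merged action inside the two agents' proposal range, so neither objective can push the dispatch outside what either policy alone would output.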

Unofficial AI-generated summary based on the public title and abstract. Not an official translation.

📝 gxceed Editorial Commentary — Why this matters

In Japan's GX context

As Japan rolls out its GX-ETS and raises its share of renewable energy, microgrid operation optimization that combines AI with carbon pricing mechanisms, as this method does, could contribute to regional energy management and the efficient operation of distributed energy resources.

In the global GX context

This paper contributes to the global GX literature by demonstrating an AI-driven dispatch framework that internalizes carbon costs via stepped pricing and emission flow tracing. It offers practical insights for grid operators and policymakers seeking to balance cost and emissions in high-renewable systems, though the validation is limited to a modified PJM 5-bus test system.
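Carbon emission flow theory attributes generator emissions to network nodes in proportion to the power flows feeding them. A minimal sketch of the nodal carbon-intensity calculation under the standard lossless proportional-sharing assumption (the bus data and function name are illustrative; the paper's full network-level formulation is not reproduced here):

```python
def nodal_carbon_intensity(gen_power, gen_intensity, inflow_power, inflow_intensity):
    """Nodal carbon intensity (tCO2/MWh) under proportional sharing:
    emissions carried by all inflows (local generation plus incoming
    line flows) divided by the total inflowing power."""
    total_power = sum(gen_power) + sum(inflow_power)
    total_emissions = (sum(p * e for p, e in zip(gen_power, gen_intensity))
                       + sum(p * e for p, e in zip(inflow_power, inflow_intensity)))
    return total_emissions / total_power

# bus fed by a 100 MW coal unit (0.9 tCO2/MWh) and a 50 MW wind import (0.0)
rho = nodal_carbon_intensity([100.0], [0.9], [50.0], [0.0])  # -> 0.6 tCO2/MWh
```

Applied bus by bus along the direction of power flow, this yields the network-level carbon tracing that the dispatch reward can then price.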

👥 Implications by reader

🔬 Researchers: Provides a novel dual-agent DRL architecture with adaptive weighting and carbon emission flow integration, advancing the intersection of AI and low-carbon power systems.

🏢 Practitioners: Offers a method for optimizing microgrid dispatch that reduces both costs and emissions, applicable to corporate renewable energy management and demand response programs.

🏛 Policymakers: Highlights the effectiveness of stepped carbon pricing in guiding dispatch decisions, supporting the design of carbon pricing mechanisms for power systems.

📄 Abstract (original)

High renewable penetration in microgrids makes low-carbon economic dispatch under uncertainty challenging, and single-agent deep reinforcement learning (DRL) often yields unstable cost–emission trade-offs. This study proposes a dual-agent DRL framework that explicitly balances operational economy and environmental sustainability. A Proximal Policy Optimization (PPO) agent focuses on minimizing operating cost, while a Soft Actor–Critic (SAC) agent targets carbon emission reduction; their actions are combined through an adaptive weighting strategy. The framework is supported by carbon emission flow (CEF) theory, which enables network-level tracing of carbon flows, and a stepped carbon pricing mechanism that internalizes dynamic carbon costs. Demand response (DR) is incorporated to enhance operational flexibility. The dispatch problem is formulated as a Markov Decision Process, allowing the dual-agent system to learn policies through interaction with the environment. Case studies on a modified PJM 5-bus test system show that, compared with a Deep Deterministic Policy Gradient (DDPG) baseline, the proposed method reduces total operating cost, carbon emissions, and wind curtailment by 16.8%, 11.3%, and 15.2%, respectively. These results demonstrate that the proposed framework is an effective solution for economical and low-carbon operation in renewable-rich power systems.
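The stepped carbon pricing mechanism is described only qualitatively in the abstract. A minimal sketch, assuming the common ladder form in which the unit price rises by a fixed increment for each block of emissions above a free quota (the tier width, base price, and increment values are illustrative assumptions):

```python
def stepped_carbon_cost(emissions: float, quota: float,
                        base_price: float = 50.0,
                        step: float = 100.0,
                        increment: float = 25.0) -> float:
    """Ladder-type carbon cost: the first `step` tonnes above the free
    quota are charged at base_price, the next block at
    base_price + increment, and so on."""
    excess = max(emissions - quota, 0.0)
    cost, tier = 0.0, 0
    while excess > 0:
        block = min(excess, step)
        cost += block * (base_price + tier * increment)
        excess -= block
        tier += 1
    return cost

# 250 t above quota: 100*50 + 100*75 + 50*100 = 17500
c = stepped_carbon_cost(emissions=1250.0, quota=1000.0)
```

Because the marginal price grows with the excess, this cost term penalizes heavy emitters progressively, which is what lets the DRL reward internalize dynamic carbon costs rather than a flat tax.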


gxceed is a research-support dataset based on public metadata. Summaries, translations, and commentary are generated with AI assistance. Final interpretation and verification are the responsibility of the user, based on the original source materials.