Adetunji, KE; Hofsajer, IW; Abu-Mahfouz, Adnan MI; Cheng, L
Date issued: 2024-03
ISSN: 1551-3203; 1941-0050
DOI: 10.1109/TII.2023.3305682
URI: http://hdl.handle.net/10204/13867

Title: A two-tailed pricing scheme for optimal EV charging scheduling using multiobjective reinforcement learning
Type: Article

Abstract:
Electric vehicles (EVs) are crucial to reducing carbon emissions. However, their charging poses a threat to power system networks. EV charging control strategies are therefore developed to curb this challenge, using charging prices to incentivize EV drivers to choose EV charging stations (EVCS) favourable to the grid's stability. The difficulty with this strategy is the low likelihood of EV drivers accepting EVCS suggestions. To increase the probability of acceptance, we introduce a two-tailed incentive pricing (TTIP) scheme in an EV charging coordination model, where incentives are offered as charging prices and parking time. We formalize the EV charging problem as a multiobjective Markov decision process and propose a deep deterministic policy gradient (DDPG) algorithm to solve it. To tackle the continuous action space, which leads to the curse of dimensionality, the proposed DDPG models the action space using a metaheuristic-based technique. The proposed scheme implements a multiple-reward system to generate Pareto-optimal solutions and a decision-making technique to choose the compromise reward. Using real-world electricity prices and the IEEE 33-bus distribution network, numerical simulations show that the proposed TTIP scheme yields an average 18% improvement in grid stability over the sustainable-policy-following, random, and price-greedy algorithms. It also improves EV charging profit margins by an average of 28%.

Keywords: Distribution networks; Electric vehicles (EVs); EV charging stations (EVCS); Multiple rewards system; Policy gradient; Reinforcement learning
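The abstract describes a multiple-reward system that generates Pareto-optimal solutions and a decision-making step that selects a compromise among them. As a minimal illustration only (not the paper's actual method), the sketch below filters a set of two-objective reward vectors down to the non-dominated front and then picks the front point closest to the per-objective ideal; the reward values and the distance-to-ideal criterion are assumptions for the example.

```python
import numpy as np

def pareto_front(rewards):
    """Return indices of non-dominated reward vectors (maximization)."""
    idx = []
    for i, r in enumerate(rewards):
        dominated = any(
            np.all(other >= r) and np.any(other > r)
            for j, other in enumerate(rewards) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

def compromise(rewards, front_idx):
    """Pick the front point closest to the ideal (per-objective maximum)."""
    front = rewards[front_idx]
    ideal = front.max(axis=0)
    # Normalise each objective's range so no objective dominates the distance.
    span = np.ptp(front, axis=0)
    span[span == 0] = 1.0
    dist = np.linalg.norm((ideal - front) / span, axis=1)
    return front_idx[int(np.argmin(dist))]

# Hypothetical (grid-stability, profit) reward vectors for four policies.
rewards = np.array([[1.0, 5.0], [3.0, 3.0], [5.0, 1.0], [2.0, 2.0]])
front = pareto_front(rewards)      # (2, 2) is dominated by (3, 3)
best = compromise(rewards, front)  # the balanced (3, 3) trade-off wins
```

Any scalarization (weighted sum, Chebyshev, etc.) could replace the distance-to-ideal rule; the point is only that the compromise step reduces a Pareto set to a single actionable choice.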