6P-03
Update Reward Function based on Accumulated Data
Efficient reward functions can shorten the training time of reinforcement learning, but they can also restrict exploration of the solution space. Here, we consider a sub-reward function for the Curling problem, in which the goal is to stop a stone launched at a constant velocity by exerting various opposing forces. In our procedure, the accumulated data (position and velocity) are classified into groups according to the final reward, and the sub-reward is updated based on these data groups. Consequently, the optimised reward function allows the agent to execute quick stopping commands without the programmer intentionally limiting its exploration of the solution space. Finally, we discuss practical situations in which our method applies.
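The grouping-and-update step can be sketched as follows. This is a minimal toy implementation, assuming a 1-D stone model, a step-count-based final reward, and a simple thresholded grouping rule; the physics, reward definition, and sub-reward form are all illustrative assumptions, not the exact formulation of this work:

```python
def rollout(policy_force, dt=0.1, steps=50, friction=0.5):
    """Simulate a stone launched at a constant velocity; the agent applies
    an opposing force each step (toy 1-D dynamics, assumed here)."""
    x, v = 0.0, 2.0  # assumed initial position and launch velocity
    data = []
    for _ in range(steps):
        f = policy_force(x, v)          # opposing force chosen by the agent
        data.append((x, v))             # accumulate (position, velocity) data
        v = max(0.0, v - (friction + f) * dt)
        x += v * dt
        if v == 0.0:                    # stone has stopped
            break
    # Assumed final reward: fewer steps until stopping is better.
    return data, -len(data)

def update_sub_reward(episodes, threshold):
    """Classify accumulated (position, velocity) data into groups by final
    reward, then derive a sub-reward from the high-reward group."""
    good, bad = [], []
    for data, final_reward in episodes:
        (good if final_reward >= threshold else bad).extend(data)
    mean_v_good = sum(v for _, v in good) / len(good) if good else 0.0
    # Hypothetical sub-reward: bonus for velocities at or below the mean
    # velocity observed in successful episodes (encourages braking early).
    return lambda x, v: 1.0 if v <= mean_v_good else 0.0

# Accumulate data from two illustrative policies, then update the sub-reward.
episodes = [rollout(lambda x, v: 1.0),   # strong braking
            rollout(lambda x, v: 0.0)]   # friction only
sub_reward = update_sub_reward(episodes, threshold=-20)
```

The returned `sub_reward` would then be added to the environment reward on subsequent training iterations, so the shaping term is learned from data rather than fixed by the programmer in advance.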