Online state elimination in accelerated reinforcement learning

Sari S.C.a, Kuspriyantoa, Prihatmanto A.S.a, Adiprawita W.a

a School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung, 40132, Indonesia

Abstract

© 2014, International Journal on Electrical Engineering and Informatics. All rights reserved.

Most successes in accelerating reinforcement learning (RL) have incorporated internal knowledge or human intervention into the learning system, such as reward shaping, transfer learning, parameter tuning, and even heuristics. These approaches are no longer viable when such internal knowledge is unavailable. Since learning convergence is determined by the size of the state space (the larger the state space, the slower learning may become), reducing the state space by eliminating insignificant states can lead to faster learning. In this paper, a novel algorithm called Online State Elimination in Accelerated Reinforcement Learning (OSE-ARL) is introduced. The algorithm accelerates RL by distinguishing insignificant states from significant ones and eliminating them from the state space in the early learning episodes. Applying OSE-ARL to grid-world robot navigation achieves learning convergence 1.46 times faster. The algorithm is generally applicable to other robotic tasks, and more broadly to robot learning with large-scale state spaces.
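The abstract's core idea, pruning insignificant states during the early episodes so that later exploration covers a smaller effective state space, can be sketched in a few lines of Q-learning. Everything below (grid size, threshold, episode cutoff, and the elimination criterion itself) is a hypothetical illustration; the abstract does not specify OSE-ARL's actual significance test.

```python
import random

# Illustrative sketch (not the paper's implementation): tabular Q-learning
# on a small grid world with an online state-elimination step. The
# significance criterion used here, pruning states whose value estimate is
# still negligible after the early episodes, is an assumption for
# illustration only.

SIZE = 4
START, GOAL = (0, 0), (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

random.seed(0)
states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
Q = {s: {a: 0.0 for a in ACTIONS} for s in states}
eliminated = set()  # states pruned from exploration after early episodes

def step(s, a):
    """Deterministic grid transition; reward 1 only on reaching the goal."""
    r = min(max(s[0] + a[0], 0), SIZE - 1)
    c = min(max(s[1] + a[1], 0), SIZE - 1)
    s2 = (r, c)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def choose(s):
    """Epsilon-greedy over actions whose successors are not eliminated."""
    acts = [a for a in ACTIONS if step(s, a)[0] not in eliminated] or ACTIONS
    if random.random() < EPS:
        return random.choice(acts)
    best = max(Q[s][a] for a in acts)
    return random.choice([a for a in acts if Q[s][a] == best])

for episode in range(300):
    s = START
    for _ in range(50):
        a = choose(s)
        s2, reward, done = step(s, a)
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2].values()) - Q[s][a])
        s = s2
        if done:
            break
    if episode == 30:
        # Online elimination: after the early episodes, mark states whose
        # value estimate is still negligible as insignificant (the start
        # and goal states are never pruned).
        eliminated = {s for s in states
                      if s not in (START, GOAL)
                      and max(Q[s].values()) < 1e-4}

print(f"eliminated {len(eliminated)} of {len(states)} states")
```

For simplicity this sketch keeps the Q-table entries of eliminated states and merely excludes them from exploration; an implementation aimed at the paper's goal of shrinking the state space would instead drop those entries from the table.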

Indexed keywords

Accelerated reinforcement learning, Reinforcement learning, Robot learning, Soccer robotics
