The invention, which belongs to the technical field of mobile robot navigation, provides a mobile robot obstacle avoidance method based on a Double DQN network and deep reinforcement learning, so that the problems of long response delay, long training time, and low obstacle avoidance success rate of existing deep reinforcement learning obstacle avoidance methods can be solved. A special decision action space and a reward function are designed.
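The abstract does not specify the action set or the reward terms; the following Python sketch only illustrates one possible discrete decision action space and reward function for obstacle avoidance, and all action values, distance thresholds, and reward magnitudes are assumptions.

```python
# Hypothetical discrete decision action space: (linear velocity m/s, angular velocity rad/s).
# The actual action set and reward terms are not given in the abstract; these are
# illustrative assumptions only.
ACTIONS = [
    (0.5, 0.0),    # move forward
    (0.3, 0.8),    # turn left while moving
    (0.3, -0.8),   # turn right while moving
    (0.0, 1.0),    # rotate left in place
    (0.0, -1.0),   # rotate right in place
]

def reward(min_obstacle_dist, dist_to_goal, prev_dist_to_goal,
           collision_dist=0.2, goal_dist=0.3):
    """Illustrative reward: penalize collisions, reward reaching and approaching the goal."""
    if min_obstacle_dist < collision_dist:   # collision -> large penalty, episode ends
        return -100.0
    if dist_to_goal < goal_dist:             # goal reached -> large bonus, episode ends
        return 100.0
    # dense shaping term: positive when the robot moves closer to the goal
    return 10.0 * (prev_dist_to_goal - dist_to_goal)
```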
Mobile robot trajectory data collection and Double DQN network training are performed in parallel on two threads, which effectively improves training efficiency and solves the problem of the long training time required by existing deep reinforcement learning obstacle avoidance methods.
According to the invention, unbiased estimation of the action value is carried out using the Double DQN network, so that the problem of falling into a local optimum is avoided and the problems of low success rate and high response delay of existing deep reinforcement learning obstacle avoidance methods are solved.
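The action-value estimation referenced here corresponds to the standard Double DQN target, in which the online network selects the next action and the target network evaluates it; a minimal PyTorch sketch of that target computation follows, with tensor names and shapes as assumptions.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network selects the next action, the target
    network evaluates it, which counteracts the overestimation bias of standard DQN.
    Network interfaces and tensor shapes here are illustrative assumptions."""
    with torch.no_grad():
        # action selection with the online network
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # action evaluation with the target network
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # bootstrapped target; no bootstrapping on terminal transitions
        return rewards + gamma * (1.0 - dones.float()) * next_q
```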
Compared with the prior art, the mobile robot obstacle avoidance method has the following advantages: the network training time is shortened to less than 20% of that required by the prior art, and a 100% obstacle avoidance success rate is maintained. The mobile robot obstacle avoidance method can be applied to the technical field of mobile robot navigation.