| Main Author: | |
|---|---|
| Other Authors: | , , , , |
| Published: | 2025 |
| Subjects: | |
| Summary: | Reinforcement learning is a prominent area of artificial intelligence with many applications. It enables an agent to learn new tasks through action and reward principles. Motion planning addresses the navigation problem for robots. Current motion planning approaches lack support for automated, timely responses to the environment, and the problem worsens in complex environments cluttered with obstacles. Reinforcement learning can extend the capabilities of robotic systems because its reward mechanism provides feedback from the environment, which helps in dealing with such environments. Existing path planning algorithms are slow, computationally expensive, and less responsive to the environment, which delays convergence to a solution. Furthermore, they are less efficient for task learning because of post-processing requirements. Reinforcement learning can address these issues through its action feedback and reward policies. This research presents a novel Q-learning-based reinforcement learning algorithm with deep learning integration. The proposed approach is evaluated in narrow-passage and cluttered environments. Improvements in the convergence of reinforcement learning-based motion planning and collision avoidance are also addressed. The proposed approach's agent converged by the 210th episode in the cluttered environment and by the 400th episode in the narrow-passage environment. A state-of-the-art comparison shows that the proposed approach outperforms existing approaches in terms of the number of turns and path convergence. |
|---|---|
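
The summary describes a Q-learning-based planner with action feedback and reward policies. As a rough illustration of that underlying update rule only, the sketch below runs tabular Q-learning on a small grid with obstacles. The grid size, obstacle cells, reward values, and hyperparameters (`ALPHA`, `GAMMA`, `EPSILON`, `EPISODES`) are all assumptions made for demonstration, and the tabular form omits the deep-learning integration the abstract mentions; it is not the authors' algorithm.

```python
# Minimal tabular Q-learning sketch for grid navigation among obstacles.
# Illustrative toy only: grid layout, rewards, and hyperparameters are assumed,
# not taken from the paper.
import random

GRID_W, GRID_H = 6, 6
OBSTACLES = {(1, 2), (2, 2), (3, 2), (4, 4)}   # assumed cluttered cells
START, GOAL = (0, 0), (5, 5)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 500

# Q-table over (state, action) pairs, initialized to zero.
Q = {((x, y), a): 0.0
     for x in range(GRID_W) for y in range(GRID_H)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action; collisions and off-grid moves keep the agent in place."""
    dx, dy = ACTIONS[action]
    nxt = (state[0] + dx, state[1] + dy)
    if nxt in OBSTACLES or not (0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H):
        return state, -5.0, False          # collision penalty (assumed value)
    if nxt == GOAL:
        return nxt, 100.0, True            # goal reward (assumed value)
    return nxt, -1.0, False                # step cost encourages short paths

for episode in range(EPISODES):
    state = START
    for _ in range(200):                   # cap episode length during early exploration
        # Epsilon-greedy action selection balances exploration and exploitation.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break
```

After training, a greedy rollout over the Q-table (always picking the action with the highest Q-value) traces a collision-free path from `START` to `GOAL`. A deep-learning integration of the kind the abstract refers to would replace the Q-table with a neural-network approximator; the episode counts reported in the summary (210 and 400) describe the authors' evaluation, not this sketch.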