Task-Directed Exploration in Continuous POMDPs for Robotic Manipulation of Articulated Objects

Aidan Curtis, Leslie Kaelbling, Siddarth Jain

Representing and reasoning about uncertainty is crucial for autonomous agents acting in partially observable environments with noisy sensors. Partially observable Markov decision processes (POMDPs) provide a general framework for representing problems in which uncertainty is an important factor. Online sample-based POMDP methods have emerged as efficient approaches to solving large POMDPs and hav...