3D reconstruction serves as the foundational layer for numerous robotic perception tasks,
including 6D object pose estimation and grasp pose generation. Modern 3D reconstruction
methods for objects can produce visually and geometrically impressive meshes from
multi-view images, yet standard geometric evaluations do not reflect how reconstruction
quality influences downstream tasks such as robotic manipulation. This paper
addresses this gap by introducing a large-scale, physics-based benchmark that evaluates
6D pose estimators and 3D mesh models based on their functional efficacy in grasping.
We analyze the impact of model fidelity by generating grasps on various reconstructed
3D meshes and executing them on the ground-truth model, simulating how grasp poses
generated with an imperfect model affect interaction with the real object. This assesses
the combined impact of pose error, grasp robustness, and geometric inaccuracies from 3D
reconstruction. Our results show that reconstruction artifacts significantly decrease the
number of grasp pose candidates but have a negligible effect on grasping performance given
an accurately estimated pose. They also reveal that the relationship between grasp
success and pose error is dominated by spatial error; for symmetric objects, even
translation error alone is a useful predictor of grasp success. This work clarifies
how the quality of perception systems carries through to robotic object manipulation.
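The evaluation protocol summarized above can be sketched as a toy simulation. This is a minimal illustration under stated assumptions, not the benchmark's implementation: the ground-truth object is stood in for by a unit sphere, the "reconstructed mesh" by noisy surface samples, and a grasp "succeeds" if both contact points still lie near the ground-truth surface after an assumed translational pose error. All helper names (`sample_grasps_on_mesh`, `grasp_succeeds`) are hypothetical.

```python
import numpy as np

def sample_grasps_on_mesh(points, n_grasps, rng):
    """Toy grasp sampler: pair each sampled point with its most distant
    neighbor as a crude antipodal-contact proxy (hypothetical helper)."""
    idx = rng.choice(len(points), size=n_grasps)
    grasps = []
    for i in idx:
        j = np.argmax(np.linalg.norm(points - points[i], axis=1))
        grasps.append((points[i], points[j]))
    return grasps

def grasp_succeeds(grasp, pose_error_t, gt_radius=1.0, tol=0.05):
    """A grasp 'succeeds' if both contact points, shifted by the assumed
    translational pose error, lie within `tol` of the ground-truth surface
    (a unit sphere stands in for the ground-truth mesh)."""
    shifted = [g + pose_error_t for g in grasp]
    return all(abs(np.linalg.norm(p) - gt_radius) < tol for p in shifted)

rng = np.random.default_rng(0)
# "Reconstructed mesh": noisy samples of the ground-truth unit sphere.
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
recon_points = dirs * (1.0 + rng.normal(scale=0.01, size=(500, 1)))

grasps = sample_grasps_on_mesh(recon_points, n_grasps=50, rng=rng)
for err in (0.0, 0.02, 0.2):
    pose_error = np.array([err, 0.0, 0.0])
    rate = np.mean([grasp_succeeds(g, pose_error) for g in grasps])
    print(f"translation error {err:.2f} -> success rate {rate:.2f}")
```

Sweeping the translation error, as in the loop above, is one way to probe how grasp success degrades with pose error independently of reconstruction noise.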