This paper proposes a novel framework for segmenting hand gestures in RGB-depth (RGB-D) images captured by Kinect, using humanlike approaches for human–robot interaction. The goal is to reduce Kinect sensing error and, consequently, to improve the precision of hand gesture segmentation for the NAO robot. The proposed framework consists of two main novel approaches. First, the depth map and RGB image are aligned by using a genetic algorithm to estimate key points; the alignment is robust to uncertainty in the number of extracted points. Second, a novel approach refines the edges of the tracked hand gestures in RGB images by applying a modified expectation–maximization (EM) algorithm based on Bayesian networks. The experimental results demonstrate that the proposed alignment method precisely matches the depth maps with the RGB images, and that the EM algorithm further effectively adjusts the RGB edges of the segmented hand gestures. The proposed framework has been integrated and validated in a human–robot interaction system to improve the NAO robot's gesture understanding and interpretation.
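The abstract describes two algorithmic stages: genetic-algorithm-based depth-to-RGB alignment and EM-based edge refinement. As an illustration of the first stage only, the following is a minimal, hypothetical Python sketch of a genetic algorithm searching for a translation-and-scale transform that maps depth key points onto RGB key points. The fitness function, parameter ranges, and nearest-neighbour matching are assumptions made for illustration; they are not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def fitness(params, depth_pts, rgb_pts):
    # Mean distance between transformed depth key points and their
    # nearest RGB key points. Nearest-neighbour matching means the two
    # point sets need not correspond one-to-one, loosely echoing the
    # paper's robustness to the number of extracted points (assumption).
    dx, dy, s = params
    warped = depth_pts * s + np.array([dx, dy])
    d = np.linalg.norm(warped[:, None, :] - rgb_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def align(depth_pts, rgb_pts, pop=50, gens=100):
    # Population of candidate (dx, dy, scale) transforms (hypothetical ranges).
    P = rng.uniform([-20.0, -20.0, 0.8], [20.0, 20.0, 1.2], size=(pop, 3))
    best = P[0]
    for _ in range(gens):
        scores = np.array([fitness(p, depth_pts, rgb_pts) for p in P])
        order = np.argsort(scores)
        best = P[order[0]]                        # elitism: remember the best
        elite = P[order[: pop // 5]]              # selection: keep top 20%
        parents = elite[rng.integers(len(elite), size=pop)]
        P = parents + rng.normal(0.0, [0.5, 0.5, 0.01], size=(pop, 3))  # mutation
        P[0] = best                               # carry the best forward unmutated
    return best

if __name__ == "__main__":
    # Synthetic demo: misalign a point set by a known shift and scale,
    # then check that the GA recovers roughly the inverse transform.
    rgb_kp = rng.uniform(0, 100, size=(12, 2))
    depth_kp = (rgb_kp - [5.0, -3.0]) / 1.05
    dx, dy, s = align(depth_kp, rgb_kp)
    print(f"estimated shift=({dx:.2f}, {dy:.2f}), scale={s:.3f}")

The sketch uses only a rigid-plus-scale transform for brevity; the paper's actual alignment model may differ.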
An integrative framework of human hand gesture segmentation for human robot interaction
2015-09-01
Ju, Z., Ji, X., Li, J. & Liu, H. 2015, 'An integrative framework of human hand gesture segmentation for human robot interaction', IEEE Systems Journal, no. 99, pp. 1-11. DOI: 10.1109/JSYST.2015.2468231
Article (Journal)
Electronic Resource
English
DDC: 629
Human-Robot Interaction Through Egocentric Hand Gesture Recognition
Springer Verlag | 2025
Human-Robot Interaction Through Gesture-Free Spoken Dialogue
British Library Online Contents | 2004
A Gesture Based Interface for Human-Robot Interaction
British Library Online Contents | 2001