Journal of Liaoning Petrochemical University

Journal of Liaoning Petrochemical University ›› 2019, Vol. 39 ›› Issue (6): 91-. DOI: 10.3969/j.issn.1672-6952.2019.06.016

• Information and Control Engineering •

Double Human Interaction Recognition Based on Integration of Whole and Individual Segmentation

Wei Peng 1, Cao Jiangtao 1, Ji Xiaofei 2

  1. Liaoning Shihua University, Fushun, Liaoning 113001, China; 2. Shenyang Aerospace University, Shenyang, Liaoning 110136, China
  • Received: 2018-12-05; Revised: 2018-12-20; Published: 2019-12-25; Online: 2019-12-24
  • Corresponding author: Cao Jiangtao (1978-), male, Ph.D., professor; research interests: intelligent methods and their applications in industrial control information processing, and video analysis and processing; E-mail: cigroup@126.com
  • About the author: Wei Peng (1993-), male, master's student, researching two-person interaction recognition based on depth information; E-mail: 2513021587@qq.com
  • Funding:
    Natural Science Foundation of Liaoning Province (201602557); Liaoning Province Science and Technology Public Welfare Research Fund (2016002006); Liaoning Provincial Department of Education Scientific Research Project Serving Local Development (L201708); Liaoning Provincial Department of Education Youth Scientific Research Project (L201745)

Abstract: In the field of human interaction recognition, local features extracted from RGB video alone often fail to distinguish similar actions. To address this, depth (Depth) and color (RGB) image information are fused during recognition, and a two-person interaction recognition algorithm based on the fusion of whole and individual segmentation with Depth information is proposed. The algorithm first extracts interest points from the RGB and Depth videos. On the RGB video, the features are described with 3DSIFT; on the Depth video, a YOLO network divides the interest points between the left and right person, and a visual co-occurrence matrix describes the local correlation information. Finally, a nearest-neighbor classifier classifies the RGB features and the Depth features separately, and the two results are combined by decision-level fusion to improve recognition accuracy. The results show that incorporating the depth visual co-occurrence matrix greatly improves two-person interaction recognition accuracy, reaching a correct recognition rate of 90% on the SBU Kinect Interaction dataset and verifying the effectiveness of the proposed algorithm.

Key words: Interaction behavior, Local features, Visual co-occurrence matrix, Depth features, YOLO, Decision-level fusion
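The pipeline in the abstract ends with decision-level fusion: each modality (RGB, Depth) is classified separately with a nearest-neighbor classifier, and the per-class scores are then combined. A minimal sketch of that final step, using toy 2-D vectors in place of the 3DSIFT and depth co-occurrence descriptors (all names, data, and the weighted-sum fusion rule here are illustrative assumptions, not the authors' code):

```python
# Sketch of decision-level fusion of two nearest-neighbor classifiers,
# one per modality (RGB and Depth). Feature vectors are toy placeholders.
import math

def nn_scores(query, train_feats, train_labels, classes):
    """Per-class similarity: inverse distance to the nearest training
    sample of each class (higher = more similar)."""
    scores = {}
    for c in classes:
        dists = [math.dist(query, f)
                 for f, y in zip(train_feats, train_labels) if y == c]
        scores[c] = 1.0 / (1.0 + min(dists))
    return scores

def fuse_and_classify(rgb_scores, depth_scores, w_rgb=0.5):
    """Decision-level fusion: weighted sum of per-modality class scores,
    then pick the top-scoring class."""
    fused = {c: w_rgb * rgb_scores[c] + (1 - w_rgb) * depth_scores[c]
             for c in rgb_scores}
    return max(fused, key=fused.get)

# Toy usage: two interaction classes, one training sample each per modality.
classes = ["hug", "push"]
labels = ["hug", "push"]
r = nn_scores([0.1, 0.1], [[0.0, 0.0], [1.0, 1.0]], labels, classes)
d = nn_scores([0.0, 0.9], [[0.0, 1.0], [1.0, 0.0]], labels, classes)
print(fuse_and_classify(r, d))  # → hug
```

The fusion weight `w_rgb` is a free parameter; the paper reports only that fusing the two decisions improves accuracy, not the specific combination rule, so the weighted sum above is one common choice.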

Cite this article

Wei Peng, Cao Jiangtao, Ji Xiaofei. Double Human Interaction Recognition Based on Integration of Whole and Individual Segmentation[J]. Journal of Liaoning Petrochemical University, 2019, 39(6): 91-.


Link to this article: http://journal.lnpu.edu.cn/CN/10.3969/j.issn.1672-6952.2019.06.016

                     http://journal.lnpu.edu.cn/CN/Y2019/V39/I6/91