Smart Mine Column

Unsafe Behavior Identification of Mining Truck Drivers Based on Video Sequences

  • Lin BI 1,2,
  • Chao ZHOU 1,2,
  • Xin YAO 1,2
  • 1. School of Resources and Safety Engineering, Central South University, Changsha 410083, Hunan, China
  • 2. Digital Mine Research Center, Central South University, Changsha 410083, Hunan, China

ZHOU Chao (1997-), male, from Yueyang, Hunan; master's student working on image recognition, behavior recognition and safety detection technology.

BI Lin (1975-), male, from Tongjiang, Sichuan; associate professor, Ph.D., working on intelligent LHD (load-haul-dump) machines, digital mines and geological modeling.

Received date: 2020-12-09

  Revised date: 2021-02-19

  Online published: 2021-03-22

Funding

National Key Research and Development Program of China, "Research, Development and Demonstration of Intelligent Control Technology for Metal Mining Equipment Based on Big Data" (2019YFC0605300)


Highlights

At present, supervision of the unsafe behavior of mining truck drivers in many mines still relies on manual monitoring, which cannot detect problems in a timely and accurate manner. Using computer technology to identify unsafe behavior is an efficient alternative to manual inspection. This paper applies deep learning to recognize unsafe behavior of mining truck drivers from video sequences; deep learning does not depend on hand-crafted features but adaptively learns better high-dimensional features, offering better robustness, higher speed and higher accuracy. First, the frame images were augmented by flipping, rotation and noise injection to reduce sample imbalance; second, the data were used to train the model optimized in this paper. The results show that the test accuracy of the network reaches 93.445%, a 15% improvement over the original two-stream network model. Comparative experiments against deep learning models that ignore temporal dynamics demonstrate the importance of temporal feature information for behavior recognition. In summary, the proposed network model achieves a high recognition rate for unsafe behavior of mining truck drivers, which is of important practical significance for identifying such behavior and for the safety of mining production operations.
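As a rough illustration of the frame-level augmentation step mentioned above (flipping, rotation and noise injection), the following Python sketch uses OpenCV and NumPy; the file names, rotation angle and noise level are illustrative assumptions rather than values taken from the paper.

# Frame-level augmentation sketch: horizontal flip, small rotation and
# additive Gaussian noise. File names and parameter values are illustrative.
import cv2
import numpy as np

def augment_frame(frame):
    """Return a list of augmented copies of one BGR frame."""
    augmented = [cv2.flip(frame, 1)]  # horizontal flip

    h, w = frame.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)  # rotate by 10 degrees
    augmented.append(cv2.warpAffine(frame, rot, (w, h)))

    noise = np.random.normal(0.0, 8.0, frame.shape).astype(np.float32)
    noisy = np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    augmented.append(noisy)  # additive Gaussian noise
    return augmented

if __name__ == "__main__":
    frame = cv2.imread("driver_frame.jpg")  # hypothetical input frame
    for i, aug in enumerate(augment_frame(frame)):
        cv2.imwrite(f"driver_frame_aug{i}.jpg", aug)

Each original frame then contributes several extra samples to the under-represented classes, which is how the augmentation reduces sample imbalance.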

Cite this article

BI Lin, ZHOU Chao, YAO Xin. Unsafe behavior identification of mining truck drivers based on video sequences[J]. Gold Science and Technology, 2021, 29(1): 14-24. DOI: 10.11872/j.issn.1005-2518.2021.01.216

Highlights

At present, many mines still rely on human supervision to monitor the unsafe behavior of mining truck drivers, which cannot detect problems in a timely and accurate manner; it consumes considerable manpower and material resources without solving the problem. With the development of computer technology and artificial intelligence, more and more fields, such as intelligent security, autonomous driving and intelligent transportation, have begun to use these techniques for behavior supervision. Behavior recognition is a hot issue in computer vision, and using computer technology to identify unsafe behaviors is an efficient alternative to manual inspection. This paper uses deep learning to recognize unsafe behavior of mining truck drivers in video sequences. Compared with traditional methods, deep learning does not rely on hand-crafted features but adaptively learns better high-dimensional features, offering better robustness, faster speed and higher accuracy.

Firstly, based on the video data actually obtained, the relative position between the camera and the driver's area was analyzed and the videos were clipped to reduce redundant information. To reduce the imbalance of the data samples, the data set was enhanced by flipping, panning and adding noise; the enhanced images were then re-encoded into video files with OpenCV, and the dense_flow method was used to obtain optical flow images. Secondly, the networks were trained and tested. For comparison, traditional classification models that do not consider temporal information were trained and tested first: ResNet, Xception and Inception were trained with transfer learning, and the three single models were fused into a new fusion model. For the models that do consider temporal information, the temporal and spatial channels of the two-stream network were set to a VGG16 pre-trained via transfer learning, and this model was compared with the C3D-two-stream model proposed in this paper.

The experimental results show that the improved Vgg-two-stream model reaches an accuracy of 89.539%, while the C3D-two-stream model reaches 93.445%. In summary, the C3D-two-stream model proposed in this paper has a high recognition rate. The results also show that, for behavior recognition, capturing feature information in the time dimension makes the recognition results more accurate, which has important practical significance for identifying unsafe behaviors of mining truck drivers.
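As a sketch of how consecutive frames are turned into the temporal-stream input described above, the code below computes dense optical flow with OpenCV's Farneback algorithm as a stand-in for the dense_flow tool named in the text; the file names and flow parameters are illustrative assumptions.

# Optical-flow extraction sketch: Farneback dense flow as a stand-in for
# dense_flow. File names and parameter values are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture("driver_clip.avi")  # hypothetical clipped video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Save the x and y flow components as 8-bit images for the temporal stream.
    for channel, name in ((0, "x"), (1, "y")):
        comp = cv2.normalize(flow[..., channel], None, 0, 255, cv2.NORM_MINMAX)
        cv2.imwrite(f"flow_{name}_{idx:05d}.jpg", comp.astype(np.uint8))
    prev_gray = gray
    idx += 1
cap.release()

Saving the x and y flow components as separate grayscale images follows the common two-stream practice of stacking them as input channels for the temporal network.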


Australia Produced 327 t of Gold in 2020

According to the consultancy Surbiton Associates, Australia's gold output in 2020 was 327 t, an increase of 1.5 t over 2019. Based on the average daily gold price in Australian dollars, the 2020 output was worth A$27 billion.

Australia is the world's second-largest gold producer, and most of its gold is exported.

"Although Australian gold output remains near its historical high, we have seen a clear decline in overall ore grades," said Surbiton director Sandra Close, adding that this is a reasonable response to last year's surge in the gold price. When the gold price rises, mines can lower the grade of ore they mine and still remain profitable, Close explained.

"Usually this leads to lower output and higher unit production costs, but operators can obtain the same gold output from lower-grade ore by increasing mill throughput."

"Some mines have considerable flexibility in choosing the mining grade, while others stockpile low-grade ore that can be blended with mined ore for processing."

The Covid-19 pandemic drove investors to buy large amounts of "safe-haven" assets, pushing the gold price to a record high of US$2,067 per ounce (A$2,868 per ounce) in August 2020.

Although the price has since pulled back, it remains at a high level of US$1,753 per ounce.

Australia's largest gold mines in 2020 included Newcrest Mining's Cadia East mine with 822,500 ounces, Newmont's Boddington mine with 670,000 ounces, and Kirkland Lake Gold's Fosterville mine with 640,000 ounces.

Close expects several new projects to come on stream in 2021, including Red River Resources' Hillgrove and Capricorn Metals' Karlawinda gold mine.

(Source: Ministry of Natural Resources)

Footnote

http://www.goldsci.ac.cn/article/2021/1005-2518/1005-2518-2021-29-1-14.shtml

