
Towards efficient deep neural network training by FPGA-based batch-level parallelism

Cheng Luo 1, Man-Kit Sit 2, Hongxiang Fan 2, Shuanglong Liu 2, Wayne Luk 2 and Ce Guo 2


Abstract: Training deep neural networks (DNNs) requires a significant amount of time and resources to obtain acceptable results, which severely limits DNN deployment on resource-limited platforms. This paper proposes DarkFPGA, a novel customizable framework to efficiently accelerate the entire DNN training on a single FPGA platform. First, we explore batch-level parallelism to enable efficient FPGA-based DNN training. Second, we devise a novel hardware architecture optimised by a batch-oriented data pattern and tiling techniques to effectively exploit parallelism. Moreover, an analytical model is developed to determine the optimal design parameters for the DarkFPGA accelerator with respect to a specific network specification and FPGA resource constraints. Our results show that the accelerator performs about 10 times faster than CPU training and consumes about a third of the energy of GPU training when training VGG-like networks on the CIFAR dataset with 8-bit integers on the Maxeler MAX5 platform.

Key words: deep neural network, training, FPGA, batch-level parallelism
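
The analytical model mentioned in the abstract selects design parameters (for example, how much batch-level and channel-level parallelism to instantiate) under FPGA resource constraints. As a rough illustration only, the Python sketch below enumerates candidate parallelism factors and picks the combination with the lowest estimated cycle count; the resource budgets, layer shapes and cost formulas are hypothetical placeholders for illustration, not DarkFPGA's actual model.

```python
# Illustrative design-space search for a batch-parallel FPGA training accelerator.
# All resource budgets and cost formulas are simplified assumptions, not DarkFPGA's model.
from itertools import product

DSP_BUDGET = 5760        # hypothetical DSP budget of a MAX5-class device
BRAM_KB_BUDGET = 35000   # hypothetical on-chip memory budget (KB)

# VGG-like CIFAR layer shapes: (in_channels, out_channels, feature_map_size, kernel)
LAYERS = [(3, 64, 32, 3), (64, 128, 16, 3), (128, 256, 8, 3), (256, 512, 4, 3)]
BATCH = 128

def dsp_cost(p_batch, p_channel):
    # Assume one 8-bit MAC maps to one DSP (simplified).
    return p_batch * p_channel

def bram_cost_kb(p_batch, p_channel, tile=32):
    # Rough on-chip buffer estimate for input, weight and output tiles, in KB.
    return (p_batch * tile * tile + p_channel * 9 + p_batch * p_channel * tile * tile) // 1024

def layer_cycles(layer, p_batch, p_channel):
    # Total MACs of the layer divided by MACs issued per cycle.
    cin, cout, fmap, k = layer
    macs = BATCH * cin * cout * fmap * fmap * k * k
    return macs // (p_batch * p_channel)

best = None
for p_batch, p_channel in product([2, 4, 8, 16, 32, 64, 128], repeat=2):
    if dsp_cost(p_batch, p_channel) > DSP_BUDGET:
        continue
    if bram_cost_kb(p_batch, p_channel) > BRAM_KB_BUDGET:
        continue
    total = sum(layer_cycles(l, p_batch, p_channel) for l in LAYERS)
    if best is None or total < best[0]:
        best = (total, p_batch, p_channel)

print("estimated cycles per batch: %d (batch parallelism %d, channel parallelism %d)" % best)
```

A full model would additionally account for memory bandwidth and the separate forward, backward and gradient-update passes of training, which this sketch omits.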








GET CITATION

C Luo, M K Sit, H X Fan, S L Liu, W Luk, C Guo. Towards efficient deep neural network training by FPGA-based batch-level parallelism[J]. J. Semicond., 2020, 41(2): 022403. doi: 10.1088/1674-4926/41/2/022403


Article Metrics

Article views: 150. PDF downloads: 25. Cited by: 0.

History

Manuscript received: 07 November 2019
Manuscript revised: 19 December 2019
Online (Accepted Manuscript): 04 January 2020
Online (Uncorrected proof): 15 January 2020
Published: 11 February 2020
