Recent advances in artificial intelligence (AI) and automation are expected to significantly enhance operational efficiency across a range of industries, including transport systems such as the conveyor belts widely used in coal mining. A study conducted by a research institute has introduced a specialised experimental platform for examining the effectiveness of methods designed to detect tearing defects in these conveyor belts.
The study began with foundational work to address a notable lack of available data on conveyor belt tearing in the coal mining sector. An experimental platform was therefore designed to support data acquisition and the production of dedicated datasets. The conveyor belt under test is a nylon rope core type (NN-300 L), 800 mm wide and 8 mm thick, operating at a speed of 3 m/s. A CCD industrial camera (MV-CS050-20GM) with a resolution of 2592 × 2048 pixels was used for image acquisition, providing a frame rate of up to 22.7 fps.
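For context on the acquisition setup, those figures imply how much belt surface passes the camera between successive frames. The short sketch below works through that arithmetic only; the study's field-of-view and triggering configuration are not detailed here, so this is purely illustrative.

```python
# Illustrative arithmetic only: belt travel between successive frames
# at the stated belt speed and the camera's maximum frame rate.
belt_speed_m_per_s = 3.0    # belt speed reported in the study
max_frame_rate_fps = 22.7   # MV-CS050-20GM maximum frame rate

travel_per_frame_mm = belt_speed_m_per_s / max_frame_rate_fps * 1000
print(f"Belt travel per frame: {travel_per_frame_mm:.1f} mm")  # ~132 mm
```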
To simulate realistic operating conditions and cover the damage scenarios commonly encountered in coal mining, the researchers introduced several defect types, including horizontal breaks and superficial scratches alongside longitudinal tearing, thereby improving the robustness and reliability of the detection model. The initial dataset comprised 1,800 images: 1,300 background images with no tearing present and 500 images exhibiting tearing. Data augmentation expanded this to a total of 3,100 images, which were divided into training, testing, and validation sets with a balanced representation of torn and non-torn images.
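The study's exact augmentation recipe is not specified here, but a dataset of this kind is typically expanded with simple image-level transforms. The sketch below is a minimal illustration using torchvision, assuming flips and brightness/contrast jitter; these specific transforms, and the helper function, are assumptions for illustration rather than the study's actual pipeline.

```python
# Minimal augmentation sketch (assumed transforms, not the study's recipe).
# Note: for detection data, geometric transforms such as flips must also be
# applied to the bounding-box labels; that step is omitted here for brevity.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def expand(images, copies_per_image=1):
    """Return the original images plus augmented copies of each one."""
    out = list(images)
    for img in images:
        for _ in range(copies_per_image):
            out.append(augment(img))
    return out
```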
Within the experimental framework, model training was conducted on a Windows system with an Intel Core i5-12400F processor and an NVIDIA GeForce RTX 3060 Ti graphics card, using Python 3.11 and PyTorch 2.0.1 for the deep learning environment. This setup supported a thorough evaluation of the model's performance through a range of indicators, including accuracy, recall, F1 score, and mean average precision (mAP), which collectively reflect the detection capabilities of the developed approaches.
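For readers unfamiliar with these indicators, the sketch below shows how precision, recall, and F1 score follow from true-positive, false-positive, and false-negative counts; the example counts are hypothetical, and mAP is usually taken from the detection framework's own evaluator since it additionally averages precision over recall levels and classes.

```python
# Basic detection metrics from TP/FP/FN counts (example counts are hypothetical).
def precision(tp, fp):
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if (p + r) else 0.0

print(f1_score(tp=90, fp=10, fn=15))  # ~0.878
```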
As part of the evaluation, the research incorporated mechanisms such as EfficientNet, BoTNet, and SimAM to improve detection performance. Results indicated that integrating these mechanisms into the YOLOv5 framework yielded gains across several performance metrics, notably a 3.8% increase in recall with SimAM, with the largest overall improvements attributed to the BoTNet mechanism, whose design allows for better data processing through parallel computation.
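As one concrete example, SimAM is a parameter-free attention mechanism, and a commonly used PyTorch formulation is sketched below. Where exactly the study inserts it into the YOLOv5 backbone is not described here, and the stabilising constant `e_lambda` follows the value typically used in published SimAM implementations.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: scales each activation by an energy-based
    importance score derived from its deviation from the channel mean."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w - 1
        # squared deviation of each position from its channel's spatial mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # channel-wise variance over spatial positions
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # inverse energy: more distinctive activations receive larger weights
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * self.act(e_inv)

# Usage sketch: y = SimAM()(feature_map) for a (B, C, H, W) backbone feature map.
```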
The study also compared the loss functions used within the detection model, aiming for superior precision and speed. The inclusion of Shape-IoU increased the model's detection accuracy, underlining the importance of selecting appropriate computational strategies in machine learning frameworks.
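IoU-style box losses of this family start from the plain overlap ratio sketched below; Shape-IoU builds on that baseline with additional scale- and shape-aware penalty terms, which are not reproduced here. The (x1, y1, x2, y2) box format is an assumption made for illustration.

```python
import torch

def iou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Baseline 1 - IoU loss for boxes given as (x1, y1, x2, y2) tensors of
    shape (N, 4). Shape-IoU extends this overlap term with shape- and
    scale-aware penalties (not shown)."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    return (1.0 - inter / union).mean()
```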
The methodologies and findings of this research underscore ongoing trends in applying artificial intelligence and machine learning to industrial settings. By refining detection capabilities in automated systems such as conveyor belts, the research aims not only to bolster operational efficiency but also to keep pace with an increasingly sophisticated technological landscape. The findings reflect a broad understanding of how modern AI methods can evolve alongside traditional industries, setting a benchmark for future research and operational improvements.
Source: Noah Wire Services