Abstract: Effective vibration recognition can improve the performance of vibration control and structural damage detection and demands effective signal processing and advanced classification. Signal-processing methods can extract the potent time-frequency-domain characteristics of signals; however, the performance of conventional characteristics-based classification needs to be improved. Widely used deep-learning algorithms (e.g., convolutional neural networks (CNNs)) can conduct classification by extracting high-dimensional data features, with outstanding performance. Hence, combining the advantages of signal processing and deep-learning algorithms can significantly enhance vibration recognition performance. A novel vibration recognition method based on signal processing and deep neural networks is proposed herein. First, environmental vibration signals are collected; then, signal processing is conducted to obtain the coefficient matrices of the time-frequency-domain characteristics using three typical algorithms: the wavelet transform, the Hilbert–Huang transform, and Mel frequency cepstral coefficient extraction. Subsequently, CNNs, long short-term memory (LSTM) networks, and combined deep CNN-LSTM networks are trained for vibration recognition according to the time-frequency-domain characteristics. Finally, the performance of the trained deep neural networks is evaluated and validated. The results confirm the effectiveness of the proposed vibration recognition method combining signal preprocessing and deep learning.
Vibrations can significantly degrade the product quality in industrial manufacturing and the occupant comfort in buildings. Additionally, vibration recognition can be beneficial in damage detection for mechanical and civil-engineering structures. Previous studies indicated that effective vibration recognition could improve the performance of vibration control [1], seismic damage prediction [2-3], and structural damage detection [4]. Moreover, accuracy is a critical factor in vibration recognition [1,5]. Hence, the development of an accurate feature extraction and recognition method is imperative [6].
Traditional vibration recognition methods generally require complex algorithms for feature extraction and exhibit limited performance and efficiency. Deep-learning algorithms have developed rapidly in recent years and have achieved excellent performance in object recognition based on computer vision [7], owing to their ability to extract in-depth data features [8]. Among them, convolutional neural networks (CNNs) are highly invariant to the translation, scaling, and distortion of images [9] and are widely utilized in image recognition [10]. Furthermore, long short-term memory (LSTM) networks can accurately identify time-series data and avoid vanishing- and exploding-gradient problems [11]; they are suitable for natural language processing [12]. Therefore, CNNs and LSTMs exhibit considerable potential for vibration recognition.
CNN and LSTM training generally requires a large amount of input data, whereas the number of available vibration samples is limited. To overcome this limitation of data quantity, data preprocessing and the extraction of data features are essential for improving the training performance [13]. Three classical signal-processing methods are useful for extracting the time-frequency-domain characteristics of vibrations and assisting the training of deep neural networks: the wavelet transform (WT), Hilbert-Huang transform (HHT), and Mel frequency cepstral coefficient (MFCC) extraction. The WT is widely used for processing non-stationary signals via multi-scale analysis [14-15]. The HHT combines adaptive time-frequency analysis (i.e., empirical mode decomposition (EMD)) with the Hilbert transform to obtain signal characteristics [16-17]. The MFCC extraction method characterizes signal features according to the known variation of the human ear's critical frequency bandwidth [18] and is widely utilized in speech recognition. Previous studies have proven the applicability of combining signal processing with deep-learning algorithms. For example, Zhang et al. (2020) [19] and Zeng (2016) [20] utilized a CNN, ensemble EMD, and a continuous WT for mechanical fault diagnosis. Lu et al. (2020) [1] adopted a WT and a CNN to recognize the micro-vibrations induced by vehicles and construction.
Thus, in the present study, different signal processing methods and deep neural networks were combined to improve the vibration recognition accuracy. Their performance was compared to identify the optimal combination. First, environmental vibrations were collected. Subsequently, three feature-extraction algorithms (WT, HHT, and MFCC extraction), as well as three deep neural networks (CNN, LSTM, and CNN-LSTM) were adopted; additionally, comparisons and parametric studies were performed. The results indicated that the combination of CNN and MFCC extraction is optimal for vibration recognition.
1 Method framework
The framework of the proposed vibration recognition method is presented in Figure 1. The critical parts are the feature extraction and the training of the deep neural networks. The specific steps are as follows:
1) Signal acquisition and data cleaning: Four types of typical vibration signals are collected, and the types of vibration sources are set as labels. Subsequently, the collected data are cleaned to identify valid data and establish the vibration datasets, according to a short-term energy analysis.
2) Extraction of vibration time-frequency-domain characteristics: Three signal-processing methods (WT, HHT, and MFCC extraction) are adopted, and the datasets are established with the obtained coefficient matrices of the time-frequency-domain features.
3) Vibration recognition based on deep-learning algorithms: The datasets from steps 1) and 2) are adopted to train the CNN, LSTM, and CNN-LSTM models for vibration recognition.
4) Evaluation and application of deep neural networks: The CNN, LSTM, and CNN-LSTM models trained in step 3) are evaluated. Subsequently, the qualified models can be adopted for (micro) vibration control, damage detection, fault identification, etc.
To further demonstrate the proposed framework, details of each step are introduced in Sections 2 to 5, taking the micro-vibration control as a potential scenario.
2 Data collection and cleaning
The main purpose of micro-vibration control is to reduce the negative influence of environmental vibrations on product quality during industrial production. Potential environmental vibration sources mainly include construction activities, subways, and trucks [1]. In this work, bus-, construction-, subway-, and high-speed rail (HSR)-induced vibrations were collected using acceleration sensors placed on hard, flat ground approximately 10–50 m from the vibration source (Figure 2(a)). An Orient Smart Testing-941B(V) accelerometer with a resolution of approximately 10⁻⁴ g and a sampling frequency of 256 Hz was used.
Figure 2 shows the partial time-series data for four types of vibration signals. The subway- and bus-induced vibrations exhibited obvious peak portions, whereas the HSR- and construction-induced vibrations generally maintained a stable or regular distribution in a short time.
The collected raw vibrations consisted primarily of significant environmental vibrations and small amounts of sensor white noise. However, the training datasets of neural networks should comprise only valid vibration segments and their corresponding labels. Therefore, data cleaning was performed to remove the noise and extract the valid segments from the raw signal data. Short-term energy is a fundamental parameter of a vibration signal that characterizes the energy of each frame [21]. Because valid signals differ from noise in acceleration amplitude (equivalently, short-term energy), a short-term energy analysis was used for data cleaning.
In this study, bus-induced vibrations were used to illustrate the data-cleaning process based on short-term energy. Figure 3(a) shows the time-domain waveforms of the raw vibration signals. After framing the data, the absolute values (or square values) of the amplitudes in each frame were summed to obtain the short-term energy, as shown in Figure 3(b) and Figure 3(c), respectively. As shown in Figure 3(c), the difference between valid signals and noise is more obvious in the square-value-summed short-term energy; thus, it was used to identify valid signals. Because the vibration energy was generally concentrated near the peak amplitude, the signal length was uniformly set to 15 s around the peak portion. Furthermore, different types of vibration signals necessitate different energy thresholds to eliminate the primary noise effectively while retaining sufficient data for neural-network training. The thresholds for the bus-, construction-, subway-, and HSR-induced vibrations were 150 × (10⁻² g)², 80 × (10⁻² g)², 2500 × (10⁻² g)², and 800 × (10⁻² g)², respectively. According to these thresholds, 476, 302, 477, and 266 valid vibration signals of the four types were identified, respectively. As shown in Figure 3, short-term energy-based vibration interception is an effective data-extraction method.
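The square-value-summed short-term energy screening described above can be sketched as follows. The frame length, hop size, and function names are illustrative choices for this sketch, not the authors' implementation:

```python
import numpy as np

def short_term_energy(signal, frame_len, hop):
    """Square-summed short-term energy of each frame (the criterion of Fig. 3(c))."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def extract_valid_segment(signal, fs, frame_len, hop, threshold, seg_len_s=15):
    """Return a seg_len_s window centred on the peak-energy frame, or None
    if no frame exceeds the energy threshold (treated as noise)."""
    energy = short_term_energy(signal, frame_len, hop)
    if energy.max() < threshold:
        return None
    centre = int(np.argmax(energy)) * hop + frame_len // 2
    half = int(seg_len_s * fs) // 2
    start = max(0, centre - half)
    return signal[start:start + 2 * half]
```

A signal whose peak frame energy stays below the type-specific threshold is discarded; otherwise a 15-s window around the energy peak is kept, mirroring the interception procedure in the text.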
3 Data processing for feature extraction
Deep neural network-based big-data analysis performs well in civil engineering [22-23]. However, obtaining sufficient high-quality data and corresponding labels in civil engineering is a critical challenge owing to the high cost of installing and maintaining sensors [24]. Hence, to address the limited data quantity, a method combining classic time-frequency-domain analysis with high-performance deep learning is proposed, which can achieve better recognition results when the available data are limited. In this study, three signal-processing methods (i.e., the WT, HHT, and MFCC extraction) were adopted to extract preliminary signal features and thereby improve the vibration recognition performance of the deep-learning algorithms.
3.1 WT
The WT was derived from the Fourier transform [25] and is widely utilized in time-frequency-domain analysis [26]. The basis function and scale are critical parameters of the WT, which determine the size of the coefficient matrices (i.e., the frequency range) and the transformation speed. Hence, according to the continuous WT, parametric analyses of different basis functions and scales were performed.
The WT with the Cgau (complex Gaussian) basis function is faster and more adaptive to different transformation requirements than those with the Morlet and Mexican Hat basis functions. Therefore, the Cgau basis function was adopted. Figure 4 presents the WT results for the HSR-induced vibration based on the 4th-order and 8th-order Cgau basis functions at different scales. The results indicate that the WTs with the 4th-order and 8th-order functions performed similarly at the same scales. The processing with Cgau4 (i.e., 4th-order Cgau basis function) was faster, satisfying the rapid-output requirement for vibration control. Furthermore, the WT speeds varied among different scales, and more scales corresponded to a lower transforming speed, higher precision, and larger number of low-frequency features.
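A continuous WT of this kind can be sketched in plain NumPy. Note that a complex Morlet mother wavelet is substituted here for the Cgau4 basis (which is available in wavelet libraries such as PyWavelets) purely to keep the sketch dependency-free; the scale grid is likewise illustrative:

```python
import numpy as np

def cwt(signal, scales, fs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet mother wavelet.
    Returns the magnitude |W(scale, time)| used as a time-frequency map."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs          # time axis centred on zero
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # scaled, L2-normalized mother wavelet
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        coeffs[i] = np.convolve(signal, np.conj(psi[::-1]), mode='same')
    return np.abs(coeffs)
```

For a pure tone of frequency f, the response peaks near the scale s ≈ w0/(2πf), so a row of the coefficient matrix corresponds to a narrow frequency band, as in Figure 4.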
Table 1. Training results for the raw data input
Network | Parameter value (i(j), k, l, m, n, p) | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
CNN_1C1P | 128(1), 0, 3, 256, 0.5, 64 | 0.1835 | 0.9432 | 2.2494 | 0.7408 | 1
CNN_2C2P | 128(1), 64, 3, 512, 0.6, 64 | 0.1635 | 0.9643 | 1.6913 | 0.8153 | 1
CNN_1C1P+LSTM | 256(1), 48, 3, 512, 0.6, 84 | 0.1035 | 0.9695 | 1.3981 | 0.8055 | 2
LSTM | 64, 32, 0, 128, 0.4, 64 | 0.3709 | 0.8768 | 1.0043 | 0.7632 | 7

According to the foregoing analysis, considering the processing speed and extracted-feature quality, the Cgau4 wavelet basis function with 512 scales was adopted in the WT. Figure 5 presents the wavelet images of four typical vibrations, indicating that the time-frequency-domain characteristics of the four types of vibrations differed significantly.
3.2 HHT
The HHT is an adaptive time-frequency-domain analysis method that does not involve the selection of basis functions. It is composed of EMD and the Hilbert transform. Owing to the requirements for the intrinsic mode function (IMF) in EMD [27], the IMFs can characterize stationary and instantaneous non-stationary signals. Figure 6 presents the IMFs of four types of vibration signals after EMD, indicating the one-dimensional raw signals composed of a finite number of simple IMFs with different frequencies and amplitudes. For example, Figure 6(a) shows the decomposition results for bus-induced vibration signals, including time-domain waveforms of the typical mode functions (IMF1, IMF2, IMF11, IMF12) and the residual term (Res). As shown in Figure 6, the amplitude and waveform differed significantly between IMF1 and IMF2 for the four vibrations. In contrast, the noise interference and a slight difference in the decomposed signal frequency may cause modal aliasing during decomposition (e.g., IMF1 and IMF2 in Figure 6(d)), reducing the recognition accuracy [16]. Additionally, because of the different characteristics of the vibration signals, the numbers of the IMFs differed. The Hilbert transform was applied to the IMFs to obtain the Hilbert spectra.
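After EMD, the Hilbert-transform step maps each IMF to an instantaneous amplitude and frequency, the ingredients of the Hilbert spectrum. A minimal sketch using `scipy.signal.hilbert` is given below; the EMD itself is omitted, and a pure sine stands in for an IMF:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(imf, fs):
    """Hilbert transform of one IMF: instantaneous amplitude and frequency."""
    analytic = hilbert(imf)                      # IMF + j * Hilbert(IMF)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2 * np.pi)     # phase derivative in Hz
    return amplitude, freq
```

Stacking these attributes over all IMFs of a signal yields its Hilbert spectrum; away from the record edges, a unit-amplitude 10-Hz IMF gives an instantaneous frequency near 10 Hz and amplitude near 1.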
3.3 MFCC extraction
Mel-frequency cepstrum analysis is widely utilized in speech recognition, and its nonlinear-transformation characteristic can effectively eliminate the noise. The process of MFCC extraction includes preprocessing, the discrete Fourier transform, Mel-frequency wrapping, and the discrete cosine transform [21, 28]. Figure 7 presents the linear and logarithmic power spectra of the vibration signal after Mel-frequency wrapping, where the decibel value at the maximum power is 0, and the decibel value is calculated relative to the maximum power. Hence, the brighter areas of the images correspond to a higher vibration energy. Figure 7(a) shows the energy distributions of bus-induced vibrations in the linear and logarithmic power spectra. The power is distributed uniformly in the linear power spectrum. In comparison, for the logarithmic power spectrum, the energy is concentrated in the low-frequency band, which is beneficial for signal recognition. According to Mel-frequency wrapping, the discrete cosine transform on the logarithmic spectra can extract the MFCC matrices. The matrix size is mainly determined by the frame length, frameshift, signal length, and quantity of bandpass Mel-scale filters, which can be adjusted according to the practical conditions.
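The MFCC pipeline named above (framing, discrete Fourier transform, Mel-frequency wrapping, logarithm, discrete cosine transform) can be sketched as follows; the frame length, hop, and filter counts are illustrative defaults, not the values used in the study:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, fs, frame_len=64, hop=32, n_filters=20, n_coeffs=12):
    """Frame -> power spectrum -> Mel filterbank -> log -> DCT."""
    frames = np.stack([signal[i:i + frame_len] * np.hamming(frame_len)
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # triangular filters spaced uniformly on the Mel scale up to fs/2
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, power.shape[1]))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fbank[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[i - 1, k] = (hi - k) / max(hi - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)    # logarithmic Mel spectrum
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

The returned matrix has one row per frame and one column per retained cepstral coefficient, which is why the MFCC input is the most compact of the three feature types considered later.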
4 Deep neural network training
According to the extracted signal features, CNN and LSTM models were used to classify the vibration types. With a limited amount of training data, the depth of the network architecture should not be excessive, to avoid overfitting and low performance [29-30]. Four types of deep neural networks were adopted in this study: CNN_1C1P, CNN_2C2P, CNN_1C1P+LSTM, and LSTM (Figure 8). "C" and "P" in the CNN labels denote the convolutional and pooling layers, respectively, and the preceding number is the number of such layers. The LSTM network comprised two layers of LSTM units. CNN_1C1P represents the CNN model with one convolutional layer and one pooling layer; CNN_1C1P+LSTM represents a model with a layer of LSTM units after the pooling layer of CNN_1C1P. The deep neural networks were trained on a computer equipped with an AMD Ryzen Threadripper 2990WX 32-core CPU and an NVIDIA GeForce GT 710 GPU, running the Windows 10 operating system.
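To make the "C" and "P" building blocks concrete, a toy NumPy forward pass of one convolution + ReLU + pooling stage (the CNN_1C1P pattern) is sketched below; the shapes and kernel sizes are illustrative only:

```python
import numpy as np

def conv_relu(x, kernels):
    """Valid-mode 2-D convolution of a single-channel map followed by ReLU.
    x: (H, W); kernels: (K, kh, kw) -> output (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def max_pool(x, p):
    """Non-overlapping p x p max pooling of each feature map."""
    K, H, W = x.shape
    H2, W2 = H // p, W // p
    return x[:, :H2 * p, :W2 * p].reshape(K, H2, p, W2, p).max(axis=(2, 4))
```

In the actual study such stages are of course built with a deep-learning framework and followed by a fully connected softmax layer; the sketch only shows how a pooling-unit size of p shrinks each feature map by a factor of p per axis.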
In this study, the dataset of signal time-frequency-domain features was adopted for training. Lu et al. (2020) [1] determined that vibration recognition should be completed within the first 1.5 s of the vibration signal during vibration control tasks, to avoid a control delay. Hence, the input signals used for recognition constituted the first 1 s of the vibration signals.
There were four types of input data: 1) raw time-series vibration signals, 2) WT coefficient matrices, 3) HHT coefficient matrices, and 4) MFCC matrices. Previous studies indicated that the recognition speed with the coefficient matrices as inputs was faster and performed better than using the images as inputs [1]; thus, the coefficient matrices were adopted as the inputs instead of the images. A comparative study on the vibration recognition performance of different neural networks and different data inputs was conducted.
Furthermore, the architectures of the deep neural networks varied with the input data type. For example, for the CNN models, raw time-series data were input in the form (256, 2, 1), with time and acceleration in two columns; the WT coefficient matrix was input in the form (256, 512, 1), where 512 and 1 denote the frequency scales and the channel, respectively. For the LSTM models, the raw data and the WT matrix were reshaped to (256 × 2, 1) and (256 × 512, 1), respectively. The networks using the HHT and MFCC matrices as inputs were similar to that using the WT matrix.
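The input shapes described above amount to the following reshaping; `wt` is a placeholder array standing in for one 1-s wavelet coefficient matrix:

```python
import numpy as np

wt = np.zeros((256, 512))                # 256 time steps x 512 scales (placeholder)
cnn_input = wt[..., np.newaxis]          # (256, 512, 1): 2-D map + channel for CNNs
lstm_input = wt.reshape(256 * 512, 1)    # flattened sequence for the LSTM models
```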
4.1 Raw vibration signal input
The training results of the deep neural networks with the raw time-series signal input are presented in Table 1. The hyperparameters of the various neural networks are presented in the "parameter value" column as i(j), k, l, m, n, and p, where i, k, and m represent the numbers of neural units in the first and second convolutional layers (or the LSTM layer) and the fully connected layer, respectively; j and l represent the sizes of the units in the first convolutional layer and the pooling layer, respectively; n represents the dropout ratio; and p represents the batch size (number of data per batch for batch training). For each parameter combination, each type of network was trained for 200–400 epochs until the performance was stable. The accuracy and loss values of the last 10 epochs after training stabilization were used to evaluate the training performance, to avoid unreasonable results due to numerical deviation [1]. The networks with the best performance were then selected.
Table 2. Training effects of different learning rates and dropout ratios with the WT matrix input
Learning rate | Dropout ratio | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
0.6 | 0.5 | 0.1255 | 0.9589 | 0.2121 | 0.9412 | 7.0
0.8 | 0.6 | 0.0101 | 0.9971 | 0.8402 | 0.9021 | 10.1
0.9 | 0.6 | 0.2049 | 0.9379 | 0.1958 | 0.9524 | 7.2
1.0 | 0.6 | 0.0083 | 0.9991 | 0.2412 | 0.9542 | 12.2

Figure 9 presents the CNN_1C1P training process over 100 epochs, indicating obvious overfitting: the test accuracy was much lower than the training accuracy. As indicated by Table 1 and Figure 9, the various neural networks could hardly achieve satisfactory results with the raw time-series signal input, possibly owing to the short signals with indistinct features and the limited number of signals. Additionally, as the number of data per batch and the number of neural units increased, the training slowed; the LSTM layer was relatively complex, resulting in a lower training speed.
4.2 WT coefficient matrix input
With the WT coefficient matrix input, analyses were performed to determine the suitable hyperparameters for different deep neural networks. CNN_1C1P is taken as an example for a detailed investigation. Parametric analyses of CNN_1C1P were conducted for different batch sizes, numbers of neural units in the convolutional layer, sizes of units in the pooling layer, and dropout ratios. The results are shown in Figure 10.
Figure 10(a) shows the influence of the batch size on the network performance, indicating that the optimal batch size for CNN_1C1P was 84, corresponding to the maximum accuracy and minimum loss. As shown in Figure 10(b), the network performance fluctuated with an increase in the number of convolutional kernels, and the performance decreased after this quantity exceeded 256, owing to the declining generalization ability of the complex network. Figure 10(c) presents the effect of the size of neural units in the pooling layer, revealing that the optimal size was 3, as a smaller size corresponded to a larger number of parameters trained in the neural network. Furthermore, a dropout layer was utilized to reduce the potential overfitting caused by the small amount of training data [32]. As shown in Figure 10(d), the influence of the dropout ratio was evaluated, and the optimal dropout ratio was 0.5.
The parametric adjustment process of CNN_2C2P was similar to those in the aforementioned studies. The effects of the batch size, dropout ratio (learning rate was 0.95), and learning rate were analyzed; the remaining parameters were identical to those of the optimal CNN_1C1P.
The effects of the dropout ratio (with a batch size of 32) and the batch size (with a dropout ratio of 0.6) are presented in Figure 11(a) and Figure 11(b), respectively, indicating that the optimal parameter values were 0.6 and 64, respectively. The effects of the learning rate and dropout ratio (with a batch size of 32) are presented in Table 2, indicating that the dropout ratio should be increased to maintain satisfactory performance when the learning rate is increased. The optimal learning rate and dropout ratio were 1.0 and 0.6, respectively.
The parametric adjustment processes of CNN_1C1P+LSTM and LSTM were similar, and the neural-unit quantity in the LSTM layer was adjusted with consideration of only the number of neural units. Table 3 presents the optimal performance of the four neural networks with the wavelet coefficient matrix input. As shown, with an increasing number of LSTM layers, the training speed of the neural network decreased significantly, and the classification performance became poor. The performance of the LSTM was inferior to that of the CNN, possibly because the short time-series with weak internal data connections were unfavorable for signal feature extraction using LSTM.
Thus, adopting the WT coefficient matrices as the inputs helped the CNN learn the data features sufficiently and perform well; however, they were unsuitable for LSTM.
4.3 Hilbert spectrum input
In the EMD process, the number of IMFs varied for different vibration signals, causing differences in the sizes of the Hilbert spectra. Thus, the parts that lacked an IMF were filled with zero to maintain identical sizes. The sizes of the HHT spectrum were Nmax × d, where N represents the number of IMFs (in this study, N varied from 10 to 13, and Nmax was 13), and d represents the number of data points in 1 s (sampling frequency was 256 Hz; thus, d was 256).
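The zero-filling step can be sketched as follows, with `pad_imfs` a hypothetical helper name:

```python
import numpy as np

def pad_imfs(imfs, n_max):
    """Zero-pad an (N, d) IMF stack to (n_max, d) so every sample's
    Hilbert spectrum shares the same input size."""
    n, d = imfs.shape
    padded = np.zeros((n_max, d))
    padded[:n] = imfs
    return padded
```

With N between 10 and 13 and d = 256 as in the text, every padded stack becomes 13 × 256.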
Compared with the foregoing parametric analysis, the training speed was higher with the Hilbert spectrum input, but the test results were less accurate, with more severe overfitting, owing to the small amount of data. Table 4 presents the training results and parameters for the neural networks with the best performance. As shown, the CNN was optimal, but its overall performance was worse than that with the WT coefficient matrix input.
Table 3. Training effects of different types of neural networks with the WT coefficient matrix input
Network | Parameter value | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
CNN_1C1P | 256(4), 0, 512, 0.5, 84 | 0.0647 | 0.9927 | 0.1823 | 0.9548 | 36.8
CNN_2C2P | 256, 128, 256, 0.6, 32 | 0.0083 | 0.9991 | 0.2412 | 0.9542 | 12.2
CNN_1C1P+LSTM | 256, 48, 256, 0.5, 84 | 0.0054 | 0.9973 | 1.0145 | 0.8693 | 41.1
LSTM | 128, 32, 128, 0.5, 84 | 0.1511 | 0.9896 | 0.5547 | 0.8566 | 58.4

4.4 MFCC matrix input
As the MFCC inputs had the advantage of the smallest data size for signal feature characterization, the training and testing of the neural networks with MFCC inputs were fast, with good performance. Furthermore, the LSTM layer did not significantly reduce the processing speed or accuracy, and CNN_2C2P exhibited the best performance among the networks tested (Table 5).
Table 4. Training effects of different neural networks with the Hilbert spectrum input
Network | Parameter value | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (μs/step)
CNN_1C1P | 128(4), 0, 256, 0.4, 84 | 0.0188 | 0.9956 | 1.0039 | 0.8562 | 2.0
CNN_2C2P | 128, 64, 512, 0.5, 84 | 0.0133 | 0.9971 | 1.2358 | 0.8043 | 2.0
CNN_1C1P+LSTM | 128, 64, 256, 0.45, 84 | 0.0056 | 0.9978 | 1.4861 | 0.8235 | 2.0
LSTM | 64, 32, 128, 0.5, 84 | 0.0641 | 0.9897 | 3.2432 | 0.7687 | 2.0

Table 5. Training effects of different neural networks with the MFCC matrix input
Network | Parameter value | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (μs/step)
CNN_1C1P | 256(4), 0, 512, 0.45, 84 | 0.0068 | 0.9985 | 0.1072 | 0.9804 | 484
CNN_2C2P | 128, 64, 512, 0.4, 64 | 0.0627 | 0.9897 | 0.0087 | 0.9853 | 271
CNN_1C1P+LSTM | 64, 48, 256, 0.5, 84 | 0.0097 | 0.9971 | 0.0956 | 0.9869 | 327
LSTM | 128, 32, 128, 0.5, 84 | 0.0067 | 0.9978 | 0.1620 | 0.9804 | 878

4.5 Discussion on deep neural network performance
In addition to comparing the performance of different input types and network models, this study analyzed the effects of the vibration length and the dataset splitting scheme. Table 6 lists the influence of the vibration length. The vibration recognition accuracy improves only slightly with increasing vibration length, indicating that the vibration length is not the critical factor determining the recognition performance. Table 7 shows the influence of different dataset splitting schemes on the recognition accuracy. In this study, when 60% or 70% of the data were used for training, the recognition accuracy was reduced; in comparison, when 80% or 90% were used, a reliable recognition ability was achieved. Moreover, Zhou [33] recommended that the proportion of data used for training should not exceed 80%, to avoid unstable testing performance. Thus, using 80% of the data for training is suggested when the data are sufficient.
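The recommended 80/20 split can be sketched as a shuffled partition; the function name and signature are illustrative:

```python
import numpy as np

def split_dataset(x, y, train_frac=0.8, seed=0):
    """Shuffle, then split samples and labels into training and test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(train_frac * len(x))
    return (x[idx[:n_train]], y[idx[:n_train]],
            x[idx[n_train:]], y[idx[n_train:]])
```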
Table 6. Training effects of different vibration lengths (based on CNN_1C1P)
Input | Data length (s) | Training loss | Training accuracy | Test loss | Test accuracy
Raw data | 1 | 0.1835 | 0.9432 | 2.2494 | 0.7408
Raw data | 5 | 0.1211 | 0.9630 | 1.6181 | 0.8105
Raw data | 10 | 0.1025 | 0.9798 | 1.5123 | 0.8203
Raw data | 15 | 0.1298 | 0.9723 | 1.3426 | 0.8159
Wavelet coefficient matrix | 1 | 0.0647 | 0.9927 | 0.1823 | 0.9548
Wavelet coefficient matrix | 5 | 0.0982 | 0.9902 | 0.8828 | 0.9446
Wavelet coefficient matrix | 10 | 0.0444 | 0.9859 | 0.2998 | 0.9800
Wavelet coefficient matrix | 15 | 0.0952 | 0.9896 | 0.0520 | 0.9900
Hilbert spectrum | 1 | 0.0188 | 0.9956 | 1.0039 | 0.8562
Hilbert spectrum | 5 | 0.0688 | 0.9956 | 1.2835 | 0.8705
Hilbert spectrum | 10 | 0.0421 | 0.9942 | 1.0058 | 0.8652
Hilbert spectrum | 15 | 0.0160 | 0.9985 | 1.3662 | 0.8559
MFCC matrix | 1 | 0.0068 | 0.9985 | 0.1072 | 0.9804
MFCC matrix | 5 | 0.0095 | 0.9963 | 0.0336 | 0.9935
MFCC matrix | 10 | 0.0059 | 0.9992 | 0.1023 | 0.9936
MFCC matrix | 15 | 0.0028 | 0.9993 | 0.0052 | 0.9954

Table 7. Training effects of different dataset proportions (based on CNN_1C1P)
Input | Data ratio (training∶test∶validation) | Training loss | Training accuracy | Test loss | Test accuracy | Validation accuracy (%)
Raw data | 60∶40∶1.5 | 0.1257 | 0.9401 | 4.3396 | 0.6897 | 70
Raw data | 70∶30∶1.5 | 0.2164 | 0.9213 | 2.2356 | 0.7163 | 70
Raw data | 80∶20∶1.5 | 0.1123 | 0.9489 | 3.2885 | 0.7218 | 70
Raw data | 90∶10∶1.5 | 0.1835 | 0.9432 | 2.2494 | 0.7408 | 60
Wavelet coefficient matrix | 60∶40∶1.5 | 0.0211 | 0.9949 | 0.9257 | 0.9257 | 90
Wavelet coefficient matrix | 70∶30∶1.5 | 0.0489 | 0.9941 | 0.8592 | 0.9398 | 100
Wavelet coefficient matrix | 80∶20∶1.5 | 0.0307 | 0.9951 | 0.2287 | 0.9501 | 100
Wavelet coefficient matrix | 90∶10∶1.5 | 0.0647 | 0.9927 | 0.1823 | 0.9548 | 100
Hilbert spectrum | 60∶40∶1.5 | 0.0205 | 0.9957 | 1.8757 | 0.7998 | 80
Hilbert spectrum | 70∶30∶1.5 | 0.0279 | 0.9941 | 0.7257 | 0.8456 | 80
Hilbert spectrum | 80∶20∶1.5 | 0.0366 | 0.9934 | 0.9895 | 0.8481 | 90
Hilbert spectrum | 90∶10∶1.5 | 0.0188 | 0.9956 | 1.0039 | 0.8562 | 90
MFCC matrix | 60∶40∶1.5 | 0.0322 | 0.9923 | 0.1989 | 0.9586 | 100
MFCC matrix | 70∶30∶1.5 | 0.0329 | 0.9901 | 0.1558 | 0.9697 | 100
MFCC matrix | 80∶20∶1.5 | 0.0238 | 0.9951 | 0.1601 | 0.9689 | 100
MFCC matrix | 90∶10∶1.5 | 0.0068 | 0.9985 | 0.1072 | 0.9804 | 100

Table 8. Average prediction accuracy and processing speed for different inputs
(true labels of signals 1-10: Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR)
Raw data (average accuracy 60%, time 0.099 s):
CNN_1C1P | Sub, Sub, Bus, Sub, Con, Bus, Con, Bus, Sub, Sub
CNN_2C2P | Sub, Sub, HSR, HSR, Con, Con, HSR, Bus, Sub, HSR
CNN_1C1P+LSTM | Sub, Sub, HSR, Sub, Con, Bus, Con, Bus, Sub, Sub
LSTM | HSR, HSR, Sub, Sub, HSR, HSR, HSR, HSR, Sub, HSR
Wavelet coefficient matrix (average accuracy 100%, time 0.167 s):
CNN_1C1P | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_2C2P | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_1C1P+LSTM | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
LSTM | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
Hilbert spectrum (average accuracy 90%, time 0.134 s):
CNN_1C1P | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_2C2P | Sub, Bus, Bus, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_1C1P+LSTM | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, Sub
LSTM | Sub, Bus, Bus, Sub, Con, Bus, Con, Bus, Sub, Sub
MFCC matrix (average accuracy 100%, time 0.102 s):
CNN_1C1P | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_2C2P | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
CNN_1C1P+LSTM | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR
LSTM | Sub, Bus, HSR, Sub, Con, Bus, Con, Bus, Sub, HSR

Consequently, compared with the raw time-series data, the time-frequency characteristic inputs effectively improved the vibration recognition accuracy. The primary reason is that the initial time-frequency characteristics constrain the range of features extracted by the deep neural networks and guide their optimization direction. Another reason is that the time-frequency characteristic inputs fully exploit the CNN's big-data analysis ability by expanding the data size from (256, 1) to (256, 512, 1).
5 Validation and evaluation of models
The validation datasets were adopted for case studies, with the proportion of bus-, construction-, subway-, and HSR-induced vibration signals being 2∶1∶2∶1. Ten signals were randomly selected from the validation set as the standard prediction set (numbered 1-10). Moreover, various signals were randomly selected to synthesize the coupled signals, to validate the recognition ability of different signals. The maximum short-term energy determined the dominant signal type in the coupled signals. The artificial coupled signals were numbered as 11-15, and the dominant signals were subway-, subway-, HSR-, HSR-, and bus-induced vibration signals, respectively. Figure 12 presents the time-frequency-domain features of signals in the prediction set. The numbers represent the prediction order; the first 1 s of the vibration signals were used as the inputs. The optimal CNN_1C1P, CNN_2C2P, CNN_1C1P+LSTM, and LSTM from Section 4 were utilized for the predictions.
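Labelling a synthetic coupled signal by its maximum short-term energy, as described above, can be sketched as follows (the frame and hop sizes are illustrative):

```python
import numpy as np

def frame_energy(sig, frame=64, hop=32):
    """Square-summed short-term energy of each frame."""
    return np.array([np.sum(sig[i:i + frame] ** 2)
                     for i in range(0, len(sig) - frame + 1, hop)])

def couple_and_label(components, labels):
    """Sum the component signals into a coupled signal and label the mixture
    with the component having the largest peak short-term energy."""
    coupled = np.sum(components, axis=0)
    peaks = [frame_energy(c).max() for c in components]
    return coupled, labels[int(np.argmax(peaks))]
```

The dominant label of the mixture is then taken as the ground truth against which the networks' recognition of signals 11-15 is judged.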
The types of the standard vibration signals were predicted, and the prediction accuracy and speed were examined. Table 8 presents the results, where the numbers of the vibration signals denote the prediction order; predictions that differ from the true labels are incorrect. The results indicate that the optimal neural networks effectively recognized the vibration type with both the wavelet coefficient matrix and MFCC matrix inputs. In contrast, the deep neural networks performed poorly with the Hilbert spectrum input. The incorrect predictions may have been caused by the unbalanced composition of the training datasets, in which the bus- and subway-induced vibration signals accounted for the largest amounts of data. All the predictions took < 0.2 s, and the networks with the MFCC matrix input were the fastest. The validation results were similar to the training results.
When recognizing coupled signals, it is rational to classify the coupled vibrations as the dominant type, according to the vibration energy. Table 9 presents the dominant vibrations of the coupled signals and the prediction results; recognition results that differ from the dominant types are incorrect. The results indicate that for coupled signals 11-14, which had significant internal differences, most of the neural networks recognized the dominant types. The recognition results for coupled signal 15 differed significantly; owing to the superimposed effects, some networks recognized it as an HSR-induced vibration signal. Consequently, the recognition results with the MFCC matrix and wavelet coefficient matrix inputs were optimal for coupled vibrations.
Table 9. Recognition results for coupled vibrations
(true labels of signals 11-15: Sub+Bus, Sub+Con, HSR+Bus, HSR+Con, Bus+Con; dominant vibrations: Sub, Sub, HSR, HSR, Bus)
Raw data:
CNN_1C1P | Sub, Sub, HSR, HSR, Con
CNN_2C2P | Sub, Sub, HSR, HSR, Con
CNN_1C1P+LSTM | Sub, Sub, HSR, HSR, Bus
LSTM | Sub, Sub, HSR, HSR, HSR
Wavelet coefficient matrix:
CNN_1C1P | Sub, Sub, HSR, HSR, Bus
CNN_2C2P | Sub, Sub, HSR, HSR, Bus
CNN_1C1P+LSTM | Sub, Sub, HSR, HSR, HSR
LSTM | Sub, Sub, Sub, Sub, HSR
Hilbert spectrum:
CNN_1C1P | Sub, Sub, HSR, HSR, Bus
CNN_2C2P | Sub, Sub, HSR, HSR, Bus
CNN_1C1P+LSTM | Sub, Sub, Bus, Bus, Con
LSTM | Sub, Sub, HSR, Sub, Con
MFCC matrix:
CNN_1C1P | Sub, Sub, HSR, HSR, Bus
CNN_2C2P | Sub, Sub, HSR, HSR, Bus
CNN_1C1P+LSTM | Sub, Sub, HSR, HSR, Bus
LSTM | Sub, Con, HSR, Con, Con

6 Conclusions
A deep learning-based vibration recognition method combining signal processing and diverse deep neural networks was proposed. The proposed method performed well, and the following conclusions are drawn.
(1) Short-term energy analysis effectively removes irrelevant noise from the raw data and identifies valid signal fragments.
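A minimal numpy sketch of conclusion (1), assuming a simple frame-energy threshold; the frame length, hop, and threshold values here are illustrative, not the paper's settings.

```python
import numpy as np

def short_term_energy(x, frame_len, hop):
    """Energy of each analysis frame (sum of squared samples)."""
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.sum(x[s:s + frame_len] ** 2) for s in starts])

def valid_frame_indices(x, frame_len, hop, threshold):
    """Indices of frames whose short-term energy exceeds the noise threshold."""
    return np.flatnonzero(short_term_energy(x, frame_len, hop) > threshold)

# Synthetic record: silence surrounding a short vibration burst
x = np.zeros(2000)
x[1000:1500] = np.sin(2 * np.pi * 10 * np.arange(500) / 500)  # burst
idx = valid_frame_indices(x, frame_len=200, hop=100, threshold=1.0)
# idx marks only the frames overlapping the burst
```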
(2) When the raw time-series data are used as inputs, the deep neural networks perform poorly owing to the short signal length and the small amount of training data. Three signal-processing methods (i.e., WT, HHT, and MFCC extraction) were used to extract time-frequency-domain features, yielding coefficient matrices that characterize the vibration signals. The extracted characteristics guided the optimization direction of the neural networks and improved the recognition performance.
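As an illustration of the first of these transforms, the sketch below builds a wavelet coefficient (scalogram) matrix with a complex Morlet wavelet using only numpy; the scale range and wavelet parameter are illustrative choices, not the paper's configuration.

```python
import numpy as np

def morlet(t, w=5.0):
    """Complex Morlet wavelet (Gaussian-windowed complex exponential)."""
    return np.pi ** -0.25 * np.exp(1j * w * t) * np.exp(-0.5 * t ** 2)

def wavelet_coefficient_matrix(x, scales, w=5.0):
    """|CWT| magnitude matrix: one row per scale, one column per sample."""
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        n = 10 * int(s) + 1                      # support ~10 scales wide
        t = (np.arange(n) - n // 2) / s
        psi = morlet(t, w) / np.sqrt(s)          # L2-normalized wavelet
        out[i] = np.abs(np.convolve(x, psi, mode="same"))
    return out

# A 5 Hz sinusoid sampled at 100 Hz responds most strongly near
# scale s = w * fs / (2 * pi * f) ~ 15.9
fs, f = 100.0, 5.0
x = np.sin(2 * np.pi * f * np.arange(1000) / fs)
scales = np.arange(4.0, 40.0)
coeffs = wavelet_coefficient_matrix(x, scales)
best_scale = scales[np.argmax(coeffs.mean(axis=1))]
```

A matrix like `coeffs` (one row per scale, one column per time step) is the kind of two-dimensional input the networks consume.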
(3) Four types of neural networks (CNN_1C1P, CNN_2C2P, CNN_1C1P+LSTM, and LSTM) were built and trained with the coefficient matrix inputs. All four deep neural networks performed best with the MFCC matrix input, and the CNNs exhibited excellent recognition performance. Consequently, considering the evaluation results and the requirements of practical application scenarios, the recognition method combining MFCC extraction with the CNN was optimal.
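To make the CNN_1C1P structure concrete, here is a toy numpy forward pass of its one-convolution, one-pooling idea feeding a dense softmax classifier. All shapes and random weights are illustrative; a real implementation would use a framework such as Keras with trained weights and the paper's tuned hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(z):
    return np.maximum(z, 0.0)

def conv2d_valid(x, kernels):
    """'Valid' 2-D convolution of one input map with a stack of kernels."""
    kh, kw = kernels.shape[1:]
    out = np.empty((kernels.shape[0], x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling over each feature map."""
    f, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2].reshape(f, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cnn_1c1p_forward(x, kernels, dense_w, dense_b):
    """One convolution + one pooling layer, then a dense softmax head."""
    h = maxpool2(relu(conv2d_valid(x, kernels)))
    return softmax(h.ravel() @ dense_w + dense_b)

# Toy 16x16 "coefficient matrix" input, 4 filters, 4 vibration classes
x = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1
dense_w = rng.standard_normal((4 * 7 * 7, 4)) * 0.1   # (16-3+1)//2 = 7 per axis
dense_b = np.zeros(4)
probs = cnn_1c1p_forward(x, kernels, dense_w, dense_b)
```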
Effective recognition of coupled vibration signals in complex situations will be the focus of our next study.
Acknowledgement
The authors would like to acknowledge the financial support of the Shandong Co-Innovation Center for Disaster Prevention and Mitigation of Civil Structures, the Tsinghua University SRT Project "Improved Micro-Vibration Control Algorithm Research Based on Deep Learning" (No. 2011T0017), the Initiative Scientific Research Program, and the QingYuan Space of the Department of Civil Engineering. The authors also thank Mr. Zhou Yuan, Mr. Zihan Wang, Ms. Peng An, Mr. Haolong Feng, and Mr. Haoran Tang for their contributions to this work.
-
Table 1 Training results for the raw data input
Network | Parameter values (i(j), k, l, m, n, p) | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
CNN_1C1P | 128(1), 0, 3, 256, 0.5, 64 | 0.1835 | 0.9432 | 2.2494 | 0.7408 | 1
CNN_2C2P | 128(1), 64, 3, 512, 0.6, 64 | 0.1635 | 0.9643 | 1.6913 | 0.8153 | 1
CNN_1C1P + LSTM | 256(1), 48, 3, 512, 0.6, 84 | 0.1035 | 0.9695 | 1.3981 | 0.8055 | 2
LSTM | 64, 32, 0, 128, 0.4, 64 | 0.3709 | 0.8768 | 1.0043 | 0.7632 | 7
Table 2 Training effects of different learning rates and dropout rates with the WT matrix input
Learning rate | Dropout ratio | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
0.6 | 0.5 | 0.1255 | 0.9589 | 0.2121 | 0.9412 | 7.0
0.8 | 0.6 | 0.0101 | 0.9971 | 0.8402 | 0.9021 | 10.1
0.9 | 0.6 | 0.2049 | 0.9379 | 0.1958 | 0.9524 | 7.2
1.0 | 0.6 | 0.0083 | 0.9991 | 0.2412 | 0.9542 | 12.2
Table 3 Training effects of different types of neural networks with the WT matrix input
Network | Parameter values | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (ms/step)
CNN_1C1P | 256(4), 0, 512, 0.5, 84 | 0.0647 | 0.9927 | 0.1823 | 0.9548 | 36.8
CNN_2C2P | 256, 128, 256, 0.6, 32 | 0.0083 | 0.9991 | 0.2412 | 0.9542 | 12.2
CNN_1C1P + LSTM | 256, 48, 256, 0.5, 84 | 0.0054 | 0.9973 | 1.0145 | 0.8693 | 41.1
LSTM | 128, 32, 128, 0.5, 84 | 0.1511 | 0.9896 | 0.5547 | 0.8566 | 58.4
Table 4 Training effects of different neural networks with the Hilbert spectrum input
Network | Parameter values | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (μs/step)
CNN_1C1P | 128(4), 0, 256, 0.4, 84 | 0.0188 | 0.9956 | 1.0039 | 0.8562 | 2.0
CNN_2C2P | 128, 64, 512, 0.5, 84 | 0.0133 | 0.9971 | 1.2358 | 0.8043 | 2.0
CNN_1C1P + LSTM | 128, 64, 256, 0.45, 84 | 0.0056 | 0.9978 | 1.4861 | 0.8235 | 2.0
LSTM | 64, 32, 128, 0.5, 84 | 0.0641 | 0.9897 | 3.2432 | 0.7687 | 2.0
Table 5 Training effects of different neural networks with the MFCC matrix input
Network | Parameter values | Training loss | Training accuracy | Test loss | Test accuracy | Training speed (μs/step)
CNN_1C1P | 256(4), 0, 512, 0.45, 84 | 0.0068 | 0.9985 | 0.1072 | 0.9804 | 484
CNN_2C2P | 128, 64, 512, 0.4, 64 | 0.0627 | 0.9897 | 0.0087 | 0.9853 | 271
CNN_1C1P + LSTM | 64, 48, 256, 0.5, 84 | 0.0097 | 0.9971 | 0.0956 | 0.9869 | 327
LSTM | 128, 32, 128, 0.5, 84 | 0.0067 | 0.9978 | 0.1620 | 0.9804 | 878
Table 6 Training effects of different vibration lengths (based on CNN_1C1P)
Input | Data length (s) | Training loss | Training accuracy | Test loss | Test accuracy
Raw data | 1 | 0.1835 | 0.9432 | 2.2494 | 0.7408
 | 5 | 0.1211 | 0.9630 | 1.6181 | 0.8105
 | 10 | 0.1025 | 0.9798 | 1.5123 | 0.8203
 | 15 | 0.1298 | 0.9723 | 1.3426 | 0.8159
Wavelet coefficient matrix | 1 | 0.0647 | 0.9927 | 0.1823 | 0.9548
 | 5 | 0.0982 | 0.9902 | 0.8828 | 0.9446
 | 10 | 0.0444 | 0.9859 | 0.2998 | 0.9800
 | 15 | 0.0952 | 0.9896 | 0.0520 | 0.9900
Hilbert spectrum | 1 | 0.0188 | 0.9956 | 1.0039 | 0.8562
 | 5 | 0.0688 | 0.9956 | 1.2835 | 0.8705
 | 10 | 0.0421 | 0.9942 | 1.0058 | 0.8652
 | 15 | 0.0160 | 0.9985 | 1.3662 | 0.8559
MFCC matrix | 1 | 0.0068 | 0.9985 | 0.1072 | 0.9804
 | 5 | 0.0095 | 0.9963 | 0.0336 | 0.9935
 | 10 | 0.0059 | 0.9992 | 0.1023 | 0.9936
 | 15 | 0.0028 | 0.9993 | 0.0052 | 0.9954
Table 7 Training effects of different dataset proportions (based on CNN_1C1P)
Input | Data ratio (training:test:validation) | Training loss | Training accuracy | Test loss | Test accuracy | Validation accuracy (%)
Raw data | 60:40:1.5 | 0.1257 | 0.9401 | 4.3396 | 0.6897 | 70
 | 70:30:1.5 | 0.2164 | 0.9213 | 2.2356 | 0.7163 | 70
 | 80:20:1.5 | 0.1123 | 0.9489 | 3.2885 | 0.7218 | 70
 | 90:10:1.5 | 0.1835 | 0.9432 | 2.2494 | 0.7408 | 60
Wavelet coefficient matrix | 60:40:1.5 | 0.0211 | 0.9949 | 0.9257 | 0.9257 | 90
 | 70:30:1.5 | 0.0489 | 0.9941 | 0.8592 | 0.9398 | 100
 | 80:20:1.5 | 0.0307 | 0.9951 | 0.2287 | 0.9501 | 100
 | 90:10:1.5 | 0.0647 | 0.9927 | 0.1823 | 0.9548 | 100
Hilbert spectrum | 60:40:1.5 | 0.0205 | 0.9957 | 1.8757 | 0.7998 | 80
 | 70:30:1.5 | 0.0279 | 0.9941 | 0.7257 | 0.8456 | 80
 | 80:20:1.5 | 0.0366 | 0.9934 | 0.9895 | 0.8481 | 90
 | 90:10:1.5 | 0.0188 | 0.9956 | 1.0039 | 0.8562 | 90
MFCC matrix | 60:40:1.5 | 0.0322 | 0.9923 | 0.1989 | 0.9586 | 100
 | 70:30:1.5 | 0.0329 | 0.9901 | 0.1558 | 0.9697 | 100
 | 80:20:1.5 | 0.0238 | 0.9951 | 0.1601 | 0.9689 | 100
 | 90:10:1.5 | 0.0068 | 0.9985 | 0.1072 | 0.9804 | 100
Table 8 Average prediction accuracy and processing speed for different inputs
Input | Network | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Accuracy (%) | Time (s)
 | True label | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
Raw data | CNN_1C1P | Sub | Sub | Bus | Sub | Cons | Bus | Cons | Bus | Sub | Sub | 60 | 0.099
 | CNN_2C2P | Sub | Sub | HSR | HSR | Cons | Cons | HSR | Bus | Sub | HSR | |
 | CNN_1C1P+LSTM | Sub | Sub | HSR | Sub | Cons | Bus | Cons | Bus | Sub | Sub | |
 | LSTM | HSR | HSR | Sub | Sub | HSR | HSR | HSR | HSR | Sub | HSR | |
Wavelet coefficient matrix | CNN_1C1P | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | 100 | 0.167
 | CNN_2C2P | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
 | CNN_1C1P+LSTM | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
 | LSTM | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
Hilbert spectrum | CNN_1C1P | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | 90 | 0.134
 | CNN_2C2P | Sub | Bus | Bus | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
 | CNN_1C1P+LSTM | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | Sub | |
 | LSTM | Sub | Bus | Bus | Sub | Cons | Bus | Cons | Bus | Sub | Sub | |
MFCC matrix | CNN_1C1P | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | 100 | 0.102
 | CNN_2C2P | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
 | CNN_1C1P+LSTM | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
 | LSTM | Sub | Bus | HSR | Sub | Cons | Bus | Cons | Bus | Sub | HSR | |
Table 9 Recognition results for coupled vibration
Input | Network | 11 | 12 | 13 | 14 | 15
 | True labels | Sub+Bus | Sub+Cons | HSR+Bus | HSR+Cons | Bus+Cons
 | Dominant vibration | Sub | Sub | HSR | HSR | Bus
Raw data | CNN_1C1P | Sub | Sub | HSR | HSR | Cons
 | CNN_2C2P | Sub | Sub | HSR | HSR | Cons
 | CNN_1C1P+LSTM | Sub | Sub | HSR | HSR | Bus
 | LSTM | Sub | Sub | HSR | HSR | HSR
Wavelet coefficient matrix | CNN_1C1P | Sub | Sub | HSR | HSR | Bus
 | CNN_2C2P | Sub | Sub | HSR | HSR | Bus
 | CNN_1C1P+LSTM | Sub | Sub | HSR | HSR | HSR
 | LSTM | Sub | Sub | Sub | Sub | HSR
Hilbert spectrum | CNN_1C1P | Sub | Sub | HSR | HSR | Bus
 | CNN_2C2P | Sub | Sub | HSR | HSR | Bus
 | CNN_1C1P+LSTM | Sub | Sub | Bus | Bus | Cons
 | LSTM | Sub | Sub | HSR | Sub | Cons
MFCC matrix | CNN_1C1P | Sub | Sub | HSR | HSR | Bus
 | CNN_2C2P | Sub | Sub | HSR | HSR | Bus
 | CNN_1C1P+LSTM | Sub | Sub | HSR | HSR | Bus
 | LSTM | Sub | Cons | HSR | Cons | Cons
-
[1] Lu X Z, Liao W J, Huang W, et al. An improved linear quadratic regulator control method through convolutional neural network-based vibration identification [J]. Journal of Vibration and Control, 2020 (11): 107754632093375.
[2] Xu Y J, Lu X Z, Cetiner B, et al. Real-time regional seismic damage assessment framework based on long short-term memory neural network [J]. Computer-Aided Civil and Infrastructure Engineering, 2020. doi: 10.1111/mice.12628
[3] Xu Y J, Lu X Z, Tian Y, et al. Real-time seismic damage prediction and comparison of various ground motion intensity measures based on machine learning [J]. Journal of Earthquake Engineering, 2020. doi: 10.1080/13632469.2020.1826371
[4] Yang B. Research on the structural damage detection methods based on the vibration response [D]. Changsha: Hunan University, 2014. (in Chinese)
[5] Zhong X Y. Research on time-frequency analysis methods and its applications to rotating machinery fault diagnosis [D]. Wuhan: Wuhan University of Science and Technology, 2014. (in Chinese)
[6] Yan X A, Jia M P. A novel optimized SVM classification algorithm with multi-domain feature and its application to fault diagnosis of rolling bearing [J]. Neurocomputing, 2018, 313(3): 47 − 64.
[7] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks [C]// Proceedings of the 2012 Advances in Neural Information Processing Systems. Nevada, USA: Curran Associates Inc, 2012: 1097 − 1105.
[8] Duan Yanjie, Lv Yisheng, Zhang Jie, et al. Deep learning for control: The state of the art and prospects [J]. Acta Automatica Sinica, 2016, 42(5): 643 − 654. (in Chinese)
[9] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition [J]. Proceedings of the IEEE, 1998, 86(11): 2278 − 2324. doi: 10.1109/5.726791
[10] Antipov G, Berrani S A, Dugelay J L. Minimalistic CNN-based ensemble model for gender prediction from face images [J]. Pattern Recognition Letters, 2016, 70: 59 − 65. doi: 10.1016/j.patrec.2015.11.011
[11] Graves A. Long short-term memory [M]. Berlin: Springer, 2012.
[12] Srivastava N, Mansimov E, Salakhutdinov R. Unsupervised learning of video representations using LSTMs [C]// Proceedings of the 32nd International Conference on Machine Learning. Lille: JMLR W&CP, 2015: 843 − 852.
[13] Wang J J, Ma Y L, Zhang L B, et al. Deep learning for smart manufacturing: Methods and applications [J]. Journal of Manufacturing Systems, 2018, 48: 144 − 156.
[14] Liao Q, Li X, Huang B. Hybrid fault-feature extraction of rolling element bearing via customized-lifting multi-wavelet packet transform [J]. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2014, 228(12): 2204 − 2216. doi: 10.1177/0954406213516305
[15] Wang H, Xu Z D, Tao T Y, et al. Field measurement study on the EPSD of non-stationary buffeting response of Sutong Bridge based on WT [J]. Engineering Mechanics, 2016, 33(9): 164 − 170. (in Chinese)
[16] Li Z M, Xu M, Pan T H, et al. A harmonic detection method for distributed connected grid system by using wavelet transform and HHT [J]. Power System Protection and Control, 2014, 42(4): 34 − 39. (in Chinese)
[17] Wang L Y, Li D S, Li H N. Parameter identification of nonlinear vibration systems based on the Hilbert-Huang transform [J]. Engineering Mechanics, 2017, 34(1): 28 − 32, 44. (in Chinese)
[18] Hossan M A, Memon S, Gregory M A. A novel approach for MFCC feature extraction [C]// Proceedings of the 4th International Conference on Signal Processing and Communication Systems. New York: IEEE Press, 2010: 1 − 5.
[19] Zhang A A, Huang J Y, Ji S W, et al. Bearing fault pattern recognition based on image classification with CNN [J]. Journal of Vibration and Shock, 2020, 39(4): 165 − 171. (in Chinese)
[20] Zeng X Q. Classification and recognition of transmission fault based on convolutional neural network [D]. Guangzhou: South China University of Technology, 2016. (in Chinese)
[21] Lv X Y, Wang H X. Abnormal audio recognition algorithm based on MFCC and short-term energy [J]. Journal of Computer Applications, 2010, 30(3): 796 − 798. doi: 10.3724/SP.J.1087.2010.00796 (in Chinese)
[22] Tang Z Y, Chen Z C, Bao Y Q, et al. Convolutional neural network-based data anomaly detection method using multiple information for structural health monitoring [J]. Structural Control & Health Monitoring, 2019, 26(1): e2296. doi: 10.1002/Stc.2296
[23] Liao W J, Chen X Y, Lu X Z, et al. Deep transfer learning and time-frequency characteristics-based identification method for structural seismic response [J]. Frontiers in Built Environment, 2021. doi: 10.3389/fbuil.2021.627058
[24] Shirzad-Ghaleroudkhani N, Mei Q, Gül M. Frequency identification of bridges using smartphones on vehicles with variable features [J]. Journal of Bridge Engineering, 2020, 25(7): 04020041. doi: 10.1061/(ASCE)BE.1943-5592.0001565
[25] Ruskai M, Beylkin G, Daubechies I, et al. Wavelets and their applications [M]. Boston, MA: Jones and Barlett, 1992.
[26] Zhang P, Li H B. A novel algorithm for harmonic analysis based on discrete wavelet transforms [J]. Transactions of China Electrotechnical Society, 2012, 27(3): 252 − 259. (in Chinese)
[27] Huang N E, Shen Z, Long S R. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis [J]. Proceedings of the Royal Society of London, 1998, 454(1971): 903 − 995. doi: 10.1098/rspa.1998.0193
[28] Chang F, Qiao X, Zhang S, et al. Method of failure prediction and evaluation based on MFCC feature extraction [J]. Application Research of Computers, 2015, 32(6): 1716 − 1719. (in Chinese)
[29] He K, Sun J. Convolutional neural networks at constrained time cost [C]// 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA, USA, 2015: 5353 − 5360.
[30] Chollet F. Deep learning with Python and Keras [M]. Connecticut, USA: Manning Publications, 2017.
[31] Zhang R, Chen Z, Chen S, et al. Deep long short-term memory networks for nonlinear structural seismic response prediction [J]. Computers and Structures, 2019, 220: 55 − 68.
[32] Cheng J H, Zeng G H, Lu D K, et al. Improved convolutional neural network model averaging method based on dropout [J]. Journal of Computer Applications, 2019, 39(6): 1601 − 1606. (in Chinese)
[33] Zhou Z H. Machine learning [M]. Beijing: Tsinghua University Press, 2016. (in Chinese)