NASA Bearing Data

This post is about detecting bearing failure based on vibration values of a bearing mounted on a rotating machine.

Table of Contents

  1. Data Introduction
  2. Data Visualization with Machbase Neo
  3. Table Creation and Data Upload in Machbase Neo
  4. Experimental Methodology
  5. Experiment Code
  6. Experimental Results

1. Data Introduction


  • DataHub Serial Number: 2024-3.
  • Data Name: NASA Bearing Dataset.
  • Data Collection Methods: Data collection was conducted using the NI DAQ Card 6062E. Each dataset is made up of individual files representing 1-second snapshots of vibration signals recorded at specific intervals, containing 20,480 points each, with a sampling rate of 20 kHz.
  • Data Source: Link
  • Raw data size and format: 6GB, CSV.
  • Number of tags: 16.
Tag          Description
s1-c1 ~ c8   Group 1 consists of 8 channels
s2-c1 ~ c4   Group 2 consists of 4 channels
s3-c1 ~ c4   Group 3 consists of 4 channels

2. Data Visualization with Machbase Neo


  • Data visualization is possible through the Tag Analyzer in Machbase Neo.
  • Select desired tag names and visualize them in various types of graphs.
  • The viewer below connects to the 2024-3 DataHub in real time; select any of the 16 tags to visualize them and preview the data patterns.
DataHub Viewer

3. Table Creation and Data Upload in Machbase Neo


  • In the DataHub directory, use setup.wrk located in the NASA Bearing Dataset folder to create tables and load data, as illustrated in the image below.

1) Table Creation

  • The table is created immediately upon pressing the "Run" button in the menu.
  • If the bearing table already exists, execute the first line and then the second; if it does not exist, start from the second line (a sketch of these two lines follows below).
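The contents of setup.wrk are not reproduced here; below is a minimal sketch of what the two lines are assumed to contain, using Machbase's standard tag-table DDL. The actual schema in the DataHub's setup.wrk may differ.

-- line 1: drop the existing bearing table (only needed when it already exists)
DROP TABLE bearing;

-- line 2: create the tag table that will hold the 16 vibration tags
CREATE TAG TABLE bearing (
    name  VARCHAR(40) PRIMARY KEY,
    time  DATETIME BASETIME,
    value DOUBLE SUMMARIZED
);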

2) Data Upload


  • Data can be loaded into the table in two different ways.
Method 1) Loading the table with TQL in Machbase Neo (available since machbase-neo v8.0.29-rc1)

  • Pros

    • Machbase Neo starts loading as soon as you press the Run button.
  • Cons

    • Slower loading speed compared to the other method.
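A minimal TQL sketch of this approach is shown below, assuming the dataset CSV has already been downloaded and decompressed to a local path; the path is illustrative, and the script shipped with setup.wrk may use additional options (for example, per-field time formats).

CSV( file('/tmp/datahub-2024-3-bearing.csv'), header(true) )
APPEND( table('bearing') )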
Method 2) Loading tables using commands

  • Pros

    • Fast table loading speed.
  • Cons

    • The table loading process is cumbersome.
    • Open a command window, change to the machbase-neo path, and enter the command there.
  • If you run the script below from the command shell, the data will be loaded into the bearing table at high speed.
curl http://data.yotahub.com/2024-3/datahub-2024-3-bearing.csv.gz | machbase-neo shell import --input -  --compress gzip --header --method append --timeformat ns bearing
  • If you need a username and password other than the default sys/manager, add the --user and --password options as shown below.
curl http://data.yotahub.com/2024-3/datahub-2024-3-bearing.csv.gz | machbase-neo shell import --input -  --compress gzip --header --method append --timeformat ns bearing --user USERNAME --password PASSWORD 

4. Experimental Methodology


  • Model Objective: Bearing Anomaly Detection.
  • Tags Used: s1-c5.
  • Model Configuration: LSTM AutoEncoder.
  • Learning Method: Unsupervised Learning.
    • Train: Model Training.
    • Validation: Threshold Calculation.
    • Test: Model Performance Evaluation Based on Threshold.
  • Model Optimizer: Adam.
  • Model Loss Function: Mean Squared Error.
  • Threshold Setting: Max + K × Standard Deviation (computed over the validation reconstruction errors).
  • Model Performance Metric: F1 Score.
  • Data Loading Method
    • Loading the Entire Dataset.
    • Loading the Batch Dataset.
  • Data Preprocessing
    • Hanning Window.
    • Fast Fourier Transform.
    • MinMax Scaling.
    • Principal Component Analysis.
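To make the methodology above concrete, here is a minimal Python sketch of the preprocessing chain, the LSTM AutoEncoder, and the threshold rule. It assumes NumPy, scikit-learn, and Keras; the window length, number of PCA components, latent size, and K are illustrative assumptions rather than the values used in the DataHub code.

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers

def preprocess(snapshots, n_components=8):
    # snapshots: (N, 20480) array, one row per 1-second vibration snapshot
    window = np.hanning(snapshots.shape[1])                       # Hanning window
    spectra = np.abs(np.fft.rfft(snapshots * window, axis=1))     # Fast Fourier Transform (magnitude)
    scaled = MinMaxScaler().fit_transform(spectra)                # MinMax scaling
    return PCA(n_components=n_components).fit_transform(scaled)   # Principal Component Analysis

def build_lstm_autoencoder(timesteps, n_features, latent_dim=16):
    # LSTM AutoEncoder trained with Adam and mean squared error, as listed above
    inputs = keras.Input(shape=(timesteps, n_features))
    encoded = layers.LSTM(latent_dim)(inputs)
    decoded = layers.RepeatVector(timesteps)(encoded)
    decoded = layers.LSTM(n_features, return_sequences=True)(decoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="mse")
    return model

def threshold(val_errors, k=3.0):
    # Threshold = Max + K x Standard Deviation of the validation reconstruction errors
    return val_errors.max() + k * val_errors.std()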

5. Experiment Code


  • Below is the code for each of the two ways to get data from the database.
  • If all the data can be loaded and trained at once without causing memory errors, then method 1 is the fastest and simplest.
  • If the data is too large, causing memory errors, then the batch loading method proposed in method 2 is the most efficient.

Method 1) Loading the Entire Dataset


  • The code below loads all the data needed for training from the database at once.
  • It works exactly like loading all the CSV files; the only difference is that the data comes from Machbase Neo.
  • Pros
    • Can use the same code that was previously utilizing CSVs (Only the loading process is different).
  • Cons
    • Training is not possible if the dataset is larger than the available memory.
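A hedged sketch of this full-load approach, assuming machbase-neo's HTTP query endpoint on the default port 5654 and pandas; the host, tag name, and column layout are assumptions, not the DataHub's actual code.

import io
import requests
import pandas as pd

# Fetch every record of the training tag in a single query (assumes the result fits in memory).
query = "SELECT time, value FROM bearing WHERE name = 's1-c5' ORDER BY time"
resp = requests.get("http://127.0.0.1:5654/db/query",
                    params={"q": query, "format": "csv", "timeformat": "ns"})
resp.raise_for_status()
df = pd.read_csv(io.StringIO(resp.text))
# From here on, the same code used with CSV files applies: reshape into
# 20,480-point snapshots, preprocess, and train the model.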

Method 2) Loading the Batch Dataset


  • This method loads data from Machbase Neo one batch at a time.
  • The code below fetches sequential time ranges, each corresponding to a single batch.
  • Pros
    • The model can be trained regardless of how large the dataset is.
  • Cons
    • It takes longer to train compared to method 1.
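A hedged sketch of the batch-loading idea, again assuming the HTTP query endpoint; the tag name, time boundaries, and window length are illustrative assumptions (one snapshot of 20,480 points at 20 kHz spans roughly one second).

import io
from datetime import datetime, timedelta
import requests
import pandas as pd

URL = "http://127.0.0.1:5654/db/query"

def iter_batches(tag, start, end, step):
    # Yield one DataFrame per [t, t + step) time window, fetched on demand.
    fmt = "%Y-%m-%d %H:%M:%S"
    t = start
    while t < end:
        q = ("SELECT time, value FROM bearing "
             f"WHERE name = '{tag}' "
             f"AND time >= TO_DATE('{t.strftime(fmt)}', 'YYYY-MM-DD HH24:MI:SS') "
             f"AND time < TO_DATE('{(t + step).strftime(fmt)}', 'YYYY-MM-DD HH24:MI:SS') "
             "ORDER BY time")
        resp = requests.get(URL, params={"q": q, "format": "csv", "timeformat": "ns"})
        resp.raise_for_status()
        yield pd.read_csv(io.StringIO(resp.text))
        t += step

# Example: iterate ten-minute windows of the s1-c5 tag and train on each batch in turn.
# for batch in iter_batches("s1-c5", datetime(2003, 10, 22), datetime(2003, 11, 25), timedelta(minutes=10)):
#     ...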

6. Experimental Results


Method 1) Loading the Entire Dataset Result


Method 2) Loading the Batch Dataset Result


  • The F1 score was 1.0 when loading the entire dataset and 0.98 when loading the batch dataset.





※ Various datasets and tutorial codes can be found in the GitHub repository below.

datahub/dataset/2024 at main · machbase/datahub
All Industrial IoT DataHub with data visualization and AI source - machbase/datahub
