Kaggle S&P 500 Intraday Data

This post describes training a model on S&P 500 stock data stored in Machbase Neo and forecasting trading volume.

Table of Contents

  1. Data Introduction
  2. Data Visualization with Machbase Neo
  3. Table Creation and Data Upload in Machbase Neo
  4. Experimental Methodology
  5. Experiment Code
  6. Experimental Results

1. Data Introduction


  • DataHub Serial Number: 2024-7.
  • Data Name: S&P 500 Intraday Data.
  • Data Collection Methods: Stock data for 502 large companies listed on the New York Stock Exchange (NYSE), NASDAQ, and the Chicago Board Options Exchange (CBOE) was collected at daily intervals.
  • Data Source: Link
  • Raw data size and format: 647MB, CSV.
  • Number of tags: 2510.
    • 5 tag names for each of the 502 stocks.
TAG      DESCRIPTION
Close    The closing price of the stock at the end of the trading day.
High     The highest price recorded by the stock during the trading day.
Low      The lowest price recorded by the stock during the trading day.
Open     The first price of the stock at the start of the trading day.
Volume   The total trading volume of the stock during the trading day.
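
For illustration, the full tag list can be derived from the symbol list. This is a minimal sketch assuming the SYMBOL_field naming convention suggested by the AAPL_volume tag used later in this post:

    # Hypothetical sketch: tag names follow the SYMBOL_field pattern (e.g. AAPL_volume)
    fields = ["close", "high", "low", "open", "volume"]
    symbols = ["AAPL", "MSFT", "GOOG"]  # ... the full list has 502 symbols
    tags = [f"{s}_{f}" for s in symbols for f in fields]  # 502 * 5 = 2510 tags in total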

2. Data Visualization with Machbase Neo


  • Data can be visualized through the Tag Analyzer in Machbase Neo.
  • Select the desired tag names and visualize them in various types of graphs.
  • The viewer below connects to the 2024-7 DataHub in real time; select the desired tag names from the 2510 available tags, visualize them, and preview the data patterns.
DataHub Viewer

3. Table Creation and Data Upload in Machbase Neo


  • In the DataHub directory, use setup.wrk located in the S&P 500 Intraday Dataset folder to create tables and load data, as illustrated in the image below.

1) Table Creation

  • The table is created immediately upon pressing the "Run" button in the menu.
  • If the SP500 table already exists, execute the first line and then the second; if it does not exist, start from the second line (see the sketch below).
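
For reference, the two lines are typically of the following form for a Machbase Neo tag table. This is a minimal sketch; the exact column lengths and options in setup.wrk may differ, so defer to the file itself:

    -- First line: only needed when the SP500 table already exists
    DROP TABLE SP500;
    -- Second line: create the tag table (name, time, value)
    CREATE TAG TABLE SP500 (NAME VARCHAR(80) PRIMARY KEY, TIME DATETIME BASETIME, VALUE DOUBLE SUMMARIZED);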

2) Data Upload


  • The table can be loaded in two different ways.
Method 1) Loading the table using TQL in Machbase Neo (since machbase-neo v8.0.29-rc1)

  • Pros

    • Machbase Neo starts loading as soon as you press the Run button.
  • Cons

    • Slower loading speed compared to the other method.
Method 2) Loading tables using commands

  • Pros

    • Fast table loading speed.
  • Cons

    • The table loading process is cumbersome.
    • Open a command window, change to the machbase-neo path, then enter the command.
  • If you run the script below from the command shell, the data will be loaded into the sp500 table at high speed.
curl http://data.yotahub.com/2024-7/datahub-2024-07-SP500.csv.gz | machbase-neo shell import --input - --compress gzip --header --method append --timeformat ns sp500
  • To use a username and password other than the default (sys/manager), add the --user and --password options as shown below.
curl http://data.yotahub.com/2024-7/datahub-2024-07-SP500.csv.gz | machbase-neo shell import --input - --compress gzip --header --method append --timeformat ns sp500 --user USERNAME --password PASSWORD
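
Once the import finishes, a quick row count confirms the load. This is a sketch assuming the default local shell connection:

    # Should report the total number of rows loaded into the sp500 table
    machbase-neo shell "select count(*) from sp500"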

4. Experimental Methodology


  • Model Objective: S&P500 Volume Forecasting.
  • Tags Used: AAPL_volume.
  • Model Configuration: BILSTM.
  • Learning Method: Self-supervised forecasting (training targets are future values taken from the series itself).
    • Train: Model Training.
    • Test: Model Performance Evaluation Based on S&P500 Volume Forecasting.
  • Model Optimizer: Adam.
  • Model Loss Function: Mean Squared Error.
  • Model Performance Metric: Mean Squared Error & R2 Score.
  • Data Loading Method
    • Loading the Entire Dataset.
    • Loading the Batch Dataset.
  • Data Preprocessing (a sketch follows this list)
    • Time series decomposition.
    • MinMax Scaling.
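
Below is a minimal sketch of the configuration above in Python (Keras). The window size, layer width, and decomposition period are illustrative assumptions, not the post's exact settings:

    import numpy as np
    import tensorflow as tf
    from sklearn.preprocessing import MinMaxScaler
    from statsmodels.tsa.seasonal import seasonal_decompose

    def preprocess(values, period=5):
        # Time series decomposition; period=5 (one trading week) is an assumption
        parts = seasonal_decompose(values, model="additive", period=period)
        deseasonalized = values - parts.seasonal
        # MinMax scaling to [0, 1]
        scaler = MinMaxScaler()
        return scaler.fit_transform(deseasonalized.reshape(-1, 1)), scaler

    def make_windows(series, window=30):
        # Sliding windows: predict the next value from the previous `window` values
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        return X, series[window:]

    def build_bilstm(window=30):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(window, 1)),
            tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")  # Adam optimizer, MSE loss
        return model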

5. Experiment Code


  • Below is the code for each of the two ways to get data from the database.
  • If all the data can be loaded and trained at once without causing memory errors, then method 1 is the fastest and simplest.
  • If the data is too large, causing memory errors, then the batch loading method proposed in method 2 is the most efficient.

Method 1) Loading the Entire Dataset


  • The code below loads all of the data needed for training from the database at once.
  • It is effectively the same as loading a CSV file; the only difference is that the data comes from Machbase Neo.
  • Pros
    • Can use the same code that was previously utilizing CSVs (Only the loading process is different).
  • Cons
    • Unable to train if trainable data size exceeds memory size.
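
A minimal sketch of this approach, assuming machbase-neo's HTTP query endpoint on its default port (5654); verify the URL and tag name against your setup:

    from urllib.parse import quote
    import pandas as pd

    # Assumed default machbase-neo HTTP endpoint
    MACH_URL = "http://127.0.0.1:5654/db/query"
    sql = "SELECT time, value FROM sp500 WHERE name = 'AAPL_volume' ORDER BY time"

    # format=csv returns plain CSV, which pandas reads directly into a DataFrame
    df = pd.read_csv(f"{MACH_URL}?q={quote(sql)}&format=csv")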

Method 2) Loading the Batch Dataset


  • This method loads data from Machbase Neo one batch at a time.
  • The code below fetches consecutive time ranges sequentially, one batch-sized slice per query.
  • Pros
    • It is possible to train the model regardless of the data size, no matter how large it is.
  • Cons
    • It takes longer to train compared to method 1.
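
A sketch of the sequential time-range fetch, reusing the assumed endpoint from Method 1; the 30-day span per query and the TO_DATE formatting are assumptions:

    from urllib.parse import quote
    import pandas as pd

    MACH_URL = "http://127.0.0.1:5654/db/query"  # assumed default endpoint

    def batch_frames(start, end, days=30):
        # Walk the time axis in fixed spans, issuing one query per span
        # (tail handling for a partial final span is omitted for brevity)
        edges = pd.date_range(start, end, freq=f"{days}D")
        for lo, hi in zip(edges[:-1], edges[1:]):
            sql = ("SELECT time, value FROM sp500 WHERE name = 'AAPL_volume'"
                   f" AND time >= TO_DATE('{lo:%Y-%m-%d %H:%M:%S}')"
                   f" AND time < TO_DATE('{hi:%Y-%m-%d %H:%M:%S}') ORDER BY time")
            yield pd.read_csv(f"{MACH_URL}?q={quote(sql)}&format=csv")

Each yielded DataFrame can be preprocessed and fed to the model one batch at a time, so memory usage stays bounded by the span size rather than the full dataset.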

6. Experimental Results


Method 1) Loading the Entire Dataset Result


Method 2) Loading the Batch Dataset Result


  • The R2 score was 0.993 when loading the entire dataset and 0.984 when loading in batches.





※ Various datasets and tutorial codes can be found in the GitHub repository below.

datahub/dataset/2024 at main · machbase/datahub
All Industrial IoT DataHub with data visualization and AI source - machbase/datahub
