Investigation of NYSE High Frequency Financial Data

for Intraday Patterns in Jump Components of

Equity Returns

Peter Van Tassel[1]

Final Report Submitted for Economics 201FS: Research Seminar

and Lab on High Frequency Financial Data Analysis

for Duke Economics Juniors

Duke University

Durham, North Carolina

2 May 2007

Academic Honesty Pledge

1. I will not lie, cheat, or steal in my academic endeavors, nor will I accept the actions of those who do.
2. I will conduct myself responsibly and honorably in all my activities as a Duke student.

3. The assignment is in compliance with the Duke Community Standard as expressed on pp. 5-7 of “Academic Integrity at Duke: A Guide for Teachers and Undergraduates.”

Pledged: Peter Van Tassel

I. Introduction

Financial markets are complex systems in which agents interact to determine the prices of different assets. The goal of our research is to analyze high frequency financial data to improve our knowledge of how financial markets operate. In particular, we consider the literature on jump components in asset prices as a starting point for this report.

Our motivation is both practical and intellectual. From a practical standpoint, there is something awry in the traditional methods for modeling stock price evolution. The well documented smiles in implied volatility from the Black-Scholes option pricing formula are one of many indications that the market is valuing volatility in a different manner than the rudimentary academic models. Papers in the jump-component literature, including Andersen, Bollerslev, and Diebold (2004), Eraker (2003), and Huang and Tauchen (2005), all have practical implications for modeling volatility, including important results for derivative valuation, risk management, and asset allocation. These concerns are of utmost importance for a trader or portfolio manager.

From an intellectual standpoint, the recently available high frequency financial data provide new opportunities for frontier research in econometrics. In some instances this will allow us to revisit previous literature in finance, providing a better picture as to which ideas are robust to the tick by tick data at the New York Stock Exchange. In other circumstances the data will allow for new types of investigation that were never previously considered. Undoubtedly, the ability to zoom in on financial markets will add a new level of complexity to research and pose a variety of intellectual challenges before we can decipher the goings-on of the real world.

In this lab report, high frequency financial data are used to investigate intraday patterns in jump components of equity return variance. The purpose is to improve our understanding of the evolution of heavily traded stocks on the NYSE. Learning more about what drives intraday patterns in jump components has practical implications for trading and portfolio management, and it raises several interesting questions at the end of the report. Our focus will be on the S&P 500. Figures 1a & 1b present the price levels and returns over our sample period. The rest of the report proceeds as follows: Section II describes the data, Section III describes the statistics considered, Section IV presents some preliminary work relating SEC Filings to idiosyncratic jumps, including an interesting example, Section V discusses patterns in flagged jump arrival, Section VI continues the investigation of intraday patterns in jump components of equity return variance, and Section VII presents our ideas for future work. All tables, figures, and references can be found at the back of the report.

II. Data

High frequency financial data from the NYSE are analyzed for the purpose of this report.

We will focus on the S&P 500 but will also consider Pepsi Co., the Coca-Cola Company, and Bristol Myers Squibb Co. The specific stocks that are considered were assigned as a starting point for research in Econ 201FS. Moving forward, the analysis will be extended to an additional 37 stocks and an aggregate portfolio of all 40 stocks included in Law (2007). In this report the SPY data set will be used as a proxy for the market portfolio. The data sets are obtained from the Trade and Quote Database (TAQ), which is available via Wharton Research Data Services (WRDS). A more comprehensive discussion of the data and how the data sets were compiled can be found in Law (2007).[2]

Our selection of stocks is motivated by trading volume. In order for the statistics used in this report to behave properly, it is necessary that the stocks considered be heavily traded. The stocks assigned in class and the stocks included in Law (2007) are 40 of the most actively traded stocks on the NYSE as defined by their 10-day trading volume. For each stock there is a data set that includes all trades from January 1, 2001 through December 31, 2005. The time period was selected for two reasons. During the late 1990s trading frequency increased significantly, and by 2001 the volume was high enough to justify the use of the statistics. Additionally, by 2001 almost all of the stocks had converted from fractional to decimal trading, which helped to reduce some of the market-microstructure noise.

To convert the TAQ data into a 30 second price series, an adapted version of the previous tick method from Dacorogna, Gencay, Muller, Olsen, and Pictet (2001) is applied. The method excludes the first five minutes of the trading day in order to ensure uniformity of trading and information arrival. The resulting price series includes 771 observations from 9:35am to 4:00pm across 1241 days. The structure of the data set is advantageous because it allows us to easily implement the statistics across different sampling intervals.
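To make the construction concrete, the sketch below implements the previous-tick sampling step in Python, assuming each day's trades arrive as a pandas Series of prices indexed by unique timestamps; the helper name and defaults are illustrative rather than the exact code used to build our data set.

```python
import pandas as pd

def previous_tick_series(trades: pd.Series, freq: str = "30s") -> pd.Series:
    """Previous-tick sampling: each grid point takes the most recent
    trade price at or before that time (Dacorogna et al., 2001).
    `trades` holds one day of prices indexed by unique timestamps."""
    day = trades.index[0].normalize()
    # 771 grid points from 9:35am to 4:00pm; the first five minutes of
    # the trading day are excluded, as described in the text.
    grid = pd.date_range(day + pd.Timedelta(hours=9, minutes=35),
                         day + pd.Timedelta(hours=16), freq=freq)
    # Align the trades to the grid, carrying the last observed price forward.
    return trades.reindex(trades.index.union(grid)).ffill().loc[grid]
```

Applying the function day by day and stacking the results reproduces the 771-observation-per-day structure described above.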

In this report the sampling frequency will be 17.5 minutes unless otherwise stated. Our primary concern in selecting a sampling frequency is the effect of market microstructure noise. The literature on market microstructure noise (MMN) dates back to Black (1976) and discusses a variety of sources that bias prices when sampling at a high frequency, including trading mechanisms and discrete prices. One approach to account for this problem is proposed in Andersen, Bollerslev, Diebold, and Labys (2000). They suggest the creation of signature plots of the realized variance across different sampling intervals to allow for the visual selection of a sampling interval where the MMN seems to have stabilized. The selection of 17.5 minutes is made because it seems to be the highest sampling frequency that is relatively unaffected by the MMN in the signature plots included in Law (2007). The result for our data set is that each stock has 22 returns per day across 1241 days.
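A signature plot is straightforward to construct. The sketch below, which assumes each day's prices sit on the 30-second grid built above, averages the daily realized variance at several candidate sampling intervals (a step of 35 grid points corresponds to 17.5 minutes, 110 to 55 minutes); the function names are hypothetical.

```python
import numpy as np
import pandas as pd

def realized_variance(prices: pd.Series, step: int) -> float:
    """Daily realized variance from a 771-point price grid, using log
    returns sampled every `step` 30-second points."""
    r = np.log(prices.iloc[::step]).diff().dropna()
    return float((r ** 2).sum())

def signature_plot(daily_prices, steps=(1, 5, 10, 35, 70, 110)):
    """Average daily realized variance at each candidate interval;
    `daily_prices` is an iterable of one-day price series."""
    return {s: np.mean([realized_variance(p, s) for p in daily_prices])
            for s in steps}
```

Plotting the resulting averages against the sampling interval and choosing the shortest interval at which the curve flattens is the visual selection step described above.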

Figures 2a-2b and Tables 2a-2b are also related to the discussion of sampling interval. The figures portray the number of jumps flagged by the LM statistic at a .999 significance level across different sampling intervals and window sizes. The tables include the number of flagged jumps using the recommended window sizes for instantaneous volatility as defined by Lee and Mykland. Visually the statistic seems to stabilize as the window size increases for each sampling interval. However, Tables 2a & 2b suggest that the number of flagged jumps does not stabilize across different sampling intervals. At 17.5 minutes there are 306 flagged jumps for the SPY data whereas at 55 minutes there are 120 flagged jumps. One explanation might be Type I errors. At different sampling intervals there are significantly different numbers of statistics calculated. For example, a sampling interval of 17.5 minutes yields 27,152 statistics whereas a sampling interval of 55 minutes only yields 8,609 statistics over the same data set. At a .999 significance level we should therefore expect roughly 0.001 × 27,152 ≈ 27 false rejections in the former case but only 0.001 × 8,609 ≈ 9 in the latter, so the null hypothesis of no jump will be incorrectly rejected more often at the higher sampling frequency. To account for this discrepancy, Table 2c includes the number of flagged jumps at each sampling interval with varying significance levels.[3] Although it does not convincingly suggest that the LM statistic has stabilized, it provides more reassuring evidence than Tables 2a & 2b that 17.5 minutes is an appropriate sampling interval.

A final consideration needs to be made for the errors in the data set. It is important to realize that the TAQ database is a human construction that relies on manual entry of the data. As such, it is inevitably subject to human error and needs to be highly scrutinized. Errors in the data set are removed in two ways. First, a simple algorithm sets suspect prices equal to zero when thirty second returns are at least 1.5% in opposite directions. Suspect prices are removed from the price series because it seems illogical that an efficient market would induce a stock to move 1.5% in opposite directions in the span of one trading minute. A likely cause of this phenomenon is data entry error. However, we do not presume that errors in data entry are the only possibility. One curious example can be found in Figure 2c. The highlighted trade seems to be out of sync with the rest of the price series. However, further inspection reveals that the volume on the trade was actually 37 times greater than the average volume per transaction over the 5 year sample. Isn’t it possible that a large investor, perhaps a hedge fund, needed to unload a large quantity of shares and was willing to accept a slightly lower price than the rest of the market? Surely some behavior that seems irrational is in reality a well functioning and efficient market. With this concern taken into consideration we proceed with the second method for highlighting errors in the price series. Often a human eye is required for removing outliers that are undetected by the algorithm. In particular, manual inspection helps to remove returns that seem to have unreasonably high or low magnitudes. One example is discussed in Figures 2d & 2e.
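A minimal sketch of the first filter, assuming one day of prices in a pandas Series; the 1.5% threshold matches the rule described above, and for clarity the function returns a mask of suspect observations rather than zeroing them in place.

```python
import numpy as np
import pandas as pd

def flag_suspect_prices(prices: pd.Series, threshold: float = 0.015) -> pd.Series:
    """Flag prices where consecutive thirty second returns are at least
    `threshold` in opposite directions, a pattern more suggestive of a
    data entry error than of genuine trading."""
    r = np.log(prices).diff()
    # A suspect price produces a large return into it and a large
    # return of the opposite sign out of it.
    reversal = ((r.abs() >= threshold) &
                (r.shift(-1).abs() >= threshold) &
                (np.sign(r) != np.sign(r.shift(-1))))
    return reversal.fillna(False)
```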

Ultimately, we arrive at our data for this report. It includes price series for Pepsi Co. (PEP), the Coca-Cola Company (KO), Bristol Myers Squibb Co. (BMY), and the S&P 500 (SPY). Each price series begins on January 1, 2001 and ends on December 31, 2005, including 771 observations per day across 1241 trading days.

III. Modeling Jump Components in Equity Return Variance

Two non-parametric test statistics will be considered in the analysis that follows. The first statistic is recommended by Huang and Tauchen (2005) in response to their extensive Monte Carlo analysis. It utilizes the realized variance discussed in Andersen, Bollerslev, and Diebold (2002) and the bi-power variation developed in Barndorff-Nielsen and Shephard (2004) as a method for analyzing the contribution of jumps to total price variance. Hereafter referred to as the BNS or z-statistic, it tests the null hypothesis that no jumps occurred in an entire trading day. The second statistic is recommended by Lee and Mykland (2006) and is hereafter referred to as the LM statistic. Their statistic is relevant because it presents certain practical advantages for the analysis of intraday patterns. In particular, the Lee and Mykland statistic allows for the flagging of specific returns as statistically significant jumps. A more detailed explanation of the differences between the two statistics will follow in the subsequent pages. The rationale for including both statistics is simple. While the BNS statistic has been published and rigorously tested, the statistic proposed by Lee and Mykland is still under review. Comparing the two will shed light on how the Lee and Mykland statistic performs relative to the BNS statistic, and it will help to support our findings with different methods for analyzing the high frequency data.

The model behind the BNS statistic is a scalar log-price continuous-time evolution,

dp(t) = \mu(t)\,dt + \sigma(t)\,dw(t) + dL_J(t)    (1)

The first and second terms in the model date back to the assumptions made in the Black-Scholes option pricing formula. To be concrete, \mu(t)\,dt is a drift term and \sigma(t)\,dw(t) is the instantaneous volatility, with w(t) a standardized Brownian motion. The notation for the additional term L_J(t) was first used in Basawa and Brockwell (1982). It refers to a pure jump Lévy process with increments L_J(t) - L_J(s) = \sum_{s \le \tau \le t} \kappa(\tau), where \kappa(\tau) is the jump size. Huang and Tauchen consider a specific class of the Lévy process called the Compound Poisson Process (CPP), where jump intensity is constant and jump size is independently identically distributed.
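For intuition, the following sketch simulates one trading day of log prices from equation (1) under the CPP special case with constant drift and volatility; every parameter value here is illustrative only.

```python
import numpy as np

def simulate_jump_diffusion(M=771, mu=0.0, sigma=0.01, lam=1.0,
                            jump_sd=0.005, seed=0):
    """One simulated day of log prices from equation (1) in the CPP
    special case: constant drift mu and volatility sigma (per day),
    jump intensity lam (expected jumps per day), and i.i.d. normal
    jump sizes. Each of the M steps is one 30-second interval."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / M
    diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(M)
    # A Bernoulli(lam * dt) draw approximates the Poisson arrival of a
    # jump within each short interval.
    jumps = rng.binomial(1, lam * dt, size=M) * rng.normal(0.0, jump_sd, M)
    return np.cumsum(diffusion + jumps)
```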

The realized variance and bi-power variation measures for price variation in high frequency financial data are presented below. As developed in Barndorff-Nielsen and Shephard and defined in Huang and Tauchen,

RV_t = \sum_{j=1}^{M} r_{t,j}^2    (2)

BV_t = \mu_1^{-2} \left( \frac{M}{M-1} \right) \sum_{j=2}^{M} |r_{t,j}|\,|r_{t,j-1}|    (3)

\mathrm{plim}_{M \to \infty}\, RV_t = \int_{t-1}^{t} \sigma^2(s)\,ds + \sum_{t-1 < \tau \le t} \kappa^2(\tau)    (5)

\mathrm{plim}_{M \to \infty}\, BV_t = \int_{t-1}^{t} \sigma^2(s)\,ds    (6)

where r_{t,j} = p(t-1+j/M) - p(t-1+(j-1)/M) is the j-th within-day log return on day t and \mu_1 = \sqrt{2/\pi}.[4]

Here M is the within-day sampling frequency. Combining the results of Andersen, Bollerslev, and Diebold (2002) with Barndorff-Nielsen and Shephard (2004), the difference between realized variance and bi-power variation provides a method to investigate the jump component in equity return variance,

\mathrm{plim}_{M \to \infty}\, (RV_t - BV_t) = \sum_{t-1 < \tau \le t} \kappa^2(\tau)    (7)

Multiple statistics discussed in Huang and Tauchen (2005) use these results as a means to identify statistically significant jump days. Their recommended statistic will be used throughout this report. It is defined as,

z_t = \frac{RJ_t}{\sqrt{\left[ \left( \frac{\pi}{2} \right)^2 + \pi - 5 \right] \frac{1}{M} \max\!\left( 1, \frac{TP_t}{BV_t^2} \right)}}, \qquad RJ_t = \frac{RV_t - BV_t}{RV_t}    (8)

where TP_t = M \mu_{4/3}^{-3} \left( \frac{M}{M-2} \right) \sum_{j=3}^{M} |r_{t,j}|^{4/3} |r_{t,j-1}|^{4/3} |r_{t,j-2}|^{4/3} is the realized tripower quarticity with \mu_{4/3} = 2^{2/3}\,\Gamma(7/6)/\Gamma(1/2). Under the null hypothesis of no jumps, z_t is asymptotically standard normal.
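The pieces in equations (2)-(8) combine as in the following sketch, which computes the daily z-statistic from one day's vector of intraday returns; this reflects our reading of the Huang and Tauchen (2005) formulas rather than their own code.

```python
import numpy as np
from math import gamma

def bns_z_statistic(r):
    """Daily BNS z-statistic of equation (8) from a day's M intraday
    returns `r`, in the ratio-adjusted max form."""
    r = np.asarray(r, dtype=float)
    M = len(r)
    a = np.abs(r)
    mu1 = np.sqrt(2.0 / np.pi)
    mu43 = 2 ** (2.0 / 3.0) * gamma(7.0 / 6.0) / gamma(0.5)
    rv = np.sum(r ** 2)                                        # eq. (2)
    bv = mu1 ** -2 * (M / (M - 1)) * np.sum(a[1:] * a[:-1])    # eq. (3)
    tp = (M * mu43 ** -3 * (M / (M - 2)) *                     # tripower
          np.sum(a[2:] ** (4/3) * a[1:-1] ** (4/3) * a[:-2] ** (4/3)))
    rj = (rv - bv) / rv                                        # relative jump
    denom = np.sqrt(((np.pi / 2) ** 2 + np.pi - 5) / M *
                    max(1.0, tp / bv ** 2))
    return rj / denom
```

A day is then flagged as a jump day when z_t exceeds the appropriate standard normal quantile, roughly 3.09 at the .999 level.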

Figures 3a & 3b plot the recommended BNS statistic applied to the SPY and PEP data sets at a 17.5 minute sampling interval. The number of days where the null hypothesis of no jumps is rejected at a statistically significant level is 37 for SPY and 65 for PEP.

The model considered by Lee and Mykland is quite similar. They define the underlying stock price evolution as,

dp(t) = \mu(t)\,dt + \sigma(t)\,dw(t) + Y(t)\,dJ(t)    (9)

The only difference from the model described before is in the counting process. Here dJ(t) is a non-homogeneous Poisson-type jump process and Y(t) is the jump size. The model does not assume constant jump intensity or independently identically distributed jump sizes, as the CPP underlying the BNS statistic does. The advantage of having a more general counting process is that it allows scheduled events like earnings announcements to affect jump intensity. The assumption made by Lee and Mykland is that for any \epsilon > 0,

\sup_i \sup_{t_i \le u \le t_{i+1}} |\mu(u) - \mu(t_i)| = O_p\!\left(\Delta t^{\,1/2-\epsilon}\right), \qquad \sup_i \sup_{t_i \le u \le t_{i+1}} |\sigma(u) - \sigma(t_i)| = O_p\!\left(\Delta t^{\,1/2-\epsilon}\right).

They later explain,

we use O_p notation throughout this paper to mean that, for random vectors \{X_n\} and non-negative random variables \{d_n\}, X_n = O_p(d_n), if for each \epsilon > 0, there exists a finite constant M_\epsilon such that P(|X_n| \ge M_\epsilon d_n) < \epsilon eventually. One can interpret Assumption 1 as the drift and diffusion coefficients not changing dramatically over a short time interval…This assumption also satisfies the stochastic volatility plus finite activity jump semi-martingale class in Barndorff-Nielsen and Shephard (2004).[5]

Aside from the subtle difference in stock price evolution, Lee and Mykland go on to make definitions for the realized variation and bi-power variation that come directly from Barndorff-Nielsen and Shephard (2004). They use the bi-power variation in their statistic as a means to estimate the instantaneous volatility. The term π/2 is multiplied by the estimate of instantaneous volatility to studentize their statistic, defined as,

\mathcal{L}(i) = \frac{r_{t_i}}{\hat{\sigma}(t_i)}, \qquad \hat{\sigma}(t_i)^2 = \frac{1}{K-2} \sum_{j=i-K+2}^{i-1} |r_{t_j}|\,|r_{t_{j-1}}|    (10)

The window size K determines the degree to which the instantaneous volatility is backward looking. Lee and Mykland recommend window sizes of 7, 16, 78, 110, 156, and 270 for sampling intervals of 1 week, 1 day, 1 hour, 30 minutes, 15 minutes, and 5 minutes, respectively. In this report a sampling interval of 17.5 minutes will be used unless otherwise stated. The acceptable values for K as defined in Lee and Mykland range from 75 to 5544. Our choice of window size will be K = 100. This decision is motivated by Figures 5a-5h. In keeping with recommendations proposed by Lee and Mykland the window size is chosen as a small value of K where the statistic has stabilized.
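In code, the statistic and a rejection rule look roughly as follows. The sketch assumes a single concatenated vector of 17.5-minute returns with K = 100, and the threshold constants follow our reading of the limiting Gumbel distribution in Lee and Mykland (2006).

```python
import numpy as np

def lm_statistics(r, K=100):
    """Lee-Mykland statistic of equation (10) for each return: the
    return studentized by a backward-looking bi-power estimate of
    instantaneous volatility over a window of K returns."""
    r = np.asarray(r, dtype=float)
    out = np.full(len(r), np.nan)        # the first K entries lack a window
    for i in range(K, len(r)):
        a = np.abs(r[i - K + 1:i])       # the K-1 returns preceding r[i]
        sigma2 = np.sum(a[1:] * a[:-1]) / (K - 2)
        out[i] = r[i] / np.sqrt(sigma2)
    return out

def flag_jumps(stats, alpha=0.001):
    """Flag statistics in the rejection region implied by the limiting
    Gumbel distribution of the maximum of |L(i)| under the null."""
    n = int(np.sum(~np.isnan(stats)))
    c = np.sqrt(2.0 / np.pi)
    Cn = (np.sqrt(2 * np.log(n)) / c -
          (np.log(np.pi) + np.log(np.log(n))) /
          (2 * c * np.sqrt(2 * np.log(n))))
    Sn = 1.0 / (c * np.sqrt(2 * np.log(n)))
    beta = -np.log(-np.log(1 - alpha))   # Gumbel quantile at level 1 - alpha
    return (np.abs(stats) - Cn) / Sn > beta
```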

IV. SEC Filings & Idiosyncratic Jumps

It is well documented in the literature that idiosyncratic jumps are related to firm-specific events. To investigate this claim made in Law (2007) and Lee and Mykland (2006), we analyze the relation between jumps flagged by the BNS statistic and the SEC Filings for Pepsi Co. A Chi-Square Test of Independence is performed to test for dependence between the filings and the jumps. The results are included below.

The first time the test was performed, the BNS statistics were calculated at a 5 minute instead of the usual 17.5 minute sampling interval. Table 4a denotes the matches between the flagged jump days and the SEC Filings. A match is defined to be an SEC filing the day before or the day of a flagged jump. The rationale for this definition is twofold. A match of a flagged jump on the day of the SEC Filing is the trivial definition. We also consider the possibility that the information in the filing may precede the filing in the form of an announcement or information being leaked into the market, constituting a possible violation of the strong form of the efficient market hypothesis. Table 4b denotes the matches between the flagged jump days and the SEC Filings when calculated at a 17.5 minute sampling interval. The only common match between the two tables is August 1st, 2001, which provides an interesting example discussed in Figure 4a.

The conclusion of our preliminary work is that the null hypothesis of independence between SEC Filings and flagged jumps is rejected at a .999 level of statistical significance. Table 4c includes the values for the Chi-Square Test of Independence. The test supports the notion that jumps in specific stocks are related to idiosyncratic concerns. Further, we find that the types of SEC filings that matched with flagged jumps at the 5 minute sampling interval are never quarterly or annual filings. Rather, unexpected filings like 8Ks and 13s are matched with flagged jumps. As defined by the SEC, these filings are used to announce major events that shareholders should know about, often relating to mergers and acquisitions, changes in the ownership of a company, or forecasts of future earnings.
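The test itself is standard. Below is a minimal sketch, assuming two boolean arrays over the 1241 sample days, one marking flagged jump days and one marking days matched to an SEC filing under the rule above; the helper name is hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def independence_test(jump_days, filing_days):
    """2x2 Chi-Square Test of Independence between flagged jump days
    and SEC filing matches over the sample days."""
    table = np.array(
        [[np.sum(jump_days & filing_days),
          np.sum(jump_days & ~filing_days)],
         [np.sum(~jump_days & filing_days),
          np.sum(~jump_days & ~filing_days)]])
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p
```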