Abstract

This thesis aims at the following objectives:

To examine long-term cointegration interlinkages between monetary policy and the S&P 500, and the magnitude of these relationships.

To investigate whether there is any causality between the above variables, and the magnitude of the relationship.

To exploit recent financial ….

To summarize the findings and compare them with the existing literature.

To outline the implications and make recommendations for financial practitioners and other relevant participants.

Introduction

Monetary policy, as distinct from fiscal policy, refers to the measures taken by a central government or central bank to influence macroeconomic variables, in particular controlling the money supply, managing inflation and shifting interest rates. In the United States, monetary policy has long been formulated and enforced by the Federal Reserve System (FRS). Monetary policy is the main macro-policy tool with which the U.S. regulates and controls the domestic economy and social development. By statute, the goal of U.S. monetary policy is to achieve full employment and maintain price stability, creating a relatively stable financial environment. With the maturing of the market economy, the Fed's monetary policy has played an increasingly important role in macroeconomic regulation and control. Meanwhile, the S&P 500, as one of the major indices in America, is very sensitive to market movements and is broadly considered the leading barometer of the U.S. economy. It is therefore crucial for investors to understand the relationship between policy actions and equity market prices.

The Fed's conduct of monetary policy has evolved over decades, a cumulative process that has led it to adopt the Federal Funds rate as its main policy tool. If the interest rate is kept low, what are the consequences for the inflation rate, money supply and unemployment rate? Before attempting to understand the effects of monetary policy on the stock market, the transmission of policy actions therefore needs to be established.

This paper attempts to study the impact of the Fed's monetary policy on stock prices, more specifically, the impact of changes in the Federal Funds rate, inflation rate, unemployment rate and money supply on the S&P 500 index. Monthly time-series data for the U.S. over the period 2002 to 2018 has been employed. For more information on these variables, see the data section.

The objectives of this study are:

To examine long-term cointegration interlinkages between monetary policy and the S&P 500 index, and the magnitude of these relationships.

To investigate whether there is any causality between the above variables, and the direction of the relationship.

To summarize the findings and compare them with the existing literature.

To outline the implications and make recommendations for financial practitioners and other relevant participants.

Chapter Two consists of a comprehensive review of the existing literature on this topic, and Chapter Three explains the methodology used in this study.

Chapter Four analyzes and interprets the results, while Chapter Five presents the conclusions and recommendations.

Literature review

Introduction

This chapter provides a comprehensive review of the existing academic literature on the inter-linkages between changes in monetary policy and shocks in the U.S. stock market.

The review begins by briefly examining how the Fed's actions, in terms of the Taylor Rule, affect the variation of interest rates. It then compares the classical view with the Keynesian view of the interest rate. After that, the review moves on to models more directly relevant to this study, such as the Phillips Curve and the Cagan Model.

The review also takes into account the methodologies employed in previous studies, together with their findings. Monetary aggregates are included as well.

Taylor Rule: background

Since the 1980s, the Federal Reserve has broadly accepted the monetarist "single rule", under which the money supply is regarded as the major tool of macroeconomic adjustment and control. In the 1990s, one of the significant events in U.S. macro-control territory was the adoption of the balanced budget framework. Under the new fiscal framework, the government could no longer stimulate the economy through traditional fiscal policies such as expanding expenditure or reducing taxes, which diminished the role of fiscal policy in macro-control to some extent. Against this background, monetary policy became the major tool for the authorities to achieve their economic regulation targets. Facing this new era, the Fed decided to swap its initial monetary policy rules, which focused on manipulating the money supply to regulate economic operations, for rules primarily focused on adjusting real interest rates as the main instrument of macroeconomic regulation. This is known in the U.S. financial community as the "Taylor Rule".

Taylor Rule: basic principle

The Taylor rule, also known as the interest rate rule, states how the central bank's short-term interest rate instrument is adjusted to economic conditions. The idea of Taylor's rule stems from the fact that interest rates are closely correlated with inflation and can theoretically be linked to the Fisher Effect, which gives the following relationship between the nominal interest rate and expected inflation: i = r + β·π^e (1)

Building on the original interest rate rule above, Taylor simplifies the lag response and obtains the following linear equation:

i_t = π_t + g·y_t + h·(π_t − π*) + r_f (2)

where y_t is real GDP, measured as its percentage deviation from potential GDP; i_t is the short-term nominal interest rate, in percent; and π_t is the inflation rate, in percent. The parameters π*, r_f, g and h are all positive. In this way, the interest rate reacts to the deviation of inflation from its target value π* and to the deviation of real GDP from potential GDP. When the inflation rate rises, the nominal interest rate rises faster than the inflation rate. When real GDP rises relative to potential GDP, the interest rate also rises. In this relationship, the intercept r_f is the real interest rate implied in the central bank's reaction equation. The central bank acts to influence the nominal interest rate by affecting the money supply through open market operations. Assume that the long-term average of the real GDP deviation y_t is zero, and that the long-term real interest rate is r*, so that in the long run i_t − π_t = r*. In equation (2), the coefficient g of the same economy varies across periods; under different monetary systems the value of (1 + h) also differs, although it is positive most of the time.
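As a numerical sketch, equation (2) can be evaluated directly. The snippet below assumes the coefficients g = h = 0.5 and the targets π* = 2% and r_f = 2% (Taylor's original 1993 calibration, used here purely for illustration, not values estimated in this thesis):

```python
def taylor_rate(inflation, output_gap, pi_star=2.0, r_f=2.0, g=0.5, h=0.5):
    """Nominal rate prescribed by equation (2):
    i_t = pi_t + g*y_t + h*(pi_t - pi_star) + r_f."""
    return inflation + g * output_gap + h * (inflation - pi_star) + r_f

# Inflation on target and a zero output gap give the neutral rate of 4%.
print(taylor_rate(2.0, 0.0))  # 4.0
# A 1-point rise in inflation raises the prescribed rate by 1.5 points,
# so the real rate rises as well -- the "Taylor principle".
print(taylor_rate(3.0, 0.0))  # 5.5
```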

Thoughts on the Taylor Rule

Taylor's Rule suggests the Federal Reserve should raise rates once inflation exceeds its target or GDP growth is abnormally high. By contrast, the Fed should reduce rates once inflation is less than the target or GDP growth is abnormally low. The rule is neutral when inflation is on target and GDP growth is on its normal track. The short-term objective of the model is to stabilize the economy, while in the long run it aims to stabilize inflation. The Taylor rule not only suits economies in a boom; it is also a good gauge when economies suffer a recession. The EU and Japan, for example, held their interest rates at a low level for too long. This leads to asset bubbles, so interest rates eventually have to shift up to rebalance inflation and output. A further concern with asset bubbles is that the money supply grows well in excess of the actual demand that would balance an economy under inflation and output imbalances. Consider, for instance, the housing crisis of 2007-2008. It looks like a debt crisis, but another cause is that interest rates were kept low for a long time in the aftermath of the dot-com bubble around 2000, so that this policy at least partially led to the housing market crash in 2008. Had the central bank followed the Taylor rule and raised interest rates significantly, people would have been discouraged from leveraging up to buy houses, and the bubble might ultimately have been smaller. Taylor argued the crisis would have been significantly milder if the central bank had followed rules-based monetary policy.

Interest rates in a Large Open Economy (LOE) and Net Capital Outflow (NCO)

How does the Federal Reserve control its interest rates? As a Large Open Economy (LOE), the U.S., through the Federal Reserve, is considered the major interest rate setter in the world, rather than an interest rate taker like a Small Open Economy (SOE). When the Federal Reserve decreases its interest rate, capital flows abroad and the Net Capital Outflow (NCO) of dollars in the foreign exchange market increases. This means the dollar depreciates, leading to growth in net exports and a gain in competitiveness. In the classical view, for an LOE like the U.S., the decrease in interest rates is offset by an increase in investment, and the currency depreciation encourages net exports. Since the Federal Reserve is the interest rate maker, the economy's exchange rate dominates world foreign exchange markets. Recently, for example, as the exchange rates of Argentina, Russia and Turkey collapsed, all of those crashing markets had to raise their interest rates significantly and let their exchange rates depreciate to counter the shocks, aiming to stabilize domestic stock markets and economies through very tight monetary policy.

In the Keynesian view, on the other hand, the interest rate is determined jointly in the money market, the goods market and the balance of payments, as shown through IS/LM/BP analysis. The Keynesian view is that the interest rate is very sensitive to changes because of the speculative demand for money. Assuming an economy with freely floating exchange rates, the way to eliminate a deflationary gap through monetary policy is to shift output to the full-employment target level by decreasing the interest rate. The depreciation of the currency is then caused by the balance of payments following the initial increase in the money supply. Equilibrium is maintained after the money supply adjusts, and the new exchange rate is calibrated to restore equilibrium at a lower interest rate. But Friedman (1956) points out that the initial expansion of the money supply brings inflationary pressure, the aftermath of which we have seen in the wake of the Global Financial Crisis and the European Debt Crisis.

The Phillips Curve

The Phillips Curve is a model that describes the relationship between inflation and employment; its original form has been largely discredited by rational expectations theory. Today the original model is not widely used by economists because of its over-simplification. Friedman (1976) argues, first, that real wages rather than nominal wages are what rational workers consider when determining their labor supply; a Phillips curve relating unemployment rates to nominal wage rates is therefore misleading. Friedman (1976) further argues that the trade-off between inflation and unemployment derived from the Phillips Curve exists only in the short term. Since the cost of information about the general price level is relatively high in the short run, workers suffer a temporary "currency illusion". In the long run, both workers and employers adjust their expectations so that expected inflation converges to actual inflation. At that point the negatively sloped Phillips Curve no longer exists and is replaced by a vertical one. This means unemployment is completely immune to inflation, at what Friedman (1976) calls the "natural unemployment rate". Friedman (1976) also asserts that raising inflation to achieve employment targets, rather like Keynesian demand policy, can only be temporarily effective: in the long run, the employment target cannot be met without increasing the inflation rate further. However, modern versions of the Phillips Curve are widely applied, because those modified versions take inflationary expectations into consideration and distinguish the short-term from the long-term effects on unemployment. Phelps and Friedman (1968) argue that when the curve shifts up as inflationary expectations rise, the short-run Phillips curve evolves into the expectations-augmented Phillips curve.
In the long run, monetary policy has no impact on unemployment, so the model reverts to the natural rate, known as the "NAIRU" or long-run Phillips Curve. Blanchard (2016) explains the expectations-augmented Phillips Curve in detail, suggesting that the long-run "neutrality" of monetary policy accommodates short-run fluctuations, and that a temporary decrease in unemployment obtained through a permanent increase in inflation is effective. Meanwhile, Blanchard and Galí (2007) show that the New Keynesian Phillips Curve, which incorporates sticky prices, implies that inflation responds positively to demand but negatively to unemployment. The New Keynesian Phillips Curve thus agrees with the expectations-augmented Phillips Curve that an increase in inflation can decrease unemployment temporarily; the difference is that the New Keynesian version holds that this cannot affect unemployment permanently.

What drives inflation

The Consumer Price Index, producer prices and employment indices are commonly considered the drivers of prices and inflation. Taylor recommends that central banks take the entire consumer price index into account rather than looking only at core CPI, which excludes food and energy prices. In this way, policymakers gain a better understanding of an economy's prices and inflation. Taylor also recommends that the real interest rate should remain 1.5 times the inflation rate, on the assumption that an equilibrium rate should weigh the real inflation rate against the expected inflation rate. Taylor describes this equilibrium as a 2% steady state, corresponding to a rate of about 2%. The other approach works through the coefficients on the deviation of real GDP from trend GDP and on the inflation rate. Both methods serve the same forecasting purpose.

The Cagan Model

The Cagan Model analyzes seigniorage income and inflation. The government obtains a certain amount of revenue from the growing supply of money, known as the "income of coinage" or seigniorage. Coinage income is one of the sources of government revenue. In low-inflation industrialized countries, government coinage accounts for about 0.5% of GDP; in high-inflation countries, the government collects more income from coinage. Cagan's model consists of two components: the money demand function and the expectations function for the inflation rate. The money demand function is: m = M/P = c·exp(−a·π)

where m is real money demand, M is nominal money demand, P is the price level, c and a are constants, and π is the inflation rate. Two important assumptions are implied in the formula: one is that output is fixed; the other is that the real interest rate is constant. Both are implicit in the constant c. When the money market is in balance, the actual money stock equals money demand. This formula has been verified with statistical data and explains the demand for money under inflation well. The expectation of the inflation rate is adaptive, so expected inflation π^e adjusts according to the following formula:

dπ^e/dt = b·(π − π^e)

If the actual inflation rate exceeds the expected rate, the expected rate rises. The constant b is the speed at which individuals correct their expectations. When the growth rate R of the money supply remains constant, the stability of the economic equilibrium depends on the product of the parameters a and b, which respectively reflect the elasticity of money demand and the speed of adjustment of expected inflation. When their product is less than 1, the equilibrium is stable; when it is greater than 1, the equilibrium is unstable, meaning the economy may experience accelerating inflation or accelerating deflation. If individuals adjust their expectations faster, higher inflation leads currency holders to correct their inflation expectations quickly, accelerating inflation further. If the demand for money is elastic, a rise in inflation leads to an upward adjustment of expected inflation, which again accelerates inflation. The model therefore suggests that, under adaptive expectations, a rise in inflation might come from the endogenous instability of the economic system rather than from money growth.

In a stable, balanced economy, the government obtains coinage income S of: S = (dM/dt)/P = R·m

In the steady state, the inflation rate equals the money growth rate, namely R = π^e = π, so the equation becomes: S = c·R·exp(−a·R)

When R = 1/a, the coinage gain is greatest in the steady state. The inflation tax is related to coinage income. The inflation tax is the levy imposed on currency holders, in the form of inflation, when the government issues excessive money. It is related to coinage income but not equal to it: the actual tax imposed on money holders is the loss of their real currency balance, π·M/P, while the coinage income is R·M/P. The two are equal only when the inflation rate and the money growth rate are equal. For this reason, both the inflation rate and M2 are used in the tests in this paper.
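The claim that seigniorage peaks at R = 1/a can be checked numerically. In this sketch the constants c and a are arbitrary illustrative values, not estimates from any data:

```python
import numpy as np

def seigniorage(R, c=1.0, a=0.5):
    """Steady-state coinage income S = c * R * exp(-a * R)."""
    return c * R * np.exp(-a * R)

# Scan money growth rates and locate the revenue-maximizing rate.
a = 0.5
grid = np.linspace(0.01, 10.0, 10_000)
R_star = grid[np.argmax(seigniorage(grid, a=a))]
print(round(R_star, 2))  # close to 1/a = 2.0
```

Beyond R = 1/a, faster money growth erodes real balances quicker than the printing press adds revenue, so seigniorage falls.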

Monetary policy conducted by the Federal Reserve

The Federal Reserve was established by Congress in 1913, aiming at price stability (i.e. low inflation), maximum employment and moderate long-term interest rates. Beyond these, the Federal Reserve also has other responsibilities, such as promoting financial stability and supervising financial activities and institutions. Effective monetary policy plays a complementary role to fiscal policy and helps sustain economic growth. The Federal Reserve uses a set of instruments, including managing short-term interest rates, or the cost of credit, to achieve these goals. Since interest rates are directly affected by the Fed's monetary policy, whereas stock prices and currency exchange rates are affected indirectly, the Federal Reserve can steer its monetary policy toward spending, investment, employment, production and inflation through these channels. Among them, the interest rate is seen as the key tool through which the Federal Reserve runs its policies. When the interest rate shifts, the money supply has to adjust so that equilibrium is maintained in the money market. Traditional theory says money demand is inversely related to the interest rate (Carlin & Soskice, 2006, p. 35): the lower the interest rate, the higher the demand for money, and vice versa. Although central banks can offset the effect of printing new money through open market operations to reduce interest rates when they change the money supply via the monetary base, the money supply ultimately depends on the private sector, such as households and banks.

The money supply refers to the total amount of money available for transactions over a certain period in an economy. It is classified into various levels according to liquidity, namely M0, M1, M2, M3, etc. The initial supply is the base money provided by the central bank. This base money, after being deposited and withdrawn by commercial banks numerous times, generates a wealth of derived deposit money and causes a multiple expansion of the currency. The amount of money supplied is positively correlated with final aggregate demand, so central banks usually use the money supply as an intermediate target of monetary policy. Maintaining the balance between money supply and money demand is central to a central bank's monetary policy. In the U.S., the classifications of the money supply differ slightly from other nations, and not all classifications are used. M0 and M1, for instance, are regarded as narrow money, including coins and notes in circulation and other money equivalents that can readily be converted into cash. M2, known as broad money, contains M1 plus other liquid assets that are not cash-like. M3 is the broadest, including M2 and illiquid assets; however, the Fed's reports exclude M3. In this paper, even though the money supply may seem an imperfect measure of monetary policy, given the quite significant change in monetary policy after the GFC when the Fed adopted QE as a solution, M2 is still chosen as the money supply series.

Money supply and stock returns

In general, the stock price is tied to many factors, among which the present value (PV) of future cash flows is an indicator of stock price movements. Since the PV of future cash flows discounts those flows at a discount rate, and the money supply has a significant impact on the discount rate, the money supply therefore also affects the present value of cash flows.
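A minimal sketch of this discount-rate channel uses a Gordon-growth valuation; the dividend, growth rate and the two discount rates below are hypothetical numbers chosen for illustration, not data from this study:

```python
def pv_dividends(d, r, g):
    """Gordon-growth present value of a dividend stream: PV = d / (r - g)."""
    return d / (r - g)

# A $2 dividend growing at 2% a year, valued at two discount rates.
lo_rate = pv_dividends(2.0, 0.06, 0.02)  # 6% discount rate -> PV around 50
hi_rate = pv_dividends(2.0, 0.08, 0.02)  # 8% discount rate -> PV around 33
print(lo_rate, hi_rate)
```

Raising the discount rate from 6% to 8% cuts the valuation by roughly a third, which is the mechanism through which the money supply is argued to move stock prices.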

Sellin (2001) takes the view that stock prices will be affected by the money supply only if expectations about future monetary policy change. Sellin (2001) then claims that a positive money supply shock will lead market participants to anticipate tightening monetary policy in the future, so current interest rates will be bid up in the bond market. Discount rates rise in line with interest rates while the PV of future earnings falls, which eventually leads to declining stock prices. Moreover, Sellin (2001) argues the stock market will be further depressed by recessionary economic activity when interest rates are likely to be lifted.

However, in Sellin's paper, the real activity economists take another angle and draw the opposite conclusion. They argue that a change in the money supply caused by future output expectations conveys information about money demand. If the money supply increases, it signals that money demand is increasing, which indicates booming economic activity. Higher economic activity implies higher circulation of and demand for cash flows, which pushes stock prices up (Sellin, 2001).

Bernanke and Kuttner (2005) argue that a stock's price combines its monetary value with the risk perceived in holding the stock. Investors favor a stock when the monetary value it bears is high; by contrast, a stock turns unattractive when its perceived risk remains high. The authors argue that the money supply influences the stock market through its impact on both the monetary value and the perceived risk. In other words, the interest rate is the pathway through which the money supply affects a stock's monetary value. Bernanke and Kuttner (2005) claim that when the money supply tightens, the real interest rate rises simultaneously. Since a rise in the interest rate in turn raises the discount rate, this leads to a decrease in stock value, as argued by the real activity theorists (Bernanke and Kuttner, 2005).

Bernanke and Kuttner (2005) then argue that a tightening money supply extensively increases the risk premium that investors demand as compensation for holding risky assets. They claim a tightening money supply signals an economic downturn, which reduces the profitability of firms in general, so investors find themselves in a riskier situation and demand a higher risk premium. Because the required risk premium makes stocks unattractive, stock prices fall (Bernanke and Kuttner, 2005).

Beyond the money supply itself, an extensive number of studies have been conducted to identify the impacts of anticipated and unanticipated movements in the money supply on the stock market, and the views taken by those researchers have varied. Sorensen (1982) looks at the effect on stock prices of anticipated versus unanticipated movements in the money supply through a two-stage regression model. First, following Barro's model of money supply, money supply is regressed on previous money supplies, the unemployment rate, and real federal government expenditure. Second, the S&P 500 stock index is regressed on anticipated money growth, substituting in the estimates from the first-stage regression. The residuals of the first equation are treated as the unanticipated element and are then regressed against the stock index to work out the impact of the unanticipated component. Through this approach, Sorensen (1982) concludes that unanticipated changes in the money supply have a bigger influence on the S&P 500 than anticipated changes, which is consistent with the efficient market hypothesis.
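Sorensen's two-stage logic can be sketched on synthetic data. Everything below (the series, the coefficients, the seed) is fabricated for illustration and merely stands in for the money supply, unemployment and stock return variables; it is not a reproduction of Sorensen's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for the first-stage predictors: lagged money
# growth and an "unemployment" series.
lag_m = rng.normal(size=n)
unemp = rng.normal(size=n)
money = 0.6 * lag_m - 0.3 * unemp + rng.normal(scale=0.5, size=n)

# Stage 1: regress money growth on the predictors; fitted values are the
# anticipated component, residuals the unanticipated component.
X1 = np.column_stack([np.ones(n), lag_m, unemp])
beta1, *_ = np.linalg.lstsq(X1, money, rcond=None)
anticipated = X1 @ beta1
unanticipated = money - anticipated

# Stage 2: regress stock returns on both components. Returns here are
# built so the unanticipated part matters more, mimicking the finding.
returns = 1.5 * unanticipated + 0.2 * anticipated + rng.normal(scale=0.3, size=n)
X2 = np.column_stack([np.ones(n), anticipated, unanticipated])
beta2, *_ = np.linalg.lstsq(X2, returns, rcond=None)
print(beta2[2] > beta2[1])  # unanticipated coefficient dominates: True
```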

In contrast to Sorensen (1982), Bernanke and Kuttner (2005) examine the effect of movements in the federal funds rate on stock prices, decomposing monetary policy into anticipated and unanticipated components. In their model, the observations are the days on which the federal funds rate was changed in line with FOMC meetings. A vector autoregression model is then estimated on 131 observations ranging from June 1989 to December 2001, excluding September 2001. Bernanke and Kuttner (2005) find that the stock market is more reactive to unanticipated changes in the federal funds rate, which again favors the efficient market hypothesis (Bernanke and Kuttner, 2005).

Earlier, and in contrast, Husain and Mahmood (1999) investigate the relationship between expansionary monetary policy and stock returns from another angle. In their model, M1 and M2 are selected as endogenous variables and the stock indices of the main sectors as exogenous variables. They then conduct an Augmented Dickey-Fuller test to find the effect of changes in the money supply on stock market prices in the short run and the long run. Husain and Mahmood (1999) find that the money supply does have a large impact on stock markets, a finding that runs counter to the efficient market hypothesis (Husain and Mahmood, 1999).
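As a sketch of the unit-root logic behind that test, the simplest non-augmented Dickey-Fuller regression can be computed by hand; the two simulated series below are purely illustrative (a random walk, which has a unit root, versus white noise, which is stationary):

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller regression: Delta y_t = rho * y_{t-1} + e_t.
    Returns the t-statistic on rho (no constant, no lagged differences --
    the bare-bones form that the augmented test builds on)."""
    dy = np.diff(y)
    ylag = y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))   # unit root: t-stat near zero
level = rng.normal(size=500)             # stationary: large negative t-stat
print(df_stat(walk), df_stat(level))
```

The stationary series yields a strongly negative t-statistic, rejecting a unit root, while the random walk does not; the augmented version used by Husain and Mahmood adds lagged differences and deterministic terms.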

Relationship among money supply, inflation and stock prices

In general, the transmission mechanism from the money supply and inflation to stock prices can be traced. Inflation affects stock markets either positively or negatively, but the magnitude of inflation's effect on stocks depends on the stance of monetary policy. If an expansionary monetary policy of increasing the money supply is conducted, stock markets react significantly. However, the reaction of stock prices does not occur simultaneously with the increase in the money supply but only after a certain period. This lag of several months may derive from either a liquidity effect or the transmission mechanism. Theoretically, inflation and the money supply have a dual effect on stock returns. On the downside, inflation rises as the money supply increases, which raises the expected rate of return; in asset valuation, using a higher expected rate of return devalues the firm, so stock prices fall. On the other hand, increases in inflation and the money supply provide firms with more future cash flows, which raises dividend expectations and eventually benefits stock prices.

Hsing (2011) looks into the influence of macroeconomic variables on U.S. stock markets over the period 1980 to 2010 and concludes that the real interest rates and inflation rates steered by the FRS have a negative impact on all three U.S. stock indices. Meanwhile, Albaity (2011) examines the influence of monetary policy instruments on U.S. stock markets over 1999 to 2007 through the volatilities of interest rates and inflation rates. Albaity (2011) finds the monetary aggregates M2 and M3 and the inflation rate are significant in affecting U.S. stock indices, and also shows that U.S. stock indices are used as a hedge against inflation. Shiblee (2009) not only shows that inflation and the money supply do have an impact on U.S. stock prices, using data from 1994 to 2007, but also demonstrates that the rate of change of inflation and the money supply affects the stock prices of each sector. Furthermore, Rahman and Mustafa (2008) illustrate more specifically how the dynamic effects of M2 on the S&P 500 index evolve in both the short run and the long run. Their results suggest negative monetary fluctuations depress the S&P 500 index in the short run, but over time this interaction diminishes in the long run. Besides this, Rahman and Mustafa (2008) find that inflation has unidirectional causality toward stock prices, while the money supply and stock prices show bidirectional causality. They then seek the direction of causality between monetary policy variables and asset prices. Their results show one cointegrating long-run dynamic relationship between stock prices and the set of broad money supply and inflation: broad money supply (M2) has a positive and significant impact on stock prices, although causality from stock prices to M2 is unidirectional.
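The causality findings above rest on Granger's idea of testing whether lags of one variable improve forecasts of another. A bare-bones version of such a test on synthetic data (where x is constructed to feed into y, so causality should run one way) might look like:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for 'x Granger-causes y': compare an autoregression of
    y with and without lagged values of x (a stripped-down version of the
    idea behind the causality tests cited above)."""
    n = len(y)
    Y = y[lags:]
    y_lags = [y[lags - i:n - i] for i in range(1, lags + 1)]
    x_lags = [x[lags - i:n - i] for i in range(1, lags + 1)]
    Xr = np.column_stack([np.ones(n - lags)] + y_lags)           # restricted
    Xu = np.column_stack([np.ones(n - lags)] + y_lags + x_lags)  # unrestricted
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    return ((rss(Xr) - rss(Xu)) / lags) / (rss(Xu) / (len(Y) - Xu.shape[1]))

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                     # x feeds into y with one lag
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal() * 0.5
print(granger_f(y, x) > granger_f(x, y))  # causality runs x -> y: True
```

A large F-statistic in one direction and a small one in the other is what "unidirectional causality" means in the studies discussed; a large statistic both ways corresponds to the bidirectional case.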

Effects of monetary policy on stock returns: empirical studies (models from previous work)

Rigobon and Sack (2004) examine the relationship between the interest rate and asset prices, focusing on two problems: the endogeneity of the variables and the existence of omitted variables. These two issues are captured in two equations:

Δi_t = β·Δs_t + γ·z_t + ε_t

Δs_t = α·Δi_t + z_t + η_t

where Δi_t represents the change in the short-term interest rate, Δs_t is the asset price change, ε_t represents the monetary policy shock, η_t is an asset price shock, and z_t is a common shock. Although the model is rather flexible, Rigobon and Sack (2004) focus purely on the parameter α, the magnitude of the effect of a change in the short-term interest rate on the asset price. They use an identification-through-heteroskedasticity method to estimate this magnitude. The sample ranges from January 1994 to November 2001 and the data contains three U.S. stock indexes and the Wilshire 5000. The result for the S&P 500 suggests that an increase in the short-term interest rate is negative for S&P 500 stock prices. The estimated parameter α for the S&P 500 is −6.8, which states that if the short-term interest rate unexpectedly increases by 25 basis points, the S&P index decreases by 1.7%.
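The quoted magnitude is just the estimated parameter scaled by the size of the shock:

```python
alpha = -6.8   # Rigobon and Sack's estimated response of the S&P 500
shock = 0.25   # an unexpected 25-basis-point rate rise, in percentage points
print(round(alpha * shock, 2))  # -1.7, i.e. the index falls by about 1.7%
```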

In the view of Rigobon and Sack (2004), the central bank and the private sector affect each other mutually. Björnland and Leitemo (2009) inherit this view and try to measure the interdependence between monetary policy and financial markets. Their data are monthly, from January 1983 to December 2002, and contain the annual change in logged consumer prices, the annual change in the logged commodity price index in U.S. dollars, the Federal Funds Target Rate, and the logged S&P 500 stock price index. They deflate stock prices by the Consumer Price Index (CPI) to obtain real terms and then difference to monthly changes. The econometric analysis settles on a VAR with four lags. Through this model they find that a monetary policy shock affects stock returns significantly, with returns falling by 9% for a 100-basis-point increase in the Federal Funds Target Rate. Björnland and Leitemo (2009) also find that a shock raising real stock prices by 1% results in a 4-basis-point increase in the interest rate, and they conclude there is strong interdependence between interest rate setting and shocks to real stock prices. They eventually reach a view similar to Rigobon and Sack (2004): since monetary policy affects stock prices, the stock market should be a major indicator for the central bank when conducting policy. Because the data in the above studies are now rather dated, this paper uses recent data to see whether these conclusions still hold today.

Kurov (2009) investigates whether monetary policy decisions affect investor sentiment and whether investor psychology affects the reaction of the stock market to monetary news, using an event study approach with 129 observations on the Federal Funds Rate and daily returns on the S&P 500 index. His results imply that monetary policy surprises strongly affect investor sentiment in bearish periods, and he concludes that investors tend to overreact to surprises in such periods. Although this paper does not test how investor sentiment affects the stock market, this research provides information on investors' return reactions to monetary surprises.

Conclusion

The Fed's objective is to achieve maximum employment, stabilize prices and maintain moderate long-term interest rates. Inflation, employment, and long-term interest rates move up and down over time in response to economic disturbances. Over the longer run, the inflation rate and M2 depend primarily on monetary policy, while the level of employment is mostly determined by non-monetary factors.

The chapter began by looking into the basic theories depicting the inter-linkages among the variables, then examined the effects of interest rate variation in different models. Most authors consider the interest rate important for macroeconomic variables; some views suggest it should be proactively adjusted, while the counter-view holds that it should depend on the money market. The chapter then looked into inflation and unemployment: inflation can reduce unemployment in the short run, but it appears to have no significant impact in the long run. The chapter also highlighted the impact of broad money (M2) on stock prices.

Finally, several empirical studies using different approaches offered views on the relationship between monetary policy and stock market prices. Although the views varied, the majority of authors found an inter-linkage between the variables.


Methodology

Introduction

This chapter demonstrates the relevant data and methodology used to investigate whether there is any linkage between changes in monetary policy variables and the S&P 500 index and, if so, the magnitude of the effect of the monetary policy variables on the S&P 500 index.

Data Description

This section uses monthly time series data over the period 2002 to 2018, obtained from Datastream. Monthly data are selected because most of the series, such as the Federal Funds rate, unemployment rate, M2 and PCE, are reported on a monthly basis; to keep the data consistent, GDP, which is only available quarterly, is excluded from this section. The period starting in 2002 is chosen because, as the literature review mentioned, the dot-com bubble occurred around 2000, shortly before the Global Financial Crisis; excluding interference other than the GFC helps to better examine the effects of monetary policy on changes in S&P 500 stock prices. The data are not logged: all data applied in the next chapter are raw. Wooldridge (2007) states that the logarithm gives the elasticity of the dependent variable with respect to the explanatory variable, namely the percentage change rather than the level. He further argues that one drawback of taking logarithms is that it becomes difficult to predict the value of the original variable, because a logarithmic model predicts ln y, not y. Conventionally, variables relating to market value such as prices, sales and wages are logged, while variables such as the unemployment rate may be left unlogged. Since a stock index rather than individual stock prices is tested here, raw unlogged data are used.

The first variable is the monthly S&P 500 index, treated as the dependent variable and used as a proxy for U.S. equity markets. The Standard & Poor's 500, abbreviated as the S&P 500, is a U.S. stock market index based on the market capitalizations of 500 large companies with common stock listed on the NYSE or NASDAQ. It differs from the Dow Jones Industrial Average and the Nasdaq Composite in its diversified constituency and its weighting methodology. The S&P 500 is widely considered one of the major representations of the U.S. stock market and a good barometer of the U.S. economy.

The independent variables include the Federal Funds rate, unemployment rate, PCE, and M2. These variables are treated as key indicators, as the literature mentioned. PCE is selected rather than CPI because PCE is the most widely used inflation indicator in the U.S.: in 2000, the Fed completed a transition from CPI to PCE, meaning the Federal Open Market Committee (FOMC) no longer announced its expectations for CPI and began presenting its views on inflation prospects in terms of PCE. PCE therefore appears more reliable as a reflection of the Fed's position.

Informal analysis

Before advancing to formal analysis, informal analysis in the form of graphs is conducted first to show the overall trends of the variables over the period. Through correlograms and histograms it is straightforward to obtain better information on the behaviour of the series and anticipate future trends, such as signs of whether the series are stationary. In this study, various types of graphs are used to summarise the time series data.

Ordinary Least Squares (OLS)

The study also involves OLS analysis to support the findings. OLS is a mathematical optimization technique: it finds the best-fitting function for the data by minimizing the sum of squared errors between the fitted values and the actual data. The least squares method can also be used for curve fitting, and other optimization problems can be expressed in least squares form by minimizing energy or maximizing entropy.
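The mechanics can be sketched in a few lines. This is an illustrative OLS fit on synthetic data in Python rather than Eviews, and not the thesis dataset; the coefficients are chosen to minimise the sum of squared errors:

```python
# A minimal OLS illustration with numpy only (synthetic data): the
# fitted coefficients minimise the sum of squared errors between the
# fitted and actual values of y.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=n)  # true intercept 1, slope 2

X = np.column_stack([np.ones(n), x])         # design matrix with a constant
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # minimises ||y - X b||^2
print(beta.round(2))                         # approximately [1.0, 2.0]
```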

Stationarity

However, before any formal analysis, one issue crucial for time series data is stationarity. Variables need to be stationary and integrated of the same order so that the cointegration and causality tests run properly. A regression on non-stationary series can be meaningless and is regarded as spurious or inaccurate. Brooks (2008) suggests that spurious regressions violate the classical assumptions of the OLS technique, rendering the regression results worthless. Fama and French (1988) express a similar view, that it is necessary to convert stock prices into first differences. Therefore, any variable found to be non-stationary will be differenced; the data should not follow a random walk and should be mean reverting. In this section, the differencing used is ΔYt = Yt − Yt-1; transforming the unqualified variables in this way legitimizes the econometric tests and yields a clearer interpretation of the results.
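The effect of first differencing can be illustrated directly. The snippet below uses synthetic data (not the thesis series): a random walk is non-stationary, but differencing it recovers the stationary shocks:

```python
# Illustration of first differencing: a random walk (non-stationary) is
# the cumulative sum of white-noise shocks, and taking first differences
# Y_t - Y_{t-1} recovers the stationary shocks exactly.
import numpy as np

rng = np.random.default_rng(1)
shocks = rng.normal(size=500)   # stationary white noise
walk = np.cumsum(shocks)        # random walk: variance grows over time
diffed = np.diff(walk)          # first difference

assert np.allclose(diffed, shocks[1:])
print(diffed.shape)             # (499,) - one observation is lost
```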

In statistics, some classical formal tests can be carried out to detect stationarity or unit roots, including the Dickey-Fuller test and the Augmented Dickey-Fuller (ADF) test. The Dickey-Fuller test checks whether an autoregressive model has a unit root; the ADF test is an extension of it with the benefit of removing the effects of autocorrelation. As mentioned above, series found to be non-stationary will be differenced; if a series is still not stationary at first differences, it will be differenced further until the ADF test signals stationarity. The process of running the ADF test is as follows.

The model with deterministic trend: AR(p) with constant and trend

ΔYt = α + βt + γYt-1 + δ1ΔYt-1 + … + δp-1ΔYt-p+1 + εt

The lag length is determined by VAR estimation. The t-statistic associated with the γ coefficient is then checked and compared to the critical value. The ADF test statistic is always negative; the more negative it is, the stronger the rejection of the hypothesis that there is a unit root at a given level of confidence. If the test statistic in absolute terms is larger than the critical value, the null hypothesis is rejected.

Lag Length Selection

In order to keep the results consistent, a lag length criterion is necessary to find the proper lag length for the econometric tests in this study. Selecting the lag length of an AR(p) model takes two steps: first, identify the AR lag length p based on a selection criterion; second, estimate the numerical values of the intercept and parameters using regression analysis.

Mathematically, an AR(p) process of a series yt is shown below:

yt = a1yt-1 + a2yt-2 + … + apyt-p + εt

where a1, a2, …, ap are the autoregressive parameters and εt is a normally distributed random error term.

There are various lag length selection criteria, including Akaike's information criterion (AIC), the Schwarz information criterion (SIC), the Hannan-Quinn criterion (HQC), and the final prediction error (FPE). Because outcomes under different criteria may affect the final findings, this study focuses particularly on AIC and HQC. Liew and Lim (2003) suggest that when the sample exceeds 120 observations HQC tends to perform better in correctly identifying the true lag length, whereas AIC is the better option for samples of fewer than 60 observations. In addition, AIC is found to produce the lowest probability of underestimation among the four criteria.

The two models are shown below:

AICp = ln(σ̂²p) + 2p/T

SICp = ln(σ̂²p) + p·ln(T)/T
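As a sketch, both criteria (a log residual variance term plus a penalty growing in p) can be computed across candidate lag lengths. The forms used below, AIC_p = ln(σ²_p) + 2p/T and SIC_p = ln(σ²_p) + p·ln(T)/T, are one common textbook convention, and the data are a simulated AR(2), not the thesis dataset:

```python
# Compare candidate AR lag lengths by information criteria: each adds a
# penalty in p to the log residual variance, with SIC penalising extra
# lags more heavily than AIC.
import numpy as np

rng = np.random.default_rng(3)
T = 200
y = np.zeros(T)
for t in range(2, T):                     # simulate an AR(2) process
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def fit_ar(series, p):
    """OLS fit of an AR(p); returns the residual variance."""
    Y = series[p:]
    X = np.column_stack([series[p - i:-i] for i in range(1, p + 1)])
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    return np.var(Y - X @ b)

for p in range(1, 5):
    s2 = fit_ar(y, p)
    n = T - p                             # effective sample size
    aic = np.log(s2) + 2 * p / n
    sic = np.log(s2) + p * np.log(n) / n
    print(p, round(aic, 3), round(sic, 3))
```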

Cointegration

Cointegration is the existence of a long-run relationship between two or more variables. Correlation, however, does not necessarily imply a long-run relationship: correlation is simply a measure of the degree of mutual association between two or more variables. The concept of cointegration requires, first, that all variables in the cointegrating regression be integrated of the same order, although variables of the same order are not necessarily cointegrated. Second, in the cointegrating equation of two variables the cointegrating vector (k0, k1) is unique; however, if there are k variables in the system, there may be up to k−1 cointegrating relations. The Engle-Granger and Johansen cointegration tests are used to find the interlinkages between the S&P 500 index and the monetary policy variables. If two or more variables are found to be cointegrated, this implies the existence of a long-run relationship between them. The Johansen cointegration test is used for the multivariate relationship, while the Engle-Granger cointegration test is used to test bivariate relationships.

Engle-Granger cointegration test

The Engle-Granger two-step cointegration test takes as its null hypothesis that a set of I(1) variables is not cointegrated. The technique estimates the coefficients of the long-run relationship between the variables by ordinary least squares and then applies a unit root test to the residuals: rejecting the null hypothesis of a unit root in the residuals is evidence of cointegration. Starting with the simplest case, assume two series yt and xt and the following long-term static regression model:

yt = β0 + β1xt + εt

The parameters of this model are estimated by the least squares method. The cointegration ADF test statistic then tests whether the residual εt of the regression is stationary; the assumption that the residual is non-stationary corresponds to the assumption that yt and xt are not cointegrated. More specifically, the ADF test is first used to check the order of integration of all variables in the long-term static model: the cointegrating regression requires all explanatory variables to be integrated of order one, so variables integrated of higher order need to be differenced to obtain I(1) series. The long-term static regression equation is then estimated by the OLS method, and the ADF statistic is used to test the stationarity of the residual estimates.

Johansen cointegration test

When there are more than two variables in a long-term static model, there may be more than one cointegrating relationship, and the Engle-Granger cointegration test may fail to find more than one cointegrating vector. Johansen and Juselius therefore established a method for testing cointegration among multiple variables using maximum likelihood estimation within a VAR.

The model is formed as Yt = B1Yt-1 + B2Yt-2 + … + BpYt-p + Ut,

where Yt is an m-dimensional random vector, Bi (i = 1, 2, …, p) are m×m parameter matrices, and Ut ~ IID(0, Σ).

This equation can then be rewritten as ΔYt = Γ1ΔYt-1 + … + Γp-1ΔYt-p+1 + ΠYt-p + Ut

This equation is called the Vector Error Correction Model (VECM): a first-differenced VAR plus the error correction term ΠYt-p. The main purpose of the error correction term is to retain the long-run information that would otherwise be lost through differencing. The number of cointegrating vectors can be obtained by examining the significance of the eigenvalues of Π, based on three cases.

The number of cointegrating vectors can be determined from the following two statistics:

λtrace(r) = −T Σi=r+1…m ln(1 − λ̂i)

λmax(r, r+1) = −T ln(1 − λ̂r+1)

The null hypothesis implies λr+1 = λr+2 = … = λm = 0, meaning there are m − r unit roots in the system. Testing starts from the null of m unit roots, that is, r = 0; if H0 is rejected, so that λ1 > 0, there is at least one cointegrating relationship, and testing continues until H0 fails to be rejected.

Granger Causality Test

The Granger causality test is designed to check the nature of the causal relationships among the S&P 500 index, M2, PCE, the unemployment rate and the Federal Funds rate. The Granger causality test has been broadly accepted and used by economists as a measurement method, although there is still much debate on whether Granger causality is "true" causality in the philosophical sense. Under Granger causality, the cause occurs prior to its effect and contains unique information about the future values of its effect. Briefly speaking, if past values of variable A improve the prediction of variable B beyond what B's own past values provide, and A precedes B in time, then we can say that A Granger-causes B.

To run the causality test, as with the cointegration tests, the variables need to be stationary; if they are not, they are converted into first or higher differences. After the lag length is determined, a lagged value of one of the variables is retained in the regression if it is statistically significant according to a t-test, and the lagged values of that variable jointly add explanatory power to the model according to an F-test. The null hypothesis of no causality is not rejected only if no lagged values of an explanatory variable are retained in the regression. A similar test with more variables can be performed through a vector autoregression model. In practice, it is possible that neither variable causes the other, or that each of the two variables causes the other.

Conclusion

The chapter reviewed the data and methodology that will be used in the next section to investigate any interlinkage between changes in U.S. monetary policy and shocks to the S&P 500 index. The chapter started by describing the nature of the data, followed closely by a review of the importance of informal analysis.

Moving on to formal analysis, this section first outlined the role of the OLS model, then explained the treatment of variables under stationary and non-stationary scenarios. The procedure for optimal lag length selection, which is significant for estimating econometric models, was also discussed. Furthermore, the Johansen and Engle-Granger cointegration tests were introduced to examine the long-run cointegrating relationships between the dependent variable and the independent variables. Finally, the section closed with the Granger causality approach. This methodology will be applied to the relevant data, and the results will be discussed in the next section.

Results and analysis

Introduction

This section outlines the results produced by Eviews. Monthly time series data from 2002 to 2018 were obtained from Datastream. The dependent variable is the S&P 500 index, and the independent variables are the Federal Funds rate, unemployment rate, PCE and M2. The section seeks to (1) examine the cointegrating relationships between the S&P 500 index and the variables associated with U.S. monetary policy; (2) examine the direction of causality between these variables; and (3) outline the findings and compare them with the academic literature surveyed in the literature review. All tests were run in Eviews, and all relevant diagrams and tables are shown in the Appendix.

Informal analysis

Informal analysis through graphs has been conducted. As the graphs below show, M2 rises steadily throughout the period, while the Federal Funds rate dropped dramatically around 2008 and remained low for years; this indicates that the Fed conducted an expansionary monetary policy after the GFC. The unemployment rate was significantly high around 2008-2010 and has improved since. The S&P 500 index fell between 2008 and 2009 and recovered after that period. PCE trends upwards in the same direction as M2. The graphs also roughly indicate that the Federal Funds rate is negatively related to PCE and M2, while unemployment is negatively correlated with the S&P 500.

The histograms below provide descriptive statistics of the dependent variable and one independent variable, M2, summarizing the mean, median and standard deviation; this information will assist the analysis in the following tests. Further histograms are shown in the appendix.

OLS Regression

An OLS regression was performed to investigate how far the S&P 500 index can be explained by M2, the unemployment rate, the Federal Funds rate and PCE.

The regression model is: Y (S&P 500) = c + β1 M2 + β2 Federal Funds rate + β3 Unemployment rate + β4 PCE + ε

Table 4.1 below suggests the model is statistically significant at the 1% level, because the prob(F-statistic) is essentially zero. R-squared is 0.95, which implies that 95% of the variation in the S&P 500 index can be explained by the Federal Funds rate, M2, the unemployment rate and PCE. PCE, considered here as the inflation measure, enters negatively: if PCE rises by one unit, the S&P 500 index falls by about 0.039 units, holding other variables constant. M2, the Federal Funds rate and the unemployment rate were found to be statistically significant, while PCE is statistically insignificant; the model was therefore re-run excluding PCE.

Table 4.1

Dependent Variable: SP500
Method: Least Squares
Sample: 1 198
Included observations: 198

Variable Coefficient Std. Error t-Statistic Prob.

C 542.9650 113.5678 4.780976 0.0000

M2 0.203000 0.030382 6.681540 0.0000

FFR 40.92569 12.61338 3.244626 0.0014

PCE -0.038745 0.039244 -0.987305 0.3247

UNEMPLOY -87.77319 8.270328 -10.61302 0.0000

R-squared 0.953465 Mean dependent var 1492.422

Adjusted R-squared 0.952501 S.D. dependent var 496.4582

S.E. of regression 108.1998 Akaike info criterion 12.23077

Sum squared resid 2259491. Schwarz criterion 12.31380

Log likelihood -1205.846 Hannan-Quinn criter. 12.26438

F-statistic 988.6057 Durbin-Watson stat 0.242121

Prob(F-statistic) 0.000000

As Table 4.2 below shows, the three remaining variables are still statistically significant, with very low p-values. On the basis of these statistics, the study proceeds to the cointegration and causality tests.

Table 4.2

Variable Coefficient Std. Error t-Statistic Prob.

C 458.3921 74.56029 6.147939 0.0000

M2 0.173260 0.003963 43.71727 0.0000

FFR 31.02852 7.655100 4.053314 0.0001

UNEMPLOY -93.09503 6.272017 -14.84292 0.0000

Lag length selection

To keep the results consistent, a proper lag length must be identified for the econometric models. Tables 4.3 and 4.4 compare the two models, including and excluding PCE. Because the sample reaches 198 observations, in excess of 120, the Hannan-Quinn Criterion (HQC) is applied for this period. After examining up to 8 lags, 2 lags were found to be the optimal lag length.

Stationarity

The next step is to address the stationarity concern; the importance of time series variables being stationary was discussed in the previous chapter. Informal and formal analyses were performed to assess whether this is the case.

The graphs of each variable, shown in Appendix 1, give a broad picture of the movements of each variable over the period. The graphs show that none of the variables is stationary, so the next step is to difference them into stationarity of the same order.

The diagrams in Appendix 2 show that all the variables appear stationary in first differences (ΔYt = Yt − Yt-1), as the transformed variables seem to have a constant mean and variance. To verify these results, the Augmented Dickey-Fuller test was conducted on each variable in Eviews.

The statistic produced by this test is expected to be negative. The null hypothesis (H0) is that a variable has a unit root and is thus non-stationary. The more negative the test statistic, the stronger the rejection of H0 at a given confidence level. Taking the 5% critical value of -2.87, if the test statistic exceeds this critical value in absolute terms, the null hypothesis is rejected.

As the diagrams below show, the ADF test statistics for the five variables are -5.27 for M2, -6.15 for the Federal Funds rate, -12.88 for the S&P 500 index, -4.94 for the unemployment rate, and -7.34 for PCE. All test statistics are larger than the critical value of -2.87 in absolute terms. Therefore, the null hypothesis is rejected: the variables in first differences do not have a unit root, are deemed stationary, and are integrated of the same order. The following tests can therefore be performed.

Cointegration

To find out whether the S&P 500 index and the U.S. monetary policy variables are cointegrated over the selected period, the Johansen cointegration approach and the Engle-Granger two-step approach are employed.

The Johansen cointegration method was undertaken first; its outputs are shown in Appendix 1. The null hypothesis is that there is no cointegration; if the p-value is less than 0.05, the null hypothesis is rejected. Since the p-value here is below 0.05, there is evidence of cointegration among these variables.

The Engle-Granger method consists of individual bivariate regressions testing the interlinkages between the S&P 500 index and M2, the S&P 500 index and the Federal Funds rate, and the S&P 500 index and the unemployment rate. In each regression, the S&P 500 index is the dependent variable (Yt) and M2, the Federal Funds rate or the unemployment rate is the explanatory variable (Zt). The regression is Yt = β0 + β1Zt + εt.

Provided the coefficient β1 proves significant, the test proceeds to cointegration: the residuals are formed and tested for stationarity, and if the residuals are stationary there may be cointegration. To test for a unit root in the residuals, the ADF test was again conducted, with the residual lag length determined as 2 lags. The null hypothesis is that the residuals have a unit root and are non-stationary; the critical value is -2.87 at the 5% level, and if the test statistic exceeds this value in absolute terms the null hypothesis is rejected. As the appendix shows, the ADF test statistic for the residuals of the regression is -11.4, which is greater in absolute terms than the critical value, so the null hypothesis is rejected: there is a cointegrating relationship between the S&P 500 index and the U.S. monetary policy variables.

Cointegrating relationships were found under both approaches. The consistent results suggest the existence of cointegration and support the previous literature.

Causality

In this section, causality tests were performed in Eviews to investigate whether the relationships are significant and what the direction of causality is. OLS regressions were also conducted to support the analysis.

As the OLS model suggests, the S&P 500 index and PCE are not cointegrated but are integrated of the same order, so the causality test was constructed on these two variables in first differences. The results shown in the appendix indicate that the null hypothesis that the S&P 500 index does not Granger-cause PCE is rejected at the 1% significance level, while the null hypothesis that PCE does not Granger-cause the S&P 500 index is rejected at the 5% significance level. Extending the lag length to 5 lags gives the same result (see appendix). The results therefore indicate bidirectional causality between the S&P 500 index and PCE over the period, with the stronger effect running from the index towards PCE.

Conclusion

This chapter presented the results of the methods outlined in Chapter Three. It started by analyzing the OLS regression to review whether the stock index depends on M2, the unemployment rate, the Federal Funds rate and PCE. The regression results indicated that M2 and the Federal Funds rate have positive effects on the S&P 500 index, the unemployment rate has a negative effect, and PCE is statistically insignificant.

The optimal lag length was then selected via the VAR model and ultimately determined at 2 lags. Checking stationarity was also important: for the selected samples, none of the variables was found to be stationary in levels, but all variables became stationary after conversion into first differences according to the ADF test.

The next section of the chapter investigated the long-run cointegrating relationship between M2, the unemployment rate, the Federal Funds rate and the performance of the S&P 500 index. Both the Johansen and the Engle-Granger methods provided evidence of a cointegrating association in the long run, and the consistency of the two approaches supports the view of the previous literature.

The chapter then examined the causality relationship between the index and PCE. The results suggest that Granger causality runs in both directions, with the stronger effect moving from the S&P 500 index to PCE. This test did not include the other variables because Granger causality only tests temporal precedence rather than true cause and effect; since the remaining variables were integrated of order one and already found to be cointegrated, there was no need for further tests.

Conclusion

As discussed in the introductory chapter, the aim of this dissertation was to estimate the impact of monetary policy on the S&P 500 index using monthly data on the Federal Funds rate, PCE, M2, the unemployment rate and the S&P 500 index. A feature of this analysis is the use of multiple independent variables.

Reference

Appendix

Table 4.3

VAR Lag Order Selection Criteria
Endogenous variables: M2 FFR UNEMPLOY PCE
Exogenous variables: C
Date: 08/08/18 Time: 15:50
Sample: 1 198
Included observations: 190

Lag LogL LR FPE AIC SC HQ

0 -3665.358 NA 6.99e+11 38.62482 38.69318 38.65251

1 -1648.463 3927.638 498.3505 17.56276 17.90456 17.70122

2 -1574.321 141.2591 270.2974 16.95075 17.56597* 17.19997*

3 -1560.579 25.60262 276.9589 16.97452 17.86318 17.33450

4 -1538.751 39.75001 260.7558 16.91317 18.07527 17.38392

5 -1529.255 16.89387 279.7089 16.98163 18.41716 17.56314

6 -1501.153 48.80806 246.8734 16.85425 18.56321 17.54652

7 -1478.542 38.31996* 231.0797* 16.78465* 18.76705 17.58769

8 -1470.625 13.08369 252.7653 16.86974 19.12557 17.78354

Table 4.4

VAR Lag Order Selection Criteria
Endogenous variables: M2 FFR UNEMPLOY
Exogenous variables: C
Date: 08/08/18 Time: 15:54
Sample: 1 198
Included observations: 190

Lag LogL LR FPE AIC SC HQ

0 -2406.222 NA 20719871 25.36023 25.41150 25.38100

1 -738.4875 3265.249 0.541320 7.899868 8.104943 7.982941

2 -667.4653 136.8112 0.281808 7.247003 7.605884* 7.392380*

3 -655.9743 21.77229 0.274568 7.220782 7.733470 7.428465

4 -642.5368 25.03631 0.262129 7.174071 7.840566 7.444058

5 -637.3055 9.581541 0.272889 7.213742 8.034043 7.546034

6 -621.6579 28.16570 0.254653 7.143767 8.117874 7.538364

7 -605.6295 28.34496* 0.236758* 7.069784* 8.197697 7.526685

8 -596.9058 15.15154 0.237798 7.072693 8.354413 7.591899

Table: before differencing

Table 4: after differencing

Table: ADF unit root test

Table: Johansen

Appendix: Resid02

Residual lag length selection

Engle-Granger

VAR Lag Order Selection Criteria
Endogenous variables: RESID03
Exogenous variables: C
Date: 08/08/18 Time: 23:23
Sample: 2002M01 2018M08
Included observations: 189

Lag LogL LR FPE AIC SC HQ

0 -1007.345 NA 2521.071 10.67032 10.68747* 10.67726

1 -1007.233 0.220950 2544.884 10.67972 10.71402 10.69361

2 -1003.493 7.362457* 2472.144* 10.65072* 10.70217 10.67156*

3 -1003.391 0.197992 2495.780 10.66023 10.72884 10.68802

4 -1003.389 0.003944 2522.292 10.67079 10.75655 10.70553

5 -1002.144 2.411393 2515.778 10.66819 10.77111 10.70989

6 -1001.776 0.708342 2532.696 10.67488 10.79495 10.72352

7 -1001.621 0.298301 2555.468 10.68382 10.82103 10.73941

8 -1000.069 2.955204 2540.652 10.67798 10.83235 10.74052

* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion

Lag 2

Causality

Pairwise Granger Causality Tests

Date: 08/08/18 Time: 23:08

Sample: 2002M01 2018M08

Lags: 2

Null Hypothesis: Obs F-Statistic Prob.

DIFFPCE does not Granger Cause DIFFSP500 195 4.39130 0.0137

DIFFSP500 does not Granger Cause DIFFPCE 7.58619 0.0007

Lag 5

Pairwise Granger Causality Tests

Date: 08/08/18 Time: 23:09

Sample: 2002M01 2018M08

Lags: 5

Null Hypothesis: Obs F-Statistic Prob.

DIFFPCE does not Granger Cause DIFFSP500 192 2.38435 0.0401

DIFFSP500 does not Granger Cause DIFFPCE 3.31601 0.0069