Archive for ‘Surviving Graduate Econometrics with R’

February 21, 2012

Parallel computing with package ‘snowfall’

Lately I have been looking for ways to decrease the amount of time it takes me to run multiple regressions over a very large data set. There are several options I am investigating, and certainly more that I don’t know of yet.

  • Code more efficiently.
  • Compute several operations in parallel across two or more CPU cores.
  • Tap into a network of computers, and further expand the number of CPU cores to parallelize calculations.

Because many of my computer jobs are “embarrassingly parallel”, the options mentioned above would immediately improve the speed at which I can compute (and re-compute) jobs. This post will go through an example using the CRAN package snowfall to parallelize a computation over several CPU cores on the same computer (bullet #2 above).

The CRAN package snowfall is built to make it easy to create parallel processes. I recommend taking a look at the associated vignette and tutorial.

Before beginning to use snowfall, do the following:

  1. Upgrade to the latest version of R – as of this post, version 2.14.1 (or the patched version of R-2.13.0, available here). FYI: there is a bug in version 2.13.0 (for MS Windows 7) that prevents snowfall from operating smoothly.
  2. Install the latest version of the package snowfall ( install.packages('snowfall', dependencies = TRUE) )
  3. Find out how many cores you have on the CPU of the machine you will be using.  In my example below, I am using a machine with 8 CPU cores and running Windows 7.
  4. Convert any ‘for’ loops into a function that you can call using apply(). See my previous post that outlines this process; a short sketch also follows this list.
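
To illustrate item 4, here is a minimal sketch (using a made-up toy task, not my actual regressions) of rewriting a ‘for’ loop as a function called via sapply() – the form that parallel apply functions expect:

# Toy task: column means of a data frame, first as a 'for' loop
set.seed(1)
dat <- data.frame(a = rnorm(100), b = rnorm(100), c = rnorm(100))

means.loop <- numeric(ncol(dat))
for (j in seq_len(ncol(dat))) {
  means.loop[j] <- mean(dat[[j]])
}

# The same task as a function applied with sapply()
col.mean <- function(j, df) mean(df[[j]])
means.apply <- sapply(seq_len(ncol(dat)), col.mean, df = dat)

all.equal(means.loop, means.apply)   # TRUE: identical results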

Using snowfall: A simple example

The reason I put together this post is because I couldn’t easily find a ‘plug-and-play’ code example in the existing online literature to execute the type of parallelization I wanted. Out of necessity I worked through the wrinkles and am now successfully utilizing multiple CPU cores in R; a minimal sketch of the workflow is below. Note: by default, R uses only one CPU core unless you explicitly code it to use multiple cores (as in this example).
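
For reference, here is a minimal sketch of the snowfall workflow on a single multi-core machine. The bootstrap function and toy data below are placeholders of my own (not the regressions from my actual job), but the sfInit / sfExport / sfLapply / sfStop calls are the core pattern:

require(snowfall)

# Toy job: fit the same regression on 500 bootstrap resamples
boot.fit <- function(i, dat) {
  s <- dat[sample(nrow(dat), replace = TRUE), ]
  coef(lm(y ~ x, data = s))
}

set.seed(42)
dat <- data.frame(x = rnorm(200), y = rnorm(200))

sfInit(parallel = TRUE, cpus = 8)   # set cpus to the number of cores on your machine
sfExport("dat")                     # ship the data to the worker processes
results <- sfLapply(1:500, boot.fit, dat = dat)
sfStop()

head(do.call(rbind, results))       # one row of coefficients per bootstrap draw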

read more »

May 27, 2011

Surviving Graduate Econometrics with R: Advanced Panel Data Methods — 4 of 8

Some questions may arise when contemplating what model to use to empirically answer a question of interest, such as:

  1. Is there unobserved heterogeneity in my data sample? If so, is it time-invariant?
  2. What variation in my data sample do I need to identify my coefficient of interest?
  3. What is the data-generating process for my unobserved heterogeneity?

The questions above can be (loosely) translated into these more specific questions:

  1. Should I include fixed effects (first-differenced, time-demeaned transformations, etc.) when I run my regression? Should I account for the unobserved heterogeneity using time dummy variables or individual dummy variables?
  2. Is the variation I’m interested in between individuals or within individuals? This might conflict with your choice of time or individual dummy variables.
  3. Can I use a random effects model?

That said, choosing a model for your panel data can be tricky. In what follows, I will offer some tools to help you answer some of these questions.  The first part of this exercise will use the data panel_hw.dta (can be found here); the second part will use the data wr-nevermar.dta (can be found here).

A Pooled OLS Regression

To review, let’s load the data and run a model looking at voter participation rate as a function of a few explanatory variables and regional dummy variables (WNCentral, South, Border). panel_hw.dta is a panel data set where individual = “stcode” (state code) and time = “year”. We are, then, pooling the data in the following regression.

STATA:

use panel_hw.dta

reg vaprate gsp midterm regdead WNCentral South Border

And then run an F-test on the joint significance of the included dummy variables:

test WNCentral South Border

R:

require(foreign)
voter = read.dta("/Users/kevingoulding/DATA/panel_hw.dta")

reg1 <- lm(vaprate ~ gsp + midterm + regdead + WNCentral + South + Border, data=voter)

Then run an F-test on the joint significance of the included regions:

require(car)
linearHypothesis(reg1, c("WNCentral = 0", "South = 0", "Border = 0"))

Similarly, this could be accomplished using the plm package (I recommend using this method).

require(plm)

reg1.pool <- plm(vaprate ~ gsp + midterm + regdead + WNCentral + South + Border, 
data=voter, index = c("state","year"), model = "pooling")
summary(reg1.pool)

# F-test
linearHypothesis(reg1.pool, c("WNCentral = 0", "South = 0", "Border = 0"), test="F")

A Fixed Effects Regression

Now let’s estimate the same model while accounting for state-level, time-invariant unobserved heterogeneity with fixed effects. Recall that in panel_hw.dta the individual is “stcode” (state code) and the time variable is “year”; in STATA we declare the panel structure and then use xtreg with the fe option.

STATA:

iis stcode
tis year
xtreg vaprate midterm gsp regdead WNCentral South Border, fe

In R, recall that we’ll have to transform the data into a panel data form.

R:

require(plm)

# model is specified using the "within" estimator -> includes state fixed effects.
reg1.fe <- plm(vaprate ~ gsp + midterm + regdead + WNCentral + South + Border,
data=voter, index = c("state","year"), model = "within")	
summary(reg1.fe)

Well, should we use the fixed effects model or the pooled OLS model? In R, you can run a test between the two:

pFtest(reg1.fe,reg1.pool)

Or, we can test for individual fixed effects present in the pooled model, like this:

plmtest(reg1.pool, effect = "individual")

The Random Effects Estimator

It could be, however, that the unobserved heterogeneity is uncorrelated with all of the regressors in all time periods — so-called “random effects”. This would mean that even if we did not account for these effects, we would still consistently estimate our coefficients, but their standard errors would be biased. To correct for this, we can use the random effects model, a form of Generalized Least Squares that accounts for the serial correlation that the random effects induce in the composite error term.

STATA:

xtreg vaprate midterm gsp regdead WNCentral South Border, re

R:

reg1.re <- plm(vaprate ~ gsp + midterm + regdead + WNCentral + South + Border, 
data=voter, index = c("state","year"), model = "random")	
summary(reg1.re)

Pooled OLS versus Random Effects

The Breusch-Pagan LM test can be used to determine whether you should use a random effects model or pooled OLS. The null hypothesis is that the variance of the unobserved heterogeneity is zero, i.e.

H_0: \sigma_\alpha^2 = 0
H_a: \sigma_\alpha^2 \neq 0

Failure to reject the null hypothesis implies that you will have more efficient estimates using pooled OLS.
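
For a balanced panel with n individuals and T time periods, the LM statistic is computed from the pooled OLS residuals \hat{\varepsilon}_{it} as

LM = \frac{nT}{2(T-1)} \left[ \frac{\sum_i \left( \sum_t \hat{\varepsilon}_{it} \right)^2}{\sum_i \sum_t \hat{\varepsilon}_{it}^2} - 1 \right]^2

which is asymptotically \chi^2 distributed with one degree of freedom under the null.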

STATA:

xttest0

R:

plmtest(reg1.pool, type="bp")

Fixed Effects versus Random Effects

The Hausman test can help determine whether you should use a random effects (RE) or fixed effects (FE) model. Recall that an RE model is appropriate when the unobserved heterogeneity is uncorrelated with the regressors. The logic behind the Hausman test is that if RE is the truth, both the RE and FE estimators are consistent, so you should opt for the RE estimator because it is also efficient. If FE is the truth, however, the RE estimator is inconsistent, so you must use the FE estimator. The null hypothesis, then, is that the unobserved heterogeneity \alpha_i and the regressors X_{it} are uncorrelated; equivalently, under the null the coefficient estimates of the two models are not statistically different. If you fail to reject the null hypothesis, this lends support for the use of the RE estimator. If the null is rejected, RE will produce biased coefficient estimates, so an FE model is preferred.

H_0: \text{Corr}[X_{it},\alpha_i] = 0
H_a: \text{Corr}[X_{it},\alpha_i] \neq 0
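
The test statistic compares the two coefficient vectors, weighted by the difference of their estimated covariance matrices,

H = (\hat{\beta}_{FE} - \hat{\beta}_{RE})' \left[ \widehat{\text{Var}}(\hat{\beta}_{FE}) - \widehat{\text{Var}}(\hat{\beta}_{RE}) \right]^{-1} (\hat{\beta}_{FE} - \hat{\beta}_{RE})

and is asymptotically \chi^2 distributed with degrees of freedom equal to the number of time-varying regressors.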

STATA:

xtreg vaprate midterm gsp regdead WNCentral South Border, fe
estimates store fe

xtreg vaprate midterm gsp regdead WNCentral South Border, re
estimates store re

hausman fe re

R:

phtest(reg1.fe,reg1.re)

Some plots

The following examples use the data wr-nevermar.dta

Say we are interested in plotting the mean of the variable “nevermar” over time.

STATA:

egen meannevermar = mean(nevermar), by(year)
twoway (line meannevermar year, sort), ytitle(Mean--nevermar)

R:

nmar <- read.dta(file="/Users/kevingoulding/DATA/wr-nevermar.dta")

b1 <- as.matrix(tapply(nmar$nevermar, nmar$year , mean))

plot(as.numeric(row.names(b1)), b1, type="l", main="NEVERMAR Mean", xlab = "Year", ylab = "Mean(nevermar)", col="red", lwd=2)

May 25, 2011

Surviving Graduate Econometrics with R: Fixed Effects Estimation — 3 of 8

The following exercise uses the CRIME3.dta and MURDER.dta panel data sets from Jeffrey Wooldridge’s econometrics textbook:

Wooldridge, Jeffrey. 2002. Introductory Econometrics: A Modern Approach. South-Western College Pub. 2nd Edition.

If you own the textbook, you can access the data files here.

Load and summarize the data

STATA:

use "C:\Users\CRIME3.dta"
des
sum

R:

require(foreign)
crime = read.dta(file="/Users/CRIME3.dta")
sumstats(crime)
as.matrix(sapply(crime,class))

If you haven’t yet loaded in the sumstats function, I suggest you do – you can find the code here.

A hypothesis test

See Part 2 of this series for a primer on hypothesis testing. Here, we will do one more example of testing a hypothesis of a linear restriction. Namely, from the regression equation:
\text{log}(crime_{it}) = \beta_0 + \delta_0 d78_t + \beta_1 clrprc_{i,t-1} + \beta_2 clrprc_{i,t-2} + \alpha_i + \varepsilon_{it}
where \alpha_i are “district” fixed effects, and \varepsilon_{it} is a white noise error term.
We would like to test the following hypothesis:
H_0: \beta_1 = \beta_2
H_a: \beta_1 \neq \beta_2
This can be re-written in matrix form:
H_0: R \beta = q
H_a: R \beta \neq q
Where:
R =  \begin{bmatrix}  0 & 0 & 1 & -1\\  \end{bmatrix}

\beta =  \begin{bmatrix}  \beta_0 \\  \delta_0 \\  \beta_1 \\  \beta_2 \\  \end{bmatrix}

q =  \begin{bmatrix}  0 \\  \end{bmatrix}

STATA:

reg clcrime cclrprc1 cclrprc2
test cclrprc1 = cclrprc2

R:

# Run the regression
reg1a = lm(lcrime ~ d78 + clrprc1 + clrprc2, data=crime)

# Create R and q matrices
R = rbind(c(0,0,1,-1))
q = rbind(0)

# Test the linear hypothesis beta_1  = beta_2
require(car)
linearHypothesis(reg1a,R,q)

# Equivalently, we can skip creating the R and q matrices
# and use this streamlined approach:
linearHypothesis(reg1a,"clrprc1 = clrprc2")

# Or, we can use the glh.test function in the gmodels package
require(gmodels)
glh.test(reg1a, R, q)

First-Differenced model

As a review, let’s go over two very similar models that take out individual-specific time-invariant heterogeneity in panel data analysis. Our example regression is:

Y_{it} = X_{it} \beta + \varepsilon_{it}

where individual and time period are denoted by the i and t subscripts, respectively.

The within estimator, a.k.a. the “fixed effects” model, in which individual dummy variables (intercept shifters) are included in the regression. The coefficients on the other regressors are then identified by deviations from individual-specific means (the individual dummy estimates capture those means). The new model is:

Y_{it} = X_{it} \beta + \alpha_i + \varepsilon_{it}

where \alpha_i represents the individual dummy variables.

The first-differenced model, which creates new variables reflecting the one-period change in values. The regression then becomes \Delta Y_{it} = \Delta X_{it} \beta + \Delta \varepsilon_{it}, where \Delta Y_{it} = Y_{it} - Y_{i,t-1}.

Note: These two models are very similar because both “strip out” (eliminate, control for) the variation “between” individuals in your panel data, just through slightly different transformations. The variation left over, which therefore identifies the coefficients on the other regressors, is the “within” variation, i.e. the variation “within” individuals. A tiny simulated example follows.
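
To make this concrete, here is a small self-contained sketch (simulated data, not the CRIME3 sample) showing that the dummy-variable regression and the within (demeaning) regression deliver the same slope estimate:

# Simulate a panel: 10 individuals, 5 periods each
set.seed(7)
id    <- rep(1:10, each = 5)
alpha <- rep(rnorm(10), each = 5)      # individual effects
x     <- rnorm(50)
y     <- 2*x + alpha + rnorm(50)

# (a) dummy-variable ("fixed effects") regression
b.dummy  <- coef(lm(y ~ x + factor(id)))["x"]

# (b) within regression on deviations from individual means
y.dm     <- y - ave(y, id)
x.dm     <- x - ave(x, id)
b.within <- coef(lm(y.dm ~ x.dm - 1))["x.dm"]

c(b.dummy, b.within)    # the two slope estimates coincide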

STATA:

reg clcrime cavgclr
outreg2 using H3_1312, word replace

There are two ways we can estimate the first-differenced model, given the variables included in CRIME3.dta. Since the data set already includes first-differenced variables with a “c” prefix (e.g. “clcrime” = change in “lcrime”; “cavgclr” = change in “avgclr”), we can do a simple OLS regression on the changed variables:

R:

reg2 =  lm(clcrime ~ cavgclr, data=crime)
summary(reg2)

Or, we can take a more formal approach using the plm package for panel data. This approach will prepare us for more advanced panel data methods.

require(plm)		# load panel data package

# convert the data set into a pdata.frame by identifying the
# individual ("district") and time ("year") variables in our data
crime.pd = pdata.frame(crime, index = c("district", "year"),
			 drop.index = TRUE, row.names = TRUE)

# Now, we can run a regression choosing the
# first-differenced model ("fd")
reg.fd = plm(lcrime ~ avgclr, data = crime.pd, model = "fd")
summary(reg.fd)

Back to Pooled OLS

Let’s switch over to the MURDER.dta data set to do some further regressions and analysis. First, we’ll compute a pooled OLS model for the years 1990 and 1993:

mrdrte_{it} = \delta_0 + \delta_1 d93_t + \beta_1 exec_{it} + \beta_2 unem_{it} + \alpha_{i} + \varepsilon_{it}

By using pooled OLS, we are disregarding the term \alpha_{i} in the regression equation above.

STATA:

reg mrdrte d93 exec unem if year==90|year==93

R:

mrdr = read.dta(file="/Users/MURDER.dta")
sumstats(mrdr)

mrdrYR = subset(mrdr, year == 90 | year == 93)

reg3 = lm(mrdrte ~ d93 + exec + unem, data=mrdrYR)
summary(reg3)

# convert the data set into a pdata.frame (panel format) by identifying the
# individual ("state") and time ("year") variables in our data
require(plm)
mrdr.pd = pdata.frame(mrdrYR, index = c("state", "year"),
			 drop.index = TRUE, row.names = TRUE)

# Run a pooled OLS regression - results are the same as reg3
reg3.po = plm(mrdrte ~ d93 + exec + unem, data = mrdr.pd, model = "pooling")
summary(reg3.po)

Another First-Differenced Model

STATA:

reg cmrdrte cexec cunem if year==93

R:

# We can run the regression using the variables
# provided in the data set:
reg4 = lm(cmrdrte ~ cexec + cunem, data = subset(mrdrYR,year == 93))
summary(reg4)

# Or, we can run a regression using the plm package by choosing the
# first-differenced model ("fd")
reg4.fd = plm(mrdrte ~ d93 + exec + unem, data = mrdr.pd, model = "fd")
summary(reg4.fd)

# Note: we don't need the d93 dummy anymore, so it's equivalent
# to running the regression without it:
summary(plm(mrdrte ~ exec + unem, data = mrdr.pd, model = "fd"))

The Fixed Effects model

Another way to account for individual-specific unobserved heterogeneity is to include a dummy variable for each individual in your sample – this is the fixed effects model. Following from the regression in the previous section, our individuals MURDER.dta are states (e.g. Alabama, Louisiana, California, Montana…). So, we will need to add one dummy variable for each state in our sample but exclude one to avoid perfect collinearity — the “dummy variable trap”.

In STATA, once your data are declared as panel data (identifying the individual and time variables, as with the iis/tis commands above or xtset), fixed effects estimation is accomplished by adding , fe to the end of your xtreg command.

STATA:

xtreg mrdrte exec unem, fe

In R, we can add dummy variables for each state in the following way:

R:

reg5 = lm(mrdrte ~ exec + unem + factor(state), data=mrdr)
summary(reg5)
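
Equivalently, because the dummy-variable (LSDV) approach and the within transformation yield identical slope estimates, you can obtain the same coefficients on exec and unem from plm. A quick sketch, assuming the full mrdr data frame loaded above contains the “state” and “year” columns used earlier:

require(plm)

# Within ("fixed effects") estimator on the full panel;
# the slopes on exec and unem should match those in reg5
reg5.fe <- plm(mrdrte ~ exec + unem, data = mrdr,
               index = c("state", "year"), model = "within")
summary(reg5.fe)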

See Part 4 of this series for more attention to fixed effects models, inference testing, and comparison to random effects models.

The Breusch-Pagan test for Heteroskedasticity

The Breusch-Pagan (BP) test can be done via a Lagrange multiplier (LM) test or an F-test. We will do the LM version, which means only the restricted model needs to be estimated.
Var(\varepsilon_{it}|X_{it}) = \sigma^2 \Omega
H_0: \Omega = I \Rightarrow Var(\varepsilon_{it}|X_{it}) = \sigma^2 \Rightarrow homoskedasticity
H_a: \Omega \neq I, i.e. heteroskedasticity

First, we will run the test manually in three stages:

  1. Square the residuals from the original regression \rightarrow \hat{\varepsilon}^2.
  2. Run an auxiliary regression of \hat{\varepsilon}^2 on the original regressors.
  3. Calculate the BP LM test statistic = nR^2, where R^2 is the R-squared fit measure from the auxiliary regression and n is the number of observations used in that regression.

STATA:

reg cmrdrte cexec cunem if year==93
predict resid , resid
gen resid2 = resid^2
reg resid2 cexec cunem if year==93

R:

# Breusch-Pagan test for heteroskedasticity

# Square the residuals
res4 = residuals(reg4)
sqres4 = res4^2

m4 = subset(mrdr,year == 93)
m4$sqres = sqres4

# Run auxiliary regression
BP = lm(sqres ~ cexec + cunem, data = m4)
BPs = summary(BP)

# Calculation of LM test statistic:
BPts = BPs$r.squared*length(BP$residuals)

# Calculate p-value from Chi-square distribution
# with 2 degrees of freedom
BPpv = 1-pchisq(BPts,df=BP$rank-1)

# The following code uses a 5% significance level
if (BPpv < 0.05) {
    cat("We reject the null hypothesis of homoskedasticity.\n",
    "BP = ",BPts,"\n","p-value = ",BPpv)
} else {
    cat("We fail to reject the null hypothesis; implying homoskedasticity.\n",
    "BP = ",BPts,"\n","p-value = ",BPpv)
}

Now, let’s compare the results obtained above to the function bptest() provided in the R lmtest package:

require(lmtest)
bptest(reg4)

I hope your results are exactly the same as when you did the Breusch-Pagan test manually; they should be!

White’s Test for Heteroskedasticity

White’s test for heteroskedasticity is similar to the Breusch-Pagan (BP) test; however, the auxiliary regression includes all multiplicative combinations of the regressors (levels, squares, and cross products). Because of this it can be quite bulky, and finding heteroskedasticity may simply indicate model misspecification. The null hypothesis is homoskedasticity (same as BP).

So, here we will run a special case of the White test using the fitted values of the original regression:

\hat{\varepsilon}_{it}^2 = \delta_0 + \delta_1 \hat{Y}_{it} + \delta_2 \hat{Y}_{it}^2 + v_{it}

STATA:

reg cmrdrte cexec cunem if year==93
* resid2 (the squared residuals) was generated in the previous section
predict yhat
gen yhat2 = yhat^2
reg resid2 yhat yhat2 if year==93

R:

# White's test for heteroskedasticity: A Special Case

# Collect fitted values and squared f.v. from your regression
yhat = reg4$fitted.values
yhat2 = yhat^2
m4 = NULL 			# clears data previously in m4

# create a new data frame with the three variables of interest
m4 = data.frame(cbind(sqres4,yhat,yhat2))

# Run auxiliary regression
WH = lm(sqres4 ~ yhat + yhat2, data = m4)
WHs = summary(WH)

# Calculation of LM test statistic:
WHts = WHs$r.squared*length(WH$residuals)

# Calculate p-value from Chi-square distribution
# with 2 degrees of freedom
WHpv = 1-pchisq(WHts,df=WH$rank-1)

# The following code uses a 5% significance level
if (WHpv < 0.05) {
    cat("We reject the null hypothesis of homoskedasticity.\n",
    "BP = ",WHts,"\n","p-value = ",WHpv)
} else {
    cat("We fail to reject the null hypothesis; implying homoskedasticity.\n",
    "BP = ",WHts,"\n","p-value = ",WHpv)
}

Heteroskedasticity-Robust Standard Errors

If heteroskedasticity is present in our data sample, OLS coefficient estimates remain unbiased but are inefficient, and the usual OLS standard errors are invalid. See this post for details behind calculating heteroskedasticity-robust and cluster-robust standard errors.
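
As a quick illustration, a heteroskedasticity-robust coefficient table can be produced with the sandwich and lmtest packages. A minimal sketch, applied to reg4 from above (the HC1 correction is the one that mirrors STATA’s robust option):

require(sandwich)
require(lmtest)

# Coefficient table for reg4 with heteroskedasticity-robust (HC1) standard errors
coeftest(reg4, vcov = vcovHC(reg4, type = "HC1"))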

To continue on to Part 4 of our series, Advanced Panel Data Methods, click here.

May 24, 2011

Surviving Graduate Econometrics with R: Difference-in-Differences Estimation — 2 of 8

The following replication exercise closely follows homework assignment #2 in ECNS 562. The data for this exercise can be found here.

The data concern the expansion of the Earned Income Tax Credit (EITC), legislation aimed at providing a tax break to low-income individuals. For some background on the subject, see

Eissa, Nada, and Jeffrey B. Liebman. 1996. Labor Supply Responses to the Earned Income Tax Credit. Quarterly Journal of Economics. 111(2): 605-637.

read more »

May 24, 2011

Surviving Graduate Econometrics with R: The Basics — 1 of 8

Introduction

The following is an introduction to statistical computing with R and STATA. In the future, I would like to include SAS. It is meant for the graduate or undergraduate student in econometrics who wants to use one statistical software package while his or her teacher, adviser, or friends are using a different one. I encountered this issue when I wanted to learn and use R while both my econometrics courses were taught using SAS and STATA. I will be following the homework assignments for ECNS 562: Econometrics II, taught by Dr. Christiana Stoddard in the Spring of 2011, so you may see references to STATA in the actual questions. Read further for the R code.

ACKNOWLEDGMENTS

Special thanks to Dr. Christiana Stoddard for letting me use her homework assignments and class notes to structure this blog series. In a subject that is prone to dry class experiences, her econometrics course was incredibly engaging, useful, and challenging; a true pleasure. Also, thank you to Dr. Joe Atwood for helping me get started with R, for providing insightful guidance on my code, and for supporting me in myriad ways. Roger Avalos, a fellow graduate student, provided his STATA code for this series, as well as encouragement in writing this blog. Thank you, Roger.

Let’s Get Started

For this assignment, we will be using the data available at www.montana.edu/stock/ecns403/rawcpsdata.dta – raw Current Population Survey (CPS) data.

read more »
