Introduction to Bayesian Econometrics¶
This course will cover the application of Bayesian statistical methods for econometric inference. Broadly speaking we will
- Briefly discuss sampling methods for classical statistics
- Introduce Bayes' Rule and provide an application
- Examine the use of Markov Chain Monte Carlo (MCMC) methods
  - Link to Bayes' Rule
  - Metropolis-Hastings and other samplers
  - Chain "convergence" and diagnostics
- Applications: OLS, time-series econometrics, hierarchical models
The frequentist paradigm, arguably the dominant statistical paradigm in the social sciences (and what you all have studied), relies on the following notions:
- $\beta$ is not random but neither is it known. It is a fixed quantity.
- To uncover information about $\beta$, we observe part of some process (e.g. $\mathbf{y=x\beta+\epsilon}$).
- For statistical inference, we rely on repeated trials of $\mathbf{y}$ and $\mathbf{x}$, even if this repetition rarely (if ever) occurs in the social science context.
- $\mathbf{y}$ and $\mathbf{x}$ are considered random
- The model typically attempts to uncover information about $\mathbf{\beta}$ by examining the likelihood function $prob(\mathbf{y},\mathbf{x}|\mathbf{b})$, where $\mathbf{b}$ are our estimates of $\beta$
The Bayesian paradigm tackles the issue of estimating $\beta$ by
- Treating $\beta$ as random and unknown
- Treating $\mathbf{y}$ and $\mathbf{x}$ as fixed and non-random (at least once they are recorded in your dataset)
- Uncovering information about $\mathbf{\beta}$ by examining the posterior likelihood $prob(\mathbf{b}|\mathbf{y},\mathbf{x})$, where $\mathbf{b}$ are our estimates of $\beta$
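As a preview of where the course is headed, Bayes' Rule is what links these two objects: the posterior likelihood is proportional to the (frequentist) likelihood multiplied by a prior over the parameters,

$$
prob(\mathbf{b}|\mathbf{y},\mathbf{x}) = \frac{prob(\mathbf{y},\mathbf{x}|\mathbf{b})\, prob(\mathbf{b})}{prob(\mathbf{y},\mathbf{x})} \propto prob(\mathbf{y},\mathbf{x}|\mathbf{b})\, prob(\mathbf{b})
$$

We will develop this formally later; for now the key point is that the posterior treats the parameters as random and conditions on the observed data.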
In quite a lot of instances, these two approaches give you the same estimate for $\beta$. Until recently, Bayesian statistical modeling saw limited use because calculating the posterior likelihood was computationally challenging, but advances in the theory and construction of Markov Chain Monte Carlo (MCMC) samplers, together with greater computing power, have really opened the door for Bayesian analysis of problems that might not be estimable in the frequentist paradigm (i.e., Maximum Likelihood). There is an ongoing holy war between the two statistical camps; during the semester I will attempt to highlight the pros and cons of each paradigm without taking a position on which one is better. My philosophy is that if it gets the job done, use it while being aware of its limitations and advantages.
Repeated trials in frequentist statistics¶
A good jumping off point for this course is to understand the use of sampling techniques in a classical statistical paradigm.
- Underlying all statistical inference that you have learned in statistics and econometrics is the idea of repeated trials. Bootstrapping highlights this really well.
- We will begin with an exploration of bootstrapping and see the implementation steps.
#load python libraries for this ipython notebook:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbn
import statsmodels.formula.api as smf
import warnings
warnings.filterwarnings('ignore')
plt.style.use('ggplot')
np.random.seed(12578)
Load Tobias and Koop data for time==4¶
tobias_koop=pd.read_csv('https://rlhick.people.wm.edu/econ407/data/tobias_koop_t_4.csv')
tobias_koop.head()
tobias_koop.describe()
# explore distribution of ln_wage (our dependent variable)
plt.figure(figsize=(8,6))
tobias_koop.ln_wage.plot(kind='hist',bins=30)
plt.xlabel('ln_wage')
plt.show()
# run OLS regression:
formula = 'ln_wage ~educ + pexp + pexp2 + broken_home'
mod = smf.ols(formula,data=tobias_koop)
res=mod.fit()
res.summary()
These standard errors aren't robust (although we could ask for that). As we saw in Cross-Section, we can non-parametrically bootstrap confidence intervals for our parameters. While bootstrapping is implemented in many OLS routines, we implement it manually here to see how it works:
Steps for bootstrapping:
- Loop $R$ times ($R$ = # of bootstrapped replicates)
- For each $r\in R$, sample $N$ times with replacement from rows in $\mathbf{Y}$ and $\mathbf{X}$, noting that $N$ is equal to the number of observations in the original dataset $\mathbf{Y}$ and $\mathbf{X}$. Denote each of these samples as $\mathbf{X}_r$ and $\mathbf{Y}_r$.
- Estimate an OLS model and recover estimate for replicate $r$: $\hat{\beta}_r = (\mathbf{X}_r'\mathbf{X}_r)^{-1}\mathbf{X}_r'\mathbf{Y}_r$
- Store results
- Calculate the standard deviation (our estimate for the standard errors) of each parameter estimate, or construct non-parametric confidence intervals (preferred)
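Before turning to the statsmodels implementation below, here is a minimal NumPy-only sketch of these steps, assuming we pull $\mathbf{y}$ and the design matrix $\mathbf{X}$ (with an intercept column) straight out of the dataframe; names like R_sketch and betas are just for illustration.
# Sketch only: build y and X (with an intercept column) from the dataframe
y = tobias_koop['ln_wage'].values
X = np.column_stack([np.ones(tobias_koop.shape[0]),
                     tobias_koop[['educ','pexp','pexp2','broken_home']].values])
R_sketch = 1000                       # number of bootstrap replicates
N = X.shape[0]
betas = np.zeros((R_sketch, X.shape[1]))
for r in range(R_sketch):
    rows = np.random.choice(N, size=N, replace=True)   # sample row indices with replacement
    Xr, yr = X[rows, :], y[rows]
    # OLS for this replicate: beta_r = (Xr'Xr)^(-1) Xr'yr
    betas[r, :] = np.linalg.solve(Xr.T @ Xr, Xr.T @ yr)
# bootstrap standard errors and 95% percentile intervals (columns follow X's order)
print(betas.std(axis=0))
print(np.percentile(betas, [2.5, 97.5], axis=0))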
Note, for our purposes we don't need to see each of the $R$ regression results. We only need to store the parameter estimates and move on to the next replicate. Let's investigate further the res object we created above.
How to Recover OLS Estimates from Statsmodels¶
# The parameter values can be accessed here
beta = res.params
print(beta)
print("\nExtract the education parameter")
print(beta['educ'])
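For later comparison with the bootstrap, we can also pull the classical standard errors and 95% confidence intervals off the same results object (a quick illustration using standard statsmodels attributes):
# classical (non-robust) standard errors and 95% confidence intervals
print(res.bse)
print(res.conf_int(alpha=0.05))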
Sampling with replacement¶
Fundamental to the idea of bootstrapping is to sample with replacement to preserve the shape of the underlying distribution of our parameters. We can do this quite easily using numpy tools.
To do this, suppose we want to sample with replacement from 5 rows of data indexed by [0,1,2,3,4].
So we can sample with replacement from row numbers (index) and then use that to select the rows we need for that replicate. For each replicate, run the model or calculate the statistic we are interested in, store the results, and then repeat for all subsequent replicates.
row_id = np.arange(5)
print(row_id)
for r in range(3):
    this_sample = np.random.choice(row_id, size=row_id.shape[0], replace=True)
    print("\nReplicate", r+1)
    print("Rows to sample for this replicate")
    print(this_sample)
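To see how the sampled row numbers become an actual bootstrap sample, we can hand them to .iloc (here using this_sample left over from the last replicate above); the full loop below does exactly this:
# rows can repeat -- that is the point of sampling with replacement
tobias_koop.iloc[this_sample]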
Bootstrap for our OLS Model¶
The following code uses the ideas above to perform the following steps for each replicate $r$ of $R$ total replicates:
- Sample with replacement from rows of our dataset
- Calculate the statistic of interest (our OLS parameter estimates)
- Store results
# Number of replicates
R = 2000
# store each r of the R replicates here in rows:
results_boot = np.zeros((R,res.params.shape[0]))
row_id = range(0,tobias_koop.shape[0])
for r in range(R):
    # this samples with replacement from rows in the Tobias and Koop dataset
    this_sample = np.random.choice(row_id, size=tobias_koop.shape[0],
                                   replace=True)  # gives sampled row numbers
    # Define data for this replicate:
    tobias_koop_r = tobias_koop.iloc[this_sample]
    # Estimate model
    results_r = smf.ols(formula, data=tobias_koop_r).fit().params
    # Store in row r of results_boot:
    results_boot[r,:] = results_r
# Convert results to pandas dataframe for easier analysis:
results_boot = pd.DataFrame(results_boot, columns=['b_Intercept','b_educ',
                                                   'b_pexp','b_pexp2','b_broken_home'])
results_boot.head()
results_boot.head(10) # the first 10 replicates
results_boot.describe(percentiles=[.005,.025,.05,.5,.95,.975,.995])
From the above results, we see that the 95% confidence interval for education is in the range [0.0668, 0.10517] (approximately, as these numbers will change each time you run the code above). It is tempting to interpret this confidence interval as "there is a 95% chance that $\beta$ is in the range [0.0668, 0.10517]". However, recall that in the frequentist paradigm $\beta$ is fixed and non-random. So it is either in the range [0.0668, 0.10517] or it isn't. What is random is the range [0.0668, 0.10517] itself. So a better way to verbalize the confidence interval is: if you repeated your regression analysis many times, 95% of your calculated confidence intervals would contain the true parameter $\beta$.
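The percentile bounds reported by describe() can also be pulled directly from the replicates; for example, for education (a quick check, and the numbers will differ slightly each run):
# 95% non-parametric (percentile) confidence interval for b_educ
results_boot.b_educ.quantile([0.025, 0.975])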
Let's plot our bootstrap replicates, by parameter. Note that the values of each of these replicates are completely independent (or should be): replicate 3 doesn't in any way inform us about replicate 4. For each parameter, we show the OLS estimate (black dashed line) and two confidence intervals:
- The parametric 95% confidence interval based on the bootstrap standard deviation (blue dashed lines)
- The 95% confidence interval based on the 2.5 and 97.5 percentiles of our bootstrap replicates (red dashed lines)
Focusing on educ¶
plt.figure(figsize=(10,8))
plt.xlabel('b_educ')
lw = 2
plt.axvline(beta['educ'],color='k',linestyle='dashed',lw=lw,
label='OLS Estimate')
# Method 1. 95% Parametric CI's
plt.axvline(beta['educ'] -1.96*np.std(results_boot.b_educ),color='b',
linestyle='dashed',
lw=lw,label='Par. Lower 95%')
plt.axvline(beta['educ'] +1.96*np.std(results_boot.b_educ),color='b',
linestyle='dashed',
lw=lw,label='Par. Upper 95%')
# Method 2. Non-Parametric 95% CI's
plt.axvline(np.percentile(results_boot.b_educ,2.5),color='r',
linestyle='dashed',
lw=lw,label='Non-Par. Lower 95%')
plt.axvline(np.percentile(results_boot.b_educ,97.5),color='r',
linestyle='dashed',
lw=lw,label='Non-Par. Upper 95%')
# scootch the upper limit of the x-axis a bit to the right for
# non-overlapping legend
plt.xlim([.05,.15])
results_boot.b_educ.plot(kind='hist',bins=20,alpha=.5)
plt.legend()
plt.show()
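One quick sanity check on the independence claim above: the lag-1 autocorrelation of the bootstrap draws should be close to zero. (We will return to this diagnostic when we examine MCMC chains, whose draws are typically not independent.) A minimal check:
# lag-1 autocorrelation of the bootstrap replicates for each parameter
for col in results_boot.columns:
    print(col, round(results_boot[col].autocorr(lag=1), 3))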
For all parameters:¶
# set x axis values
replicate= np.arange(R)
# plot point estimate and confidence intervals:
plt.figure(figsize=(14, 20), dpi=200)
lw = 1
plt.subplot(521)
plt.ylabel("Intercept")
plt.plot(replicate, results_boot.b_Intercept, label="Intercept",lw=lw)
plt.subplot(522)
plt.hist(results_boot.b_Intercept,lw=lw, label="b_intercept", orientation='horizontal')
plt.subplot(523)
plt.ylabel("b_educ")
plt.plot(replicate, results_boot.b_educ, label="b_educ",lw=lw)
plt.subplot(524)
plt.hist(results_boot.b_educ,lw=lw, label="b_educ",orientation='horizontal')
plt.subplot(525)
plt.ylabel("b_pexp")
plt.plot(replicate, results_boot.b_pexp, label="b_pexp",lw=lw)
plt.subplot(526)
plt.hist(results_boot.b_pexp,lw=lw, label="b_pexp",orientation='horizontal')
plt.subplot(527)
plt.ylabel("b_pexp2")
plt.plot(replicate, results_boot.b_pexp, label="b_pexp2",lw=lw)
plt.subplot(528)
plt.hist(results_boot.b_pexp,lw=lw, label="b_pexp2",orientation='horizontal')
plt.subplot(529)
plt.ylabel("b_broken_home")
plt.plot(replicate, results_boot.b_broken_home, label="b_broken_home",lw=lw)
plt.subplot(5,2,10)
plt.hist(results_boot.b_broken_home,lw=lw, label="b_broken_home",orientation='horizontal')
plt.show()
Visualizing Covariances of $\beta$:¶
sbn.jointplot(x='b_educ', y='b_Intercept', data=results_boot, kind='hex');
plt.show()
g = sbn.PairGrid(results_boot)
g.map_diag(sbn.kdeplot)
g.map_offdiag(sbn.kdeplot, cmap="Blues_d")
plt.show()
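The same relationships can be summarized numerically via the covariance or correlation matrix of the bootstrap replicates:
# bootstrap estimate of the parameter covariance matrix and its correlation matrix
print(results_boot.cov())
results_boot.corr()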