Estimating Custom Maximum Likelihood Models in Python (and Matlab)

In this post I show various ways of estimating "generic" maximum likelihood models in Python. For each, we'll recover standard errors.

We will implement a simple ordinary least squares model like this

\begin{equation} \mathbf{y = x\beta +\epsilon} \end{equation}

where \(\epsilon\) is assumed distributed i.i.d. normal with mean 0 and variance \(\sigma^2\). In our simple model, there is only a constant and one slope coefficient (\(\beta = \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix}\)).
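
For reference, under this normality assumption the log-likelihood being maximized is the standard normal-errors expression (a textbook result, not specific to any one package):

$$ \ln L(\beta,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \mathbf{x}_i\beta\right)^2 $$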

For this model, we would probably never bother going to the trouble of manually implementing maximum likelihood estimators as we show in this post. However, for more complicated models for which there is no established package or command, there are benefits to knowing how to build your own likelihood function and use it for estimation. It is also worth noting that most of the methods shown here don't use analytical gradients or Hessians, so they are likely (1) to have longer execution times and (2) to be less precise than methods where known analytical gradients and Hessians are built into the estimation routine. I might explore those issues in a later post.

tl;dr: There are numerous ways to estimate custom maximum likelihood models in Python, and what I find is:

  1. For the most features, I recommend using the GenericLikelihoodModel class from statsmodels, even though it is the least intuitive approach for programmers coming from Matlab. If you are comfortable with object-oriented programming you should definitely go this route (a minimal sketch follows this list).
  2. For the fastest run times on computationally expensive problems, Matlab will most likely be significantly faster, even with lots of code optimizations.
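
To give a flavor of the statsmodels approach, here is a minimal sketch of a GenericLikelihoodModel subclass for the normal-errors model above. The class name OLSMLE, the simulated data, and the starting values are illustrative assumptions, not the exact code from the post.

```python
import numpy as np
from scipy import stats
from statsmodels.base.model import GenericLikelihoodModel

class OLSMLE(GenericLikelihoodModel):
    """OLS with i.i.d. normal errors, estimated by maximum likelihood."""

    def loglike(self, params):
        # last element is sigma; the rest are the regression coefficients
        beta, sigma = params[:-1], params[-1]
        resid = self.endog - self.exog @ beta
        return stats.norm.logpdf(resid, scale=sigma).sum()

# illustrative usage: a constant plus one slope, as in the model above
np.random.seed(12345)
x = np.column_stack((np.ones(1000), np.random.normal(size=1000)))
y = x @ np.array([10.0, -0.5]) + np.random.normal(size=1000)

res = OLSMLE(y, x).fit(start_params=np.array([0.0, 0.0, 1.0]))
print(res.params)  # point estimates for beta_0, beta_1, sigma
print(res.bse)     # standard errors from the numerical Hessian
```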

Read more…

Reproducible Research and Literate Programming for Econometrics

Jump straight to the discussion on Stata and Emacs Org Mode

This post describes my experience implementing reproducible research and literate programming methods for commonly used econometric software packages. Since literate programming aims to store the accumulated scientific knowledge of the research project in one document, the software package must allow for the reproduction of data cleaning and data analysis steps, keep a record of the methods used, generate results dynamically and feed them into the write-up, and capture the computational environment so the document remains executable.

Perhaps most importantly, this dynamic document can be executed to produce the academic paper. The researcher shares this file with other researchers rather than only a PDF of the paper, making the research fully reproducible by executing the dynamic document. It is my view that this will be expected in most scientific journals over the next few decades.

Read more…

The Gordon Schaefer Model

Fisheries Simulation Model

In this notebook, we examine the workings of the Gordon-Schaefer Fisheries Model for a single species.

Denoting \(S(t)\) as the stock at time \(t\), we can write the population growth function as

$$ \frac{\Delta S}{\Delta t} = \frac{dS}{dt} = r S(t) \left(1- \frac{S(t)}{K} \right) $$

where
  • \(S(t)\) = stock size at time \(t\)
  • \(K\) = carrying capacity
  • \(r\) = intrinsic growth rate of the population
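
As an illustration only (the parameter values below are hypothetical, not taken from the notebook), a simple Euler discretization of this growth equation can be simulated in a few lines:

```python
import numpy as np

def simulate_stock(S0, r, K, T, dt=0.1):
    """Euler steps of dS/dt = r*S*(1 - S/K)."""
    steps = int(T / dt)
    S = np.empty(steps + 1)
    S[0] = S0
    for t in range(steps):
        S[t + 1] = S[t] + r * S[t] * (1.0 - S[t] / K) * dt
    return S

# hypothetical values: a small initial stock growing toward carrying capacity
stock_path = simulate_stock(S0=10.0, r=0.4, K=100.0, T=50.0)
print(stock_path[-1])  # approaches K as T grows
```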

Read more…

Part II: Comparing the Speed of Matlab versus Python/Numpy

In this note, I extend a previous post on comparing run-time speeds of various econometrics packages by

  1. Adding Stata to the original comparison of Matlab and Python
  2. Calculating runtime speeds by
    • Comparing full OLS estimation functions for each package
      • Stata: reg
      • Matlab: fitlm
      • Python: regression.linear_model.OLS from the statsmodels module.
    • Comparing the runtimes for calculations using linear algebra code for the OLS model: \( (x'x)^{-1}x'y \) (a sketch of this calculation follows the list)
  3. Since Stata and Matlab automatically parallelize some calculations, we parallelize the Python code using the Parallel module.
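
For the linear-algebra version in item 2, a minimal NumPy sketch looks like the following; the sample size, coefficients, and timing setup here are assumptions for illustration, not the benchmark code itself.

```python
import numpy as np
import timeit

def ols_linalg(x, y):
    """Closed-form OLS coefficients, (x'x)^{-1} x'y."""
    return np.linalg.solve(x.T @ x, x.T @ y)

# hypothetical data for a rough timing run
n = 100_000
x = np.column_stack((np.ones(n), np.random.normal(size=(n, 2))))
y = x @ np.array([10.0, -0.5, 0.5]) + np.random.normal(size=n)

print(timeit.timeit(lambda: ols_linalg(x, y), number=100))  # seconds for 100 runs
```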

Read more…

Comparing the Speed of Matlab versus Python/Numpy

Update 1: A more complete and updated speed comparison can be found here.

Update 2: Python and Matlab code edited on 4/5/2015.

In this short note, we compare the speed of Matlab and the Python scientific computing stack for a simple bootstrap of an ordinary least squares model. Bottom line (with caveats): Matlab is faster than Python with this code. One might be able to further optimize the Python code below, but it isn't an obvious or easy process (see for example advanced optimization techniques).

As an aside, this note demonstrates that even if one can't optimize the Python code sufficiently, it is possible to do computationally expensive calculations in Matlab and return the results to the IPython notebook.

Data Setup

We will bootstrap the ordinary least squares (OLS) model using 1,000 replicates. For generating the toy dataset, the true parameter values are $$ \beta=\begin{bmatrix} 10\\-.5\\.5 \end{bmatrix} $$

We perform the experiment for 3 different sample sizes (\(n = 1{,}000\), \(10{,}000\), and \(100{,}000\)). For each observation in the toy dataset, the independent variables are drawn from a multivariate normal distribution with mean and covariance

$$ \mu_x = \begin{bmatrix} 10\\10 \end{bmatrix}, \sigma_x = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix} $$

The dependent variable is constructed by drawing a vector of random normal variates from Normal(0,1). Denoting this vector as \(\epsilon\), we calculate the dependent variable as $$ \mathbf{y = X\beta + \epsilon} $$
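
A minimal NumPy sketch of this data-generating process follows; the seed and the choice of \(n\) are illustrative assumptions.

```python
import numpy as np

np.random.seed(12345)          # illustrative seed
n = 10_000                     # one of the three sample sizes

beta = np.array([10.0, -0.5, 0.5])
mu_x = np.array([10.0, 10.0])
sigma_x = np.array([[4.0, 0.0],
                    [0.0, 4.0]])

# constant plus two normal regressors with the mean and covariance above
x = np.column_stack((np.ones(n),
                     np.random.multivariate_normal(mu_x, sigma_x, size=n)))
eps = np.random.normal(size=n)   # Normal(0, 1) disturbances
y = x @ beta + eps
```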

Read more…

Tapping MariaDB / MySQL data from IPython

In this short post, I will outline how one can access data stored in a database like MariaDB or MySQL for analysis inside an IPython notebook. There are many reasons why you might want to store your data in a proper database. For me, the most important are:

  1. All of my data resides in a password-protected and more secure place than a multitude of csv, mat, and dta files scattered across my file system.

  2. If you access the same data for multiple projects, any changes to the underlying data will be propagated to your analysis, without having to update copies of project data.

  3. Having data in a central repository makes backup and recovery significantly easier.

  4. A database allows for two-way interaction: you can read tables from and write tables to your database. Rather than writing SQL, you can create database tables directly from pandas/IPython (see the sketch after this list).
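
As a flavor of that two-way interaction, here is a minimal sketch using pandas and SQLAlchemy. The connection string, table names, and the pymysql driver are assumptions for illustration, not the post's exact setup.

```python
import pandas as pd
from sqlalchemy import create_engine

# hypothetical connection string: replace user, password, host, and dbname
engine = create_engine("mysql+pymysql://user:password@localhost/dbname")

# read a table (or any query) into a DataFrame
df = pd.read_sql("SELECT * FROM landings", engine)   # 'landings' is a hypothetical table

# write a DataFrame back to the database as a new table
df.to_sql("landings_copy", engine, if_exists="replace", index=False)
```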

Read more…