# Using stata_kernel and Jupyter Lab (or Notebook) for reproducible research goodness

The method outlined in this post is, in my opinion, the very best way to run Stata and generate dynamic, reproducible research documents that you can share with co-authors, instructors, and others. It requires some setup, including a Python installation; we cover most of these steps in detail.

# Using stata_kernel and Emacs Orgmode for reproducible research goodness

This post is hopefully the last in a series of posts outlining how to use Stata in a proper dynamic document/reproducible research setting using Emacs. As of the summer of 2020, I am only using stata_kernel for my own work and no longer recommend using my customized ob-ipython.el for reasons described here.

This post shows the installation steps to get this working and some usability recommendations if using Org-mode. Before proceeding with anything below, make sure you complete the "Python Preliminaries" steps first.

# Tensorflow with Custom Likelihood Functions

This post builds on earlier ones dealing with custom likelihood functions in Python and maximum likelihood estimation with automatic differentiation. It approaches tensorflow from an econometrics perspective and is based on a series of tests and notes I developed while using tensorflow for some of my work. In my early explorations of tensorflow, nearly all of the examples I encountered were written from a machine learning perspective, making it difficult to adapt the code examples to econometric problems. For the tensorflow uninitiated who want to dive in (like me!), I hope this will prove useful.

The goals of the post:

1. Some tensorflow basics I wish I had known before I started this work
2. Define a custom log-likelihood function in tensorflow and differentiate it with respect to the model parameters, illustrating how, under the hood, tensorflow's model graph is designed to calculate derivatives "free of charge" (no extra programming required and little to no additional compute time).
3. Use the tensorflow log-likelihood to estimate a maximum likelihood model using tensorflow_probability.optimizer capabilities.
4. Illustrate how the tensorflow_probability.mcmc library can be used with custom log-likelihoods.
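For reference, here is a minimal sketch of the kind of custom log-likelihood the post builds, written in plain NumPy (the parameter values and the `log_sigma` reparameterization are my own illustrative choices). In the post itself the same function would be written with tensorflow ops so that the graph supplies the gradient automatically:

```python
import numpy as np

def normal_loglik(params, y, x):
    """Log-likelihood of a linear model with normal errors.

    params = [b0, b1, log_sigma]; log_sigma keeps sigma positive.
    Written with tensorflow ops instead of numpy ones, the same
    function would get its gradient "for free" from the graph.
    """
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - b0 - b1 * x
    n = y.shape[0]
    return (-n / 2 * np.log(2 * np.pi)
            - n * log_sigma
            - np.sum(resid ** 2) / (2 * sigma ** 2))

# simulated data (hypothetical "true" parameters: b0=1, b1=2, sigma=0.5)
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=500)
print(normal_loglik(np.array([1.0, 2.0, np.log(0.5)]), y, x))
```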

# Make matplotlib histograms look like R's

I prefer the look of R's histograms. This short post pulls together some resources for mimicking R histograms in Matplotlib.
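One way to approximate R's default `hist()` look is unfilled bars with black edges, outward ticks, and no top or right box; the exact settings below are my own approximation, not a definitive recipe:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
data = rng.normal(size=1000)

fig, ax = plt.subplots()
# R-style: white bars outlined in black
ax.hist(data, bins=20, color="white", edgecolor="black")
# R-style: ticks point outward, no top/right spines
ax.tick_params(direction="out")
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_title("Histogram of data")
fig.savefig("hist.png")
```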

# Teaching with Stata

This post skews towards recommending the stata add-on "markstat". For most of my students this is still recommended. For those that have some python interest/skills and especially those that have already installed jupyter notebook or lab, I highly recommend stata_kernel and jupyter notebook (or lab).

This post is a followup to two earlier blog posts on reproducible research found here and here. This post focuses on my usage of Stata for classroom assignments turned in by students. These assignments entail

1. Model Description including mathematical equations (Latex)
2. Data Summaries and Figures
3. Stata Code
4. Stata Results
5. Quality publishing system to produce a problem set document containing all of the above elements
6. Easy for students to use (given a willingness to learn the markdown syntax)

These are different from my own research requirements. For me, emacs org-mode is the best tool for the reasons I outline in the prior posts linked above. For my students, however, learning Emacs and org-mode is totally impractical. This post quickly surveys the three available options: Markdoc, Markstat, and Jupyter Notebook.

# Using Autograd for Maximum Likelihood Estimation

Thanks to an excellent series of posts on the python package autograd for automatic differentiation by John Kitchin (e.g. More Auto-differentiation Goodness for Science and Engineering), this post revisits some earlier work on maximum likelihood estimation in Python and investigates the use of auto differentiation. As pointed out in this article, auto-differentiation "can be thought of as performing a non-standard interpretation of a computer program where this interpretation involves augmenting the standard computation with the calculation of various derivatives."

Auto-differentiation is neither symbolic differentiation nor numerical approximation using finite-difference methods. What auto-differentiation provides is code augmentation: code for the derivatives of your functions is provided free of charge. In this post, we will use the autograd package in Python after defining a function in the usual numpy way. Another auto-differentiation choice in Python is the Theano package, which is used by PyMC3, a Bayesian probabilistic programming package that I use in my research and teaching. There are probably other implementations in Python, as auto-differentiation is becoming a must-have in the machine learning field. Implementations also exist in C/C++, R, Matlab, and probably other languages.
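To make the "non-standard interpretation of a computer program" idea concrete, here is a toy forward-mode implementation using dual numbers. This is not how autograd works internally (autograd uses reverse mode), and the `Dual` class is my own illustration, but it shows how the derivative computation rides along with the ordinary computation:

```python
class Dual:
    """A number that carries its value and its derivative together."""
    def __init__(self, val, deriv=0.0):
        self.val, self.deriv = val, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # the product rule is applied alongside the multiplication itself
        return Dual(self.val * other.val,
                    self.deriv * other.val + self.val * other.deriv)
    __rmul__ = __mul__

def f(x):
    # ordinary-looking code: f(x) = 3x^2 + 2x + 1
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)   # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.deriv)  # f(2) = 17, f'(2) = 6*2 + 2 = 14
```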

The three primary reasons for incorporating auto-differentiation capabilities into your research are

1. In nearly all cases, your code will run faster. For some problems, much faster.
2. For difficult problems, your model is likely to converge closer to the true parameter values and may be less sensitive to starting values.
3. Your model will provide more accurate calculations for things like gradients and hessians (so your standard errors will be more accurately calculated).

With auto-differentiation, gone are the days of deriving analytical derivatives and programming them into your estimation routine. In this short note, we show a simple example of auto-differentiation, expand on it for maximum likelihood estimation, and show that for problems where likelihood calculations are expensive, or where many parameters are being estimated, there can be dramatic speed-ups.

# Stata and Literate Programming in Emacs Org-Mode

Important Note: The following post is outdated and is no longer the recommended approach for running Stata in Org-mode. Please see this post on using Emacs with Jupyter and stata_kernel for a method that works and is more robust going forward.

Stata is a statistical package that lots of people use, and Emacs Org-mode is a great platform for organizing, publishing, and blogging your research. In one of my older posts, I outlined the relative benefits of Org-mode compared to other packages for literate programming. At that time, I argued it was the best way to write literate programming documents with Stata (if you are willing to pay the fixed costs of learning Emacs). I still believe that, and I use it a lot for writing course notes, emailing students with code and results, and even for drafting manuscripts for publishing.

Despite how good Emacs Org-mode is for research involving Stata, Stata is still something of a second-class citizen compared to packages like R or Python. While it is functional, it can be a little rough around the edges, and since not many people use Stata with Emacs, finding answers can be tough. This post does three things:

1. Demonstrates some issues with using Stata in Org-mode
2. Introduces an updated version of ob-stata.el. With only minor modifications, this version avoids some issues with the current version of ob-stata found here. My version of ob-stata.el can be downloaded from gitlab.
3. Provides full setup instructions that enable code highlighting in html and latex export.

# Estimating Custom Maximum Likelihood Models in Python (and Matlab)

In this post I show various ways of estimating "generic" maximum likelihood models in python. For each, we'll recover standard errors.

We will implement a simple ordinary least squares model like this

$$\mathbf{y = x\beta +\epsilon}$$

where $\epsilon$ is assumed distributed i.i.d. normal with mean 0 and variance $\sigma^2$. In our simple model, there is only a constant and one slope coefficient ($\beta = \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix}$).

For this model, we would probably never bother going to the trouble of manually implementing maximum likelihood estimators as we show in this post. However, for more complicated models for which there is no established package or command, there are benefits to knowing how to build your own likelihood function and use it for estimation. It is also worth noting that most of the methods shown here don't use analytical gradients or hessians, so they are likely (1) to have longer execution times and (2) to be less precise than methods where known analytical gradients and hessians are built into the estimation method. I might explore those issues in a later post.

tl;dr: There are numerous ways to estimate custom maximum likelihood models in Python, and what I find is:

1. For the most features, I recommend using the GenericLikelihoodModel class from statsmodels, even if it is the least intuitive approach for programmers familiar with Matlab. If you are comfortable with object-oriented programming, you should definitely go this route.
2. For the fastest run times on computationally expensive problems, Matlab will most likely be significantly faster, even with lots of code optimization.
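The "generic" approach can be sketched in a few lines. This is a minimal example using `scipy.optimize.minimize` on the negative log-likelihood, with standard errors taken from the BFGS inverse-Hessian approximation; it is one of several routes (and not the GenericLikelihoodModel route), and the simulated parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# simulated data for y = 1 + 2x + e, e ~ N(0, 1)
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)   # keeps sigma > 0 during optimization
    resid = y - b0 - b1 * x
    return (n * np.log(sigma) + 0.5 * n * np.log(2 * np.pi)
            + np.sum(resid ** 2) / (2 * sigma ** 2))

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, log_sigma_hat = res.x
# standard errors from the (approximate) inverse Hessian
se = np.sqrt(np.diag(res.hess_inv))
print(res.x, se)
```

Note that `hess_inv` here is only the BFGS approximation; for publication-quality standard errors you would compute the Hessian more carefully.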

# Reproducible Research and Literate Programming for Econometrics

This post describes my experience implementing reproducible research and literate programming methods with commonly used econometric software packages. Since literate programming aims to store the accumulated scientific knowledge of the research project in one document, the software package must allow the data cleaning and data analysis steps to be reproduced, store a record of the methods used, generate results dynamically and use them in the write-up, and be executable, including a description of the computational environment.

Perhaps most importantly, this dynamic document can be executed to produce the academic paper. The researcher shares this file with other researchers, rather than only a PDF of the paper, making the research fully reproducible by executing the dynamic document. It is my view that this will be expected by most scientific journals over the next few decades.

# The Gordon Schaefer Model

### Fisheries Simulation Model

In this notebook, we examine the workings of the Gordon-Schaefer Fisheries Model for a single species.

Denoting $S(t)$ as the stock at time $t$, we can write the population growth function as

$$\frac{dS}{dt} = r S(t) \left(1- \frac{S(t)}{K} \right)$$

where

- $S(t)$ = stock size at time $t$
- $K$ = carrying capacity
- $r$ = intrinsic growth rate of the population
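The growth function is easy to explore numerically. In this sketch the values of $r$ and $K$ are arbitrary illustrative choices; growth is zero at $S=0$ and $S=K$, and peaks at $S=K/2$ with value $rK/4$ (the biological basis of maximum sustainable yield):

```python
import numpy as np

def growth(S, r=0.5, K=100.0):
    """Logistic surplus growth: dS/dt = r*S*(1 - S/K)."""
    return r * S * (1 - S / K)

# evaluate growth over the full range of stock sizes
S = np.linspace(0.0, 100.0, 201)
G = growth(S)
print(S[np.argmax(G)], G.max())  # peak at S = K/2 = 50, growth = r*K/4 = 12.5
```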