Friday, July 28, 2017

Dynamic equilibrium and ensembles (and collected results)


I previously worked out that ensembles of information equilibrium relationships have a formal resemblance to a single aggregate information equilibrium relationship involving the ensemble averages:

$$
\frac{d \langle A \rangle}{dB} = \langle k \rangle \frac{\langle A \rangle}{B}
$$

I wanted to point out that this means ensemble ratios and abstract prices will exhibit a dynamic equilibrium just like individual information equilibrium relationships if $\langle k \rangle$ changes slowly (with respect to both $B$ and now time $t$):

$$
\frac{d}{dt} \log  \frac{\langle A \rangle}{B} \approx (\langle k \rangle - 1) \beta
$$

plus terms $\sim d\langle k \rangle /dt$ where we assume (really, empirically observe) $B \sim e^{\beta t}$ with growth rate $\beta$. The ensemble average version allows for the possibility that $\langle k \rangle$ can change over time (if it changes too quickly, additional terms become important in the solution to the differential equation as well as the last dynamic equilibrium equation).
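To spell out the intermediate step: if $\langle k \rangle$ is approximately constant, the first equation integrates to a power law, from which the dynamic equilibrium follows directly:

$$
\begin{align}
\langle A \rangle & \approx a \; B^{\langle k \rangle}\\
\log \frac{\langle A \rangle}{B} & \approx (\langle k \rangle - 1) \log B + \log a\\
\frac{d}{dt} \log \frac{\langle A \rangle}{B} & \approx (\langle k \rangle - 1) \frac{d}{dt} \log B = (\langle k \rangle - 1) \beta
\end{align}
$$

using $B \sim e^{\beta t}$ in the last step.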

Generally, considering the first equation above with a slowly changing $\langle k \rangle$, we can apply nearly all of the results collected in the tour of information equilibrium chart package to ensembles of information equilibrium relationships. These have been described in three blog posts:
1. Self-similarity of macro and micro 
Derives the original ensemble information equilibrium relationship 
2. Macro ensembles and factors of production 
Lists the result for two or more factors of production (same result gives matching models) 
3. Dynamic equilibrium and ensembles 
The present post arguing the extension of the dynamic equilibrium approach to ensemble averages

Thursday, July 27, 2017

Adding race and gender to macroeconomics

Narayana Kocherlakota has an article at Bloomberg View about how macroeconomists can't keep ignoring race and gender ‒ something I agree with. In fact, I believe that ignoring race and gender has led to two major misunderstandings in macroeconomics ... two misunderstandings that can be clarified by use of the dynamic equilibrium model.

Women entering the workforce

There are a variety of explanations of the so-called "Great Inflation" of the 60s and 70s, some monetary, some focused on government spending. However, given the strong connection between labor force growth and inflation (see also the piece on demographic inflation by Steve Randy Waldman, aka Interfluidity), it seems likely that the long non-equilibrium process of women entering the workforce in the 1960s and 70s is the causal factor. The main shock to civilian labor force participation is dominated by the effect of women getting jobs, and the employment-population ratios for men and women show different general structures in terms of dynamic equilibrium:


In fact, using the charts from here to display the shocks (shown as vertical lines in the graphs above) and their width (duration), we can see that the shocks to the labor force participation and to the employment population ratio for women precede the shocks to inflation using various measures:


[added in update] Positive shocks to the measure in blue, negative shocks in red. (Note that the increase in labor force participation for women consists of a long positive shock with a few negative shocks corresponding to recessions that aren't shown.)

Racial disparities in unemployment

Another area where macro without race and gender leads to misunderstanding is in unemployment rate dynamics. Ordinary observation of unemployment statistics leads Kocherlakota to write:
Arguably the most important is that blacks ‒ especially black men ‒ are much more likely to lose their jobs. This risk of job loss is highly cyclical, which is why blacks fare so much worse than whites during recessions. For example, the black unemployment rate peaked at nearly 17 percent after the Great Recession, compared with just over 9 percent for whites.
The wrong framework (and the general lack of including race and gender) leads Kocherlakota to the wrong diagnosis in this case. The problem is not necessarily a dynamic one (i.e. due to black workers losing jobs more often than white workers), but rather one of hysteresis [1]. The overall dynamics for black and white unemployment are approximately the same (with this model indicating a similar matching function).


In the graph above, the black dynamic equilibrium is applied to white unemployment with the only difference being the starting value (about 5% instead of 10%). The model describes both sets of data roughly equally well, indicating that the issue is initial conditions (slavery, Jim Crow), not present-day dynamics. This hysteresis arises because unemployment declines at the same relative rate for both black and white workers, and both are subjected to the same shocks to the macroeconomy.

One way to imagine this is as two airplanes flying from Seattle to Chicago, with one taking off about an hour later than the other. Since both planes are subjected to the same wind conditions (macro shocks), the plane taking off later never catches up. In this case, the solution required is different from the solution to the problem as diagnosed by Kocherlakota: one would need to either make macro shocks affect black workers less, or increase employment through increased hiring. We are talking about something akin to reparations: Black Americans need to be compensated for being kept out of jobs by racist policies of the past.
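As a minimal sketch of this hysteresis mechanism (with made-up illustrative numbers, not a fit to the BLS data), two unemployment series that decline at the same relative rate and receive the same shocks maintain a constant ratio, so the initial gap never closes on its own:

```python
import numpy as np

# Dynamic equilibrium: d/dt log U = -alpha, plus common logistic shocks.
# All parameters below are hypothetical, for illustration only.
alpha = 0.09                                # relative decline rate (per year)
t = np.arange(0, 30, 0.25)                  # 30 years, quarterly steps
shock_times, shock_sizes = [8.0, 19.0], [0.5, 0.7]

def unemployment(u0):
    """Same equilibrium decline rate and same shocks; only the initial level differs."""
    log_u = np.log(u0) - alpha * t
    for t0, size in zip(shock_times, shock_sizes):
        log_u += size / (1 + np.exp(-(t - t0) / 0.5))  # logistic step up
    return np.exp(log_u)

u_white, u_black = unemployment(5.0), unemployment(10.0)
print(np.unique((u_black / u_white).round(6)))  # constant ratio of 2.0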

Just two examples

Those are just two examples I've seen in my work with the dynamic equilibrium model, but they're definitely not the only ones. The "Great Inflation" sent macroeconomics off on a wild goose chase ending in DSGE models that can't forecast, attributing the inflation to central bank policy (a view that continues to this day). If it had been understood at the time that a certain amount of inflation was inevitable because of women entering the workforce, the history of the past 40 years of macroeconomics might have been different.

...

Update: changed the EPOP graphs to have the same x- and y-axes. The original graph is here:


...

Footnotes:

[1] When I say hysteresis, I am in no way saying discrimination has ended. For example, being employed does not tell us whether someone is underemployed or paid less for the same job.

Macro ensembles and factors of production


I was inspired by Dietrich Vollrath's latest blog post to work out the generalization of the macro ensemble version of the information equilibrium condition [1] to more than one factor of production. However, as it was my lunch break, I didn't have time to LaTeX up all the steps, so I'm just going to post the starting point and the result (for now).

We have two ensembles of information equilibrium relationships $A_{i} \rightleftarrows B$ and $A_{j} \rightleftarrows C$ (with two factors of production $B$ and $C$), and we generalize the partition function analogously to multiple thermodynamic potentials (see also here):

$$
Z = \sum_{i j} e^{-k_{i}^{(1)} \log B/B_{0} -k_{j}^{(2)} \log C/C_{0}}
$$

Playing the same game as worked out in [1], except with partial derivatives, you obtain:

$$
\begin{align}
\frac{\partial \langle A \rangle}{\partial B} = & \; \langle k^{(1)} \rangle \frac{\langle A \rangle}{B}\\
\frac{\partial \langle A \rangle}{\partial C} = & \; \langle k^{(2)} \rangle \frac{\langle A \rangle}{C}
\end{align}
$$

This is the same as before, except now the values of $k$ can change. If the $\langle k \rangle$ change slowly (i.e. treated as almost constant), the solution can be approximated by a Cobb-Douglas production function:

$$
\langle A \rangle = a \; B^{\langle k^{(1)} \rangle} C^{\langle k^{(2)} \rangle}
$$

And now you can read Vollrath's piece keeping in mind that using an ensemble of information equilibrium relationships implies $\beta$ (e.g. $\langle k^{(1)} \rangle$) can change and we aren't required to maintain $\langle k^{(1)} \rangle + \langle k^{(2)} \rangle = 1$.
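As a quick consistency check (a sketch using sympy, treating the $\langle k \rangle$ as constants per the slowly-changing approximation), the Cobb-Douglas form solves both partial differential equations above:

```python
import sympy as sp

# <A> = a * B**k1 * C**k2 with (approximately) constant ensemble average k's
a, B, C, k1, k2 = sp.symbols('a B C k1 k2', positive=True)
A = a * B**k1 * C**k2

# d<A>/dB = k1 <A>/B and d<A>/dC = k2 <A>/C
assert sp.simplify(sp.diff(A, B) - k1 * A / B) == 0
assert sp.simplify(sp.diff(A, C) - k2 * A / C) == 0
print("Cobb-Douglas form satisfies both information equilibrium conditions")
```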

...

Update 28 July 2017

I'm sure it was obvious to readers, but this generalizes to any number of factors of production using the partition function

$$
Z = \sum_{i_{n}} \exp \left( - \sum_{n} k_{i_{n}}^{(n)} \log B^{(n)}/B_{0}^{(n)} \right)
$$
where instead of $B$ and $C$ (or $D$), we'd have $B^{(1)}$ and $B^{(2)}$ (or $B^{(3)}$). You'd obtain:

$$
\frac{\partial \langle A \rangle}{\partial B^{(n)}} = \; \langle k^{(n)} \rangle \frac{\langle A \rangle}{B^{(n)}}
$$

Gross National Product

I looked at NGDP data in the past with the dynamic equilibrium model (see here [1] and here [2]); however, the annual GNP time series at FRED goes back a bit further in time and includes the onset of the Great Depression. Here are the results, first using the housing and stock market "bubble" frame for the 1990s-2000s, and then the "no knowledge" frame (discussed in [1]):



Here are the GNP growth rates:



It will be interesting to see which one is the better model. The latter suggests a potential "demographic" shift of e.g. baby boomers leaving the workforce over a ten-year period centered around 2014.

Self-similarity in dynamic equilibrium

Let me say up front I am not saying the idea that stock market price time series are self-similar is new. What's new is that a specific structure (i.e. the dynamic equilibrium + shocks) appears at different scales. In the charts below, we steadily zoom in on the S&P 500 from a multi-year timescale, to a few years, to on the order of a year, down to months, discovering new shocks at smaller and smaller scales (a minimal sketch of the fitting procedure follows the charts):




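Here is that sketch: a minimal example of the dynamic equilibrium + logistic shock fit (on synthetic data with a single shock; the actual analysis uses the S&P 500 and finds multiple shocks):

```python
import numpy as np
from scipy.optimize import curve_fit

def dyn_eq(t, mu, c, a, t0, w):
    """Dynamic equilibrium growth rate mu plus one logistic shock."""
    return mu * t + c + a / (1 + np.exp(-(t - t0) / w))

# Synthetic log-price data standing in for log(S&P 500)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
log_p = dyn_eq(t, 0.05, 1.0, -0.3, 6.0, 0.4) + rng.normal(0, 0.01, t.size)

# Fit the growth rate and shock parameters
p0 = [0.1, 0.0, -0.1, 5.0, 1.0]  # initial guesses: mu, c, a, t0, w
params, _ = curve_fit(dyn_eq, t, log_p, p0=p0)
print(dict(zip(['mu', 'c', 'a', 't0', 'w'], params.round(3))))
```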

Wednesday, July 26, 2017

Updating Samuelson's family tree ...


PS This was a bit tongue in cheek.

A dynamic equilibrium history of the United States


In writing the previous post, I got the idea of collecting all of the dynamic equilibrium results for the US into a single "infographic". It was also inspired by the recently scanned 75 Years of American Finance: A Graphic Presentation, 1861-1935, the 85-foot-long detailed timeline compiled by Merle Hostetler in 1936 and available at FRASER.

Hopefully the chart is fairly self-explanatory: positive shocks in blue, negative shocks in red, recessions in beige. The "tapes" indicate the range of data analyzed with the dynamic equilibrium model. The widths of the bars are proportional to the widths of the shocks (roughly, the 1-sigma width).

U is the unemployment rate. CLF is the civilian labor force (participation rate). EPOP is the employment-population ratio (for men and women). PCE is the personal consumption expenditures price index. CPI is the consumer price index (all items), and C-S is the version available with the Case-Shiller data (see here). The Case-Shiller housing price index itself is included along with the S&P 500. AMB stands for adjusted monetary base.

The arrow indicates the non-equilibrium process of women entering the workforce (where I didn't try to decouple the recession shocks from the broad positive shock).

Here is a zoomed-in version of the post-war period:


I'll leave the possible narratives to comments.

...

Update 4pm

I added the multiple shock version of the PCE dynamic equilibrium discussed in this post on the Phillips Curve. We can resolve the main shock of the 1970s into several shocks; these are shown in purple:


Additionally, we can see the "vanishing" Phillips Curve:


Unemployment shocks are preceded by PCE inflation shocks: as unemployment recovers from the previous shock, inflation rises (unemployment goes down while inflation goes up), and because the inflation shock is ending by the time the next unemployment shock arrives, inflation is going down when unemployment is going up. That goes for the recessions of the 70s, 80s and 90s. However, the early 2000s recession doesn't have a distinct inflation shock (at least not one that the algorithm can find in the data), and the Great Recession is preceded by a fairly small shock relative to the previous ones.

And even if it wasn't fading away, the causality is uncertain here. It could well be from unemployment to inflation and not the other way around (i.e. low unemployment causes inflation, but inflation doesn't cause unemployment).

...

Update 27 July 2017

I updated all the figures with more accurate versions of the widths and years (I had read some of them off the relevant graphs). I also increased the font size a bit and enlarged the images because some of the lines were too narrow to show up except on a big image.

Tuesday, July 25, 2017

Causality in money and inflation ... plus some big questions

I noticed that the monetary base had what looks like a series of stepped transitions, so I tried the dynamic equilibrium model out on the data. The description is decent:



I am showing the results assuming a dynamic equilibrium growth rate μ of zero, but I also tried the entropy minimization and found that μ ~ 1.6%. It doesn't strongly change the results either way, so we can reasonably say that monetary base growth dynamic equilibrium is close to zero.
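For concreteness, here is a sketch of one way to implement that kind of entropy minimization (my generic reading of the procedure, demonstrated on synthetic data; the binning and optimizer details are assumptions): detrend the log series by a candidate growth rate μ and pick the μ that minimizes the Shannon entropy of the detrended distribution.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import entropy

def detrended_entropy(mu, t, log_m, bins=30):
    """Shannon entropy of the histogram of log(M) - mu*t; the right mu
    makes the detrended data cluster, minimizing the entropy."""
    counts, _ = np.histogram(log_m - mu * t, bins=bins)
    return entropy(counts + 1e-12)  # small constant avoids log(0)

# Synthetic stand-in for log monetary base: ~1.6% trend plus one big shock
rng = np.random.default_rng(1)
t = np.linspace(0, 50, 600)
log_m = 0.016 * t + 0.8 / (1 + np.exp(-(t - 35) / 2)) + rng.normal(0, 0.02, t.size)

result = minimize_scalar(detrended_entropy, bounds=(-0.1, 0.1),
                         args=(t, log_m), method='bounded')
print(f"estimated dynamic equilibrium growth rate: {result.x:.3f}")  # ~0.016
```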

I imagine many readers of econ blogs out there spitting out their coffee and saying: "Close to zero?!!" Yes, the monetary base in the absence of shocks grows about as fast as PCE inflation in the absence of shocks (π ~ 1.7%).

Aha! So, that's basically the quantity theory of money, right?

Well, no.

The interesting piece comes from the big shock in the middle part of the twentieth century. I've looked at several models of several macroeconomic observables that have this major shock, and we can play a game of "one of these things is not like the others":

NGDP [added in update]: 1976.6
NGDP/L: 1977.5 ± 0.1
CPI: 1977.7 ± 0.1
PCE: 1978.3 ± 0.1
(Prime age) CLF: 1978.4 ± 0.1
AMB [this post]: 1985.2 ± 0.1

The center of the monetary base shock comes well after the shocks to inflation, labor force participation, and output per employee. So while the PCE inflation rate and the monetary base growth rate match up in equilibrium (both ~ 1.6-1.7%), shocks to inflation are *followed* by shocks to the monetary base. Causality appears to go from inflation to "money printing", not the other way around.

As a side note, the slowdown in monetary base growth preceding the Great Recession that is used as part of the case for the claim that the Fed caused the recession (by e.g. Scott Sumner) is actually just the end of this large transition/shock in the monetary base.

I have a hypothesis that, instead of Fed policy, the labor force growth slowdown might be behind the various bubbles (dot-com, housing) and financial crises of the past few decades. Much of the growth people were accustomed to in the 60s, 70s, and 80s was due to women entering the workforce (enhanced by more minorities in the workforce due to Civil Rights legislation, and by the post-WWII baby boom). As this surge in labor force participation faded in the 1990s, reaching its new equilibrium, investors looked for new (and potentially risky) sources of growth. This led to bubbles and crashes as investors sought to maintain the rates of asset growth once supported by a growing labor force.

I am also working on a hypothesis that the Great Depression was caused by similar factors, except in that case it was the agriculture-industry transition that was ending. There was an analogous surge in labor force participation (including a surge in women entering the workforce) in the 1910s and 20s that is apparent in census data (see e.g. here or here).

The question arises: what does a "normal" economy look like? How does an economy that isn't undergoing some major shock (demographic or otherwise) function? I wrote about this before, and I think the answer is that we don't really know as there's no data. More and more I'm convinced we're flying blind here.

Wednesday, July 19, 2017

What mathematical theory is for

Blackboard photographed by Spanish artist Alejandro Guijarro at the University of California, Berkeley.

In the aftermath of the Great Recession, there has been much discussion about the use of math in economics. Complaints range from "too much math" to "not rigorous enough math" (Paul Romer) to "using math to obscure" (Paul Pfleiderer). There are even complaints that economics has "physics envy". Ricardo Reis [pdf] and John Cochrane have defended the use of math saying it enforces logic and that complaints come from people who don't understand the math in economics.

As a physicist, I've had no trouble understanding the math in economics. I'm also not averse to using math, but I am averse to using it improperly. In my opinion, there seems to be a misunderstanding among both detractors and proponents of what mathematical theory is for. This is most evident in macroeconomics and growth theory, but some of the issues apply to microeconomics as well.

The primary purpose of mathematical theory is to provide equations that illustrate relationships between sets of numerical data. That's what Galileo was doing when he rolled balls down inclined planes (comparing the distance rolled against time measured by flowing water), discovering that the distance was proportional to the square of the water volume (i.e. the time).

Not all fields deal with numerical data, so math isn't always required. Not a single equation appears in Darwin's Origin of Species, for example. And while there exist many cases where economics studies unquantifiable behavior of humans, a large portion of the field is dedicated to understanding numerical quantities like prices, interest rates, and GDP growth.

Once you validate the math with empirical data and observations, you've established "trust" in your equations. Like a scientist's academic credibility letting her make claims about the structure of nature or simplify science to teach it, this trust lets the math itself become a source for new research and pedagogy.

Only after trust is established can you derive new mathematical relationships (using logic, writing proofs of theorems) using those trusted equations as a starting point. This is the forgotten basis in Reis' claims about math enforcing logic. Math does help enforce logic, but it's only meaningful if you start from empirically valid relationships.

This should not be construed to require models to start with "realistic assumptions". As Milton Friedman wrote [1], unrealistic assumptions are fine as long as the math leads to models that get the data right. In fact, models with unrealistic assumptions that explain data would make a good scientist question her thinking about what is "realistic". Are we adding assumptions we feel in our gut are "realistic" that don't improve our description of data simply because we are biased towards them?

Additionally, toy models, "quantitative parables", and models that simplify in order to demonstrate principles or teach theory should either come after empirically successful models have established "trust", or they themselves should be subjected to tests against empirical data. Keynes was wrong when, in a letter to Roy Harrod, he said that one shouldn't fill in values in the equations. Pfleiderer's chameleon models are a symptom of ignoring this principle of mathematical theory. Falling back to the claim that a model is a simplified version of reality when it fails when compared to data should immediately prompt questions of why we're considering the model at all. Yet Pfleiderer tells us some people consider this argument a valid defense of their models (and therefore their policy recommendations).

I am not saying that all models have to perform perfectly right out of the gate when you fill in the values. Some will only qualitatively describe the data with large errors. Some might only get the direction of effects right. The reason to compare to data is not just to answer the question "How small are the residuals?", but more generally "What does this math have to do with the real world?" Science at its heart is a process for connecting ideas to reality, and math is a tool that helps us do that when that reality is quantified. If math isn't doing that job, we should question what purpose it is serving.  Is it trying to make something look more valid than it is? Is it obscuring political assumptions? Is it just signaling abilities or membership in the "mainstream"? In many cases, it's just tradition. You derive a DSGE model in the theory section of a paper because everyone does.

Beyond just comparing to the data, mathematical models should also be appropriate for the data.

A model's level of complexity and rigor (and use of symbols) should be comparable to the empirical accuracy of the theory and the quantity of data available. The rigor of a DSGE model is comical compared to how poorly the models forecast. Their complexity is equally comical when they are outperformed by simple autoregressive processes. DSGE models frequently have 40 or more parameters. Given only 70 or so years of higher-quality quarterly post-war data (and many macroeconomists only deal with data after 1984 due to a change in methodology), 40-parameter models should either perform very well empirically or be considered excessively complex. The poor performance ‒ and excessive complexity given that performance ‒ of DSGE models should make us question the assumptions that went into their derivation. The poor performance should also tell us that we shouldn't use them for policy.
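To put numbers on that: 70 years of quarterly data is only about 280 observations, so a 40-parameter model has roughly 7 observations per parameter (and restricting to post-1984 data leaves about 130 observations, or about 3 per parameter).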

A big step in using math to understand the world is when you've collected several different empirically successful models into a single paradigm or framework. That's what Newton did in the seventeenth century. He collected Kepler's, Galileo's, and others' empirical successes into a framework we call Newtonian mechanics.

When you have a mathematical framework built upon empirical successes, deriving theorems starts to become a sensible thing to do (e.g. Noether's theorem in physics). Sure, it's fine as a matter of pure mathematics to derive theorems, but only after you have an empirically successful framework do those theorems have implications for the real world. You can also begin to understand the scope of the theory by noting where your successful framework breaks down (e.g. near the speed of light for Newtonian mechanics).

A good case study for where this has gone wrong in economics is the famous Arrow-Debreu general equilibrium theorem. The "framework" it was derived from is rational utility maximization. This isn't a real framework because it is not based on empirical success but rather philosophy. The consequence of inappropriately deriving theorems in frameworks without empirical (what economists call external) validity is that we have no clue what the scope of general equilibrium is. Rational utility maximization may only be valid near a macroeconomic equilibrium (i.e. away from financial crises or recessions), rendering Arrow-Debreu general equilibrium moot. What good is a theorem telling you about the existence of an equilibrium price vector when it's only valid if you're in equilibrium? That is to say the microeconomic rational utility maximization framework may require "macrofoundations" ‒ empirically successful macroeconomic models that tell us what a macroeconomic equilibrium is.

From my experience making these points on my blog, I know many readers will say that I am trying to tell economists to be more like physics, or that social sciences don't have to play by the same rules as the hard sciences. This is not what I'm saying at all. I'm saying economics has unnecessarily wrapped itself in a straitjacket of its own making. Without an empirically validated framework like the one physics has, economics is actually far more free to explore a variety of mathematical paradigms and empirical regularities. Physics is severely restricted by the successes of Newton, Einstein, and Heisenberg. Coming up with new mathematical models consistent with those successes is hard (or would be if physicists hadn't developed tools that make the job easier like Lagrange multipliers and quantum field theory). Would-be economists are literally free to come up with anything that appears useful [2]. Their only constraint on the math they use is showing that their equations are indeed useful ‒ by filling in the values and comparing to data.

Footnotes:

[1] Friedman also wrote: "Truly important and significant hypotheses will be found to have 'assumptions' that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense) (p. 14)." This part is garbage. Who knows if the correct description of a system will involve realistic or unrealistic assumptions? Do you? Really? Sure, it can be your personal heuristic, much like many physicists look at the "beauty" of theories as a heuristic, but it ends up being just another constraint you've imposed on yourself like a straitjacket.

[2] To answer Chris House's question, I think this freedom is a key factor for many physicists wanting to try their hand at economics. Physicists also generally play by the rules laid out here, so many don't see the point of learning frameworks or models that haven't shown empirical success.

Python!


I have put together the 0.1-beta version of IEtools (Information Equilibrium tools) for Python (along with a demo Jupyter notebook looking at the unemployment rate and NGDP/L). Everything is available in my GitHub repositories. The direct link to the Python repository is:

https://github.com/infotranecon/IEtools

While I still love Mathematica (and will likely continue to use it for most of my work here), Python is free for everybody.
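As a taste of the kind of analysis in the demo notebook, here is a hypothetical sketch using only numpy and pandas (not the IEtools API; the file name is made up):

```python
import numpy as np
import pandas as pd

# Hypothetical CSV of FRED unemployment rate data (columns: DATE, UNRATE)
df = pd.read_csv('UNRATE.csv', parse_dates=['DATE'])
t = ((df['DATE'] - df['DATE'].min()).dt.days / 365.25).to_numpy()
log_u = np.log(df['UNRATE'].to_numpy())

# Rough dynamic equilibrium estimate: the median local slope of log U
# (the shocks show up as outliers in the slope distribution)
slope = np.gradient(log_u, t)
print(f"approximate dynamic equilibrium: {np.median(slope):.3f} per year")
```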