Sunday, January 21, 2018

Money is the aether of macroeconomics

So I've never really understood Modern Monetary Theory (MMT). In some sense, I can understand it as a counter to the damaging "household budget" and "hard money" views of government finances. To me, though, it still cedes ground to the equally damaging "money is all-important" message of monetarism and the so-called Austrian school, a message that manifests even today when a "very serious person" tells you it's really the Fed, not Congress or the President, that controls the path of the economy and inflation, even though neither inflation nor recessions are well understood in academic macroeconomics. People have a hard time giving up talking about money.

Austrian school? Yes. Austrian school. This dawned on me some time ago when I read Noah Smith's steps for combating the monetary "hive mind" he says is pervasive in finance:
So how does one extract an individual human mind from this hive mind? That is always a tricky undertaking. But I've found two things that seem to have an effect: 
Method 1: Introduce them to MMT. MMT is a great halfway house for recovering Austrians.
It does make sense to think of MMT as a way for an Austrian school devotee to wrap their head around quantitative easing not causing inflation without abandoning too many priors. They just have to nudge their target for the "right" amount of inflation a bit higher (or even just to the Fed's ostensible target of 2%).

I came across a link in several places in my Twitter feed the other day (which is why I decided to write this post) that's actually a really good explainer of MMT. It also helps explain this connection to Austrian school economics. Read these two quotes; first: 
Money is created effortlessly every day on computers in large numbers. It’s our access to real resources that is limited.
and second:
As the issuer of the currency, governments have the ability to out-bid any private sector business or even control sectors of the economy, such as education, public infrastructure or health care (nations choose varying approaches). Governments should be held accountable to act responsibly when competing for certain scarce resources in the economy to avoid undesired levels of price escalation. 
At the same time, governments have too often been guilty of the opposite problem – not managing the currency in a way that maintains domestic full employment and acceptable base living standards.
Now read Ludwig von Mises:
In theoretical investigation there is only one meaning that can rationally be attached to the expression Inflation: an increase in the quantity of money (in the broader sense of the term, so as to include fiduciary media as well), that is not offset by a corresponding increase in the need for money (again in the broader sense of the term), so that a fall in the objective exchange-value of money must occur.
In both cases, money is simply a tool to move real resources (i.e. the real goods and services money is needed for). The question of inflation then becomes a question of whether there are too many or not enough real resources to be moved with money, as well as a level of inflation we define as "right" (with traditional Austrians usually going for 0% and MMT-ers going for something like 4% or more). The other conclusions generally follow from this (e.g. a sovereign government can never run out of its own currency, only produce excessive inflation in MMT). And if inflation hit 10% or more, both MMT and Austrian economics could find themselves on the same page. To put it in physics terms, the theories converge as inflation becomes large compared to the inverse length of the business cycle.

I'm not saying these movements are politically aligned — Austrians tend to be more conservative and MMT-ers more liberal. The issue here is that where these theories converge (at high inflation) is also the only place where they're supported by empirical data. Inflation really seems to be proportional to whatever you might think of as money when inflation is high. As we'd say in physics, it's a great effective theory. But at moderate levels of inflation, the theory breaks down. As I wrote in my post on what to do when your theory is rejected, we should set a scale (inflation ~ 10%/y or 10 years, remarkably comparable to the observed period between recessions) and let our imaginations run wild with whatever model fits the data, not blindly apply a high inflation theory to low inflation. Constraining us to thinking about "money" is tying our hands.

So instead of saying "money is just a tool for moving real resources, therefore money is all-important to the economy", what if we say "money is just a tool for moving real resources, therefore (except in extreme circumstances) money doesn't matter"?

Usefully, these views turn out to be transparently expressed in the information equilibrium framework. This framework allows me to more precisely write down what it means for something to move distributions of real resources around — which is equivalent to moving the information specifying the distributions around. Claude Shannon invented the field that studies this specific subject (information theory), and I think the idea that "money" (whatever you mean by it) is most fruitfully thought of as medium of information flow. Let's consider aggregate demand and aggregate supply, assuming they match in equilibrium. We can then say:

(1) P ≡ ∂AD/∂AS = k AD/AS

We can introduce "money" M by using the chain rule [0] in calculus plus M/M = 1:

(2) (∂AD/∂M) (∂M/∂AS) = k (AD/M) (M/AS)

Here, "money" is simply functioning as a tool moving "real resources" AS. You could insert anything in that equation: B for bonds or bitcoin. Or G for government debt. If aggregate demand and aggregate supply are in information equilibrium, and money is in information equilibrium with demand, then money is in information equilibrium with supply, i.e.

(3a) ∂AD/∂M = k₁ AD/M

implies (by dividing Eq (2) by Eq (3a))

(3b) ∂M/∂AS = k₂ M/AS

Note that the left hand side of (1) is the exchange rate for a piece of the aggregate economy — i.e. the price level P. Now Eq (1) tells us

log AD ~ k log AS 

as well as 

log P ~ (k −1) log AS
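For anyone who wants to check the algebra, here's a minimal sympy sketch (my own illustration, nothing official) that solves Eq (1) as a differential equation and recovers these log-linear relations:

```python
# Minimal check that the information equilibrium condition dAD/dAS = k AD/AS
# has a power law solution, i.e. log AD ~ k log AS and log P ~ (k - 1) log AS.
import sympy as sp

AS, k, c = sp.symbols('AS k c', positive=True)
AD = sp.Function('AD')

sol = sp.dsolve(sp.Eq(AD(AS).diff(AS), k * AD(AS) / AS), AD(AS))
print(sol)  # AD(AS) = C1*AS**k

# Price level P = k AD/AS evaluated on the power-law solution AD = c AS**k
P = k * (c * AS**k) / AS
print(sp.expand_log(sp.log(P), force=True))  # log(c) + log(k) + (k - 1)*log(AS)
```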

If we define AD ≡ P Y to use a common symbol in economics for real output (Y), then we find — using the right hand side of Eq (1):

AD ≡ P Y = k (AD/AS) Y

so AS = k Y

That is to say "real output" is directly related to "real resources" in equilibrium. But Eq (3a) also tells us that log AD ~ k₁ log M which means that if "money" grows rapidly compared to real resources AS (i.e. Y is approximately constant), we also find

(4) log P ~ k₁ log M

This latter relationship requires a disequilibrium between money and real resources (AS) because the equilibrium allowing us to write (3a,b) also implies log Y ~ log AS ~ (1/k₂) log M making

(5) log P ~ (k₁ − 1/k₂) log M

reducing the inflation rate in (4) — and in fact requiring some strong restrictions on the form of M (and the relationship between k₁ and k₂) if AD and AS are in equilibrium. Basically, the quantity theory of money, as well as the idea that money is the source of inflation, requires the disequilibrium between money and real resources that both Austrian and MMT devotees claim. That's the nugget of truth. But empirically, (4) is only roughly true for economies where inflation is well above 10%, with M identified with base money.
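To make the difference between Eq (4) and Eq (5) concrete, here's a toy comparison (the index values are purely illustrative, not fitted to anything):

```python
# Toy comparison (illustrative values): price level response to money growth
# in the disequilibrium case, Eq. (4), versus the equilibrium case, Eq. (5),
# where growth in AS partially offsets the growth in M.
k1, k2 = 1.8, 1.2            # hypothetical information equilibrium indices
money_growth = 0.10          # 10% growth in M (log points)

inflation_diseq = k1 * money_growth              # Eq. (4): AS held fixed
inflation_eq = (k1 - 1.0 / k2) * money_growth    # Eq. (5): AS grows with M
print(inflation_diseq, inflation_eq)             # 0.18 vs ~0.097
```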

But since M was arbitrary (it is introduced via two mathematical identities), the typical case of an economy near equilibrium —  i.e. not in recession or experiencing hyperinflation per Eq (4) —  should be independent of an arbitrary redefinition of the medium of information flow. Whether we say we exchange work for money and then money for goods, or we just exchange work for goods doesn't matter unless you're in hyperinflation or recession. You might say the latter is extremely important, but it turns out economies aren't in recession most of the time (a few quarters every ~ 8 years for the US [1]) so most of the inflation that happens isn't monetary [2]. 

Whatever you think money is, it doesn't really matter.

At least if you're not in hyperinflation or possibly right in the moment of a financial system seizing up as some theories of the financial crisis shock of 2008 propose.

That's the conclusion we should be drawing from the idea that money is "just a tool" to move information about real resources around. Much like how air doesn't really matter to the transmission of sound waves under typical conditions in a room (all the physics of molecules and thermodynamics is subsumed into a constant speed) —  it's just a tool to move vibration information from one point to another —  money appears to have little to do with the bulk of inflation from what empirical data is available. In fact, the inflation rate went right through the financial crisis with nary a blip right when commercial paper —  one of the major mechanisms by which large payrolls are funded —  lost its moneyness.

So what is inflation if it's not monetary? As Noah says in his post linked above, inflation is "one of the biggest mysteries of macroeconomics". My intuition is telling me that inflation is demographic — the shocks to inflation are contemporary with or follow shocks to the labor force size (coupled with the structure of the fading Phillips curve). Shocks to the monetary base follow the shocks to inflation. You can also read Steve Randy Waldman's account. Social factors leading to the baby boom and women entering the workforce were the likely real drivers of inflation (i.e. "real resources" like labor that money was just a tool to help move around), and money was just along for the ride.

Regardless of whether you like the information equilibrium take or whether you find it useful, the key fact is that inflation —  except in cases of hyperinflation —  empirically isn't related to "money" regardless of what you think money is [3]. The "money is all important" view —  regardless of your pet theory of money —  is based on a facile extrapolation from a completely different regime [4]. That view might be what's behind all these various measures of "money": money has to be important to unemployment and inflation, therefore some measure must exist that makes the correlation manifest. M1? No, M2! Interest rates! No, it's government debt! No! It's NGDP expectations!

It's all reminiscent of the aether in physics. Something must be the medium in which light waves oscillate! Aether dragging! No, partial aether dragging! Really, the aether is just a tool to move electromagnetic energy around.

Money is the aether of macroeconomics [5].

Footnotes

[0] The chain rule is dy/dx = (dy/dz) (dz/dx).

[1] I don't want to be flippant about the actual human suffering in recessions, but I think it is better for that suffering in the long run to have empirically accurate theory that can yield real solutions than dubious monetary maxims that claim to help.

[2] "Most" of the inflation in post-war US economic history was caused by a large shock centered in the late 70s. Aside from that period, inflation has been roughly constant at approximately 2.5% (CPI all items) or 1.7% (core PCE).

[3] Unless you think money is people, which is a slogan I could get behind —  at least in terms of empirical data.

[4] Some theories say that hyperinflation is actually a political phenomenon, meaning even there the correlation between money and inflation may be subordinate to the actual process.

[5] I am probably trolling here more than I should be, but it really doesn't put me that far from Paul Romer who wrote up a menagerie of terms for economic concepts including aether and phlogiston.

Wednesday, January 17, 2018

What to theorize when your theory's rejected

Sommerfeld and Bohr: ad hoc model builders rejecting Newtonian physics ... for action p dx ~ h (ca. 1919)
I was part of an epic Twitter thread yesterday, initially drawn into a conversation about whether the word "mainstream" (vs "heterodox") is used in the natural sciences (to which I said: not really, but the concept exists). There was one sub-thread that asked a question that is really more a history of science question (I am not a historian of science, so this is my own distillation of others' work as well as a couple of my undergrad research papers). It began with Robert Waldmann tweeting to Simon Wren-Lewis:
... In natural sciences hypotheses don't survive statistically significant rejection as they do in economics.
Simon's response was:
They do if there is no alternative theory to explain them. The relevant question is what is an admissible theory.
To which both Robert and I said we couldn't think of any examples where this was the case. Simon Wren-Lewis then asks an interesting question about what happens when your theory starts meeting the headwind of empirical rejection:
How can that logically work[?] Do all empirical deviations from the (at the time) believed theory always come along at the same time as the theory that can explain those observations? Or in between do people stop doing anything that depends on the old theory?
The answer to the second question is generally "no". Some examples followed, but Twitter can't really do them justice. So I thought I'd write a blog post discussing some case studies in physics of what happens when your theory's rejected.

The Aether

The one case I thought might be an example where natural science didn't reject a theory (which is why I qualified my claim above to post-war science) was the aether: the substance posited to be the medium in which light waves were oscillating. The truth was that this theory wasn't invented to make sense of any particular observations (Newton thought it explained diffraction), but rather to soothe the intuition of physicists (specifically Fresnel's, who developed the wave theory of light in the early 1800s). If light is a wave, it must be a wave in something, right? The aether was terribly stubborn for a physical theory in the Newtonian era. Some of the earliest issues arose with Fizeau's experiments in the 1850s. The "final straw" in the traditional story was the Michelson and Morley experiment, but experiments continued to test for the existence of "aether wind" for some years later (you could even call this 2009 precision test of Lorentz invariance a test of the aether).

So here we have a case where a hypothesis was rejected and it was over 50 years between the first rejection and when the new theory "came along". What happened in the interim? Aether dragging. Actually, the various experiments were considered confirmations of particular hypotheses about how the aether interacts with matter (even including Michelson and Morley's).

But Fresnel's wave theory of light didn't really need the aether, and there was nothing that the aether did in Fresnel's theory besides exist as a medium for transverse waves. Funnily enough, this is actually a problem because the aether apparently didn't support longitudinal waves, which makes it very different from any typical elastic medium. Looking back on it, it really doesn't make much sense to posit the aether. To me, that implies its role was solely to soothe the intuition; since we as physicists have long given up that intuition, we can't really reconstruct how we would have thought about it at the time, in much the same way we can't imagine what writing looked like to us before we learned how to read.

So in this case study, we have a theory that was rejected before the "correct" theory came along, and physicists continued to use the "old theory". However, the problem with this as an example of Simon's contention is that the existence of the aether didn't have particular consequences for the descriptions of diffraction and polarization (the "old theory") for which it was invented. It was the connection between aether and matter that had consequences — in a sense, you could say this connection was assumed in order to be able to try and measure it. I can't remember the reference, but someone once wrote that the aether experiments seemed to imply that nature was conspiring in such a way as to make the aether undetectable!

The Precession of Mercury

This case study brought up by Simon Wren-Lewis better represents what happens in natural sciences when data casts doubt on a theory. Precision analysis of astronomical data in the mid-1800s by Le Verrier led to one of the most high profile empirical errors of Newton's gravitational theory: it got the precession of Mercury wrong by several arc seconds per century. As Simon says: physicists continued to use Newton's "old" theory (and actually do so to this day) for nearly 50 years until the "correct" general theory of relativity came along.

But Newton's old theory was wildly successful (the observed error was only about 40 arc seconds per century). In one century, Mercury travels about 540 million seconds of arc, meaning this error is on the order of one part in ten million. No economic theory is that accurate, so we could say that this case study is actually a massive case of false equivalence.
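(A quick back-of-the-envelope check of that figure, using rough numbers:)

```python
# Rough check of the size of Mercury's anomalous precession relative to the
# total angle it sweeps out in a century (all numbers approximate).
mercury_period_yr = 88.0 / 365.25
orbits_per_century = 100.0 / mercury_period_yr        # ~ 415 orbits
arcsec_per_orbit = 360 * 3600                         # 1,296,000 arcsec
total_arcsec = orbits_per_century * arcsec_per_orbit  # ~ 5.4e8 arcsec
anomaly_arcsec = 43.0                                 # unexplained precession per century
print(anomaly_arcsec / total_arcsec)                  # ~ 8e-8, about 1 part in 10 million
```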

However, I think it is still useful to understand what happened in this case study. In our modern language, we would say that physicists set a scope condition (region of validity) based on a relevant scale in the problem: the radius of the sun (R). Basically, Newton's theory is safe when the perihelion of the orbit r is large relative to R; as r approaches R, other effects potentially enter. And at R/r ~ 2%, this ratio is larger for Mercury than for any other planet (Mercury is in a 3:2 spin-orbit resonance with the sun). Several ad hoc models of the sun's mass distribution (as well as other effects) were invented to try to account for the difference from Newton's theory (as mentioned by Robert). Eventually general relativity came along (setting a scale — the Schwarzschild radius 2 G M/c² — in terms of the strength of the gravitational field based on the sun's mass M and the speed of light, not its radius). Despite how weird it was to think of the possibility of e.g. black holes or gravitational waves as fluctuations of space-time, the theory was quickly adopted because it fit the data.

The scale R set up a firewall preventing Mercury's precession from burning down the whole of Newtonian mechanics (which was otherwise fairly successful), and ad hoc theories were allowed to flourish on the other side of that firewall. This does not appear to happen in economics. As Noah Smith says:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions"). But it's an important topic to think about.
And as Simon says in his tweet, economists just go on using rejected theory elements and models without limiting their scope or opening the field to ad hoc models. This is also my own experience reading the economics literature.

Old Quantum Theory

Probably my favorite case study is so-called old quantum theory: the collection of ad hoc models that briefly flourished between Planck's quantum in 1900 and Heisenberg's quantum mechanics in 1925. By the turn of the century, lots of problems had started to arise with Newtonian physics (though with the caveat that it was mostly wildly successful, as mentioned above). There was the ultraviolet catastrophe (a singularity as wavelength goes to zero), which was related to blackbody radiation. Something was happening when the wavelength of light started to get close to the atomic scale. Until Planck posited the quantum, several ad hoc models (including ones based on atomic motion) were invented to give different functional forms for blackbody radiation, in much the same way different models of the sun allowed for possible explanations of Mercury's precession.

In much the same way the radius of the sun set the scale for the firewall for gravity, Planck set the scale for what would become quantum effects by specifying a fundamental unit of action (energy × time or momentum × distance) now named after him: h. Old quantum theory set this up as a general principle by saying phase space integrals could only come in integer multiples of h (Bohr-Sommerfeld quantization). Now h = 6.626 × 10⁻³⁴ J·s is tiny on our human scale, which is related to Newtonian physics being so accurate (and still used today); again, using this as a case study for economics is another false equivalence as no economic theory is that accurate. But in this case, Newtonian physics was basically considered rejected within the scope of old quantum theory and stopped being used. That rejection was probably a reason why quantum mechanics was so quickly adopted (notwithstanding its issues with intuition that famously flustered Einstein and continue to this day). Quantum mechanics was invented in 1925, and by the 1940s physicists were working out the renormalization of quantum field theories, putting the finishing touches on a theory that is the most precise ever developed. Again, it didn't really matter how weird the theory seemed (especially at the time) because the only important criterion was fitting the empirical data.

There's another way this case study shows a difference between the natural sciences and economics. Old quantum theory was almost immediately dropped when quantum mechanics was developed, and ceased to be of interest except historically. Its one major success lives on in name only as the Bohr energy levels of Hydrogen. However, Paul Romer wrote about economic models using the Bohr model as an analogy for models like the Solow model that I've discussed before. Romer said:
Learning about models in physics–e.g. the Bohr model of the atom–exposes you to time-tested models that found a good balance between simplicity and insight about observables.
Where Romer sees a "balance between simplicity and insight" that might well be used if it were an economic model, this physicist sees a rejected model that's part of the history of thought in physics. Physicists do not learn the Bohr model (you learn of its existence, but not the theory). The Bohr energy level formula turned out to be correct, but today's undergraduate physics students derive it from quantum mechanics, not from "old quantum theory" and its Bohr-Sommerfeld quantization.
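For reference, here is roughly what the old quantum theory calculation looked like: a standard textbook sketch (circular orbit, Gaussian units), not Bohr's original 1913 presentation.

```latex
% Bohr-Sommerfeld quantization for a circular orbit in hydrogen (Gaussian units)
\oint p\,dq = 2\pi m v r = n h \quad\Rightarrow\quad L = m v r = n\hbar
% Circular orbit: the Coulomb force supplies the centripetal acceleration;
% combining with L = n hbar gives the quantized radii
\frac{m v^{2}}{r} = \frac{e^{2}}{r^{2}} \quad\Rightarrow\quad r_{n} = \frac{n^{2}\hbar^{2}}{m e^{2}}
% Energy of the quantized orbits
E_{n} = \frac{1}{2} m v^{2} - \frac{e^{2}}{r_{n}} = -\frac{m e^{4}}{2\hbar^{2} n^{2}} \approx -\frac{13.6\ \mathrm{eV}}{n^{2}}
```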

A Summary

There is a general pattern where some empirical detail is at odds with a theory in physics:

  • A scale is set to firewall the empirically accurate pieces of the theory
  • A variety of ad hoc models are developed at that new scale where the only criterion is fitting the empirical data, no matter how weird they may seem

I submit that this is not how things work in economics, especially macroeconomics. Simon says we should keep using rejected theories, but without a scope condition firewall, which Noah says doesn't seem to be thought about at all. New theories in macro- or micro-economics, no matter how weird, aren't judged based on their empirical accuracy alone.

But a bigger issue here I think is that there aren't any wildly successful [1] economic models. There really aren't any macroeconomic models accurate enough to warrant building a firewall. This should leave the field open to a great deal of ad hoc theorizing [2]. But in macro, you get DSGE models despite their poor track record. Unless you want to consider DSGE models to be ad hoc models that may go the way of old quantum theory! That's really my view: it's fine if you want to try DSGE model macro and it may well eventually lead to insight. But it really is an ad hoc framework operating in a field that hasn't set any scales because it hasn't had enough empirical success to require them.

...

Update 19 January 2018

Both Robert Waldmann and Simon Wren-Lewis responded to the tweet about this blog post (thread here) saying that physics is not the optimal natural science for comparison with economics. However, I disagree. Physics (and chemistry) are the only fields with a comparable level of mathematical formalism to economics. Other natural sciences use lots of math, too, but there is no over-arching formal mathematical way to solve a problem in e.g. biology (and some of the formal approaches that do exist are based either on dynamical systems, the same kind of formalism used in economics, or even on economic models). There's even less in medicine (Wren-Lewis's example).

Now you may argue that (macro)economics shouldn't have the level of mathematical formalism it does (I would definitely agree that the mathematical macro models used are far too complex to be supported by the limited data and that it's funny to write stuff like this). If you want to argue that macroeconomics shouldn't be using DSGE models, or that social science isn't amenable to math, go ahead [3]. But that wasn't the argument we were having, which was about what to do when your mathematical framework (e.g. standard DSGE models with Euler equations and Phillips curves) is rejected. Additionally, the reasons these models are rejected come from comparing the mathematical formalism with data — not their non-mathematical aspects. To that end, physics provides a best practice: set a scale and firewall off the empirically accurate parts of your theory.

Aside from the question of how one "uses" a non-mathematical model, one of the issues with the discussion of rejection of non-mathematical models is that there's no firm metric for rejection. When were Aristotle's crystal spheres rejected? Heliocentric models didn't really require rejection of the principle that planets were fixed to spheres made of aether. Kepler even mentions them in the same breath as the elliptical orbits that would reject the Aristotelian/Ptolemaic model completely, so comets and novae didn't reject the concept in Kepler's mind (you could make the case that the aether survives all the way to special relativity above). The "bad air" theory of disease around malaria (since it was associated with swampy areas, hence the name) was moderately successful up until a new theory came along in the sense that staying away from swamps or closing your windows is a good way to avoid mosquitoes.

Actually, it's possible the mathematical formalism is part of the reason macro doesn't just reject the models: the sunk costs (or 'regulatory capture') involved in learning the formalism. I don't know if non-mathematical models are more easily rejected in this sense (lower sunk costs), but as I mentioned in my tweet as part of the thread linked above, I couldn't even think of any non-mathematical models that were rejected that economics still uses — rendering the entire discussion moot if we're not talking about mathematical models.

PS I also added footnotes [2] and [3].

...

Footnotes:

[1] Noah likes to tell a story about the prediction of the BART ridership using random utility discrete choice models (I mentioned here). One of the authors of that study has said that result was a bit of a fluke ("However, to some extent, we were right for the wrong reasons.").

[2] Added in update. This is part of my answer to Chris House's question (that I also address in my book): Why Are Physicists Drawn to Economics? Because it is a field that uses mathematical models but has no known scope conditions, which opens up the possibility of almost any ad hoc model by physicists' standards.

[3] But you do have to contend with the fact that some of this non-mathematical social science is described with reasonable empirical accuracy by mathematical models.

Monday, January 15, 2018

Is low inflation ending?


I'm continuing to compare the CPI forecasts to data (new data came out last Friday, shown on the forecast graph for YoY CPI all items [1]). I think the data is starting to coalesce around a coherent story of the Great Recession in the US. As you can see in the graph above, the shock centered at 2015.1 (2015.1 + 0.5 = 2015.6 based on how I displayed the YoY data) is ending. This implies that, absent another shock to CPI, we should see "headline" CPI (i.e. all items) average 2.5% [2].

It is associated with the shock to the civilian labor force (CLF, at 2011.3), nominal output per worker (NGDP/L, at 2014.6), and the prime-age CLF participation rate (in 2011) — all occurring after the Great Recession shock to unemployment (2008.8, see also my latest paper). What we have is a large recession shock that pushed people out of the labor force (as well as reducing the uptake of people trying to enter the labor force). This shock is what then caused the low inflation [3] (in terms of CPI or PCE [2]). This process is largely over, and we are finally returning to a "normal" economy [4] nearly 10 years later.
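For anyone new to these posts, here is a minimal sketch of the functional form behind the model (the parameters below are illustrative, not the fitted values):

```python
# Minimal sketch of the dynamic information equilibrium form used in these
# posts: log(CPI) follows a constant equilibrium growth rate plus logistic
# "shock" transitions. Parameters are illustrative, not the fitted values.
import numpy as np

def log_level(t, alpha, shocks):
    """alpha: equilibrium growth rate (1/yr); shocks: list of (size, width, center)."""
    out = alpha * t
    for size, width, center in shocks:
        out = out + size / (1.0 + np.exp(-(t - center) / width))
    return out

t = np.linspace(1990.0, 2025.0, 421)                   # monthly grid (1/12 yr steps)
log_cpi = log_level(t, 0.025, [(-0.08, 1.5, 2015.1)])  # 2.5%/yr plus a negative shock
yoy_inflation = log_cpi[12:] - log_cpi[:-12]           # year-over-year inflation rate
```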

...

Update + 2 hrs

I thought I'd add the graph of the full model over the post-war period (including the guides mentioned in [1]), but also note that two of the three periods David Andolfatto mentions as "lowflation" periods line up with the two negative shocks to CPI (~ 1960-1970, and ~ 2008-2018):


The period 1996-2003 does not correspond to low headline CPI inflation; it counts as "lowflation" only because core PCE inflation was below 2%. However, 1996-2003 roughly corresponds to the "dynamic equilibrium" period of CPI inflation as well as PCE inflation (~ 1995-2008) — which in the case of PCE inflation is ~ 1.7% (i.e. below 2%). Therefore the 2% metric for lowflation measured with PCE inflation would actually include the dynamic equilibrium, and not just shocks. Another way to say it is that the constant threshold (at 2%) detector gives a false alarm for 1996-2003, whereas a "dynamic equilibrium detector" does not.

...

Footnotes:

[1] Here is the log derivative (i.e. continuously compounded annual rate of change) and the level (with new dynamic equilibrium guides as diagonal lines at 2.5% inflation rate):


[2] Note that the dynamic equilibrium for core PCE inflation that economists like to use is 1.7%, and so the end of the associated shock will not bring inflation all the way back up to the Fed's stated target of 2%.

[3] Interestingly, this negative shock to inflation happens at the same time as a negative shock to unemployment: i.e. inflation went down at the same time unemployment went down, giving further evidence that the Phillips curve has disappeared.

[4] This is a "normal" economy in the sense of dynamic equilibrium, but it might not seem normal to a large portion of the labor force as there has been only a limited amount of time between the end of the demographic shock of the 1970s and the Great Recession shock of the 2000s. As I've said before, there is a limited amount of "equilibrium" data in this sense (the models above would say ca. 1995 to 2008).

Friday, January 12, 2018

Immigration is a major source of growth

Partially because of the recent news — and most certainly because nearly half this country can be classified as racist zero-sum mouth-breathers — I wanted to show how dimwitted policies to limit immigration can be. One of the findings of the dynamic information equilibrium approach (see also my latest paper) is that nominal output ("GDP") has essentially the same structure as the size of the labor force:


The major shocks to the path of NGDP roughly correspond to the major shocks to the Civilian Labor Force (CLF). Both are shown as vertical lines. The first is the demographic shock of women entering the workforce. This caused an increase in NGDP (the shock to CLF precedes the shock to NGDP). The second major shock is the Great Recession. In that case a shock to NGDP caused people to exit the labor force driving down the labor force participation rate (the shock to NGDP came first). The growth rates look like this (NGDP is green, CLF is purple):


The gray horizontal lines represent the dynamic equilibrium growth rates of CLF (~ 1%) and NGDP (~ 3.8%). The dashed green line represents the effects of two asset bubbles (dot-com and housing, described here). Including them or not does not have any major effects on the results (they're too small to result in statistically significant changes to CLF). You may have noticed that there's an additional shock centered in 2019; I will call that the Asinine Immigration Shock (AIS). 

I estimated the relationship between shocks to CLF and shocks to NGDP. Depending on how you look at it (measuring the relative scale factor, or comparing the integrals relative to the dynamic equilibrium growth rate), you can come up with a factor α between about 4 and 6. That is to say, a shock to the labor force results in a shock to NGDP that is 4 to 6 times larger.
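As a back-of-the-envelope illustration of what a factor like that means (rough, illustrative numbers only, not the model output):

```python
# Translate a hypothetical labor force shortfall into an NGDP shortfall using
# the low-end scale factor alpha = 4. All inputs are rough, illustrative values.
alpha = 4.0          # low-end CLF -> NGDP scale factor
clf = 160e6          # approximate civilian labor force (people)
ngdp = 20e12         # approximate nominal output (dollars per year)

clf_shock = 2e6                                  # hypothetical shortfall (people)
ngdp_shock = alpha * (clf_shock / clf) * ngdp    # ~ 1e12, i.e. about $1 trillion
print(f"NGDP shortfall ~ ${ngdp_shock / 1e12:.1f} trillion")
```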

Using this estimate of the contribution of immigration to population growth, I estimated that the AIS over the next four years (through 2022) could result in about 2 million fewer people in the labor force (including people deported, people denied entry, and people who decide to move to e.g. Canada instead of the US). The resulting shock to NGDP [1], using the low-end estimate of α = 4, would leave NGDP about 1 trillion dollars lower in 2022 [2]. This is what the path of the labor force and nominal output look like:



As you can see, the AIS is going to be a massive self-inflicted wound on this country. What is eerie is that this shock corresponds to the estimated recession timing (assuming unemployment "stabilizes") — as well as the JOLTS leading indicators — implying this process may already be underway. With the positive shock of women entering the labor force having ended, immigration is a major (and perhaps the only) source of growth in the US aside from asset bubbles [3].

...

Footnotes:

[1] Since I am looking at the results in 2022, sufficiently long after the shock, it doesn't matter which shock comes first (so I show them as simultaneous, centered in January 2019). However, I think the most plausible story is that the shock to CLF would come first, followed by a sharper shock to NGDP as the country goes into a recession about 1/2 to 1/3 the size of the Great Recession.

[2] It's roughly a factor of 500 billion dollars per million people (evaluated in 2022) since both NGDP and CLF are approximately linear over time periods of less than 10 years (i.e. 1 million fewer immigrants due to the AIS results in an NGDP that is 500 billion dollars lower in 2022).

[3] I also tried to assess the contribution of unauthorized immigration to nominal output. However, the data is limited, leaving the effects uncertain. One interesting thing I found, however, is that the data is consistent with a large unauthorized immigration shock centered in the 1990s that almost perfectly picks up after the demographic shock of women entering the workforce wanes (also in the 1990s). As that shock wanes we get the dot-com bubble, the housing bubble, and the financial crisis. It is possible that the estimate of the NGDP growth dynamic equilibrium may be too high because it is boosted by unauthorized immigration that doesn't show up in the estimates of the Civilian Labor Force.

Wednesday, January 10, 2018

Labor shortages reported by firms

Via Twitter (H/T @traderscrucible), I came across this survey data about firms reporting shortages of "qualified" workers. It looks remarkably correlated with JOLTS data (e.g. here), so I ran it through the dynamic information equilibrium model. In general it works fine, but due to how short the series is, there is some ambiguity in the dynamic equilibrium (there are two local minima: one at about 0.07/y, and the other at about 0.11/y). I thought this made for an interesting case study of which model we should believe.

Scenario 1:

  • 0.07/y dynamic equilibrium
  • 2008.8 recession center (lagging indicator)
  • overshooting step response to recession
  • no indication of next recession (lagging indicator)


Scenario 2:

  • 0.11/y dynamic equilibrium
  • 2008.9 recession center (lagging indicator)
  • no overshooting
  • signs of next recession (leading indicator)


Which is it? Neither dynamic equilibrium slope (nor any other model parameter) seems wrong -- both are comparable to the 0.098/y value for JOLTS openings or the -0.096/y value for the unemployment rate. My guess is that Scenario 1 is correct because of its consistency as a lagging indicator, at the cost of positing a completely plausible overshooting in the survey data. It also seems unlikely that this measure would go from one of the longest lags to one of the longest leads (assuming the other JOLTS leading indicators are accurate, there is another recession coming in the next year or so). It is of course arguable that the upcoming recession (if it is indeed upcoming) might be a different type of recession compared to the Great Recession and accompanying financial crisis (e.g. the financial crisis was a surprise, whereas low future growth due to a labor "shortage" is more slow-rolling). In either case, auxiliary hypotheses are needed to resolve the ambiguity in either direction [1].

Whatever the final resolution, I thought it was fascinating that the outcome of survey data with a somewhat vague question (What does "qualified" mean to the survey respondent? [2] Are the firms answering "yes" to a perceived shortage offering below-market wages?) posed to human beings resulted in data that follows a mathematical formula. True, it is probably because it is directly anchored by the unemployment rate. However, using this model we can potentially predict how humans will answer a question in the near future -- a question that I thought would be potentially clouded by politics. People report that inflation or the deficit is higher when the President is of the opposite political party, so why wouldn't this affect whether you think it's easy or hard to find "qualified" workers ... and there is of course footnote [2].

...

Footnotes

[1] It should be noted that this isn't an indication of a degenerating research program per Lakatos: eventually more data will resolve the dynamic equilibrium slope.

[2] To a significant fraction of HR managers hiring for particular jobs, "qualified" includes being white and male per numerous studies of e.g. submitting resumes with different genders or names that 'sound black' and 'sound white'.

JOLTS follow-up

I thought I'd also show the plot of the JOLTS quits data against the ensemble of leading indicator forecasts:




Tuesday, January 9, 2018

Happy JOLTS data day

The week after the latest unemployment rate data is released, we get the Job Openings and Labor Turnover Survey (JOLTS) data at FRED. I've been tracking these as potential leading indicators of recessions since last summer. There isn't much change in the results; however, I do want to start posting the job openings counterfactual shock estimate alongside the one for hires. In the leading indicators post, I noted that hires seems to experience its shock earlier than other indicators. However, I also noted that I have exactly one recession to work with [1], so that should be taken with a grain of salt. With the latest data, the indicator that came second [2] (i.e. openings) seems to be showing a possible shock as well (but the series is much noisier and therefore more uncertain).

Here are the two measures with the latest shock counterfactual (in gray):


And here are animations of the evolution of the shocks counterfactuals:



And finally, here are the latest points on the Beveridge curve (also hinting at a shock which would take it back along the path between the 2001 and 2008 labels on the graph):


Note that my most recent paper available at SSRN talks about these models and theory behind them.

...

Footnotes:

[1] The JOLTS data series on FRED begins in December of 2000, effectively at the start of the 2001 recession, so only one complete recession exists in the data.

[2] The center and width of the shocks to various JOLTS measures: hires, openings, quits, and the unemployment rate:


Monday, January 8, 2018

Qualitative economics done right, part 3

Ed. note: This post is late by almost a year. As mentioned below, part of the reason is that I think Wynne Godley's work has been misrepresented by some of his proponents. I added footnote [1] and the text referencing it, and toned down footnote [3].
This was originally going to be a continuation in a series of posts (part 1, part 2, part 2a) based on an UnlearningEcon tweet:
[Steve] Keen (and Wynne Godley) used their models to make clear predictions about crisis
It was part of a debate about what it means to predict things with a qualitative model. I covered Keen in part 2. This post was going to focus on Wynne Godley. One of Godley's influences on the subject is his "sectoral balances" approach, which is uncontroversial and not exclusively MMT or Post-Keynesian (for example, here is Brad DeLong using the approach).

Now UnlearningEcon says "predictions about crisis" (i.e. how it would play out), not "predictions of crisis" (i.e. that it would occur), which leaves a large gray area of interpretation. However, many of the references to Godley from the heterodox economics community say that he predicted the global financial crisis. As a side note, I wonder if Martin Wolf's FT piece saying Godley helps us understand the crisis lent credence to others saying he predicted the crisis?

However, in my research I found that Godley himself doesn't say many of the things attributed to him. He doesn't predict a global financial crisis. He doesn't tell us that the bursting of a housing bubble will lead to a global financial crisis. In the earliest documented source [pdf], Godley says that falling house prices (as already observed in 2006) will lead to lower growth over the next few years (more on this below). This has little to do with "heterodox economics" and in fact is indistinguishable from the story told by mainstream economists like Paul Krugman. For example, Krugman was warning about the effect of a deflating housing bubble on the broader economy in the summer of 2005:
Meanwhile, the U.S. economy has become deeply dependent on the housing bubble. The economic recovery since 2001 has been disappointing in many ways, but it wouldn't have happened at all without soaring spending on residential construction, plus a surge in consumer spending largely based on mortgage refinancing. ... Now we're starting to hear a hissing sound, as the air begins to leak out of the bubble. And everyone ... should be worried.
Unfortunately, Godley's policy note linked above is completely misrepresented in a paper by Dirk Bezemer that I have been directed to on multiple occasions as "documentation" of how the heterodox community predicted the global financial crisis. It was even cited in the New York Times. The paper is “No One Saw This Coming”: Understanding Financial Crisis Through Accounting Models [pdf], and its introduction claims that it's simply a survey of economic models that anticipated the crisis:
On March 14, 2008, Robert Rubin spoke at a session at the Brookings Institution in Washington, stating that "few, if any people anticipated the sort of meltdown that we are seeing in the credit markets at present”. ... [‘no one saw this coming’] has been a common view from the very beginning of the credit crisis, shared from the upper echelons of the global financial and policy hierarchy and in academia, to the general public. ... The credit crisis and ensuing recession may be viewed as a ‘natural experiment’ in the validity of economic models. Those models that failed to foresee something this momentous may need changing in one way or another. And the change is likely to come from those models (if they exist) which did lead their users to anticipate instability. The plan of this paper, therefore, is to document such anticipations, to identify the underlying models, to compare them to models in use by official forecasters and policy makers, and to draw out the implications
Godley's paper above is cited and purportedly quoted to provide a basis for using Stock Flow Consistent models because of their supposed validity. Bezemer's purported quotes of Godley are:
“The small slowdown in the rate at which US household debt levels are rising resulting form the house price decline, will immediately lead to a …sustained growth recession … before 2010”. (2006). “Unemployment [will] start to rise significantly and does not come down again.” (2007)
These quotes appear in a table at the end of the paper (p. 51) as well as in the text (p. 36), but neither of them appears in the cited references to Godley. The second one doesn't appear in any form in any of the cited papers that could be construed as Godley (2007) — which is great for Godley, as unemployment in the US has since fallen to levels unseen in almost two decades [1]. The first is cobbled together from a few words in a much longer passage in Godley (2006), linked above:
It could easily happen that, if house prices stop rising or if the financial-obligations ratio published by the Fed continues to rise, the debt-to-income ratio will slow down during the next few years, much as it did in the late 1980s and early 1990s. ...
The results are a bit surprising, since the apparently quite small differences between debt levels in the four scenarios generate such huge differences in the lending flows. In particular, Scenario 4, the lowest projection, shows that the debt percentage only has to level off slowly and then fall very slightly for the flow of net lending to fall from 15 percent of income in 2005 to 5 percent in 2010. ...
The average growth rates for 2005–10 come out at 3.3 percent, 2.6 percent, 1.8 percent, and 1.4 percent. The last three projections imply sustained growth recessions—very severe ones in the case of the last two. ...
Is it plausible to suppose that the growth of GDP would slow down so much just because of a fall in lending of this size? Figure 7, which shows past (and projected Scenario 4) figures for net lending combined with successive, overlapping three-year growth rates, suggests that it could. Major slowdowns in past periods have often been accompanied by falls in net lending.
Bezemer also says "This recessionary impact of the bursting of asset bubbles is also a shared view," which is to say that the predictions of Godley and Keen [2] about the negative impact of a fall in housing prices are not unique to their models. A good example is the aforementioned Krugman quote; he probably didn't use an SFC model or some non-linear system of differential equations.

But the original discussion with UnlearningEcon was about the usefulness of qualitative economic models (per the title of this post). The thing is that Godley's models were quantitative and do look a bit like real data:


Of course the debt data does look a bit like the counterfactual path shown (in shape; as usual, I have no idea what heterodox economists mean when they say "debt" and therefore what their graphs represent, so I plotted several different data sources). However, the GDP growth rates miss the giant negative shock associated with the global financial crisis. This means the model definitely misses something, because debt did follow the shape of the path Godley used as the worst case scenario.


I wouldn't call this a prediction about the global financial crisis, but rather just a model of the contribution of housing assets to lower GDP growth. But still, it was a quantitative model (one of Godley's sectoral balance models based on the GDP accounting identity). And this is all Godley says it is [3].

Doing the research for this post has given me a newfound respect for Wynne Godley (and Marc Lavoie), but also a real sense of the sloppiness of heterodox economics more broadly, including MMT and stock flow consistent approaches. Maybe because it is such a tribal community (see [3]), there is little introspection and genuine peer review. I know from my own efforts that I get few critiques of my conclusions from people who agree with those conclusions. This leads me to try to be my own "reviewer #2", even to the point where I have built two independent versions of the models I show on this blog on separate computers.

...

Footnotes:

[1] People will undoubtedly bring up other measures of unemployment. However these do not appear to contain additional information not captured in the traditional "U3" measure — U6 ~ α U3 for some fixed α.

[2] Bezemer also says that Steve Keen predicted the crisis:
“Long before we manage to reverse the current rise in debt, the economy will be in a recession. On current data, we may already be in one.” (2006)
But in the original source, this is in reference to Australia. Australia hasn't had a recession since 1991 (in September of 2016, Australia had managed to rack up 100 quarters without a recession, and at 25 [now 26!] years is second only to [now tied with] the Netherlands, which went 26 years from 1982 to 2008).

[3] I do want to take a moment to mention that Wynne Godley and Marc Lavoie are far more reasonable than you might be led to believe by their proponents out in the Post-Keynesian and MMT community. They'd probably be fine with what I pointed out about SFC models, since the "fix" is just adding a parameter.

On Twitter (see the whole thread), there was an excellent example of how the supporters of Godley and Lavoie aren't doing them any favors. Simon Wren-Lewis showed how a non-flat Phillips curve implies a Non-Accelerating Inflation Rate of Unemployment [NAIRU]. It's a pretty basic argument ...
If π(t) = E[π(t+1)] − a U + b, then there is a U at which inflation is stable (the NAIRU): U = b/a.
Post-Keynesian blogger and Godley and Lavoie fan Ramanan said that they (G&L) showed there was an exception, therefore Wren-Lewis's argument was not valid.

Wren-Lewis responded "[t]hat is obviously not a NAIRU model, because you are saying the [Phillips curve] is flat", which is what I also said:
But [Ramanan]'s purported exception has a flat piece, so it's not a counterexample to [Simon Wren-Lewis]'s argument.
I added that
Technically, [Ramanan]'s [Phillips curve] has two point NAIRUs plus a continuum (between [two of the] points on his graph).
Which turns out to be exactly what Marc Lavoie said to Ramanan (and he quoted it on his blog):
Another way to put it is to say that there is an infinite number of NAIRU or a multiplicity of NAIRU (of rates of employment with steady inflation).

Friday, January 5, 2018

Labor market update: comparing forecasts to data

The latest data for the unemployment rate (U) and the (prime age) civilian labor force (CLF) participation rate are available, so I get to test whether the models have failed or not. Here's the last unemployment model update [1] (which includes a discussion of "structural unemployment") and here's the post about the novel dynamic equilibrium "Beveridge curve" for CLF/U [2] shown below. Now let's add the newest data points (shown in black in the figures below).

First, the unemployment rate forecast remains valid:


And it's still looking better than the history of forecasts from FRB SF and FOMC:


As discussed in the second link [2] above, here are the two CLF forecasts (with and without a shock in 2016):


The "Beveridge curve" (the theory of these dynamic equilibrium "Beveridge curves" is discussed in my latest paper) relating labor force participation to the unemployment rate (a curve you likely would not have seen unless you use the dynamic information equilibrium model) also discussed in [2] is also on track with the latest data:


The shocks to CLF are in red-orange and the shocks to U are in green. In the absence of recession shocks, the data should continue to follow the dotted blue line upwards from the black point. However, it is likely that we will have a recession in the meantime, and so — like the rest of the curve — we will probably see the data deviate towards another dynamic equilibrium (another gray hyperbola). The only place I have seen so far where these kinds of Beveridge curves are stable enough to be useful is in the classic Beveridge curve (data for which will be available next Tuesday). That stability arises from the size and timing of the shocks being approximately equal. In the case above, the shocks to CLF are not only much smaller, but also much later (even years later), which causes the Beveridge curve above to become a spaghetti-like mess.
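To illustrate the mechanism (with purely illustrative parameters, not the fitted model), here is a minimal sketch of how two dynamic equilibrium series trace out one of these curves when plotted against each other:

```python
# Two dynamic equilibrium series, traced against each other, form one of these
# "Beveridge curves"; mismatched shock sizes and timing are what bend the path.
import numpy as np

def dyn_eq(t, level0, alpha, shocks):
    out = np.log(level0) + alpha * (t - t[0])
    for size, width, center in shocks:
        out = out + size / (1.0 + np.exp(-(t - center) / width))
    return np.exp(out)

t = np.linspace(2005.0, 2020.0, 181)
u = dyn_eq(t, 5.0, -0.09, [(0.6, 0.4, 2008.8)])       # unemployment rate (%), big early shock
clf = dyn_eq(t, 83.0, 0.003, [(-0.02, 1.0, 2011.3)])  # participation rate (%), smaller later shock

# Plotting clf against u (e.g. plt.plot(u, clf)) traces out the curve above.
```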

Thursday, January 4, 2018

Structural breaks, volatility regimes, and dynamic equilibrium

In scanning through the posters, papers, and discussions of the preliminary schedule of the upcoming ASSA 2018 meeting in Philadelphia I found a lot of interesting sessions (e.g. two machine learning sessions). As a side note, those who think economics ignores alternative approaches should note the (surprising number of) sessions on institutionalist, Marxian, Feminist, and other heterodox approaches.

One poster from the student poster session caught my eye — in particular the identification of low volatility and high volatility regimes in the S&P 500:


That's from "Structural Breaks in the Variance Process and the Pricing Kernel Puzzle" by Tobias Sichert [pdf]. It seems these low volatility and high volatility regimes line up with the transition shocks of the dynamic information equilibrium model (green line):


The top picture is the dynamic information equilibrium model with shock widths (full width at half maximum, described here). The bottom graph is Sichert's paper's structural breaks (black indicating the start of a low volatility regime, red indicating the start of a high volatility one per the figure at the top of this post). However, the analysis started at 1992, so that isn't so much the beginning of a low-volatility regime as the beginning of the data being looked at (therefore I indicated it with a dashed line). I colored in the high volatility regime with light red, and we can see these regions line up with the shock regions in the dynamic equilibrium model. The late 1990s early 2000s is seen as a single high volatility regime in Sichert's analysis and the Great Recession seems to continue for awhile after the initial shock — possibly due to step response? However, overall volatility looks like a good independent metric to identify periods of dynamic equilibrium (low volatility) and shocks (high volatility).