Australia goes to the polls on 21 May. Here is my very reductionist, efficient-markets-inspired take. My basic model of elections is that the two main parties endogenise the preferences of the median voter. Parties may be constrained by ideological or other factors from endogenising those preferences, so the model is an approximate one, but no less useful for that. Analysis of betting odds on Australian federal elections shows that those prices follow a random walk, that is, they efficiently price available information. Because elections are discrete and irregular events, we probably shouldn’t apply the same tests to election outcomes themselves, but betting/prediction markets are effectively derivative contracts on those outcomes.
Some are dismissive of betting/prediction markets on the basis that they are heavily influenced by opinion polls and so do not add additional information. Opinion polls are obviously a large part of the information set on which betting markets condition, but the latter have the advantage of being able to discount other information as well. The 2019 federal election result was missed by both polls and prediction markets. We now know the polls were biased estimates of voting intention and this fed into market pricing. I have spent the last few years working alongside people far more expert in polling than me and the main thing I have learned is that you do not want to see how that particular sausage is made. I will take prediction markets over polls any day of the week. Sportsbet has a Coalition majority at $4.40 and an ALP majority at $1.70, so around a 21% probability of a Coalition majority and 54% chance of an ALP majority (I’m assuming a bookie’s margin of 8%). Keep in mind the Coalition only need to lose one seat on net to become a minority government.
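The odds-to-probability arithmetic above can be sketched in a few lines. The prices are the Sportsbet quotes in the text; the 8% overround is the post's stated assumption:

```python
# Convert decimal betting odds to implied probabilities, deflating by the
# bookmaker's margin (overround). Odds are the Sportsbet prices quoted in
# the text; the 1.08 overround is the post's 8% margin assumption.

def implied_probability(decimal_odds: float, overround: float = 1.08) -> float:
    """Raw implied probability (1/odds), deflated by the bookie's overround."""
    return (1.0 / decimal_odds) / overround

coalition_majority = implied_probability(4.40)  # ~0.21
alp_majority = implied_probability(1.70)        # ~0.54

print(f"Coalition majority: {coalition_majority:.0%}")
print(f"ALP majority: {alp_majority:.0%}")
```

Note the two probabilities need not sum to one even after removing the margin, because a hung parliament is a third outcome in the same book.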
One of my former teachers was noted, together with Clive Granger, for work on the implications of aggregating over time series with different orders of integration, and on long-memory time series. The relevance to voting is that some voters are rusted on while others swing, so we would expect the underlying data generating processes for the two voter types to differ. Technically speaking, vote shares should be fractionally integrated, ie, have an order of integration between zero and one. So vote share dynamics are probably best thought of in terms of an ARFIMA process. There is a bit of quant poli sci devoted to this, although I have not delved into it very deeply.
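A minimal sketch of the fractional-integration idea: simulating an ARFIMA(0, d, 0) series via the MA(∞) expansion of (1 − L)^(−d). The choice of d = 0.3 is purely illustrative, not an estimate for any actual vote share series:

```python
import numpy as np

def arfima_0d0(n: int, d: float, rng=None) -> np.ndarray:
    """Simulate an ARFIMA(0, d, 0) series using the MA(inf) expansion of (1-L)^(-d).

    For 0 < d < 0.5 the series is stationary with long memory: autocorrelations
    decay hyperbolically rather than geometrically.
    """
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(n)
    # MA weights follow the recursion psi_k = psi_{k-1} * (k - 1 + d) / k, psi_0 = 1
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # x_t = sum_{k <= t} psi_k * eps_{t-k}; truncate the full convolution at n
    return np.convolve(psi, eps)[:n]

x = arfima_0d0(1000, d=0.3, rng=42)
```

With d = 0 this collapses to white noise (the rusted-on/swinging mix washed out); with d = 1 it would be a random walk. Fractional d sits in between, which is the point of the aggregation argument.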
Another perspective views vote shares as having a systematic relationship with macroeconomic variables, although this is presumably also part of the prediction market information set. The macro models did a better job of calling the ‘surprise’ 2019 election result, since the macro variables were not skewed by poll bias. My most recent boss, together with Gary Marks, is known for an econometric model of the Australian two-party preferred vote share in terms of macroeconomic variables (see the link above for 2019 model estimates). Ray Fair does the same for US elections.
I have never understood why vote shares were not modelled as central bank loss functions, that is, the deviation of inflation from target and output from potential. Many voting models assume the vote share is negatively and linearly related to inflation, but that would imply disinflation/deflationary conditions would increase the vote share, which is counter-intuitive. It is also at odds with some political-business cycle models, which assume that engineering an inflationary boom should be good for incumbents. A central bank loss function penalises macroeconomic volatility, which would seem to be a good way of characterising voters’ economic preferences.
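The contrast between the linear-in-inflation model and the quadratic loss can be made concrete with a toy calculation. The 2.5% target is the midpoint of the RBA's 2-3% band, but the output-gap weight and the linear model's coefficients are made-up illustrative numbers, not estimates:

```python
# Toy comparison of a linear-in-inflation vote model with a quadratic
# central-bank-style loss. Parameter values are illustrative assumptions.

PI_TARGET = 2.5  # midpoint of the RBA's 2-3% inflation target band
LAMBDA = 0.5     # weight on the output gap (assumed)

def quadratic_loss(inflation: float, output_gap: float) -> float:
    """Central bank loss: penalises deviations from target in either direction."""
    return (inflation - PI_TARGET) ** 2 + LAMBDA * output_gap ** 2

def linear_vote_share(inflation: float, base: float = 52.0, beta: float = 1.0) -> float:
    """Stylised linear model: vote share falls one-for-one with inflation."""
    return base - beta * inflation

# Deflation of -2% *raises* the linear model's predicted vote share...
print(linear_vote_share(-2.0))    # 54.0
# ...but the quadratic loss correctly scores it as far worse than being on target.
print(quadratic_loss(-2.0, 0.0))  # 20.25
print(quadratic_loss(2.5, 0.0))   # 0.0
```

The asymmetry is the whole argument: a quadratic loss penalises deflation and an inflation overshoot alike, which is closer to how voters plausibly experience macroeconomic volatility.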
Modelling the vote share in terms of a quadratic central bank loss function also serves as a test of the endogeneity of central bank preferences. It seems sensible to me that we should make consistent assumptions about voter and central bank preferences. That still leaves a role for central bank independence, delegating policy to a Brainard/Rogoff ‘conservative’ central banker, in solving the dynamic inconsistency problem faced by politicians/voters.
So where does that leave the incumbent government? We have gone from undershooting the inflation target for seven years to overshooting in very short order, so the deviation from target over the government’s term in office has been significant. Using NAIRU estimates as a proxy for the output gap, we may now be close to potential, but on the RBA’s/Treasury’s downwardly revised NAIRU estimates, the economy has mostly fallen short of full employment. I leave open the question of whether voters pass judgement on contemporaneous economic conditions or the overall record during the current and/or previous terms of office. Needless to say, this is why governments should pay close attention to monetary policy governance. If the government delegates the minimisation of the loss function, but doesn’t hold the monetary authority accountable for that function, then voters will hold the government accountable instead.
A more serious issue in relation to potential output is that the official sector has revised down its estimates of potential growth over time. As recently as 2006, you could find Treasury models with steady-state real growth rate assumptions as high as 3.6%. Potential output growth is now seen as more like 2.75%. But if your expectations for the long-run evolution of your living standards were based on the experience of trend growth in previous decades, actual growth in line with this downwardly revised potential would look and feel disappointing, even if the economy is operating at or even above potential. I can remember when Australian economic growth rates would often begin with a 4 rather than a 2.
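A back-of-the-envelope calculation shows how much that revision compounds. The 3.6% figure is the 2006-era Treasury assumption cited above and 2.75% the current estimate; the 16-year horizon is my assumption, roughly 2006 to the time of writing:

```python
# How much the downward revision to potential growth compounds over time.
# Growth rates are the Treasury assumptions cited in the text; the horizon
# is an assumption for illustration.

OLD_TREND = 0.036   # 2006-vintage steady-state real growth assumption
NEW_TREND = 0.0275  # current potential growth estimate
YEARS = 16          # roughly 2006 to the time of writing (assumed)

gap = (1 + OLD_TREND) ** YEARS / (1 + NEW_TREND) ** YEARS - 1
print(f"Output level shortfall after {YEARS} years: {gap:.1%}")
```

On these assumptions, the level of output ends up around 14% lower than the old trend would have delivered, even with the economy continuously at its (revised) potential. That is the sense in which full employment at a lower trend can still feel disappointing.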
In a recent USSC publication, I updated Claudia Sahm’s analysis of actual output relative to different vintages of the CBO’s estimates of US potential output. From the standpoint of someone in 2005, assuming they shared the CBO’s expectations for potential output but did not update those expectations subsequently, the actual growth path of the US economy would look disappointing, even if at future points in time, growth was consistent with the CBO’s downwardly revised estimates. To understand why some people might be disappointed in US economic outcomes, despite strong growth from a cyclical perspective, you need to look at recent outcomes relative to the long-run expectations formed from experience in previous decades.
The official sector’s downward revisions to potential may become self-reinforcing, by effectively lowering the bar for public policy. An economy can be fully employed at a lower standard of living than could have been achieved at higher rates of trend growth. The long-run rate of economic growth should be a target for policy, not a technical assumption that just gets fitted to recent growth outcomes. The future is a policy choice.
As for the macroeconomic implications of the election outcome, the change in the Australian dollar exchange rate and interest rates between the Friday before the election and when the result is known tells us most of what we need to know. Bring your own microscope for that one. That does not mean that subsequent policy decisions by the next government won’t matter for macroeconomic outcomes. But ex ante, there is not much to distinguish the two main parties on macroeconomic policy that would give rise to significant movements in financial market prices. Note that a lack of movement in financial market prices would also imply no change in trend growth, at least ex ante. Again, this is not to deny that the policies of the next government will matter for trend growth on an ex post basis. It’s just hard to make the case for a significant partisan difference ex ante.
Betting odds and financial market prices tell you most of what you need to know about the election. The rest, as they say, is commentary.