Thoughts on "A Random Physicist Takes on Economics"

Jason Smith has interesting ideas. I’ve followed his blog on and off since around the winter of 2014, in my first year of grad school. At the time I was working on developing Shannon entropy-related algorithms for detecting actions in time series data from motion sensors on surf- and snowboards, and his blog posts about applying Shannon entropy to economics intrigued me. I was not (and am not) really a macro person, so a lot of the applications he focused on seemed ho-hum to me (important, but not personally exciting). At some point I stopped following blogs as much to focus on getting my own research going, and lost track of his work.

Smith has a new book out, A Random Physicist Takes on Economics, which I just read. If you’re interested in economic theory at all, I highly recommend it. It’s a quick read. I want to lay out some of my thoughts on the book here while they’re fresh. Since I have the Kindle version, I won’t mention pages, just the approximate location of specific references. This isn’t a comprehensive book review; rather, it’s a collection of my thoughts on the ideas, so it’ll be weighted toward the things that stood out to me.

Big ideas

I think the big idea behind Smith’s work is that much of the rational actor framework used in economics is not necessary. Instead, a lot of the results can be derived from assuming that agents behave randomly subject to their constraints. He traces this idea back to Becker’s Irrational Behavior and Economic Theory, and also cites some of the experimental econ work that backs this up.

One formalism for this idea is that entropy maximization in high-dimensional spaces quickly moves averages to points at the edge of the feasible set, and that changes to the constraint set shift the average in the same direction as they shift the constrained optimum of a standard convex optimization problem. In this view, comparative statics exercises on budget sets will get the right signs, but not necessarily the right magnitudes.
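
To make that concrete, here’s a minimal sketch of the kind of experiment I have in mind (my own toy example, not code from the book): sample consumption bundles uniformly from a budget set and check that the average bundle (a) spends nearly the whole budget once there are many goods, and (b) responds to a price increase with the right sign.

```python
# Toy illustration (my construction): uniform random bundles from the
# budget set {x >= 0 : p.x <= m}, in the spirit of "agents exploring
# the state space".
import numpy as np

rng = np.random.default_rng(0)

def uniform_budget_sample(prices, income, n_draws):
    """Uniform draws from {x >= 0 : prices @ x <= income}.

    Exponential spacings give a uniform point y on the solid simplex
    {y >= 0 : sum(y) <= 1}; the affine map x_i = income * y_i / p_i
    then gives a uniform point on the budget set.
    """
    n = len(prices)
    e = rng.exponential(size=(n_draws, n + 1))
    y = e[:, :n] / e.sum(axis=1, keepdims=True)
    return income * y / prices

n_goods, income = 50, 100.0
prices = np.ones(n_goods)

x = uniform_budget_sample(prices, income, 100_000)
print("avg fraction of budget spent:", (x @ prices).mean() / income)  # ~ n/(n+1) = 0.98

# Comparative statics: double the price of good 0 and compare average demand.
hi = prices.copy()
hi[0] = 2.0
x_hi = uniform_budget_sample(hi, income, 100_000)
print("avg demand for good 0 at p=1 vs p=2:", x[:, 0].mean(), x_hi[:, 0].mean())
```

With 50 goods the average bundle spends about 98% of the budget without any optimization, and doubling a price moves average demand in the right direction, though the magnitude here is purely mechanical.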

Random behavior and exploring the state space

Smith argues that humans are so complex that assuming uniformly random behavior over a feasible set is a more reasonable starting point than assuming some sort of complex optimization process. This isn’t to say that people actually behave randomly, but that randomness is a modeling choice guided by our own ignorance. In aggregate, assuming random micro-level behavior can replicate representative agent results. Smith describes this random micro-level behavior as “agents exploring the state space” (somewhere early on). The choice of “uniformly random” is guided by the principle of maximum entropy over a closed and bounded budget set.

Joshua Gans mentions this in his Amazon review of the book: random behavior is a useful benchmark against which to compare rational behavior. One of my takeaways from Smith’s work is to think about which of my modeling conclusions would be robust to random behavior and which wouldn’t be. My work deals more with the behavior of firms, where I think rationality is maybe less of a stretch. A funny anecdote: I heard an economist who worked with a large firm once say that he “had yet to meet the profit maximizer”. The point is that firms aren’t always rational profit maximizers. Simon’s behavioral work on firm decision making is in this spirit.

There’s a helpful example I remember from Smith’s blog that didn’t make it into the book. Observe: people buy less gas when the price is higher. A rational behavior proponent might say that this is because people look at the price and say, “hey I can’t afford as much gas, so I’m going to buy less”. A random behavior proponent would say that this is because there are fewer people who can afford gas at the higher price, and so less gas gets bought. The former is about a bunch of individuals continuously adjusting their purchases, while the latter is about a bunch of individuals discretely not buying. Both can generate an observed continuous decrease in gas purchased when price increases.
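
Here’s a toy version of the discrete story (my construction, not from the book or blog): every driver either fills a tank or doesn’t, depending only on whether they can afford it, yet aggregate demand still falls smoothly as the price rises because budgets are heterogeneous.

```python
# Toy gas market (my construction): discrete individual behavior, smooth
# aggregate demand. Tank size and the budget distribution are arbitrary
# choices for the illustration.
import numpy as np

rng = np.random.default_rng(1)
TANK = 12.0  # gallons
budgets = rng.lognormal(mean=3.5, sigma=0.8, size=100_000)  # $ available for gas

def gallons_sold(price):
    # Each driver buys a full tank iff it's affordable: no marginal
    # adjustment at the individual level.
    return TANK * np.count_nonzero(budgets >= TANK * price)

for price in [2.0, 3.0, 4.0]:
    print(f"${price:.2f}/gal -> {gallons_sold(price):,.0f} gallons")
```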

I think that the truth for any given situation is likely to be somewhere in between random and rational behavior. There’s a lot more about information transfer and equilibrium on his blog, which I recommend any economist reading this check out. Spend at least an hour going through some of his posts and thinking seriously about his arguments - I think you’ll get some mileage out of it.

Game theory, random behavior, and common resources

I spend a lot of time thinking about the use of common resources. Smith doesn’t really discuss these issues much - there’s a mention of negative and positive externalities at the end, but it’s brief. So what does the random behavior hypothesis mean for common resources?

The rational behavior hypothesis for the overexploitation of common resources is that selfish agents choose to act in a way that is personally beneficial at the cost of the group as a whole. Cooperation and defection become statements about people trying to get higher payoffs for themselves. I think the random behavior hypothesis here would be something like, “there are fewer states of the world in which people cooperate than in which they defect”. Cooperation and defection then become statements about how few ways there are for people to organize relative to the number of ways they could fail to organize.
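
Here’s a back-of-the-envelope version of that counting argument (my toy formalization, not anything from the book):

```python
# Counting coordinated states (my toy formalization): with N agents each
# independently picking one of K actions, only K of the K**N possible
# states have everyone picking the same action. Under uniform random
# behavior, coordination is rare simply because there are few ways to
# achieve it.
from fractions import Fraction

def coordination_share(n_agents, n_actions):
    return Fraction(n_actions, n_actions ** n_agents)

for n in [2, 5, 10]:
    print(f"{n} agents, 3 actions: {coordination_share(n, 3)} of states coordinated")
```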

I think this is plausible… it seems like the random behavior hypothesis is another way to view coordination failures. It’s not that coordination fails because it’s difficult for individuals to stick to; it fails because it requires a confluence of more events than discoordination does.

But there’s a lot of work on the ways that coordination does happen in commons (Ostrom’s work, for example). The game theoretic perspective seems valuable here: it gives a direction for policy to aim at, and policy that incorporates game theoretic insights into commons management seems to work. So… maybe rational actor models can be more useful than Smith’s book lets on? Maybe the random behavior interpretation is that applying Ostrom’s principles creates more ways for people to cooperate than existed before, thus making cooperation more likely.

Whither welfare?

The big consequence of the random behavior framework is that we lose the normative piece of economic modeling. The utility maximization framework gives us a way to talk about what should be done within the same model that describes what will be done. In the random behavior framework, we can say that we should loosen constraints in one direction or another, but the “why” of doing so is a bit more obscured. Smith says that loosening constraints can increase entropy, but I didn’t quite follow his argument for why that’s desirable in and of itself. It seems like there are some additional principles in the background guiding that choice.
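
For concreteness, here’s the mechanical version of that claim as I understand it (my reconstruction, not an argument from the book). If behavior is uniform over the budget set $B(m) = \{x \geq 0 : p \cdot x \leq m\}$ with $n$ goods, the entropy is the log of the set’s volume:

$$S(m) = \log \mathrm{vol}\, B(m) = \log \frac{m^n}{n! \, p_1 \cdots p_n} = n \log m - \log n! - \sum_{i=1}^{n} \log p_i,$$

so $\partial S / \partial m = n/m > 0$: raising income, or cutting any price, strictly increases entropy. That much is mechanical; the normative leap from “more entropy” to “better” is the part that needs those background principles.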

I have a lot of issues with how “improving welfare” gets (ab)used as a goal in economic analysis. People go around unthinkingly saying “Kaldor-Hicks improvements are possible” as they advocate for specific policies, often explicitly sidestepping equity concerns. Other folks use a concave social welfare function as a criterion to avoid this, and argue against inequality-increasing policies. I lean toward the latter camp. I think there are technical arguments in favor of this - the time with which we can enjoy things is one among many fixed factors, generating decreasing marginal benefits to any individual accumulating large amounts of wealth - but to be honest it’s probably also a reflection of my personal politics. These things all interact, so I resist the claim that it’s purely personal politics, but that’s a separate conversation.

Anyway, I think that good economists know that the decision of what to prioritize in a society can’t be just about “growing the pie” without discussing who gets which slices. But there are a lot of economists who act as though “growing the pie” can be regarded as desirable independent of how the pie will be split. This can be true for a theoretical “ceteris paribus” conversation, but I don’t think it can be true for a policy discussion with real-world consequences. There’s a post I once read (I think it was on interfluidity, but possibly on econospeak) which argued that part of the purpose of leadership was to select one among the many possible equilibria, including those where the Second Welfare Theorem would or wouldn’t be usable. The random behavior hypothesis, by getting rid of economic welfare, might make the need for this leadership and value judgement more explicit. I think that would be a good thing.

Edit: It occurs to me that Smith’s framework also allows normative statements to be made alongside positive statements; they’re just about reshaping constraint sets. I still think it decouples the two more than the standard utility maximization framework does, but maybe I’m wrong.

Some issues I had with the book’s arguments

I want to be clear: I enjoyed reading Smith’s book, and I’ve enjoyed reading his blog. To the extent I’ve been bored by it, it’s because it’s about macro stuff and I don’t do macro. I am not pointing out issues in the spirit of “here’s why this sucks”, but in the spirit of “here are places where I disagree with the presentation of interesting ideas that I want to continue engaging with”.

I think an economist inclined to be critical could find issues in the text. Smith seems to be writing for a more general audience, so there are places where his use of terms is not quite correct. For example, near the end (around 91%) he describes “tâtonnement” as “random trial and error in entropy maximization”; I understand it as a process of adjusting prices in the direction of excess demand. I don’t think this matters for his argument, so it’s not a big deal.
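
For reference, here’s the textbook version of tâtonnement as I understand it (a toy sketch with made-up linear curves, not Smith’s usage):

```python
# Tatonnement sketch (my toy example): nudge the price in the direction
# of excess demand until the market clears. The linear demand and supply
# curves are arbitrary choices for the illustration.
def excess_demand(p):
    demand = 10.0 - p   # toy demand curve
    supply = 2.0 * p    # toy supply curve
    return demand - supply

p, step = 1.0, 0.1
for _ in range(100):
    p += step * excess_demand(p)
print(f"price converges to ~{p:.3f} (the market clears at p = 10/3)")
```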

I think the more substantive issue a random critical economist would raise relates to his treatment of empirical economics. He largely ignores empirical economics, and conflates empirical economics with empirical macroeconomics. To the extent that he discusses microeconomics at all, it’s about the specific pieces of micro theory used in parts of macro modeling. That’s fine! To echo one of Smith’s points, limiting the scope of an argument is perfectly valid. I’m mostly a theorist right now, and I think he makes lots of solid points about the things he’s talking about. But as an environmental economist with empirical leanings, it sort of annoys me to see him lump all of economics in with macro and all of micro in with the micro used in macro. There’s some discussion of game theory, but not a lot.

Smith also takes issue with the use of math formalism in economics. One point he raises, which I remember from his blog, is the use of $\mathbb{R}_+$ to describe a feasible set. Why, he asks, do economists feel the need to say “positive real numbers” rather than “a number greater than zero”? What is gained? He argues that this is a symptom of economics’ excessive and inappropriate use of math. I think this criticism is sort of misguided, but also kind of on point.

Sort of misguided: A lot of economic theory is styled as a branch of logic, so being precise about the field of numbers being used is partly a cultural thing. The existence proofs we use, or at least the earlier ones, were often not constructive; existence followed from properties of the reals. More modern proofs often use fixed point theorems for existence, followed by contraction mapping approaches for computation. The point is that being precise was important for the people making the arguments to convince the people reading the arguments. This is the “it’s just a symbol, get over it” counter-argument.

Kind of on point: In a class I took with him, Miles Kimball was fond of saying that whether or not there is a smallest number that can be used can’t matter to the substantive economics, so any economic proof based on the reals has to go through for the integers or rationals as well. If it doesn’t, that’s a sign that there’s something funky about the proof. Daniel Lakeland makes similar arguments in justifying his use of nonstandard analysis (it’s somewhere in his blog…). So, yeah, just saying “a number greater than zero” would be fine for any proof that really needed it, though the author would need to jump through more hoops to satisfy their likely audience (economists who seem to like real analysis).

I think some of the math in economic theory that Smith takes issue with probably falls in this category: people were using formalisms as shortcuts because they didn’t want to do the proof in even more detail over the rationals or something, but it doesn’t really matter for the substantive economics at play. I think that whether this offends you or not probably says more about your priors over economics and math than it does about the math itself.

I think there’s a similar issue at play with Smith’s read of rational expectations and infinity. Smith argues that rational expectations are somewhere between incoherent (inverting distributions is ill-posed) and a fudge factor that lets a modeler get whatever they want. I agree that the latter is a thing that some, possibly many, economists do. Why did XYZ happen? Oh, because of expectations about XYZ! Assuming good faith on all sides, though, I think there are two things going on here.

The first is that expectations are about beliefs, and self-fulfilling prophecies are a thing. I see this in my students when I teach intro math for econ: if they buy into the notion that they’re “just not math people”, they will do much worse than if they reframe the issue as “math is not easy, but if I work hard I can do well”. Their expectations about their future performance and inherent abilities shape their future outcomes, which reinforce their expectations. If Anakin hadn’t believed his vision of Padmé dying on Mustafar, he wouldn’t have acted in a way that made it happen. This is a feature of the human condition, and modeling it is relevant. I think Smith’s concerns about information flowing back through time miss this point, and get too caught up in the math formalism.

The second is that modeling beliefs is hard, and rational expectations is a tractable shortcut. There are other tractable shortcuts, like assuming that variables follow martingale processes, which can be useful too. But given that beliefs seem to matter, and that it’s hard to model heterogeneous beliefs being updated in heterogeneous ways, I think the use of rational expectations is at least understandable. There’s a similar point about the use of infinity (which Smith only touches on at the end, and I may be misunderstanding what he’s getting at). It’s not that economists actually believe that agents think they’ll live forever, at least not theorists who have what I consider good economic intuition. It’s that finite horizons combined with backwards induction yield weird results, so an infinite horizon is a modeling shortcut to get “more realistic” results. This is another of Miles’ arguments: whether or not the universe will really end can’t matter to real economics happening today, so don’t take the “infinite” horizon too literally. Smith seems to grok this in his discussion of scope conditions. Maybe some of this is just that we’re using different languages; I agree that economists could stand to be more explicit about scope conditions.
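
The finitely repeated prisoner’s dilemma is the classic example of those weird results (a standard textbook illustration, not one from the book): backward induction from a known last round unravels all cooperation, while an infinite horizon lets a grim trigger strategy sustain it whenever players are patient enough.

```python
# Grim trigger in an infinitely repeated prisoner's dilemma (standard
# textbook setup; the payoff numbers are my choice): cooperating forever
# is worth R/(1-delta), while deviating pays T once and P forever after.
T, R, P = 5.0, 3.0, 1.0  # temptation, reward, punishment payoffs

def grim_trigger_sustains(delta):
    return R / (1 - delta) >= T + delta * P / (1 - delta)

for delta in [0.3, 0.5, 0.7]:
    print(f"discount factor {delta}: cooperation sustainable? {grim_trigger_sustains(delta)}")
# Threshold: delta >= (T - R) / (T - P) = 0.5 for these payoffs.
# With any known final round, backward induction removes this equilibrium.
```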

Conclusion

This has gotten way too long. To summarize:

  1. I liked the book. I think it should be widely read by economists, applied and theoretical.
  2. I think Smith is on to something with his modeling approach. I want to try working with it soon.
  3. I think Smith’s work would benefit from more engagement with economists. Partly this would add some relevant nuance to his approach (e.g. rational expectations and self-fulfilling prophecies), and partly this would expand the set of topics he considers beyond macro-focused things. It goes the other way too: I think engaging with his work would be good for economists, at the very least offering a useful benchmark to compare rational actor models against.

Read the book!