Are economists getting choice theory all wrong because of ergodicity?
14 Jan 2021

Ole Peters made waves with a paper in Nature Physics about some things economists have been doing incorrectly for centuries. I’ve seen people saying things like “economists don’t understand ergodicity” on the econ blogs for at least the last decade (I think), so I was really interested to read this paper and try to understand what all the fuss was about.
I also read a few responses by economists: the Doctor et al. response in Nature Physics, Ben Golub’s Twitter thread, Roger Farmer’s post, and the recent r/badeconomics thread (I read the old thread way back when it came out, but forgot about it until I was looking for a link). I don’t think any of them said quite what I’m saying here, but I’m not very original and I’ve probably drawn from them more than I realize. You should read them for yourself. For what it’s worth, of those pieces I felt like Farmer and some r/badecon posters made the strongest arguments. I agree with Doctor et al. and Golub that economists look at things other than gambles and EUT has many known issues, but I want to take Peters’ point seriously on its own grounds: choices over gambles. I’ll try to characterize his arguments accurately in good faith. If you spot an error or mischaracterization, or think I’m being unfair, please let me know.
I’ve only read this one paper about ergodicity economics. It’s possible some of the points I raise are addressed in other papers.
Setting the table
I’ll come clean at the start: I’m not convinced by Peters’ arguments that agents do and ought to evaluate the ergodic properties of sequences of gambles when considering individual gambles, but I think I see what he’s trying to get at. I’m agnostic on whether the approach generates useful ways to conceptualize inequality, but I don’t see how it improves financial modelling or management beyond what’s already known. I disagree with Peters’ claim that agents evaluating the ergodic properties of sequences of gambles is “not predicted by expected utility theory (EUT)”. I think he has an interesting idea about how to pin down the curvature of the utility function. I think there are two big issues with the ergodicity economics approach, though:
- Not addressing the role of utility functions in economic analysis: Based on the paper, ergodicity economics (EE, or “ergonomics”) seems to be providing another microfoundation for CRRA utilities. This seems to stem from a misunderstanding of the role of utility functions in economic theory. I actually find the specific microfoundations quite odd and implausible, though they’re testable (I think Peters is missing an opportunity by focusing on finance/inequality instead of natural resource exploitation). But because EE ends up stumping for CRRA utilities, it falls into the same problems EUT has with those utilities, such as calibration problems at different scales of risk and an inability to distinguish between smoothing over states vs smoothing over time. EE offers an interesting way to pin down utility function curvature, but it’s not clear how to reconcile that with leisure-consumption substitutability.
- Strange psychology and physics: Peters misses some psychological assumptions implicit in the ergodicity heuristic, which I think are at least as odd as EUT’s psychological assumptions. So I’m unconvinced by his microfoundations. I don’t know if it’s a misunderstanding on his part or mine, but I think he’s making some weird statements about what “physical reality” is. I think he’s mischaracterizing or misunderstanding the interpretations of probability that economists use in EUT.
EUT and Peters’ critique
If I’m choosing between sequences of gambles, EUT predicts that I will and ought to look at the value of those sequences. If I’m choosing over a single gamble, EUT says I should look at that single gamble. EE says to accept/reject a single gamble I should look at the ergodic properties of accepting that gamble into the infinite future, potentially reinvesting 100\% of my wealth in it each round. I’ll refer to this approach to making decisions as the “ergodicity heuristic”. I call it a heuristic because if you’re trying to choose \(x\) to maximize \(f(x)\), choosing \(x\) to maximize a surrogate \(g(x)\) may or may not achieve your goal. Peters seems to mix the positive and normative arguments here, saying that agents both do and ought to maximize an \(f\) by maximizing a \(g\).
(I neither agree nor disagree with Peters’ claim that an agent ought to evaluate a single gamble by the ergodicity heuristic. The heuristic will maximize some utility functions but not others, whether over the growth rate of a sequence or some other underlying object. That Peters found a utility function which is maximized by the ergodicity heuristic is cool, and it both makes sense and is pretty neat that \(u = \ln\) is it, but that finding isn’t something I’ll be writing home about.)
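(To spell out why \(\ln\) is the one, via a standard law-of-large-numbers argument rather than anything specific to Peters’ paper: for a multiplicative process \(w_{t+1} = w_t(1+x_t)\) with i.i.d. returns,

\[\begin{align} \frac{1}{T}\ln\frac{w_T}{w_0} = \frac{1}{T}\sum_{t=1}^{T}\ln(1+x_t) \to \mathbb{E}[\ln(1+x)] \text{ almost surely as } T \to \infty, \end{align}\]

so ranking gambles by their time-average growth rate is exactly ranking them by the expected one-period change in log wealth.)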
Peters argues that EUT is problematic because the concavity of the utility function is a free parameter, unconstrained by theory. Thus when determining whether or not an individual will or ought to accept a gamble, the economist has at least one degree of freedom to derive whatever result they want: the curvature of the utility function. An economist can always choose a utility function curvature (or more generally, an entire utility function) to fit whatever choice data they’re trying to explain.
He also takes issue with the use of subjective discount rates based on individual psychology. He doesn’t seem to have an issue with using discount factors determined by alternative uses of scarce resources—what he calls the “no-arbitrage argument”—just with notions like hyperbolic discounting, and possibly time-inconsistent or flexible discounting in general. He describes this kind of use of subjective discounting as a part of economists’ recourse to psychological patches because “observed behavior deviates starkly from model predictions”. He argues that psychological arguments are often “hard to constrain and circular”, their use an “error” leading to “a narrative of human irrationality in large parts of economics”. (He says this in the context of discounted expected utility more broadly, but I think he’d probably agree with this language for discounting specifically.)
His proposed fix is to use the ergodicity heuristic: when deciding whether or not to accept a gamble, I ought not consider whether that gamble has positive expected value based on many different “static” realizations of the same event. Instead, I ought to consider what would happen to my wealth were I to accept the same gamble repeatedly for the infinite future, i.e. the time-average of a gambling process. Peters claims this approach delivers specific utility functions suitable to the growth processes being considered, e.g. linear utility for additive processes or log utility for multiplicative processes. He argues that the ergodicity heuristic avoids invoking “loose references to psychology” and a severe conceptual issue of individuals assuming they can interact with multiverse copies of themselves to transfer wealth.
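To make the ensemble/time distinction concrete, here’s a quick simulation sketch of my own, using the coin-toss gamble Peters leans on throughout (heads multiplies wealth by 1.5, tails by 0.6):

```python
import numpy as np

# Peters' coin toss: each round a fair coin multiplies wealth by 1.5 (heads)
# or 0.6 (tails). Simulate many independent agents over 20 rounds.
rng = np.random.default_rng(0)
n_agents, n_rounds = 100_000, 20
factors = rng.choice([1.5, 0.6], size=(n_agents, n_rounds))
wealth = factors.prod(axis=1)      # terminal wealth, everyone starts at 1

print(wealth.mean())               # ensemble average: about 1.05**20 ~= 2.65
print(np.median(wealth))           # typical agent: about 0.9**10 ~= 0.35
print(np.log(factors).mean())      # time-average growth rate: ~= -0.053/round
```

The ensemble average grows every round while the typical agent is headed toward ruin; that wedge is the whole pitch for the heuristic.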
What do economists actually ask for out of utility functions?
It’s not a resolution of the St. Petersburg paradox. Most economists don’t particularly care about that issue. There are many resolutions to the St. Petersburg paradox, from utility functions, to mental limitations in computing values, to beliefs over the counterparty’s solvency, and even other kinds of computational heuristics. We can now add “thinking about ergodicity” to the list of resolutions. I don’t see why the ergodicity explanation ought to be privileged over any of the others (I quite like the one about counterparty solvency).
Broadly, I classify economists’ use of utility functions into two non-exclusive categories:
- Solving “inverse” problems: The economist is faced with some choice data, and wants to “rationalize” the data by finding a utility function and constraint set such that the observed choices maximize the utility function subject to the constraint set, and satisfy some notion of equilibrium. This is like an inverse problem, since the choices are known and the economist wants to learn the functions which generated the choices. From here the economist typically does some kind of predictive or inferential exercise using the functions they’ve learned, e.g. predicting a counterfactual set of choices under some policy change, or inferring the welfare gain/loss from existing institutions relative to some counterfactual institutions (or vice versa). Economists have used this approach in a lot of different settings: to understand pricing or college enrollment behavior, to predict the effects of patent protections on pharmaceutical supply, and even in experiments testing strategic behavior in games. Sometimes the economist wants to see conditions under which specific behaviors can emerge, e.g. preferences for fairness or conditions under which economic growth and trade liberalization cause environmental degradation. Some of the most empirically-successful models in economics, discrete-choice models, are typically used in this fashion. One of my favorite papers ever is in this category. (A toy version of this inverse-problem workflow is sketched just after this list.)
- Calculating equilibrium or optimal allocations: The economist is trying to understand how resources will be or ought to be allocated under a set of preferences and constraints. To do this, they posit some set of utility functions, some constraints, some notion of equilibrium, and solve the model (maximize utility subject to constraints and equilibrium) to see what the final allocation is. Sometimes the goal is to compare the properties of solutions to similar problems (e.g. compare allocations under two different utility functions, or under a change in some parameter), or to show that allocations under a particular problem structure satisfy some property (e.g. conditions under which auction mechanisms produce the same expected revenues). The economist may calibrate the model parameters to produce observed behavior along some dimensions (e.g. have a particular labor supply elasticity) to study how different policies, technologies, etc. would affect the equilibrium/optimal allocations. This approach is often used for dynamic problems in macroeconomics and natural resource economics. It can be pretty effective in understanding and predicting natural resource extraction.
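To give a flavor of the inverse-problem workflow in the first category, here’s a toy binary logit of my own construction; the “true” price coefficient and the data-generating process are invented, and real applications are vastly richer:

```python
import numpy as np
from scipy.optimize import minimize

# Toy inverse problem: observe choices, recover the utility parameter that
# rationalizes them. Binary logit with a made-up true price coefficient.
rng = np.random.default_rng(0)
beta_true = -1.5
prices = rng.uniform(1, 5, size=5_000)        # price of option A; B is free
u_A = beta_true * prices + rng.gumbel(size=5_000)
u_B = rng.gumbel(size=5_000)
chose_A = (u_A > u_B).astype(float)           # the "data" the economist sees

def neg_loglik(beta):
    p_A = 1 / (1 + np.exp(-beta[0] * prices))  # logit choice probability
    return -np.sum(chose_A * np.log(p_A) + (1 - chose_A) * np.log(1 - p_A))

print(minimize(neg_loglik, x0=[0.0]).x)        # ~= -1.5: the learned utility
```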
These two approaches aren’t exclusive, and bleed over into each other. Calibration, for example, has much in common with structural estimation. In the first approach, the utility function is an unknown object to be learned from observed allocations, often restricted to lie in a specific class of functions with known/desirable properties. In the second approach, the utility function is taken as given (again from a class with known/desirable properties) and the focus is on using the utilities to generate allocations which satisfy some criteria. For either approach, it’s fair game to argue that the analyst has chosen an inappropriate set of restrictions on the utility functions used, and that different restrictions would yield materially different results. It’s pretty rare in my experience to see an economist argue that it would be silly for an agent to choose to use a specific utility function, though—we usually take the utility function as something beyond the agent’s control. It’s easy to see why we do this: if we didn’t, we’d have to explain how agents choose utility functions, thus generating a “meta-utility function”, a utility function over utility functions. But then where does the meta-utility function come from? It’s utility functions all the way up. As a discipline we seem to have collectively agreed to stop at the first level for most analyses, say there exists a utility function which the agent optimizes, and move forward.
So what I find notable about the EE approach is not that they settle on log utility for multiplicative gambles, but the reasoning they give for it. Log utility and its relatives (the CRRA class) are actually pretty standard. They have a lot of known issues, but people use them because they get a bunch of first-order features right and they’re easy to work with. But EE claims a different argument for using log utility: agents choose the log utility function because it reflects the ergodic properties of a multiplicative gamble.
(Funnily enough, Peters actually argues in favor of the CRRA class of utilities in the paper: “For financial processes, fitting more general functions often results in an interpolation between linear and logarithmic, maybe in a square-root function, or a similar small tweak.” That’s the CRRA class, which is already standard!)
Peters’ argument is interesting, but I think it hits some snags. Why assume agents maximize final wealth? Why maximize the growth rate of wealth? Surely some people sometimes do (e.g. professional gamblers on the circuit), but just as surely some people sometimes don’t (e.g. players in friendly poker matches). Pinning the utility function curvature down using the ergodic properties of the gamble allows us to calculate equilibrium/optimal allocations for professional gamblers on the circuit, but it leaves us unable to explain behavior in friendly poker matches. It also leaves us unable to solve inverse problems without adding a bunch more structure in other places. Maybe we should do that, but I don’t find this paper’s evidence compelling enough to make me toss out my demand estimation toolkit.
An aside: seriously, why should I always assume people are maximizing a growth rate? Peters states this based, it seems, on his own introspection: “… it’s the growth rate I would optimize.” But that’s just, like, your opinion, man. I have no problem with people introspecting to determine reasonable objective functions. I do have a problem with people claiming their introspection somehow implies universality. I think it’s one of the major missteps we economists often make in describing how we choose objective functions. The more defensible statement, I think, is “If agents optimize a growth rate, then the ergodicity heuristic says …”. It’s a small tweak but I think a meaningful one.
Does EE address some of the existing problems with CRRA utilities?
In any case, by stumping for CRRA utilities and this particular method of calibrating the utility function curvature, the EE approach also fails to address some of the big problems we know those utilities have:
a. Concave utilities imply weird behavior over modest stakes. Rabin applies his calibration theorem as a critique of EUT, but it’s broader than that. It applies to any concave utility functions, including the CRRA class which EE stumps for. I run into this calibration issue when I’m trying to solve nonstationary dynamic optimization models with probabilities that are endogenous and vary by orders of magnitude over a typical trajectory. Calibrating a globally-concave utility function to reproduce observed tradeoffs over the entire trajectory often requires some assumptions on the value agents place on “exiting” the problem (e.g. the utility of death, bankruptcy, etc). I’m not sure how to incorporate these kinds of preferences or valuations into an EE approach. (A small numerical version of Rabin’s point is sketched after this list.)
b. We know that utility functions don’t have constant coefficients of relative risk aversion, even when we’re just talking about wealth and income. Sure, environmental and natural resource problems like the ones I work on tend to have nonlinear dynamics which, in an EE world, could generate varying coefficients of relative risk aversion. But how does that come out when we’re “just” talking about consumption, income, and wealth? This is arguably the domain EE is targeting so it ought to have some explanation of how to proceed. The EE approach also makes it hard to take “validation” approaches to solving inverse problems, where we use information about utility curvature from choices that aren’t over gambles to infer something about preferences over gambles. Fundamentally, I think these are manifestations of the same methodological problem in the EE approach: by having the utility function be determined entirely by the properties of the wealth process and not at all by the properties of the agent, EE seems unable to account for things like consumption commitments, unemployment durations, or labor supply elasticities.
c. We know that people have preferences over when uncertainty gets resolved. They’re willing to pay to learn information earlier even if they have no way to re-optimize on the basis of that information. By revealed preference, those people just don’t like uncertainty. CRRA-class preferences don’t allow this because they entangle preferences over states with preferences over time. Recursive preferences are often used to get around this issue, since they separate attitudes toward risk across states from attitudes toward substitution across time. Preferences over when uncertainty gets resolved also have major implications for environmental policy and forecasting, e.g. 1, 2. I find it odd that EE, which claims to respect dynamics and uncertainty more than standard economic theory, ends up with functions which create the same kind of issues when considering dynamic policy design in the face of uncertainty. I’m not saying recursive preferences are The One True Way or anything. But in making the pitch for an EUT replacement on dynamics/uncertainty grounds with finance-y examples, I’d expect to see some engagement with this issue.
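On point (a), here’s a small numerical version of the Rabin-style calibration problem; the wealth level and gamble sizes are numbers I picked for illustration, not calibrated to anything:

```python
import numpy as np
from scipy.optimize import brentq

def crra(w, gamma):
    # CRRA utility; gamma is the coefficient of relative risk aversion
    return np.log(w) if gamma == 1 else w**(1 - gamma) / (1 - gamma)

w0 = 20_000.0  # illustrative wealth level, not calibrated to anything

# Find the gamma at which the agent is exactly indifferent to a 50/50
# lose-$100/gain-$110 gamble; it comes out around 18 at these numbers.
f = lambda g: 0.5 * crra(w0 - 100, g) + 0.5 * crra(w0 + 110, g) - crra(w0, g)
gamma = brentq(f, 1.01, 50.0)

# With gamma > 1, utility is bounded above by 0, so even an infinite upside
# can't compensate a 50/50 chance of losing $5,000:
best_possible = 0.5 * crra(w0 - 5_000, gamma) + 0.5 * 0.0
print(gamma, best_possible > crra(w0, gamma))   # prints ~18, False
```

The gamma that rationalizes turning down the small gamble makes any large loss so catastrophic that no finite upside can compensate, which is Rabin’s punchline.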
As I mentioned, these are known issues in EUT, so I’m not claiming EE is somehow worse than EUT because it fails to solve them. But if EE doesn’t address some of these issues, what is the real value add? Why should I appeal to ergodicity heuristics when they’re going to take me to the same functions as EUT, and they won’t even let me resolve issues that I’d run into anyway? Resolving the St. Petersburg paradox ain’t it. I’m sensitive to the fact that the EE approach is relatively new and Peters presents it as a null model. But strong claims require strong evidence: if you’re going to say economists have been getting this whole utility function business wrong for centuries, you ought to engage with the problems that we’re grappling with now (not the problems of centuries past).
(I actually agree that there’s room for improvement in how we model preferences and choice! I’m just not convinced by the EE critique or solution.)
At a mechanical level, accounting for non-ergodicity in actually solving economic models is a reasonably well-understood challenge. I think the implications for estimation are also pretty well-understood in econometrics. So I wouldn’t say economists “ignore” ergodicity; it’s more that we don’t typically consider it relevant for static decision problems. The novelty of the ergodicity heuristic is that it explicitly links a class of static decision problems (where economists don’t usually see ergodicity being relevant) to a class of dynamic decision problems (where economists do often think about ergodicity).
There’s another oddity here about stationarity. Peters claims that economists “… typically [deal] with systems far from equilibrium—specifically with models of growth….[making] an indiscriminate assumption of ergodicity”. And it’s true that economists often deal with growth (though we distinguish between “equilibrium” and “steady state”—in economics the latter is often a specialization of the former, but neither is technically a proper subset of the other). But when we do study growth, we tend to “stationarize” the system to study “balanced” growth paths (wiki, more-detailed notes). I don’t really work with these models, but it sure seems like there’s some similarity between how economists construct balanced growth paths and the purpose of \(v(x)\) in equation (5) of Peters’ paper.
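For concreteness, the textbook detrending step looks like this (my gloss, not anything from Peters): if consumption grows at rate \(g\) along the balanced path, define

\[\begin{align} \hat{c}_t = c_t e^{-gt}, \qquad \ln c_t = \ln \hat{c}_t + gt, \end{align}\]

so with log utility the growth trend separates out additively and the analysis proceeds on the stationary object \(\hat{c}_t\), much as \(v(x)\) is constructed so that the growth rate of the process is stationary.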
Psychological and physical implications of the ergodicity heuristic
The ergodicity heuristic is reasonable if people aim to maximize growth rates of physical observables, and Peters seems to argue they ought to do so. It also assumes people are able to calculate long-run time-averages of noisy processes correctly, but he doesn’t seem to have any issues with rational expectations in EUT (I don’t always love ratex, but EUT\(\neq\)ratex). Applied to a static gamble, this leads people to behave as though they were choosing a sequence of gambles.
But I think Peters is missing a big and fundamental point here. All utility functions and optimization procedures carry psychological implications about the agents who use them! Peters argues EE gets away from “loose connections to psychology”, but I don’t see how. EE still assumes rational expectations (i.e. agents perceive probabilities correctly)—that’s a psychological assumption. EE still assumes agents maximize some kind of CRRA utility—that’s a psychological assumption. EE assumes agents care about growth rates of observables—another psychological assumption. The EE approach to pinning down a CRRA function also allows agents to use different objective functions for different processes, which can lead to contradictions or weird implications about memory.
Personally, I think you can’t escape these kinds of psychological assumptions/implications when you’re trying to model behavior (whether in humans or other species). They exist whether you intend them or not, so it seems reasonable to actually think about the implications and make sure they’re what you want to imply. To put it a bit bluntly in a different context: just because I say creationism gets away from loose speculation about unobservable beginnings of the universe, it doesn’t mean creationism doesn’t imply things about the unobservable beginnings of the universe.
Is the ergodicity heuristic rational?
The only definition of “rational” allowed in our house is “arising from a complete and transitive preference relation”. I have no issue with declaring some behavior “irrational”, or equivalently agreeing that it can’t be rationalized by some particular choice rule. It’s not obvious to me that the ergodicity heuristic is necessarily rational. Putting completeness aside for a moment, the ergodicity heuristic seems like it could lead to preference cycles from being unable to distinguish between static gambles and sequences of gambles.
To make this argument a little more precise, consider two gambles \(A\) and \(A'\). \(A\) is a gamble Peters describes (a fair coin toss: heads you gain 50\% of your wealth, tails you lose 40\%). \(A'\) is the same gamble with two sequential coin tosses (if you accept, you’re committed to accepting both outcomes—no stopping option). I don’t see how the ergodicity heuristic would distinguish between \(A\) and \(A'\)—the time series of both used for the heuristic would look the same. But EUT sees the two differently: one is a single coin toss, and the other is a sequence of coin tosses with no option to stop. Now, I may accept both, or reject both, or accept/reject one but not the other, depending on my preferences (utility function), but within EUT I can at least clearly distinguish between these two. I haven’t tried it, but if it’s possible to construct two sequences which lead to different choices under EUT but are indistinguishable under the ergodicity heuristic, I think that’s a bad sign. I think it could be done with \(A\) and an \(A''\) composed of two gambles, which if accepted once have a different ensemble expectation than \(A\) but if accepted repeatedly produce the same time average as \(A\). If anyone can try this, or confirm that it’s impossible, or tell me how the ergodicity heuristic distinguishes between two such gambles of different sequence lengths, do let me know. The fundamental point I’m trying to get at here is that the ergodicity heuristic asks me to impose some dynamics over a not-necessarily-dynamic gamble. Maybe this makes sense, but maybe it doesn’t. Not all gambles are offered more than once.
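Here’s one quick candidate of my own along those lines, though it’s a single modified gamble rather than the two-gamble composition I described, so treat it as suggestive: any 50/50 multiplicative gamble whose up and down factors multiply to \(0.9\) shares \(A\)’s time-average growth rate, while its one-shot ensemble expectation can differ.

```python
import numpy as np

# Gamble A (Peters): 50/50, wealth multiplied by 1.5 or 0.6.
# Candidate A'': 50/50, multiplied by 1.8 or 0.5, chosen (by me) so that
# 1.8 * 0.5 = 1.5 * 0.6 = 0.9, i.e. an identical time-average growth rate.
for name, g in [("A", np.array([1.5, 0.6])), ("A''", np.array([1.8, 0.5]))]:
    print(name,
          "one-shot EV multiplier:", g.mean(),        # 1.05 vs 1.15
          "time-avg growth rate:", np.log(g).mean())  # ~= -0.0527 for both
```

A linear-utility EUT agent strictly prefers this \(A''\) to \(A\); the ergodicity heuristic, as far as I can tell, can’t tell them apart.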
An aside: Real options, like the option to stop gambling or to switch to Geico, are important. I’m leaving the option to stop out of the picture because I don’t know how to handle it in the ergodicity heuristic; would I consider an average over time series ending at different stopping points, for infinitely many stopping points? It would be pretty limiting if the ergodicity heuristic couldn’t handle real options in risky choices, so I’m going to assume it can be done and I just don’t know how.
What does the ergodicity heuristic say about how I think?
Anyway, Peters’ example is a bit weird to me. It seems to be doing exactly what he pans EUT for doing: drawing on loose references to psychology to justify a choice rule. In accepting a commitment to a sequence of coin tosses, EUT says I ought to calculate the final value of those sequences, then calculate the expected value I’ll receive from randomly drawing a sequence. If I’m being committed to a sequence then I’m drawing a sequence, not a single toss. If you do the EUT thing for an infinite sequence (even with linear utility over wealth), you’ll end up with exactly the outcome Peters describes: reject this gamble and stay home, since almost every sequence drives wealth to zero.
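The order of operations is doing real work in that calculation, and it seems worth making explicit (this is a standard fact about the process, not something from the paper): the per-round expected multiplier is \(1.05\) while the expected log multiplier \(\mathbb{E}[\ln(1+x)] \approx -0.053\) is negative, so

\[\begin{align} \lim_{T\to\infty}\mathbb{E}[w_T] = \lim_{T\to\infty} w(1.05)^T = \infty, \qquad \mathbb{E}\Big[\lim_{T\to\infty} w_T\Big] = 0. \end{align}\]

Averaging first and then taking the long-run limit says “accept”; taking each sequence’s limit first and then averaging says “reject”. As far as I can tell, the ergodicity point lives entirely in that interchange.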
So what the ergodicity heuristic is adding here isn’t “how do I choose to accept/reject a sequence of gambles”, it’s “a psychological interpretation of how people choose over static gambles based on how they ought to choose over sequences of gambles.” I don’t see any references to psychological or neurological studies indicating people tend to choose on this basis.
It matters that this is a psychological implication/assumption of the ergodicity heuristic, because Peters argues the heuristic is more credible than EUT since it only invokes “physical reality”. I would argue EUT also invokes physical reality—what goes on in our minds has physical correlates, and ought to be as physical as the value of the growth of a socially-constructed variable like “wealth in dollar terms”. The physical reality of our psychology is often harder to observe than counting the number of dollar bills on my table at the end of poker night. But harder to observe isn’t unobservable, and this starts to feel a bit like the “looking under the lamppost” neoclassical economists are often panned for.
Multiverses and multiverses
The heuristic has some other strange implications, at least on the grounds Peters is assessing it and EUT. Peters claims it avoids invoking a multiverse of identical copies of myself with whom I can transfer resources so that I harvest the expected value of the gamble. If you buy that interpretation of probability and expectations (I don’t), then the ergodicity heuristic does avoid this multiverse issue. But it comes at the cost of invoking a multiverse of almost-identical copies of the universe where I transfer resources in a unidirectional “timelike” (I’m probably using physics jargon wrong) flow so that I harvest the value of a typical time series. To see this, let’s put some meat on the bones of Peters’ gamble.
Suppose a friend offers me a bet over whether India will beat England in an upcoming cricket match: if India wins my friend gives me +50\% of my wealth, if India loses I pay out 40\% of my wealth. Suppose also that India and England are evenly matched in every respect: however we calculate it, the probability of India winning is indistinguishable from 50\% (this is the same gamble Peters uses). Letting my initial wealth be \(w\), Peters argues that choosing so as to maximize
\[\begin{align} u = \begin{cases} 0.5\cdot 1.5w + 0.5\cdot 0.6w &\text{ if accepted}\\ w &\text{ if rejected} \end{cases} \end{align}\]invokes a multiverse of copies of my friend and me betting on the same match, where I consider \(0.5\cdot 1.5w + 0.5\cdot 0.6w\) because I’m considering the average consequence experienced by my ensemble of multiverse clones, potentially implying the ability (or my belief in the ability) to transfer resources across the ensemble. The ergodicity heuristic would have me instead choose so as to maximize
\[\begin{align} u = \begin{cases} \lim_{N,T\to\infty} \frac{1}{N}\sum_{i=1}^N\frac{1}{T}\sum_{t=1}^T w_{i,t} \text{ s.t. } w_{i,t+1} = w_{i,t}(1+x_{i,t}) &\text{ if accepted}\\ w &\text{ if rejected} \end{cases} \end{align}\]where \(x_{i,t} = \begin{cases} 0.5 &\text{w.p. } 0.5\\ -0.4 &\text{w.p. } 0.5. \end{cases}\)
This assumes I’m going to consider what happens to a typical instance of a time series of wealths into the future if I repeatedly accept this bet. But in the same way considering alternate possible outcomes of the bet invokes a multiverse of identical “me and my friend betting on this one match”, the idea of a time series arising from bets like this invokes a sequence of multiverses with identical “me and my friend betting on this one match” where I get to transfer my wealth from moment to moment but nothing else changes. Personally, I find this stranger than the “ensemble multiverse” Peters takes issue with—at least the ensemble version could map to some kind of physically-appropriate introspection. What physical justification could I have for considering a Groundhog Day sequence where only my wealth flows with me?
Peters justifies this computation by saying “[w]e all live through time and suffer the physical consequences (and psychological consequences, for that matter) of the actions of our younger selves.” But I don’t live through time making the same bet over and over again. The ergodicity heuristic seems to assume I do.
To be clear: I have no problem saying people may or may not consider alternate possible outcomes through some kind of introspective process when determining whether to accept a risky gamble. The “typical instance” piece of the ergodicity heuristic—Peters calls it a “typical individual trajectory”—seems to invoke some notion of evaluating branching timelines that are identical but for the sequences of match outcomes. I’m ok assuming people project alternate timelines where some random variable or sequence of random variables turns out differently. Peters gets out of this kind of thing by stripping context out of the choice problems he considers and applying a presumably-physics-inspired interpretation of what it means to calculate an expectation across states.
I just don’t understand why that physics-y interpretation of an expectation here is relevant. I much prefer an interpretation based on Savage and subjective probabilities (these probabilities reflect subjective beliefs about the likelihood of ending up in different states of nature), some kind of frequentist ratex interpretation (these probabilities are objective frequencies with which an outcome is realized in some large-sample limit, and I know the probabilities, and I will experience one of these realizations), or any convex combination. Unless we’re thinking about issues where transfers across different-but-simultaneous-states are relevant (e.g. insurance), I don’t understand why I should interpret expectations as me “harvest[ing] the average…consequences of the actions of my multiverse clones.” This might make sense for particles sharing momentum in collisions and defining a temperature as an expected value or \(\langle\cdot\rangle\) of that process—I don’t know, I’m not a physicist.
What is “physically realistic”, anyway?
EE also seems less physically realistic to me than EUT in terms of computational complexity. Applying the ergodicity heuristic to a one-off gamble requires an agent to perform more calculations and use more memory than just doing the standard EUT thing and considering the gamble on its own. I’m used to people panning neoclassical economics for assuming agents have unrealistically sophisticated computational abilities; I don’t think I’ve ever heard anyone say “neoclassical economics assumes these agents just aren’t doing enough calculations”. But calculations take space, time, and energy. It seems odd to be worried about the physical realism, then ignore the physical reality of computations in favor of the physical reality of multiverse implications. Behavioral economics seems more respectful of physical constraints on computation by messy flesh-and-blood beings than ergodicity economics.
Maybe I’m fundamentally misunderstanding what an ergodic growth rate is, among other things, and it’s nothing like the expression I wrote above. In discussing equation (8) of his paper, Peters makes the point that considering the natural log enables one to account for the long-run consequences of a typical trajectory (“The time average of this growth rate is identical to the rate of change of the specific expected utility function — because of ergodicity”). But that just shifts the burden of the extra calculation, it doesn’t replace it: now the agent has to learn/know the growth function \(v(x)\), solve something like equation (5) (i.e. identify a \(v(x)\) such that \(\frac{\Delta v(x)}{\Delta t}\) is constant), and then solve the optimization problem \(\max_{x \in X} v(x)\). Since we’re accepting ratex here, the agent already knows \(v(x)\) so we’re not imposing an extra knowledge burden on the agent relative to standard EUT. But we are imposing an extra computational burden. EUT with ratex assumes agents have unbounded compute capacity so it’s fine, but it’s just weird to me to call this more physically realistic than standard EUT.
A bit more on meta-utility functions
The key to the ergodicity heuristic is equation (8) in the paper. Peters describes it as a mapping between EUT and EE as follows: “the appropriate growth rate for a given process is formally identical to the rate of change of a specific utility function”:

\[\begin{align} g = \frac{\Delta v(x)}{\Delta t} = \frac{\Delta u(x)}{\Delta t}. \end{align}\]

The phrasing above suggests we go from a utility function to a growth rate, but based on the rest of the paper I assume we’re supposed to start with the growth rate and then get a utility function. So, faced with a choice, EE says I should select a utility function with which to evaluate that choice as follows (a worked pass through the recipe follows the list):
- compute the growth rate of the process implied by the choice;
- integrate the growth rate to back out the utility function.
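Running the recipe on the two dynamics the paper treats recovers exactly the pairings Peters states (this is just his linear/log claim written in the recipe’s order):

\[\begin{align} \text{additive: } x_{t+\Delta t} = x_t + d_t \;\Rightarrow\; g = \frac{\Delta x}{\Delta t} \;\Rightarrow\; u(x) &= x,\\ \text{multiplicative: } x_{t+\Delta t} = x_t(1+r_t) \;\Rightarrow\; g = \frac{\Delta \ln x}{\Delta t} \;\Rightarrow\; u(x) &= \ln x. \end{align}\]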
This is a bit the reverse of how things are normally done in economics, where the utility function is treated as a primitive object describing an agent. If I’m not mistaken, it implies that utility functions are not only context-dependent but choice-specific: if I give you a choice over two additive gambles and two multiplicative gambles, where you can pick one additive gamble and one multiplicative gamble, you should choose the first based on a linear utility function and the second based on a logarithmic utility function. This is, to say the least, a big shift from EUT and the neoclassical way of doing things.
I think this is actually a clever approach to pinning down utility function curvature, but it doesn’t tell me how to construct a meta-utility function over utility functions. Or meta-meta-utilities over meta-utilities, and so on…
What does EE predict you’ll do at the grocery store?
One of the first EUT problems taught in an undergraduate micro theory class is allocating a budget across two goods. Given two goods \(x\) and \(y\), a utility function \(u(x,y)\) that is increasing and strictly concave in both goods, prices \(p_x\) and \(p_y\), and a total budget \(B\), students maximize \(u(x,y)\) subject to \(p_x x + p_y y \leq B\). It’s a simple but flexible problem that can introduce students to indifference curves, budget sets, and constrained optimization with linear and nonlinear functions. The solution is straightforward: spend all your budget, equating marginal utility per dollar across the goods so that \(\frac{1}{p_x}\frac{\partial u(x,y)}{\partial x} = \frac{1}{p_y}\frac{\partial u(x,y)}{\partial y}\).
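For instance, here’s a minimal numerical version of that problem; the Cobb-Douglas utility and all parameter values are my own placeholders:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder Cobb-Douglas utility u(x, y) = a*ln(x) + (1 - a)*ln(y);
# all parameter values below are mine, chosen only for illustration.
a, px, py, B = 0.3, 2.0, 5.0, 100.0

res = minimize(lambda q: -(a * np.log(q[0]) + (1 - a) * np.log(q[1])),
               x0=[1.0, 1.0],
               bounds=[(1e-6, None), (1e-6, None)],
               constraints=[{"type": "ineq",
                             "fun": lambda q: B - px * q[0] - py * q[1]}])

print(res.x)                           # numerical optimum
print([a * B / px, (1 - a) * B / py])  # closed form: x* = aB/p_x, y* = (1-a)B/p_y
```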
It’s important in the EUT approach that we take the utility function as given because there’s no way to define it from the other elements of the problem. I can represent a lot more information in the utility function, like the complementarity between \(x\) and \(y\), but I need that information given to me. I don’t see how to even get started with EE here. There’s no gamble, no growth process to speak of: just two goods, prices, and a budget. Telling an EUT user \(x\) and \(y\) are perfect complements tells them to use something like \(u(x,y) = \min\{x,y\}\). What does perfect complementarity tell me if I use EE?
I’ve written the choice above with deterministic items, but the same issue shows up if I make one of the items a gamble. How many PS5 raffle tickets should I buy at the store? How should I split my shopping budget between raffle tickets and wrapping paper?
Conclusion
These are the major issues I saw with the ergodicity heuristic as a tool for describing choices over gambles. I’m not saying it can’t be useful, but I think Peters is overselling EE when he talks about how it fixes a centuries-old error in economic theory. There are many resolutions to the St. Petersburg paradox. We can add another to the list, but I see no reason to give it privileged status over the other resolutions we have.
I’m assuming from the way Peters describes the problems of choice under uncertainty that he’s not especially familiar with the economics literature outside of finance/growth/macro. Fair enough, I’m not super familiar with those things myself. But I think it’s leading him to miss a really natural/favorable testing ground for EE: natural resource economics. Natural resource economics offers a big selection of high impact problems with well-understood nonlinear growth processes and choices under uncertainty, like fisheries management or forest rotation. Does EE offer a different perspective on the Faustmann equation or Hotelling’s rule, or a different way to think about capital-resource economies? Natural resource economists tend to be pretty open to modelling techniques from the natural sciences and empirical fit, so I think the crowd could even be a sympathetic one.
To make this a bit more concrete, consider standard fisheries problems. We assume fishers catch fish to maximize profits, which are linear in the total catch and effort exerted. The dynamics of the fish population are typically assumed to follow a logistic growth equation, but there’s a lot of uncertainty in the current stock. Based on equation (8) of Peters’ paper, it seems like EE predicts the objective function ought to be determined by the logistic growth function rather than just a linear function of effort. Ok, test that! See whether the utility function implied by EE fits the data better than the profit functions typically used. There’s great data available on fisheries harvests and stock estimates. There are plenty of economists working in this area who are open to models with non-EUT behavior. Just a couple of recent pubs by such economists off the top of my head: adaptive behavior in fisheries, opportunities for agent-based modelling in fisheries, cost-effective endangered species management for viable populations. Folks in this area are also often quite aware of dynamical systems issues like ergodicity. Personally I’m skeptical EE will add much because it still maintains rationality hypotheses and doesn’t seem to add any useful constraints. But here’s a great testing ground where an EE-user can show how EE allows us to avoid invoking failures of rationality (which Peters doesn’t seem to like: “Paired with a firm belief in its models, this has led to a narrative of human irrationality in large parts of economics”) while fitting the data better than existing approaches. If it works to produce better fisheries management programs, EE could meaningfully improve life for millions of people who depend on fishing for employment and sustenance.
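To sketch what the first step of such a test might look like (every number here is a placeholder I made up, and the objective comparison is only gestured at):

```python
import numpy as np

# Logistic stock dynamics with a constant harvest rate and multiplicative
# stock uncertainty. Every parameter value is a placeholder, not an estimate.
rng = np.random.default_rng(0)
r, K, T = 0.4, 1.0, 200        # intrinsic growth, carrying capacity, horizon
s, h = 0.5, 0.05               # initial stock (as share of K), harvest rate

stocks = []
for t in range(T):
    surplus = r * s * (1 - s / K)         # logistic surplus production
    shock = 0.05 * s * rng.normal()       # stock uncertainty
    s = max(s + surplus - h * s + shock, 1e-6)
    stocks.append(s)

catch = h * np.array(stocks)
# Objects the test would compare: a linear profit proxy (total catch) vs an
# EE-style average growth-rate objective over the same path.
print(catch.sum(), np.log(catch).mean())
```

Again, this is only the scaffolding; the actual test would calibrate the dynamics to stock-assessment data and compare fit across the two objectives.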
None of this is to say ergodicity is irrelevant in economics. I just think the more interesting questions about ergodicity are related to learning. Outside of economics, I think computer scientists are doing some really interesting stuff in this area under “bandit problems”. But questions related to “how do I learn what the probabilities of winning/losing are when they keep changing and I get limited feedback” and “what does this challenge mean for the objective I maximize/how I choose what to do” seem to be very different from the ones EE seems to be posing.
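For flavor, the kind of toy problem I mean (a standard epsilon-greedy sketch of my own, not drawn from any specific paper):

```python
import numpy as np

# Two-armed bandit where the win probabilities drift over time and the
# gambler only observes the arm they pulled. Epsilon-greedy with a constant
# step size so the value estimates can track a moving target.
rng = np.random.default_rng(1)
p = np.array([0.45, 0.55])     # initial win probabilities (they drift below)
q = np.zeros(2)                # running value estimates
alpha, eps = 0.1, 0.1          # learning rate, exploration rate

wins = 0
for t in range(10_000):
    arm = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
    reward = float(rng.random() < p[arm])
    q[arm] += alpha * (reward - q[arm])
    p = np.clip(p + rng.normal(0, 0.01, size=2), 0.05, 0.95)
    wins += reward

print(wins / 10_000, q)
```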
To wrap it up, I’m just not convinced the ergodicity heuristic is as useful, or as much of a replacement for EUT, as Peters claims it to be. It doesn’t help me address most things I care about when I think about choices over gambles, and it constrains my modelling flexibility in ways I can see being problematic. The psychological/physical rationales for using EE don’t make any sense to me. I appreciate that it’s a null model to use. But I can’t see myself using it in the near future: as it stands, I’m not sure what I gain from rejecting or failing to reject it.