Life update

A life update: I graduated today! I’m a doctor now. B)

I started working towards an Econ PhD in 2012. Back then, as a graduating Business Administration student, an Econ PhD seemed unattainably distant. Over the next couple of years I learned more math, became a Coro Fellow, and worked at Red9.

Coro felt like a sideways move: it wasn’t directly connected to academic economics, but I learned a lot and gained some valuable real-life skills (like how to get and give a business card, how to actively listen and ask good questions, and how to critically analyze my own thought processes). Red9 felt a lot more directly connected, since the data analysis was high-frequency time series stuff and the theoretical modeling involved a lot of derivations, but it ended up being much less relevant than I’d expected. Learning R was one of the most helpful things I took from my Red9 experience, but in hindsight Coro was probably the most helpful thing I did to get through the PhD in 5 years. Funny how that works. I finally started grad school in the Fall of 2014.

Grad school was tough. I struggled a lot in the first couple of years, first with figuring out how to be a grad student, then with figuring out what to do research on. Teaching was fun; my Coro training helped me immensely. In the first year (and during my time as a Coro Fellow) I was also working 5-30 hours a week on Red9 stuff, which was cool but also made it harder to focus on doing well in school.

I came to grad school thinking I’d use applied micro methods to do something at the intersection of development, labor, and finance, but found that I liked theory a lot more than I expected. I ended up focusing on recursive dynamic stochastic modeling and environmental economics, which eventually led me to orbit use.

I defended my dissertation, “The Economics of Orbit Use: Theory, Policy, and Measurement”, on April 2nd. Over the next few months, I’ll start posting more. At some point this will include a non-technical summary of my dissertation work. For now, I’m just thrilled to have finished at last. Hooray!


Tidying TLEs in R

I’ve been working with the Union of Concerned Scientists’ data on active satellites for a while now, and decided it was time to add Space-Track’s debris data to it. The UCS data is nice to work with because it’s already tidy: one row per observation, one column per variable. One format for debris data is Two-Line Elements (TLEs) (Space-Track description).

TLEs are apparently great for orbital propagation and stuff, I don’t know, I’m not an aerospace engineer. I’ve seen some stuff about working with TLEs for propagators in MATLAB or Python, but nothing about (a) working with them in R or (b) tidying a collection of TLEs in any program. This post documents a solution I hacked together, mixing an ugly bit of base R with a neat bit of tidyverse. The tidyverse code was adapted from a post by Matthew Lincoln about tidying a crazy single-column table with readr, dplyr, and tidyr. Since I’m reading the data in with read.csv() from base, I’m not using readr.

This process also isn’t strictly necessary. Space-Track makes JSON-formatted data available, and read_json() from jsonlite handles it nicely. This is a “doing things for the sake of it” kind of post.
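
For reference, that route is only a couple of lines (the filename here is made up; point read_json() at wherever you saved the Space-Track JSON):

library(jsonlite)
leo_sats <- read_json("leo_sats.json", simplifyVector=TRUE) # simplifies the array of records into a dataframe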

So. Supposing you’ve gotten your TLEs downloaded into a nice text file, the first step is to read it into R. The TLEs I’m interested in are for currently-tracked objects in LEO, which comes out to a file with 8067 rows and 1 column (API query, at least as of when I wrote this).

library(dplyr)
library(tidyr)
library(stringr) # str_trim() gets used in the tidying step later on

# read in TLEs
options(stringsAsFactors=FALSE) # it's more convenient to work with character strings rather than factors
leo_3le <- read.csv("leo_3le.txt",header=FALSE) # first row is not a column name

This is entirely a statement about me and not the format: TLEs look weird as hell.

> dim(leo_3le)
[1] 8067    1

> head(leo_3le)
                                                                     V1
1                                                          0 VANGUARD 2
2 1 00011U 59001A   18194.13149990  .00000063  00000-0  13264-4 0  9995
3 2 00011  32.8728 183.1765 1468204 230.1854 116.0167 11.85536077532985
4                                                          0 VANGUARD 3
5 1    20U 59007A   18193.39885059 +.00000024 +00000-0 +12986-4 0  9998
6 2    20 033.3397 177.1390 1667296 121.6835 255.7164 11.55589562148902

Lines 1:3 represent a single object. Lines 4:6 represent another object. It’s an annoying format for the things I want to do.

Ok, now the ugly hack stuff: I’m going to select every third row using vectors with ones in appropriate spots, relabel zeros to NAs, drop the NAs, then recombine them all into a single dataframe.

# the ugly hack: rearranging the rows with pseudo matrix math. first, I select the indices for pieces that are line 0 (names), line 1 (a set of parameters), and line 2 (another set of parameters)
rownums <- as.numeric(row.names(leo_3le)) # make sure the row numbers are numeric and not characters - probably unnecessary
tle_names_idx <- rownums*rep(c(1,0,0),length.out=dim(leo_3le)[1])
tle_1line_idx <- rownums*rep(c(0,1,0),length.out=dim(leo_3le)[1])
tle_2line_idx <- rownums*rep(c(0,0,1),length.out=dim(leo_3le)[1])
# rename the zeroed observations to NA so they're easy to drop
tle_names_idx[tle_names_idx==0] <- NA
tle_1line_idx[tle_1line_idx==0] <- NA
tle_2line_idx[tle_2line_idx==0] <- NA
# now drop the NAs
tle_names_idx <- tle_names_idx[!is.na(tle_names_idx)]
tle_1line_idx <- tle_1line_idx[!is.na(tle_1line_idx)]
tle_2line_idx <- tle_2line_idx[!is.na(tle_2line_idx)]
# recombine everything into a dataframe
leo_3le_dfrm <- data.frame(sat.name = leo_3le[tle_names_idx,1], 
						   line1 = leo_3le[tle_1line_idx,1],
						   line2 = leo_3le[tle_2line_idx,1])

This leaves me with a 2689-row, 3-column dataframe. The first column has the satellite name (line 0 of the TLE), the second column has the first set of parameters (line 1 of the TLE), and the third column has the second set of parameters (line 2 of the TLE). There’s probably a way to do this in tidyverse; there’s also a simpler seq()-based way to build the indices in base R, sketched after the output below.

> dim(leo_3le_dfrm)
[1] 2689    3

> head(leo_3le_dfrm)
           sat.name
1      0 VANGUARD 2
2      0 VANGUARD 3
3      0 EXPLORER 7
4         0 TIROS 1
5      0 TRANSIT 2A
6 0 SOLRAD 1 (GREB)
                                                                  line1
1 1 00011U 59001A   18194.13149990  .00000063  00000-0  13264-4 0  9995
2 1    20U 59007A   18193.39885059 +.00000024 +00000-0 +12986-4 0  9998
3 1 00022U 59009A   18193.85420323  .00000017  00000-0  25009-4 0  9994
4 1 00029U 60002B   18194.55070431 -.00000135  00000-0  10965-4 0  9992
5 1 00045U 60007A   18194.14978757 -.00000039  00000-0  17108-4 0  9997
6 1 00046U 60007B   18194.29883214 -.00000041  00000-0  15170-4 0  9997
                                                                  line2
1 2 00011  32.8728 183.1765 1468204 230.1854 116.0167 11.85536077532985
2 2    20 033.3397 177.1390 1667296 121.6835 255.7164 11.55589562148902
3 2 00022  50.2826 171.3798 0140753  83.9529 277.7437 14.94580679119776
4 2 00029  48.3805  35.0120 0023682  97.0974 263.2631 14.74254161114643
5 2 00045  66.6952  79.4545 0248473 109.6184 253.1917 14.33604371 21605
6 2 00046  66.6897 151.6848 0217505 295.8620  62.0202 14.49157393 36927
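
For what it’s worth, here’s a simpler way to build those index vectors in base R, assuming the file always repeats clean name/line 1/line 2 triples with nothing missing:

# simpler index construction: every third row, starting from rows 1, 2, and 3
tle_names_idx <- seq(from=1, to=nrow(leo_3le), by=3)
tle_1line_idx <- seq(from=2, to=nrow(leo_3le), by=3)
tle_2line_idx <- seq(from=3, to=nrow(leo_3le), by=3)

Plugging these into the data.frame() call above gives the same result.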

The tidyverse functions are the prettiest part of this. I create new objects to hold the modified vectors (just a personal tic), then run two pipes, one for each parameter line, to do the cleaning. Each pipe first splits the strings at the appropriate character positions. Why not just split on whitespace, you ask? Apparently there can be whitespace inside some of the orbital elements ¯\_(ツ)_/¯ (Element Set Epoch, columns 19-32 of line 1). Each pipe then trims leading and trailing whitespace. Finally, I recombine everything into a dataframe. There are tidy ways to do that last step, but I like using base R here.

# make separate objects for the first and second line elements
line1_col <- data_frame(text = leo_3le_dfrm[,2])
line2_col <- data_frame(text = leo_3le_dfrm[,3])

# the beautiful tidying: split the dataframe where there are variables and trim whitespace
leo_3le_dfrm_line1 <- line1_col %>% 
						# split the strings
						separate(text, into=c("line.num1","catalog.number","elset.class","intl.des","epoch","mean.motion.deriv.1","mean.motion.deriv.2","b.drag","elset.type","elset.num","checksum"), sep=c(1,7,8,17,32,43,52,61,63,68)) %>%
						# trim whitespace
						mutate_at(.funs=str_trim, .vars=vars(line.num1:checksum))
leo_3le_dfrm_line2 <- line2_col %>% 
						# split the strings
						separate(text, into=c("line.num2","catalog.number.2","inclination","raan.deg","eccentricity","aop","mean.anomaly.deg","mean.motion","rev.num.epoch","checksum"), sep=c(1,7,16,25,33,42,51,63,68))  %>%
						# trim whitespace
					    mutate_at(.funs=str_trim, .vars=vars(line.num2:checksum))
leo_3le_dfrm <- as.data.frame(cbind(sat.name=leo_3le_dfrm$sat.name,
									leo_3le_dfrm_line1, 
									leo_3le_dfrm_line2))

The end result is a 2689-row 22-column tidy dataframe of orbital parameters which can be merged with other tidy datasets and used for all kinds of other analysis:

> dim(leo_3le_dfrm)
[1] 2689   22

> head(leo_3le_dfrm)
           sat.name line.num1 catalog.number elset.class intl.des
1      0 VANGUARD 2         1          00011           U   59001A
2      0 VANGUARD 3         1             20           U   59007A
3      0 EXPLORER 7         1          00022           U   59009A
4         0 TIROS 1         1          00029           U   60002B
5      0 TRANSIT 2A         1          00045           U   60007A
6 0 SOLRAD 1 (GREB)         1          00046           U   60007B
           epoch mean.motion.deriv.1 mean.motion.deriv.2   b.drag elset.type
1 18194.13149990           .00000063             00000-0  13264-4          0
2 18193.39885059          +.00000024            +00000-0 +12986-4          0
3 18193.85420323           .00000017             00000-0  25009-4          0
4 18194.55070431          -.00000135             00000-0  10965-4          0
5 18194.14978757          -.00000039             00000-0  17108-4          0
6 18194.29883214          -.00000041             00000-0  15170-4          0
  elset.num checksum line.num2 catalog.number.2 inclination raan.deg
1       999        5         2            00011     32.8728 183.1765
2       999        8         2               20    033.3397 177.1390
3       999        4         2            00022     50.2826 171.3798
4       999        2         2            00029     48.3805  35.0120
5       999        7         2            00045     66.6952  79.4545
6       999        7         2            00046     66.6897 151.6848
  eccentricity      aop mean.anomaly.deg mean.motion rev.num.epoch checksum
1      1468204 230.1854         116.0167 11.85536077         53298        5
2      1667296 121.6835         255.7164 11.55589562         14890        2
3      0140753  83.9529         277.7437 14.94580679         11977        6
4      0023682  97.0974         263.2631 14.74254161         11464        3
5      0248473 109.6184         253.1917 14.33604371          2160        5
6      0217505 295.8620          62.0202 14.49157393          3692        7

It still needs to be cleaned a bit - those + signs in the mean motion derivatives are annoying, I don’t need the line number or checksum columns, and I want to get rid of the leading 0 and whitespace in sat.name - but this is good enough for now.
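
Here’s a rough sketch of that cleanup in base R (column names as above; I haven’t run it against the full dataset, so treat it as a starting point):

# drop the leading "0 " from the names
leo_3le_dfrm$sat.name <- sub("^\\s*0\\s+", "", leo_3le_dfrm$sat.name)
# drop the line number and checksum columns (this catches both checksum columns)
leo_3le_dfrm <- leo_3le_dfrm[, !(names(leo_3le_dfrm) %in% c("line.num1","line.num2","checksum"))]
# strip the leading + signs
plus_cols <- c("mean.motion.deriv.1","mean.motion.deriv.2","b.drag")
leo_3le_dfrm[plus_cols] <- lapply(leo_3le_dfrm[plus_cols], function(x) sub("^\\+", "", x))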


Long-run equilibria in a Zero Dawn economy

Last week I played through Horizon: Zero Dawn on my partner’s dad’s PS4. It’s a really fun game. The controls felt natural, the story was great, and the difficulty on “Hard” was the right balance for a relaxing-but-not-trivial vacation game. It reminded me of Mass Effect 2 that way. I really wish more developers would make story-driven open-world single-player RPGs rather than the MMORPGs that seem to be popular these days. I loved Knights of the Old Republic 1 and 2 (KotOR 2 is, in my humble opinion, one of the best storylines in the Star Wars game universe, and I would pay $60 or more for a remake), but was sorely disappointed by SW:TOR. I suppose the market has spoken, though.

H:ZD is set in a post-post-apocalyptic world, where there are machine-animals and no bio-animals larger than a boar, no governments larger than a modestly-sized empire (the Carja) and some smaller tribes, and technology is somewhere between hunter-gatherer and early agriculture. There are some more advanced technologies harvested from machines and the ruins of our civilization (“the ancients”), which went kaput in the late 2060s. The protagonist, Aloy, is a member of one of the smaller tribes, the Nora, who have a fairly isolationist matriarchal society. The game is a fascinating study of the rise, fall, and re-rise of human societies. With the exception of a couple more-skittish machine-animals, with names like “Strider” (mecha-horse) and “Grazer” (mecha-antelope), the machines tend to be quite aggressive toward humans happening by. The designs are pretty cool, blending dinosaur and megafauna with advanced weapons. For example, “Watchers”, the first predator-machine encountered, are like mecha-deinonychus (deinonychi?) which can sometimes shoot lasers; “Sawtooths” and “Ravagers” are like mecha-sabretooth cats with the latter having some sort of energy beam and radar-like system; and “Thunderjaws” are like mecha-T-rexes with laser beams, energy guns, and missile launchers. The game is a blast.

The world’s economy is somewhere between barter and a metal specie standard. Purchasing items involves trading using a mix of metal shards harvested from machine-animals and some other things (like other parts of machine-animals and parts of bio-animals). Metal shards are the main currency, though. The shards are used in crafting arrows, so they’re more useful than gold bars. The machine-animals don’t really like humans much so hunting them involves some risk. There doesn’t seem to be any banking in the world, private or centralized; there are predators, but no predatory lending.

I thought it’d be fun to think a bit about some of the economics that follow from the ecologically-controlled money supply. I initially thought of looking at how the machine-animal population dynamics would drive inflation, but modifying a standard money supply model to account for the lack of governments and banking felt like too much work. Instead, I’m going to add a hunting-driven price to a standard fisheries model (Gordon-Schaefer) and think about the steady state equilibria.

The model

The model is three equations: the machine-animal population dynamics, the relative price of shards, and the profits from hunting. The first two are just relabeled fish equations, and the third is a simple linear inverse demand curve. To simplify the model, suppose machines are homogeneous and that one unit of shards is harvested per machine. \(P_t\) is the price of a unit of shards at time \(t\), the size of the machine population is \(M_t\), and the hunting rate (harvest effort) is \(H_t\), so that \(H_t M_t\) machines are harvested. The model equations are:

\[\begin{align} \text{(Machine population:) } \dot{M} &= rM_t\left(1 - \frac{M_t}{K}\right) - H_t M_t \\ \text{(Shard price:) } P_t &= A - B H_t \\ \text{(Hunting industry profits:) } \pi_t &= P_t H_t M_t - c H_t \end{align}\]

\(A\) and \(B\) are the usual maximum willingness-to-pay and slope parameters for the shard price. Note that this is partial equilibrium; the shard price is relative to a consumption good with price normalized to \(1\). \(r\) and \(K\) are the natural machine renewal rate (the production rate from the Cauldrons) and the environment’s machine-carrying capacity (the machines feed on biomass). \(c\) is the real cost per unit of hunting effort (arrows, risk, and opportunity cost) relative to the price of the consumption good.

Steady state equilibria

I’m interested in two types of long-run equilibria here: open access to hunting, which is the default state in the game; and a hunting monopoly controlled by the Hunting Lodge, a Carja organization in the game. Under open access, hunters will take down machines until the return from another unit of shards just covers the cost of taking down another machine. Under the Hunting Lodge’s monopoly, hunters will take down machines until the marginal return from another unit of shards is equal to the marginal cost of taking down another machine. Of course, the Hunting Lodge in the game is far from a monopoly and seems to be more interested in the “sport” aspect of hunting than the “lucre” aspect of it. There’s a series of quests where you can disrupt the Hunting Lodge’s antiquated norms and shitty leadership - it’s a really fun plotline.

The steady state condition for the machine population gives the machine population size as a function of the hunting rate, \(\begin{align} \dot{M} &= 0 \\ \implies M_t &= \frac{K}{r}(r - H_t). \end{align}\)

This lets us reduce the model to a single equation in a single variable, profit as a function of \(H_t\), \(\begin{equation} \pi_t(H_t) = \frac{BK}{r}H_t^3 - \left( \frac{AK}{r} + BK \right)H_t^2 + (AK - c)H_t . \end{equation}\) Unlike the usual profit function in the Gordon-Schaefer model, this one is cubic in the hunting rate (harvest effort) because the price is no longer a constant.
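
Writing that substitution out explicitly, using the steady-state \(M_t\) and the shard price from above:

\[\begin{align} \pi_t &= (A - B H_t) H_t \frac{K}{r}(r - H_t) - c H_t \\ &= \left( AK - \frac{AK}{r}H_t - BKH_t + \frac{BK}{r}H_t^2 \right) H_t - c H_t \\ &= \frac{BK}{r}H_t^3 - \left( \frac{AK}{r} + BK \right)H_t^2 + (AK - c)H_t . \end{align}\]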

Under open access, the number of machines hunted will make industry profits zero, i.e. \(H^{OA}_t : \pi_t(H^{OA}_t) = 0\). We can factor out an \(H_t\) and drop it to get rid of the uninteresting \(H_t = 0\) solution. This leaves us with two solutions: \(H^{OA}_t = \left( \frac{r}{2BK} \right) \left[ \left( \frac{AK}{r} + BK \right) \pm \left( \left( \frac{AK}{r} + BK \right)^2 - 4 \left( \frac{BK}{r} \right) (AK - c) \right)^{1/2} \right] .\)

When the Hunting Lodge is the monopoly shard supplier, they’ll control hunting to maximize industry profits. Setting the derivative of \(\pi_t(H_t)\) to zero again gives two solutions: \(H^{HL}_t = \left( \frac{r}{3BK} \right) \left[ \left( \frac{AK}{r} + BK \right) \pm \left( \left( \frac{AK}{r} + BK \right)^2 - 3 \left( \frac{BK}{r} \right) (AK - c) \right)^{1/2} \right] .\)

The Hunting Lodge solution should be closer to zero than the open access solution, and it is: checking the second-order condition, the minus-sign root is the local profit maximum (the plus-sign root is a profit minimum, so we can toss it), and since profits start at zero and are initially increasing whenever \(AK > c\), the open access rate - where profits return to zero - has to lie beyond the Lodge’s peak. That’s the usual result: a monopolist restricts the harvest relative to open access.
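
To put numbers on that comparison, here’s a quick check using the same baseline parameters as the comparative statics code below:

# plug the baseline parameters into the two minus-sign solutions
A <- 10; B <- 2; c <- 5; K <- 100; r <- 10
X <- B*K/r; Y <- A*K/r + B*K; Z <- A*K - c
H_HL <- (1/(3*X))*(Y - sqrt(Y^2 - 3*X*Z)) # Hunting Lodge: the profit-maximizing root
H_OA <- (1/(2*X))*(Y - sqrt(Y^2 - 4*X*Z)) # open access: profits driven to zero
c(hunting.lodge=H_HL, open.access=H_OA) # the Lodge hunts less than open access does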

The long-run effects of a little more HADES

HADES, one of the big baddies of the game, wants to produce more machines, make them stronger and eviler, and wipe out all life on the planet. We can think about the effects of HADES getting a little stronger on the open access hunting rate by looking at the derivatives of \(H^{OA}_t\) with respect to \(r\) and \(c\). Increasing \(c\) should reduce \(H^{OA}_t\), all else equal*; increasing \(r\) looks like it might be more interesting. I like pictures so I’ll do this numerically.

*The first solution, with the plus sign, is increasing in the cost. This is economically weird so I’m going to ignore it. There’s a story where this makes sense: if the cost of hunting is a barrier to entry, then increasing the cost can also increase the value of a unit of hunted shards, since there are fewer suppliers. This type of effect shows up in mining and satellite launching when the fixed cost of entry increases. This solution has the same signs with respect to \(r\) and \(K\) as the other one.

library(ggplot2)
library(gridExtra)

rm(list=ls())

# economic parameters
A <- 10
B <- 2
c <- 5

# ecological parameters
K <- 100
r <- 10

# open access hunting rate
H <- function(c,r,K,...) {
	X <- B*K/r
	Y <- A*K/r + B*K
	Z <- A*K - c
	#rate_1 <- (1/(2*X))*(Y + sqrt(Y^2 - 4*X*Z)) # drop this solution because it increases in the cost - economically weird!
	rate_2 <- (1/(2*X))*(Y - sqrt(Y^2 - 4*X*Z)) # note the parentheses: the formula calls for 1/(2X), not (1/2)*X
	rates <- c(rate_2)
	return(rates)
}

# HADES statics
c_seq <- seq(from=1,to=51,by=1)
r_seq <- seq(from=5,to=55,by=1)

HADES_statics <- as.data.frame(cbind(c_seq,H(c_seq,r,K),r_seq,H(c,r_seq,K)))
colnames(HADES_statics) <- c("cost","hunt_c","renew","hunt_r")

cost_change <- ggplot(data=HADES_statics) + geom_line(aes(x=cost,y=hunt_c),size=1) + ggtitle("Effect of changing c") + xlab("Marginal cost of machine hunting") + ylab("Number of machines hunted") + theme_bw()
renew_change <- ggplot(data=HADES_statics) + geom_line(aes(x=renew,y=hunt_r),size=1) + ggtitle("Effect of changing r") + xlab("Rate of machine production") + ylab("Number of machines hunted") + theme_bw()

grid.arrange(cost_change,renew_change,ncol=2)

Raising the marginal cost of hunting lowers the open access hunting rate; raising the machine production rate only nudges it up slightly. Since machines are the main source of metal and advanced technology in this world, a stronger HADES - with tougher, meaner machines that are costlier to hunt - is bad news for the folks in Zero Dawn.

The long-run effects of a little more GAIA

GAIA, the force for good fighting HADES in this world, wants to improve the ecosystem so that it can support more life + more diverse life. GAIA’s main tool for doing this is producing machines through the Cauldrons, and having them shepherd the world’s ecological development*. These are the same machines and Cauldrons that HADES hijacks. We can think about the effects of GAIA getting a little stronger on the open access hunting rate by looking at the derivative of \(H^{OA}_t\) with respect to \(K\), since she is improving the ecosystem. The code from the HADES case can be repurposed for this.

*Actually, HADES is a module of GAIA that’s run amok. So in a sense, GAIA is both the force for good and the force for evil.

library(ggplot2)

rm(list=ls())

# economic parameters
A <- 10
B <- 2
c <- 5

# ecological parameters
K <- 100
r <- 10

# open access hunting rate
H <- function(c,r,K,...) {
	X <- B*K/r
	Y <- A*K/r + B*K
	Z <- A*K - c
	#rate_1 <- (1/(2*X))*(Y + sqrt(Y^2 - 4*X*Z)) # drop this solution because it increases in the cost - economically weird!
	rate_2 <- (1/(2*X))*(Y - sqrt(Y^2 - 4*X*Z)) # note the parentheses: the formula calls for 1/(2X), not (1/2)*X
	rates <- c(rate_2)
	return(rates)
}

# GAIA statics
K_seq <- seq(from=90,to=140,by=1)

GAIA_statics  <- as.data.frame(cbind(K_seq,H(c,r,K_seq)))
colnames(GAIA_statics) <- c("capacity","hunt_K")

capacity_change <- ggplot(data=GAIA_statics) + geom_line(aes(x=capacity,y=hunt_K),size=1) + ggtitle("Effect of changing K") + xlab("Environment's machine-carrying capacity") + ylab("Number of machines hunted") + theme_bw()
capacity_change

The stronger GAIA gets, the more machines get hunted. Presumably, this means the societies doing the hunting get more access to metal and advanced technology.

Conclusion

I’m not trying to say that hunting == socially good everywhere and always (even though it is in this model), but boy that HADES is bad news!

Horizon: Zero Dawn is a terrific game. Solid gameplay, interesting story, great visuals. If you like single player RPGs, I think you’ll enjoy it.


Thoughts on "A Random Physicist Takes on Economics"

Jason Smith has interesting ideas. I’ve followed his blog on and off since around the winter of 2014, in my first year of grad school. At the time I was working on developing Shannon entropy-related algorithms for detecting actions in time series data from motion sensors on surf- and snowboards, and his blog posts about applying Shannon entropy to economics intrigued me. I was not (and am not) really a macro person, so a lot of the applications he focused on seemed ho-hum to me (important, but not personally exciting). At some point I stopped following blogs as much to focus on getting my own research going, and lost track of his work.

Smith has a new book out, A Random Physicist Takes on Economics, which I just read. If you’re interested in economic theory at all, I highly recommend it. It’s a quick read. I want to lay out some of my thoughts on the book here while they’re fresh. Since I have the Kindle version, I won’t mention pages, just the approximate location of specific references. This isn’t a comprehensive book review, rather a collection of my thoughts on the ideas, so it’ll be weighted toward the things that stood out to me.

Big ideas

I think the big idea behind Smith’s work is that much of the rational actor framework used in economics is not necessary. Instead, a lot of the results can be derived from assuming that agents behave randomly subject to their constraints. He traces this idea back to Becker’s Irrational Behavior and Economic Theory, and also cites some of the experimental econ work that backs this up.

One formalism for this idea is that entropy maximization in high-dimensional spaces moves averages to points at the edge of the feasible set pretty quickly, and that changes to the constraint set cause similar changes to the average as they do to the constrained optimum for a convex function. In this view, comparative statics exercises on budget sets will get the right signs, but not necessarily the right magnitudes.
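
Here’s a toy illustration of that first claim (my own, not from the book): draw consumption bundles uniformly from a \(d\)-dimensional budget set and look at how much of the budget gets used on average.

set.seed(1)
budget_used <- function(d, n=5000) {
  # uniform draws from the budget set sum(x) <= 1 via d+1 exponential "gaps"
  g <- matrix(rexp(n*(d+1)), nrow=n)
  x <- g[, 1:d, drop=FALSE]/rowSums(g)
  mean(rowSums(x)) # average share of the budget that gets spent
}
sapply(c(2, 10, 100), budget_used) # approaches 1 (the budget constraint) as d grows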

Random behavior and exploring the state space

Smith argues that humans are so complex that assuming uniformly random behavior over a feasible set is a more reasonable starting point than assuming some sort of complex optimization process. This isn’t to say that people actually behave randomly, but that randomness is a modeling choice guided by our own ignorance. In aggregate, we can get to results that replicate representative agent results from assuming random micro-level behavior. Smith describes this random micro-level behavior as “agents exploring the state space” (somewhere early on). The choice of “uniformly random” is guided by the principle of maximum entropy over a closed and bounded budget set.

Joshua Gans mentions this in his Amazon review of the book: random behavior is a useful benchmark against which to compare rational behavior. One of my takeaways from Smith’s work is to think about which of my modeling conclusions would be robust to random behavior and which wouldn’t be. My work deals more with the behavior of firms, where I think rationality is maybe less of a stretch. A funny anecdote: I heard an economist who worked with a large firm once say that he “had yet to meet the profit maximizer”. The point is that firms aren’t always rational profit maximizers. Simon’s behavioral work on firm decision making is in this spirit.

There’s a helpful example I remember from Smith’s blog that didn’t make it into the book. Observe: people buy less gas when the price is higher. A rational behavior proponent might say that this is because people look at the price and say, “hey I can’t afford as much gas, so I’m going to buy less”. A random behavior proponent would say that this is because there are fewer people who can afford gas at the higher price, and so less gas gets bought. The former is about a bunch of individuals continuously adjusting their purchases, while the latter is about a bunch of individuals discretely not buying. Both can generate an observed continuous decrease in gas purchased when price increases.
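
A toy version of the gas story (again my own numbers, not Smith’s): give agents random budgets, have each one buy a full tank only if they can afford it, and aggregate purchases still fall smoothly as the price rises.

set.seed(1)
n <- 10000
q_tank <- 10 # everyone who buys gets a full tank
budgets <- runif(n, min=0, max=50) # heterogeneous gas budgets
prices <- seq(from=1, to=5, by=0.5)
avg_gas <- sapply(prices, function(p) mean(ifelse(budgets >= p*q_tank, q_tank, 0))) # discrete individual choices
round(avg_gas, 2) # average purchases decline smoothly in the price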

I think that the truth for any given situation is likely to be somewhere in between random and rational behavior. There’s a lot more about information transfer and equilibrium at his blog, which I’d recommend any economist reading this check out. Spend at least an hour going through some of his posts and thinking seriously about his arguments - I think you’ll get some mileage out of it.

Game theory, random behavior, and common resources

I spend a lot of time thinking about the use of common resources. Smith doesn’t really discuss these issues much - there’s a mention of negative and positive externalities at the end, but it’s brief. So what does the random behavior hypothesis mean for common resources?

The rational behavior hypothesis for the overexploitation of common resources is that selfish agents choose to act in a way that is personally beneficial at the cost of the group as a whole. Cooperation and defection become statements about people trying to get higher payoffs for themselves. I think the random behavior hypothesis here would be something like, “there are fewer states of the world in which people cooperate than in which they defect”. Cooperation and defection then become statements about how few ways there are for people to organize relative to the number of ways they could fail to organize.

I think this is plausible… it seems like the random behavior hypothesis is another way to view coordination failures. It’s not that coordination doesn’t happen because it’s difficult for individuals to stick to, it’s that it doesn’t happen because it requires a confluence of more events than discoordination does.

But there’s a lot of work on the ways that coordination does happen in commons (Ostrom’s work, for example). The game theoretic perspective seems valuable here: it gives policy a direction to aim for, and policy that incorporates game theoretic insights into commons management seems to work. So… maybe rational actor models can be more useful than Smith’s book lets on? Maybe the random behavior interpretation is that applying Ostrom’s principles creates more ways for people to cooperate than existed before, thus making cooperation more likely.

Whither welfare?

The big consequence of the random behavior framework is that we lose the normative piece of economic modeling. Using the utility maximization framework gives us a way to talk about what should be done in the same model we use to describe what will be done. In the random behavior framework, we can say that we should loosen constraints in one direction or another, but the “why” of doing it is a bit more obscured. Smith says that loosening constraints can increase the entropy, but I didn’t quite follow his argument for why that’s desirable in and of itself. It seems like there are some more principles in the background guiding that choice.

I have a lot of issues with how “improving welfare” gets (ab)used as a goal in economic analysis. People go around unthinkingly saying “Kaldor-Hicks improvements are possible” as they advocate for specific policies, often explicitly sidestepping equity concerns. Other folks use a concave social welfare function as a criterion to avoid this, and argue against inequality-increasing policies. I lean toward the latter camp. I think there are technical arguments in favor of this - the time with which we can enjoy things is one among many fixed factors, generating decreasing marginal benefits to any individual accumulating large amounts of wealth - but to be honest it’s probably also a reflection of my personal politics. These things interact and all so I resist the claim that it’s purely personal politics, but that’s a separate conversation.

Anyway, I think that good economists know that the decision of what to prioritize in a society can’t be just about “growing the pie” without discussing who gets which slices. But there are a lot of economists who act as though “growing the pie” can be regarded as desirable independent of how the pie will be split. This can be true for a theoretical “ceteris paribus” conversation, but I don’t think this can be true for a policy discussion with real-world consequences. There’s a post I once read (I think it was on interfluidity, but possibly on econospeak) which argued that part of the purpose of leadership was to select one among the many possible equilibria, including those where the Second Welfare Theorem would or wouldn’t be usable. The random behavior hypothesis, by getting rid of economic welfare, might make the need for this leadership and value judgement more explicit. I think that would be a good thing.

Edit: It occurs to me that Smith’s framework also allows normative statements to be made alongside positive statements; they’re just about reshaping constraint sets. I still think it decouples the two more than the standard utility maximization framework does, but maybe I’m wrong.

Some issues I had with the book’s arguments

I want to be clear: I enjoyed reading Smith’s book, and I’ve enjoyed reading his blog. To the extent I’ve been bored by it, it’s because it’s about macro stuff and I don’t do macro. I am not pointing out issues in the spirit of “here’s why this sucks”, but in the spirit of “here are places where I disagree with the presentation of interesting ideas that I want to continue engaging with”.

I think an economist inclined to be critical could find issues in the text. Smith seems to be writing for a more general audience, so there are places where his use of terms is not quite correct. For example, near the end (around 91%) he describes “tatonnement” as “random trial and error in entropy maximization”; I understand it as a process of “adjusting prices in the direction of excess demand”. I don’t think this matters for his argument, so it’s not a big deal.

I think the more substantive issue a random critical economist would raise is related to his treatment of empirical economics. By and large, he seems to ignore empirical economics almost entirely, and conflate empirical economics with empirical macroeconomics. To the extent that he discusses microeconomics at all, it’s all about the specific pieces of micro theory used in parts of macro modeling. That’s fine! To echo one of Smith’s points, limiting the scope of an argument is perfectly valid. I’m mostly a theorist right now, and I think there are lots of solid points he makes about the things he’s talking about. But as an environmental economist with empirical leanings, it sort of annoys me to see him lump all of economics with macro and all of micro with the micro used in macro. There’s some discussion of game theory, but not a lot.

Smith also takes issue with the use of math formalism in economics. One point he raises, which I remember from his blog, is the use of \(\mathbb{R}_+\) to describe a feasible set. Why, he asks, do economists feel the need to say “positive real numbers” rather than “a number greater than zero”? What is gained? He argues that this is a symptom of economics’ excessive and inappropriate use of math. I think this criticism is sort of misguided, but also kind of on point.

Sort of misguided: A lot of economic theory is styled as a branch of logic. So being precise about the field of numbers being used is kind of a cultural thing. The existence proofs we use, or at least the earlier ones, are/were often not constructive. Existence followed from properties of the reals. More modern proofs often use Fixed Point Theorems for existence, followed by Contraction Mapping approaches for computation. The point is that being precise was important for the people making the arguments to convince the people reading the arguments. This is the “it’s just a symbol, get over it” counter-argument.

Kind of on point: In a class I took with him, Miles Kimball was fond of saying that whether or not there is a smallest number that can be used can’t matter to the substantive economics, so any economic proof based on reals has to go through for integers or rationals as well. If it doesn’t, that’s a sign that there’s something funky about the proof. Daniel Lakeland makes similar arguments in justifying his use of nonstandard analysis (it’s somewhere in his blog…). So, yeah, just saying “a number greater than zero” would be fine for any proof that really needed it, though the author would need to go through more hoops to satisfy their likely audience (economists who seem to like real analysis).

I think some of the math in economic theory that Smith takes issue with probably falls in this category: people were using formalisms as shortcuts, because they don’t want to do the proof in even more detail over the rationals or something, but it doesn’t really matter for the substantive economics at play. I think that whether this offends you or not probably says more about your priors over economics and math than it does about the math itself.

I think there’s a similar issue at play with Smith’s read of rational expectations and infinity. Smith argues that rational expectations are somewhere between incoherent (inverting distributions is ill-posed) and a fudge factor that lets a modeler get whatever they want. I agree that the latter is a thing that some, possibly many, economists do. Why did XYZ happen? Oh, because of expectations about XYZ! Assuming good faith on all sides, though, I think there are two things going on here.

The first is that expectations are about beliefs, and self-fulfilling prophecies are a thing. I see this in my students when I teach intro math for econ: if they buy into the notion that they’re “just not math people”, they will do much worse than if they reframe the issue as “math is not easy, but if I work hard I can do well”. Their expectations about their future performance and inherent abilities shape their future outcomes, which reinforce their expectations. If Anakin hadn’t believed his vision of Padme dying on Mustafar, he wouldn’t have acted in a way to make it happen. This is a feature of the human condition, and modeling it is relevant. I think Smith’s concerns about information flowing back through time are missing this point, and getting too caught up in the math formalism.

The second is that modeling beliefs is hard, and rational expectations is a tractable shortcut. There are other tractable shortcuts, like assuming that variables follow martingale processes, which can be useful too. But given that beliefs seem to matter, and that it’s hard to model heterogeneous beliefs being updated in heterogeneous ways, I think the use of rational expectations is at least understandable. There’s a similar point in the use of infinity (which Smith only touches upon at the end, and I may be misunderstanding what he’s getting at). It’s not that economists actually believe that agents think they’ll live forever, at least not theorists who have what I consider good economic intuition. It’s that using finite horizons in conjunction with backwards induction yields weird results, so infinite horizons is a modeling shortcut to get “more realistic” results. This is another of Miles’ arguments: whether or not the universe will really end can’t matter to real economics happening today, so don’t take the “infinite” horizon too literally. Smith seems to grok this in his discussion of scope conditions. Maybe some of this is just that we’re using different languages; I agree that economists could stand to be more explicit about scope conditions.

Conclusion

This has gotten way too long. To summarize:

  1. I liked the book. I think it should be widely read by economists, applied and theoretical.
  2. I think Smith is on to something with his modeling approach. I want to try working with it soon.
  3. I think Smith’s work would benefit from more engagement with economists. Partly this would add some relevant nuance to his approach (e.g. rational expectations and self-fulfilling prophecies), and partly this would expand the set of topics he considers beyond macro-focused things. It goes the other way too: I think engaging with his work would be good for economists, at the very least offering a useful benchmark to compare rational actor models against.

Read the book!


A few ways to curve class grades

I’ve been teaching an introductory math class for econ majors for the last two semesters. I curve the class scores so that the average is a B-, according to the university’s recommended thresholds for letter grades. I like using those thresholds, but I have yet to write a test where the students get to a B- on their own. Maybe my tests are too hard; maybe I’m just inflating grades to reduce complaints. I’m working on writing tests that reveal abilities according to the letter grade thresholds (a subject for another post). In this post, I’d like to write down a few different curves I’ve used or seen used.

I like curves that I can explain to students. Algebra and derivations are a big focus of my class, so I like it when I can have my students solve for the curve parameter(s) using just simple algebra and the class average. That way they can calculate their own post-curve grade before I post it. I’m not sure how many of them actually do, but they could…

Notation

\(x_i\) is an individual student’s raw score, \(\bar{x}\) is the average of the scores, and the curved score is the output of a function \(C(x_i,p)\), where \(p\) is a vector of parameters of the curve function. There are \(n\) students, and the instructor wants the curved grades to be close to \(\tau\). The maximum score achievable is normalized to \(100\).

A flat curve targeting a mean

A constant number of points added to each student’s score is the simplest and most popular curve I’ve seen. Add \(p\) points to each student’s grade so that the class average hits the desired level, \(\tau\). Formally, \(C(x_i,p) = x_i + p ,\) where \(p\) is such that
\(\frac{1}{n} \sum_{i=1}^n C(x_i,p) = \tau .\) Doing some algebra, \begin{align} \frac{1}{n} \sum_{i=1}^n (x_i + p) &= \tau \cr \bar{x} + p &= \tau \cr \implies p = \tau - \bar{x} \end{align}

All the instructor needs to do with this curve is add the difference between the target and the class average to each student’s score, and the average hits the target. Very easy to implement and communicate to students. Each student gets the same boost, and because the curve function is monotonic ranks aren’t changed. One downside to this method is that students’ scores can be pushed over \(100\). If letter grades are awarded based on fixed thresholds, this means that some of the points may be “wasted”. That is, some students at the top may get extra points that don’t benefit them at the cost of students across the rest of the distribution who could have gotten a higher letter grade. In theory, an instructor who wanted to use a flat curve while avoiding wastage could do a round of curving, truncate the over-the-top scores to \(100\), and repeat the curving until \(p\) stops changing. I haven’t seen anyone do the full process, just a single iteration.
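
Here’s a sketch of that truncate-and-repeat version (my own implementation of the idea; I haven’t seen anyone actually run it):

flat_curve <- function(scores, target, tol=1e-8, max_iter=100) {
  curved <- scores
  for (i in seq_len(max_iter)) {
    p <- target - mean(curved) # the flat bump that would hit the target
    if (abs(p) < tol) break # p has stopped changing, so we're done
    curved <- pmin(curved + p, 100) # apply the bump and truncate at 100
  }
  curved
}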

I was on the receiving end of curves like this in undergrad. I feel like my incentive to work hard was reduced in classes that curved this way. As long as I was above the average, I was usually sure I would get an A. As a teacher, I’d like my curve function to distort incentives as little as possible.

A linear proportional curve targeting a mean

In this curve students are given back a proportion \(p\) of the points they missed. I’ve been using this function lately. Formally, \(C(x_i,p) = x_i + (100-x_i)p .\) If we’re targeting the mean, \(p\) is such that \begin{align} \frac{1}{n} \sum_{i=1}^n C(x_i,p) &= \tau \cr \frac{1}{n} \sum_{i=1}^n (x_i + (100-x_i)p) &= \tau \cr \frac{1}{n} \sum_{i=1}^n x_i + (100 -\frac{1}{n} \sum_{i=1}^n x_i )p &= \tau \cr \bar{x} + (100 - \bar{x} )p &= \tau \cr \implies p &= \frac{\tau - \bar{x}}{100 - \bar{x}} \end{align}

This gives more points back to students who did worse, but is still monotonic so ranks are preserved. It never goes over \(100\), so no points are wasted. It’s simple enough to implement and easy to communicate (“you get back a portion of what you missed”).
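
For completeness, the whole thing is a couple of lines in R (using the solved-for \(p\) from above):

prop_curve <- function(scores, target) {
  p <- (target - mean(scores))/(100 - mean(scores)) # the proportion of missed points given back
  scores + (100 - scores)*p
}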

I’ve never received this curve so I don’t know how it feels on the receiving end. I think it preserves some incentives for students at the top to work hard, since they know their scores won’t move much after the curve. By the same token, I can see it feeling unfair to students at the top. I do like not having to iterate or anything to avoid wastage.

A least-squares curve that matches a mean and a median

In this curve the mean and median of the scores are brought as close as possible to some targets \(\tau_{ave}\) and \(\tau_{med}\).

This one came up recently in a conversation about a grading issue. My colleague was teaching a class with two TAs running recitation sections. At the end of the semester, the TA with the lower mean had the higher median (this is TA \(1\), the other is TA \(2\)). My colleague wanted to find a way to match the recitation grades from the two TAs in some “fair” way. Using a flat curve to bring TA \(1\)’s mean up to TA \(2\)’s would have given an extra benefit to the students at the top of TA \(1\)’s class, while matching the medians seemed like it would end up boosting TA \(2\)’s average student that much higher.

I thought, “why not use least squares to match both?” (EDIT 12/25/18: Why stop at LS? In principle, any type of GMM can work.) Using the convention that TA \(1\)’s scores are being matched to TA \(2\)’s, denoting the \(n_1\) students in TA \(1\)’s class by \(x_{i1}\) and the \(n_2\) students in TA \(2\)’s class by \(x_{i2}\), and using a flat curve \(C(x_i,p) = x_i + p\), we define the sum of squared errors for the mean and median as \(\epsilon(\{x_i\},p) = \left( \bar{C(x_{i1},p)} - \bar{x}_{i2} \right)^2 + \left( \hat{C(x_{i1},p)} - \hat{x}_{i2} \right)^2\)

where \(\bar{C(x_{i1},p)}\), \(\hat{C(x_{i1},p)}\) are the mean and median of TA \(1\)’s curved scores, and \(\bar{x_{i2}}\), \(\hat{x_{i2}}\) are the mean and median of TA \(2\)’s raw scores (these are the \(\tau_{ave}\) and \(\tau_{med}\)). The curve parameter \(p\) minimizes the sum of squared errors, \(p = \text{argmin}_p \epsilon(\{x_i\},p) .\)

I’ve never done this in my own class, but I like the idea of matching more than one statistic of the distribution. If \(p\) comes out negative, then the curve could be interpreted as points to add to TA \(2\)’s scores. If the instructor wants to emphasize the mean over the median (or vice versa), they could put weights in front of the squared error terms. I’ve heard of someone using GMM to set their mean and variance to some targets, but IIRC in that case the variance piece ended up not mattering. I didn’t try to solve this one algebraically. Instead, I wrote a short R function (below) which uses optim() to solve for \(p\) numerically. (I think J is TA 1, and N is TA 2, but it’s been a while since I wrote this.)

curvefinder <- function(...){
  rawscores <- read.csv("rawscores.csv")
  
  # expects the first column to be J, second column to be N
  Jscores <- rawscores[,1]
  Nscores <- rawscores[,2]
  # removes NAs from N's column. NAs are created when read.csv() notices that J has more rows than N, and fills extra cells in N with NA so that both columns are the same length.  
  Nscores <- Nscores[!is.na(Nscores)]
  
  # calculates the curved mean for whichever column will be curved. x is the parameter vector.
  curvedmean <- function(x,scores) {
  	curved_scores <- scores+x # adds a flat curve - could try other functions, like score + (100-score)*x
  	newmean <- mean(curved_scores)
  	return(newmean)
  }
  
  # calculates the curved median for whichever column will be curved. x is the parameter vector.
  curvedmedian <- function(x,scores) {
  	curved_scores <- scores+x # flat curve
  	newmedian <- median(curved_scores)
  	return(newmedian)
  }
  
  # calculates the sum of squared errors between the curved column and the target column. x is the parameter vector.
  sse <- function(x,Jscores,Nscores) {
  	error <- (curvedmean(x,Jscores) - mean(Nscores))^2 + (curvedmedian(x,Jscores) - median(Nscores))^2
  	return(error)
  }
  
  # solves for a curve parameter (or parameter vector) by nonlinear least squares
  optim(0.001, sse, Jscores=Jscores, Nscores=Nscores, lower=0, method="L-BFGS-B")
}
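
Usage is just a call to the function (assuming rawscores.csv is sitting in the working directory); the fitted curve parameter comes back in the $par element of the optim() output:

fit <- curvefinder()
fit$par # the flat bump p to add to J's scores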

I like that different curve functions could be used easily, and that the code can reduce to any other single-statistic-targeting curve I can think of. I suppose it’d be easy enough to explain this version’s flat curve to students, but it might be harder to explain where \(p\) comes from for any version.

Conclusion

  • There is always some arbitrariness in curving, if only in the selection of curve.
  • Curving is useful when the test is poorly calibrated to student ability. I struggle with this calibration.
  • “Fairness” seems like an intuitively desirable concept without a clear definition. Monotonicity seems fair, but beyond that… is wastage fair or unfair? I tend to think it is unfair, but I recognize that that’s an opinion and not a result. The fairness of monotonicity seems less disputable, but I’m open to hearing arguments against it. This leads me to favor the linear proportional curve or the least-squares curves.
  • Transparency seems important to me, if only from a “customer relations” standpoint. I want my students to be able to understand how the curve works and why it’s being used, so that they can better assess their own abilities. This leads me to avoid the least-squares curves, at least for the class I teach where students are not as familiar with least-squares. Maybe transparency isn’t the word—maybe it’s better expressed as “explainability” or “intuitiveness”. What is explainable or intuitive will depend on the audience, so there can’t really be an eternal answer to “what is maximally explainable/intuitive?”
  • I like the linear proportional curve targeting a mean, and usually use that. Since I usually teach a math class, I spend some time explaining the function and its properties. There are worksheet exercises to drive some of these points home. Obviously, this isn’t appropriate for every class.