This is a model I wrote some time ago, a very stylized special case of a more general recursive model I’m currently working on. Hopefully, the more general model will feature as a chapter of my dissertation, and this might be a subsection of that chapter. I think it’s a sort of interesting model in its own right, even apart from the setting.
The basic motivation is the “orbital debris” problem: as satellites are launched into orbit, debris accumulates and poses a threat to other objects in the orbital environment. There’s a pretty big literature on this in the aerospace engineering and astrophysics communities, and the popular press has written about it as well. I’ve blogged about a couple of papers on the subject before (physics/engineering, economics).
The basic intuition is pretty straightforward and well-known in economics: pollution is a negative externality; because firms don’t face the full cost of polluting the environment, they overproduce pollution relative to the socially optimal level. I’m not going to present the planner’s solution, but in the stylized model here firms can cooperate to reduce the amount of debris produced. Without cooperation, they’ll end up choosing higher orbits and producing more debris. The debris can destroy satellites (and that is bad).
In this model I’m focusing on how a firm’s optimal choice of altitude in low-Earth orbit is affected by another firm’s altitude choice. This is an inter-firm externality, which is a little different from the usual consumer-facing externality, but is conceptually similar to strategic substitutability in oligopoly games.
The model setting
Consider an environment with two orbits, high (H) and low (L). We can think of these as spherical altitude shells, similar to the approach described in Rossi et al. (1998).
There are 2 identical firms, each with 1 satellite per period. Debris decays completely after 1 period. Collisions completely destroy a satellite, and generate no debris. Satellites last 1 period, and then are properly disposed of. This lets me talk about dynamics while keeping the decision static.
\(O_i \in \{H,L\}\) is the orbit chosen by firm \(i\) for its satellite. The probability that firm \(i\)’s satellite survives the period is \(S_i(O_i, O_j)\). \(Y_i(O_i, O_j)\) is the probability of a collision between two satellites in the same orbit*. Putting a satellite in orbit \(H\) generates some debris in orbit \(L\) for that period. \(D_L\) is the probability a satellite in the low orbit is destroyed by debris from a satellite in the high orbit**.
*We could say that satellites never collide with each other, but the analysis carries through as long as satellites generate some collision probability for other satellites in the same shell. I think this is generally true, since objects like final stage boosters, random bits that break off, or dead satellites which are not properly disposed of generate such probabilities.
**The idea here is that debris orbits decay toward Earth. This is more relevant for objects in low-Earth orbit, which is what I’m thinking about with this model.
The returns from owning a satellite are normalized to 1, so that we can focus on the probabilities \(S_i\). With the above definitions, we can define the satellite survival probabilities for firm \(i\) as
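Writing \(\gamma_{O_i O_j}\) as shorthand for \(S_i(O_i, O_j)\) (this is the notation I use below), the survival probabilities work out to something like

\[\begin{align}
S_i(H,H) &= \gamma_{HH} = 1 - Y_i(H,H) \cr
S_i(H,L) &= \gamma_{HL} = 1 \cr
S_i(L,H) &= \gamma_{LH} = 1 - D_L \cr
S_i(L,L) &= \gamma_{LL} = 1 - Y_i(L,L)
\end{align}\]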
So being the only satellite in the high orbit is the best position to be in, since you’re not at risk from debris or the other satellite. It seems reasonable to assume that \(\gamma_{HH} = \gamma_{LL}\) as long as the altitude shells aren’t too large.
The really important assumption is the relationship between \(\gamma_{LH}\) and \(\gamma_{HH}\). If \(\gamma_{HH} > \gamma_{LH}\) (case 1, debris is more likely to cause a collision than a satellite), we’ll end up with one Nash equilibrium in pure strategies. If \(\gamma_{HH} \leq \gamma_{LH}\) (case 2), we can have up to three Nash equilibria in pure strategies. When we relax the assumption that debris decays completely at the end of the period and allow debris growth, we’ll have transitions between the two cases.
(Best responses are underlined. Row player is the first entry, column player is the second.)
The only Nash equilibrium in pure strategies here is for both firms to go high, \((H,H)\). I call this case “orbital pooling”.
The folk region:
(The images in this post are all photos of diagrams I drew in pencil in my notebook many months ago.)
This case is like a prisoner’s dilemma. Neither firm wants to be in the low orbit when the other firm can go high and make them take on risk. Both firms want to try to be the only firm in the high orbit with no risk - you can see this in the folk region diagram and best responses. So, both firms end up high and with risk.
There are up to three Nash equilibria in pure strategies here: \((H,L), (L,H)\), and \((H,H)\). The \((H,H)\) equilibrium is possible if \(\gamma_{HH} = \gamma_{LH}\). I call this case “orbital separation”.
The folk region:
The intuition here is straightforward: pooling on the same orbit is worse than (or, if \(\gamma_{HH} = \gamma_{LH}\), as good as) mixing it up, so the firms mix it up.
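Here’s a quick numerical check of both cases in R. The payoffs are the survival probabilities, with \(\gamma_{HL} = 1\) and \(\gamma_{LL} = \gamma_{HH}\) as above; the particular numbers are just illustrative.

```r
# Pure-strategy Nash equilibria of the 2x2 orbit game.
payoff <- function(o_own, o_other, g_HH, g_LH) {
  if (o_own == "H" && o_other == "H") g_HH        # pooled high: risk from the other satellite
  else if (o_own == "H" && o_other == "L") 1      # alone up high: no risk
  else if (o_own == "L" && o_other == "H") g_LH   # low, under the other firm's debris
  else g_HH                                       # pooled low: gamma_LL = gamma_HH
}

find_pure_ne <- function(g_HH, g_LH) {
  orbits <- c("H", "L")
  ne <- character(0)
  for (o1 in orbits) for (o2 in orbits) {
    br1 <- payoff(o1, o2, g_HH, g_LH) >=
      max(sapply(orbits, payoff, o_other = o2, g_HH = g_HH, g_LH = g_LH))
    br2 <- payoff(o2, o1, g_HH, g_LH) >=
      max(sapply(orbits, payoff, o_other = o1, g_HH = g_HH, g_LH = g_LH))
    if (br1 && br2) ne <- c(ne, paste0("(", o1, ",", o2, ")"))
  }
  ne
}

find_pure_ne(g_HH = 0.9, g_LH = 0.7)  # case 1, orbital pooling: "(H,H)"
find_pure_ne(g_HH = 0.8, g_LH = 0.9)  # case 2, orbital separation: "(H,L)" "(L,H)"
```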
Orbital separation has less overall risk and debris than orbital pooling. The firm which went low bears more risk than the firm which went high under orbital separation, but the orbits are cleaner overall. If we had more realistic debris dynamics (where debris could interact with other debris to generate more debris), orbital separation would be even better than orbital pooling.
There are four inferences we can draw about the process dynamics from this:
If \(D_L\) is initially low but grows faster than \(Y_i\), orbital separation will transition to orbital pooling
If \(D_L\) increases at the same rate as or a rate slower than \(Y_i\), orbital separation is sustainable
If \(D_L\) decreases faster than \(Y_i\), orbital pooling can transition to orbital separation
Orbital pooling will increase \(D_L\)
Let’s look at the debris dynamics a little more formally.
Putting some debris dynamics in
We’ll keep it simple here: debris in the low orbit decays each period at a rate \(\delta_D < 1\), and each launch to the high orbit generates \(\gamma\) new pieces of debris in the low orbit. Letting \(D_L'\) be the next-period debris stock, the three cases for the debris law of motion are
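(my reading: \(\delta_D\) is the fraction of the low-orbit debris stock that decays each period, so a fraction \(1 - \delta_D\) carries over)

\[\begin{align}
D_L' &= (1 - \delta_D) D_L && \text{ if both firms go low} \cr
D_L' &= (1 - \delta_D) D_L + \gamma && \text{ under orbital separation} \cr
D_L' &= (1 - \delta_D) D_L + 2\gamma && \text{ under orbital pooling}
\end{align}\]

Under this reading the fixed points are \(0\), \(\tilde{D}_L^{LH} = \gamma / \delta_D\), and \(\tilde{D}_L^{HH} = 2\gamma / \delta_D\).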
The diagram below shows the three possible fixed points of debris:
If both firms go low, the fixed point will be \(0\) debris in the low orbit. If the firms separate, it will be \(\tilde{D}_L^{LH}\). If the firms pool, it will be \(\tilde{D}_L^{HH}\). The next diagram shows the returns from orbital pooling and orbital separation as a function of the current period debris stock \(D_L\).
(The x and y axes are flipped because economics.) \(\bar{D}_L\) is a debris threshold. Above \(\bar{D}_L\), orbital pooling dominates orbital separation, and vice versa below \(\bar{D}_L\).
One question is whether the steady-state debris level under orbital separation is higher or lower than the pooling-separation threshold, i.e. is \(\tilde{D}_L^{LH} \leq \bar{D}_L\)?
If \(\tilde{D}_L^{LH} > \bar{D}_L\), the debris stock will cross \(\bar{D}_L\) on its way toward \(\tilde{D}_L^{LH}\), firms will shift from orbital separation to orbital pooling, and \(\tilde{D}_L^{HH}\) will be the final debris steady state.
If \(\tilde{D}_L^{LH} \leq \bar{D}_L\), the debris stock will converge to \(\tilde{D}_L^{LH}\) and firms will stay in orbital separation.
Below are payoff-debris plots for orbital separation and orbital pooling (with proper x-y axes):
Cooperation with a grim trigger
The folk region diagrams show us that cooperating to get higher payoffs is generally possible. One way to see what the cooperation could look like is to write a trigger strategy for an infinitely repeated game and then see when it will/won’t lead to cooperation.
The trigger strategy for firm \(i\) is:
Play \(H,L,...\) if \(j\) plays \(L,H,...\)
If firm \(j\) deviates, play \(H\) forever
Firm \(j\)’s strategy is defined similarly.
We can see that there’s no incentive to deviate from \((H,L)\) to \((L,L)\), only from \((L,H)\) to \((H,H)\). Assuming the firms share a discount factor \(\beta \in (0,1)\) and expanding out the series of payoffs, they’ll cooperate as long as
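(my reconstruction of the comparison, from a period in which firm \(i\) is scheduled to go low, with per-period payoffs equal to the survival probabilities and the lone-high payoff normalized to \(1\))

\[\begin{align}
\underbrace{\gamma_{LH} + \beta + \beta^2 \gamma_{LH} + \beta^3 + \ldots}_{\text{keep alternating}} &\geq \underbrace{\gamma_{HH} + \beta \gamma_{HH} + \beta^2 \gamma_{HH} + \ldots}_{\text{deviate to } H} \cr
\frac{\gamma_{LH} + \beta}{1 - \beta^2} &\geq \frac{\gamma_{HH}}{1 - \beta} \cr
\gamma_{LH} &\geq (1 + \beta) \gamma_{HH} - \beta
\end{align}\]

and as \(\beta \to 1\) the right-hand side falls to \(2\gamma_{HH} - 1\).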
So, they can cooperate and alternate orbital separation with a grim trigger if \(\gamma_{LH} > 2 \gamma_{HH} - 1\). We can get a sense for how likely this cooperation is in a \(\gamma_{HH} - \gamma_{LH}\) payoff space:
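If you’d rather not squint at a pencil sketch, here’s roughly the same picture in R (the grid resolution and plotting choices are mine):

```r
library(ggplot2)

# Shade the (gamma_HH, gamma_LH) region where alternating orbital separation is
# sustainable with a grim trigger as beta -> 1: gamma_LH > 2*gamma_HH - 1.
grid <- expand.grid(g_HH = seq(0, 1, by = 0.005),
                    g_LH = seq(0, 1, by = 0.005))
grid$cooperate <- grid$g_LH > 2 * grid$g_HH - 1

ggplot(grid, aes(x = g_HH, y = g_LH, fill = cooperate)) +
  geom_raster() +
  geom_abline(intercept = -1, slope = 2, linetype = "dashed") +  # cooperation boundary
  geom_abline(intercept = 0, slope = 1, linetype = "dotted") +   # above this line, separation is already a NE
  labs(x = expression(gamma[HH]), y = expression(gamma[LH]), fill = "Cooperation")
```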
So, cooperation seems more likely when orbital separation is already the Nash equilibrium. This seems intuitive enough to me.
Concluding thoughts
This is obviously a very stylized model, but I think the general notion of orbital separation vs orbital pooling is more generally applicable. I think this conclusion is kinda neat.
With more altitudes, I would expect the pooling/separation dynamic to result in firms moving progressively higher in LEO. I think we can sort of see that in SpaceX and OneWeb’s altitude choices for their announced constellations - around 1,200 and 1,100 km up, close to or a little higher than the LEO altitudes which are most-used right now. Obviously there’s a lot more than collision risk going into the choice of altitude for a constellation, but I expect the risk to be a factor.
Adding the benefits of a particular altitude (e.g. coverage area) parameterizes the problem some more, but doesn’t seem to add any interesting economic dynamics. Launch costs are necessary in the dynamic decision model, but can be ignored here. Allowing satellites to last more than one period really complicates the economic dynamics, as does adding more firms or altitudes. The physical dynamics are cool and have been studied fairly well, but the economic dynamics have not really been studied at all. I may be biased - I think the exciting action in the space debris problem is in the economic dynamics.
I would really like to model constellation size choices, but again the economic dynamics make it really complicated. I wrote a single-shell model of comparative steady state constellation choices with free entry and debris accumulation for a class last semester which I might be able to extend with altitudes. The steady states are not easy to compute - mechanically, the problem is that the debris accumulation can make the cost function concave, making the firm’s optimization problem nonconvex. Getting the full transition paths would be cool and presumably even harder. I’m working on this, but I don’t expect to get the most general case with constellations, multiple firms, multiple altitudes, and debris accumulation any time soon.
An unnamed source sent me some fun datasets for an industrially-produced widget sold in Producistan for P-dollars (denoted $). The widget manufacturers seem to compete in a monopolistically competitive industry. There may be some vertical and horizontal integration, but I can’t see it in the data I have.
This post, like the previous one, is just a fun exercise in holiday programming. I found a neat R package, GGally, while writing this.
The data
I’ll look at two datasets here: one a spreadsheet with data aggregated to the brand level, where each row is a separate brand observation, and another spreadsheet with data broken down to the individual products offered by different brands. The data come from Producistan’s popular online shopping portal, and include the number of reviews per firm and per product as well as the average review ranking (the usual 1-5 scale).
I have a third dataset where I linked the two by brand. I haven’t done anything with the linked dataset, but I like knowing it’s an option.
My source would prefer the data not be public, so I’ve loaded the spreadsheets from a pre-saved workspace and anonymized the firms and products. I’m not really interested in the industry, and my source doesn’t really care about the code and pictures being public.
Plots and regressions and stuff
Let’s look at the aggregated data first.
Aggregated data
I really like the scatterplot matrix produced by ggpairs(). The diagonal shows density plots for the variables, the upper triangle has correlation coefficients between the row and column variables, and the lower triangle has pairwise scatterplots with the row variable on the y-axis and the column variable on the x-axis. The scatterplots would be more useful with outliers trimmed, but I think there’s some utility to keeping them in. Let’s move through the diagonal and lower triangle column by column.
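For reference, the call is basically just ggpairs() on the brand-level data frame; the column names below are placeholders for the anonymized variables.

```r
library(GGally)

# Placeholder column names: SKU count, average price, price dispersion,
# total reviews, and average rating at the brand level.
ggpairs(brands[, c("n.skus", "avg.price", "dispersion", "n.reviews", "avg.rating")])
```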
It looks like there are a lot of firms with 0-50 SKUs offered, and very few with more than that. I think there may be only one with more than 300 offered. Most firms have a decent price spread, but firms with a lot of SKUs tend to have prices below $20,000. Most of these SKUs have on the order of tens or hundreds of reviews, but a handful of firms with lots of SKUs have thousands of reviews. The phrase “long tail” comes to mind here.
I’m not sure how much we can glean from the correlation numbers. They’re just raw correlations, not controlling for anything or accounting for outliers, so I won’t spend any more time on them.
The dispersion variable is interesting. It’s the difference between the maximum and minimum observed prices for the item. My source tells me that a number of the listings they scraped for the data are fraudulent, with prices that are way too high or too low. I’m not sure what the purpose of these listings is, but products with very few reviews seem to have higher dispersions (though the correlation coefficient isn’t very high).
I think one of the weaknesses of the ggpairs() plots (and maybe scatterplot matrices in general) is that the y-axis scales aren’t always easy to figure out. The density plots, for example, are on a different scale from the scatterplots, but it’s hard to show that in a pretty way.
Now let’s run some simple linear regressions:
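(The specifications below are my shorthand for the two models discussed next; the variable names are placeholders for the anonymized brand-level data.)

```r
# Model 1: price dispersion on SKU count, average price, and total reviews.
# Model 2: average price on SKU count and total reviews.
m1 <- lm(dispersion ~ n.skus + avg.price + n.reviews, data = brands)
m2 <- lm(avg.price ~ n.skus + n.reviews, data = brands)
summary(m1)
summary(m2)
```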
The standard errors aren’t robust, so I’m ignoring the t-stats/p-values here. I wouldn’t take these regressions very seriously - they’re just giving fancier correlations than the raw coefficients in the scatterplot matrix. Caveats issued, let’s read the tea leaves and look at the parameter estimates.
From the first model, it looks like the dispersion has a positive correlation with the number of SKUs and a positive correlation with the average price. There’s maybe a small negative correlation between the dispersion and the total number of reviews.
From the second model, it looks like a firm’s average price has a positive correlation with the number of SKUs the firm offers, and a small negative correlation with the number of reviews. These support a story where firms with more SKUs tend to target higher-income markets more aggressively, and where firms with higher prices tend to have fewer sales, but the standard errors are so large that it’s hard to buy the story just from the data.
Ok, let’s look at the disaggregated data now.
Disaggregated data
The prices have some commas and ranges in them, so they need to be cleaned before we can work with them. Getting rid of the commas is an easy application of gsub(). The ranges are a little trickier. There are dashes and number pairs in the fields. Getting rid of them and picking a single number would be a fairly easy application of str_split_fixed() from the stringr library, but I’d like to take an average. That needs a few more lines.
In the end, I couldn’t think of an elegant one-liner to get the averages I wanted in the one minute I spent thinking about the problem. My lazy “non-R” thought was to write a for loop, but copy-pasting and editing the values ended up being just as easy. If I get a bigger dataset with more ranges, it would be worth it to spend a little more time writing something that scales better.
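For future reference, something like this would scale better (Price is a stand-in for the actual column name):

```r
library(stringr)

# Prices arrive as strings like "1,200" or "1,200 - 1,500": strip the commas,
# split any range on the dash, and average the endpoints.
clean_price <- function(p) {
  p <- gsub(",", "", p)
  parts <- str_split_fixed(p, "-", 2)
  low  <- as.numeric(str_trim(parts[, 1]))
  high <- suppressWarnings(as.numeric(str_trim(parts[, 2])))  # NA when there was no range
  ifelse(is.na(high), low, (low + high) / 2)
}

products$Price.clean <- clean_price(products$Price)
```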
A lot of the products don’t have reviews or ratings.
Starting from the central density: most products have ratings around 4. Moving to the left, it looks like products with ratings around 4 tend to have the most reviews, as we might expect if having a rating is correlated with the number of reviews. Moving down, it looks like products with reviews and ratings tend to be lower-priced. One story for this is that higher-priced products sell fewer units.
Let’s look at the average price by brand now.
I dropped the brands with too few products for error bars which exclude zero. This also makes the plot slightly more legible - the full plot has over 150 brands, and is just not very easy to read. It looks kinda like something we’d get out of a Hotelling model of monopolistic competition, with maybe heterogeneous consumers/firms. Most firms seem to stay in the $20,000 or less price range, with a few gunning for the higher-end segments.
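A sketch of the summary behind that plot (dplyr + ggplot2; the column names and the error-bar cutoff are placeholders):

```r
library(dplyr)
library(ggplot2)

brand_prices <- products %>%
  group_by(Brand.id) %>%
  summarise(n.products = n(),
            mean.price = mean(Price.clean, na.rm = TRUE),
            se.price   = sd(Price.clean, na.rm = TRUE) / sqrt(n())) %>%
  filter(mean.price - 1.96 * se.price > 0)  # drop brands whose error bars don't exclude zero

ggplot(brand_prices,
       aes(x = reorder(factor(Brand.id), mean.price), y = mean.price)) +
  geom_pointrange(aes(ymin = mean.price - 1.96 * se.price,
                      ymax = mean.price + 1.96 * se.price)) +
  coord_flip() +
  labs(x = "Brand", y = "Average price ($)")
```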
Let’s wrap up with some more linear regressions:
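(My guess at the specification, with placeholder names for the cleaned product-level variables; Firm.id is left numeric, as noted below.)

```r
# Product price on average rating, number of reviews, and the (numeric) firm identifier.
m2.wmic <- lm(Price.clean ~ Rating + n.reviews + Firm.id, data = products)
summary(m2.wmic)
```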
The Firm.id variable should be entered as a factor in m2.wmic rather than as a numeric, but leaving it numeric makes the output a little cleaner and we’re not taking these regressions very seriously anyway.
It looks like higher-rated products tend to be higher-priced, controlling for the number of reviews and the identity of the firm. It also looks like the number of reviews is negatively correlated with the price. These results are consistent with a story where price is positively correlated with quality and negatively correlated with sales, and sales are positively correlated with the number of reviews (the story for the sign of the reviews coefficient is just a fancier way to say “demand slopes downward”).
Conclusion
Widgets in Producistan look like a monopolistically competitive industry. Price might be positively correlated with quality, and demand slopes downward. Firms with more products might go for the higher-WTP consumers more aggressively on average, while making fewer sales. Storytime aside, ggpairs() is a cool function.
My cousin sent me some data on motorcycle helmet weights for a project he’s working on. I thought it would be fun to slice it up and look at it a bit instead of working on things I should be working on.
The data
The dataset is a spreadsheet with 4 columns and 135 rows. The columns are: a URL, a brand, a model name, and a weight.
Some plots
What can we say?
It looks like most helmets cluster around the 1.6 kg range. Biltwell has the lightest helmets, with an average weight of 1227.5 grams, and GMax the heaviest at a whopping 1966 grams - almost 2 kg!
Of course, GMax only has 1 helmet in this dataset. AGV and HJC are the kings of variety, with 14 models each in this dataset. Bell and LS2 are close runners-up with 12 models each. Fly, GMax, Joe Rocket, Klim, and Vemar only have 1 model each. On average, brands have 5.4 models in this dataset.
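(The tallies above come from something like the following; Brand and Weight are stand-ins for the actual column names.)

```r
library(dplyr)

helmets %>%
  group_by(Brand) %>%
  summarise(models = n(), avg.weight.g = mean(Weight)) %>%
  arrange(desc(avg.weight.g))
```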
The x axis labels on the histograms are messy. I don’t feel like spending the time to pretty them up. I have no idea if this information is in any way useful to anyone, but it was a fun exercise.
I read Cixin Liu’s The Three-Body Problem and The Dark Forest a while ago. If you like science fiction, I highly recommend both of them. I hear the English version of the third book in the series, Death’s End, is coming out on September 20th. I’m looking forward to it.
Though I think The Three-Body Problem is the better story of the two, The Dark Forest had some interesting ideas of its own. The most interesting to me came late in the plot: a game-theoretic explanation of the Fermi Paradox. From Wikipedia’s page about TDF:
The universe is full of life. Life in the universe functions on two axioms: 1. Life’s goal is to survive and 2. That matter (resources) are finite. Like hunters in a dark forest, life can never be sure of alien life’s true intentions. The extreme distances between stars create an insurmountable ‘Chain of Suspicion’ where the two civilizations cannot communicate fast enough to relieve mistrust, making conflict inevitable. Therefore, it is in every civilization’s best interests to preemptively strike any developing civilization before it can become a threat.
These axioms really appealed to me; “utility maximization” and “scarcity” are foundational in economics, and I like seeing them applied in science fiction settings.
This philosophy stackexchange thread has a nice discussion of the “Dark Forest Postulate” (including a simple matrix game formalizing the intuition from the OP) and its assumptions. TL;DR: the Dark Forest assumptions might not hold, and there are many possible explanations for the Fermi Paradox.
Another assumption I don’t recall being stated explicitly in the book (but I think is important to the DF postulate) is speed-of-light restrictions on communication and travel - otherwise, the cost of communication/travel could be low enough to support cooperative equilibria.
In any case, I liked the idea, and wanted to try writing down a formal model in that vein. The question I consider here is: when is it optimal for a species to broadcast its presence to the universe at large?
The Model
Let species \(i\) be a species deciding whether or not to broadcast its presence, \(B_i \in \{0,1\}\). Any species is characterized by two parameters: its technology \(\theta\) and its resources \(\rho\). Resources are matter and energy, and can be measured in energy units. I’m not sure how technology is measured, but I don’t think it’s too unreasonable to assume we can at least rank levels of technological development. Maybe it’s maximum energy consumption from a unit of resources, so then it’s energy units as well.
Broadcasting (\(B_i=1\)) is costless, and reveals species \(i\)’s presence to some other species \(j\) with probability \(D_i \in [0,1]\). If \(i\) does not broadcast its presence, it gets to hide without ever being discovered.
A more involved model might actually model \(D_i\) as a function of the type of broadcast, density of stars in the local neighborhood of \(i\), add a cost to broadcasting, maybe even add elements of \(i\) faking its type. This is not that model.
Assumption 1: \(\theta_i\) is drawn IID from a distribution on \([0,\infty)\) and \(\rho_i\) is drawn IID from a distribution on \([0,r]\), where \(F\) and \(R\) are their respective CDFs. \(\theta_i\) and \(\rho_i\) both also have PDFs, and finite expectations \(E(\theta_i) = \bar{\theta} < \infty\), \(E(\rho_i) = \bar{\rho} < \infty\).
Corollary 1a: With nothing else known about \(F\) and \(R\), by the Principle of Maximum Entropy \(\theta \sim \exp(\lambda), \lambda \in (0,\infty)\) and \(\rho \sim U[0,r]\). (I don’t think matter and energy are really uniformly distributed throughout the universe, but I’m not sure what other tractable distribution would be reasonable here.)
Assumption 1 encodes the notions of finite resources and potentially unbounded technological advancement (handwaving at thermodynamic constraints), while Corollary 1a gives the maximum entropy probability distributions under those assumptions. I think maximum entropy is an appropriate principle here since functional forms are convenient but anything else would impose stronger assumptions.
Assumption 2: Any species \(i\) attempts to maximize a concave utility function \(u(\cdot)\), and receives utility from technology \(\theta_i\) and resources \(\rho_i\) as \(u(\theta_i \rho_i)\).
Assumption 2 captures the idea that a species is better off with higher levels of technology/resources subject to diminishing marginal utility, and tries to be as well off as it can. It could be made more general with \(u_i(\cdot)\) instead of just \(u(\cdot)\), but I don’t think it’s necessary for a simple model. Assumption 2a (that \(\lim_{x \to 0^+} u(x) = -\infty\) and \(\lim_{x \to \infty} u'(x) = 0\)) captures the notion that a species considers total destruction (no resources/no technology) the worst possible outcome, and has \(0\) marginal utility at infinitely high levels of technology. I’m not sure if the latter really matters in this setting, but the former will be important and it seemed weird to have one without the other.
Assumption 3: When two species \(i\) and \(j\) interact and both survive the interaction, their resources after interacting are given by the solution to the Nash Bargaining problem
Species \(i\)’s resources after the interaction are \(x^*_i = \left ( \frac{\theta_i}{\theta_i + \theta_j} \right ) (\rho_i + \rho_j)\), and species \(j\)’s are \(x^*_j = \left ( \frac{\theta_j}{\theta_i + \theta_j} \right ) (\rho_i + \rho_j)\). Species \(i\)’s expectation of \(x^*_i\) prior to the interaction is \(\hat{x}_i = E(x^*_i) = \left ( \frac{\theta_i}{\theta_i + \bar{\theta}} \right ) (\rho_i + \bar{\rho})\).
For simplicity, I ignore potential technology transfers; each species still has technology \(\theta_i\) and \(\theta_j\) after the interaction.
I think Assumption 3 is strong, but maybe not unreasonable. NBS is a convenient assumption for sure. The idea I wanted to get at was that they’ll split the total resources between them in proportion to their levels of technological development. The form of \(\hat{x}_i\) also implies that species \(i\) knows its own level of technological development and resource stock, but not species \(j\)’s until they interact. I think ignoring technology transfers makes this a little more pessimistic/conservative, as well as more tractable. I don’t think it affects the qualitative conclusions.
Assumption 4: When two species \(i\) and \(j\) interact, the probability that the one with the lower level of technology (say it’s \(i\)) does not survive the interaction is given by \(Y_i = Pr(\theta_j - \theta_i \geq \bar{C}) = 1 - Pr(\theta_j \leq \theta_i + \bar{C}) = e^{-\lambda (\bar{C} + \theta_i)}\). The more technologically advanced species \(j\) has net resources \(\rho_i + \rho_j\) after the interaction.
Assumption 4 is strong, but I don’t know a weaker way to encode the probability that a species is destroyed by interacting with a technologically more advanced one. I calculated \(Y_i\) assuming \(i\) knows \(\theta_i\), so the only unknown is \(\theta_j\).
The net resources bit isn’t to say that the more advanced species is going to try to steal the less advanced species’ resources (though that’s a possibility), just that the stuff’s there for the taking after the less advanced species is wiped out and I assume species \(j\) takes it. Whether \(j\) takes the resources or not doesn’t really matter to \(i\)’s decision to broadcast or not, since \(i\) would be dead. I ignore the potential incentive of finding a weaker species and killing them and taking their stuff in the analysis below.
Analysis
Let assumptions 1, 2, 3, and 4 hold. Let \(D_i = 1\). Species \(i\)’s value function is
\(u(\theta_i \rho_i)\) is \(i\)’s utility from not broadcasting (\(B_i = 0\)), and \(Y_i u(0) + (1 - Y_i) u(\theta_i \hat{x}_i)\) is \(i\)’s utility from broadcasting (\(B_i = 1\)). It is optimal for \(i\) to broadcast its existence to the galaxy at large if
The concavity of \(u(\cdot)\) from assumption 2 tells us that the bound on \(Y_i\) in \((1)\) is non-negative if \(\rho_i \leq \hat{x}_i\), which implies \(\frac{\theta_i}{\rho_i} \geq \frac{\bar{\theta}}{\bar{\rho}}\) . In words, \(i\)’s technology/resource ratio should be at least as high as the average technology/resource ratio for broadcasting to be worth the risk of extinction. This is coming from the Nash Bargaining mechanism, since \(i\) wants to broadcast only if there are good odds it will come out ahead in the event of a peaceful interaction.
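A quick numerical sketch of the decision rule in R (the square-root utility and all parameter values here are mine, purely for illustration):

```r
# Assumed primitives: theta ~ Exponential(rate = lambda), rho ~ Uniform(0, r),
# and u = sqrt (concave, with u(0) = 0, so assumption 2a does NOT hold here).
lambda <- 1; r <- 10; C.bar <- 2
theta.bar <- 1 / lambda
rho.bar   <- r / 2
u <- sqrt

broadcast_worth_it <- function(theta.i, rho.i) {
  x.hat <- (theta.i / (theta.i + theta.bar)) * (rho.i + rho.bar)  # expected bargaining share
  Y.i   <- exp(-lambda * (C.bar + theta.i))                       # P(destroyed | discovered)
  Y.i * u(0) + (1 - Y.i) * u(theta.i * x.hat) >= u(theta.i * rho.i)
}

broadcast_worth_it(theta.i = 3,   rho.i = 1)  # TRUE: high tech/resource ratio, broadcast
broadcast_worth_it(theta.i = 0.2, rho.i = 8)  # FALSE: low tech/resource ratio, hide
```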
Corollary 1a lets us clean up the bound on \(Y_i\) to say that it’s non-negative if \(\theta_i \geq \frac{2 \rho_i}{\lambda r}\) (substituting \(\bar{\theta} = 1/\lambda\) and \(\bar{\rho} = r/2\)). I suppose this is at least in principle computable, though the uniform distribution assumption is sketchy. We can also do some basic comparative statics to say that \(i\) is more likely to broadcast if
the maximum potential resources available to \(j\) are larger
\(i\) has more resources
the average level of technological development is lower
Under corollary 1a, any species \(i\) with no information about the distribution of \(\theta\) except for its own technology should be cautious: the probability that they are below the universal average level of technological development is \(\int_0^{\frac{1}{\lambda}} \lambda e^{-\lambda x} dx = \frac{e - 1}{e} \approx 0.63\).
Assumption 2a makes the bound on \(Y_i\) much more pessimistic. With 2a, no matter what \(i\)’s resource level is, \(\lim_{x \to 0^+} u(x) = -\infty\) means broadcasting is never worth the risk of getting destroyed (left with no technology/resources).
What can we say about the distribution of broadcasting species \(\theta_{B=1}\) in the universe? If assumption 2a holds, it’s a spike at \(0\) since no one wants to broadcast and risk annihilation. Let’s suppose 2a doesn’t hold, and express \((1)\) as a lower bound on \(\theta_i\):
Be careful about broadcasting your species’ presence to the universe. If there is nothing worse than annihilation, never broadcast. If you’re willing to risk annihilation and you haven’t found any evidence of a species other than your own… your technology is probably below average, so be careful anyway.
Here I discuss some aspects of Bradley and Wein (2009), published in Advances in Space Research.
The authors (BW) consider a number of different simulations to get at the behavior of the debris system over time. My interest right now is more in the model and its assumptions than the simulations and their results, so that’s where I’ll focus this post.
Notation
This paper has a lot of notation. I thought putting it all in one place would make it clearer than searching through the paper, but I’m not sure about that anymore.
\[\begin{align}
S_n^o(t):& \text{ spacecraft which can't deorbit, are still operational} \cr
S_n(t):& \text{ spacecraft which can't deorbit, not operational} \cr
S_d(t):& \text{ spacecraft which can deorbit, operational} \cr
R(t):& \text{ upper stage rocket bodies lingering at separation altitude} \cr
\lambda_o:& \text{ rate at which spacecraft are launched into SOI} \cr
\lambda_R:& \text{ rate at which rocket bodies are launched into SOI} \cr
\theta_d:& \text{ fraction of spacecraft with deorbit capability} \cr
\mu^{-1}_o:& \text{ average operational lifetime} \cr
\mu_o:& \text{ rate at which } S_d \text{ deorbit,} \cr
& \text{ also rate at which } S_n^o \text{ change to } S_n \cr
\mu_n:& \text{ rate at which } S_n \text{ deorbit naturally} \cr
\mu_R:& \text{ rate at which } R \text{ deorbit naturally} \cr
F^{\kappa}_{\tau}:& \text{ effective number of fragments} \cr
\kappa \in \{ h, b \}:& \text{ whether a fragment is hazardous or benign to intacts} \cr
\tau \in \{ R, S \}:& \text{ source of fragment, rocket body or spacecraft } \cr
\mu_{F^{\kappa}}:& \text{ rate at which fragments deorbit} \cr
\beta_{\alpha \gamma}:& \text{ the number of collisions between satellites of type } \alpha \text{ and } \gamma \cr
& \text{ per unit time per satellite of type } \alpha \text{ per satellite of type } \gamma \text{, where } \alpha, \gamma \in \{ S, R, F^{\kappa}_{\tau} \} \cr
\beta_{\alpha \gamma} \alpha(t) \gamma(t):& \text{ rate of collisions between satellites of types } \alpha \text{ and } \gamma \text{ at } t \cr
\delta_{\alpha \gamma}^{\tau \kappa}\alpha(t) \gamma(t):& \text{ rate at which fragments of type } F^{\kappa}_{\tau} \text{ are generated from } \cr
& \text{ collisions between satellites of types } \alpha \text{ and } \gamma \cr
S(t) \equiv& S_n^o(t) + S_n(t) + S_d(t) \cr
F^h(t) \equiv& F^h_R(t) + F^h_S(t) \cr
F^b(t) \equiv& F^b_R(t) + F^b_S(t) \cr
F^S(t) \equiv& F^h(t) + F^b(t) \cr
U \equiv& \{S, R, F^h_S, F^b_S, F^h_R, F^b_R \} (\text{the set of satellite types}) \cr
U^h \equiv& \{S, R, F^h_S, F^h_R\} (\text{the set of satellites hazardous to intacts}) \cr
U^F \equiv& \{F^h_S, F^b_S, F^h_R, F^b_R\} (\text{the set of fragment types}) \cr
U^I \equiv& \{S, R \} (\text{the set of intact types}) \cr
\end{align}\]
Notes about the notation:
I’m assuming that \(\lambda_R \in [0, \lambda_o]\), since the rocket bodies are used to launch spacecraft.
The “effective” number of fragments refers to the fragments being weighted by the proportion of their time they spend in the SOI.
“A particular fragment is not simply hazardous or benign to intacts; the uncertainty in collision velocity causes the properties of the fragment to determine the probability with which it is hazardous or benign in a particular collision.” A particular fragment from source \(\tau \in \{ R, S \}\) increases the effective numbers \(F^h_{\tau}\) and \(F^b_{\tau}\) by quantities that sum to 1.
The assumption on the rate of new fragment generation is because satellite cross-section (which determines likelihood of collision) and mass (which determines number of fragments) have a joint probability distribution. A collision between a rocket body and a spacecraft produces all four types of fragments: hazardous and benign from both the rocket body and the spacecraft.
“Hazardous” fragments can produce catastrophic collisions with intacts; “benign” fragments cannot.
As I’m understanding it, the simplified system involves using the highest rates of collision and decay from the full system, and imposing that the rates of fragment generation are proportional to the rates of collision for intacts and fragments. I think this is done so that the simulations run faster and BW can focus on the long-term equilibrium behavior of the system.
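To make the structure concrete, here’s a toy two-type system in the same spirit (intacts and a single fragment class), written with deSolve; it is emphatically not BW’s actual system or parameter values:

```r
library(deSolve)

# Toy system: intacts N are launched at a constant rate, decay, and collide with
# each other and with fragments F; each collision spawns new fragments, which decay.
toy_debris <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    collisions <- beta_NN * N^2 + beta_NF * N * F
    dN <- lambda - mu_N * N - collisions
    dF <- n_frag * collisions - mu_F * F
    list(c(dN, dF))
  })
}

parms <- c(lambda = 80, mu_N = 0.2, mu_F = 0.1,
           beta_NN = 1e-6, beta_NF = 1e-7, n_frag = 100)
out <- ode(y = c(N = 500, F = 1000), times = seq(0, 200, by = 1),
           func = toy_debris, parms = parms)
tail(out)
```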
Sustainability
One of the features of this paper I found really interesting was the system quality metric BW used, the “maximum (over all time) lifetime risk to an operational spacecraft”. They describe this as a sustainability metric as opposed to an efficiency metric. The metric is defined as
This metric measures the system’s hazard as the probability that a spacecraft launched at time \(t\) will be destroyed, while it is still operational, by an intact-intact or intact-fragment catastrophic collision.
I’m not sure if \(\beta_{S \alpha} \alpha(t)\) is actually a proper probability. I don’t see how it could be if \(\beta_{S \alpha}\) is a constant coefficient for a given \(\alpha\), but I can see it if it’s only constant given \(\alpha(t)\). I looked through how they derived the coefficients in the appendix, and as far as I can tell it seems like the \(\beta_{\alpha \gamma}\)s are constants over time conditional on the types of \(\alpha\) and \(\gamma\), so I don’t think these are proper probabilities.
Optimality of Deorbit Compliance
Let \(C_d\) be the cost to deorbit, \(C_S\) be the cost of a destroyed operational spacecraft, \(\theta_a\) be the attempted compliance rate, \(s\) be the probability that a deorbit attempt is successful (so that \(\theta_d = s \theta_a\)), and \(r^o_{\max}(\theta_d)\) be the sustainable lifetime risk of an operational spacecraft when the successful compliance rate is \(\theta_d\). Then the Space Board/social planner would solve
Using values from the literature, BW solve this problem and find that \(\theta_a^* = 1\). They find that \(\theta_a^* < 1\) requires \(C_d =\) $30 million, which is many times larger than the actual values and costs they find in the literature. Similarly, they find that 100% compliance with rocket body deorbiting is socially optimal.
Assessing damage due to space activities
BW define the damage caused by a space activity to be the total number of destroyed operational spacecraft generated by said activity. Letting \(S^o(t) = S_n^o(t) + S_d(t)\) be the number of operational spacecraft at time \(t\), the number of destroyed operational spacecraft up to time \(T\) is
It seems like the main contributions of this paper to the debris modeling literature are (1) it considers hazardous and benign fragments separately, and (2) it does so in a tractable ODE system.
It seems like the main contributions of this paper to the economics literature on orbital debris are (1) it considers the costs of orbital debris to satellite launchers, (2) it considers the optimality of deorbit compliance, and (3) it offers a more practical metric of the system’s quality than economic efficiency, all within a realistic debris modeling framework.
On the model and results
BW point out that this model is generic and can be used at any altitude, though the accuracy may suffer at altitudes where the SOI is just below a congested SOI. That they didn’t find much effect from allowing debris from higher shells to enter the SOI tells me that I can probably ignore those drift effects in my own model.
Considering intacts and fragments (and benign and hazardous objects) separately, and using sustainable lifetime risk as a performance metric, leads BW to different conclusions than Kessler, Liou and Johnson, and others in the debris modeling literature. BW find that the number of fragments is bounded, and that the sustainable lifetime risk remains below \(10^{-3}\) as long as deorbit compliance is higher than 98%.
On the economic front, BW point out that fees to preserve the orbital environment should be designed to deter debris-generating activity. One way to do this is with differential launch fees which vary based on the deorbit capability of the spacecraft and its launch vehicle.
BW point out three reasons why rational profit-maximizing firms might not fully comply with deorbit requirements (as is currently the case):
they are behaving myopically and are discounting collisions that happen in the far future (perhaps having faith in a future technological solution to the problem, i.e. a “backstop technology”);
they do not plan many launches in congested regions of space, and so don’t care about the benefits of deorbiting or the costs of not deorbiting;
they have an inventory of old spacecraft which are too expensive to retrofit for deorbiting.
My guess is that it’s a combination of all three, with 1 and 3 probably the biggest contributors. From a policy standpoint, 1 and 2 can be addressed, but I don’t see what can be done about 3.
To the extent that deorbit costs differ across firms (or are perceived to differ), the noncompliance fee (or compliance subsidy) should be high enough to deter all players. If a single player can substantially alter \(\theta_d\) by their deorbit policy, it may be efficient to use a Vickrey-type solution and charge each player the damage of their aggregate policy decision.
BW also raise the issue of moral hazard in noncompliance fees: propulsion systems fail (the literature estimates a 3.9% failure rate), and so a spacecraft owner who plans to deorbit may be unable to do so due to system failure. A spacecraft owner who doesn’t plan to deorbit has an incentive to pretend to have suffered such a systems failure to avoid an intention-based deorbit fee. Charging on the basis of observed outcomes would penalize owners who suffer failures they couldn’t prevent.
Apparently, the effectiveness of passivation techniques to minimize explosions - from non-operational spacecraft that are not deorbited and from deorbiting rocket bodies - appears to be close to 100%.
Other thoughts
I really like the focus on “sustainability” over “efficiency”, mainly because I’m not sure how to define “efficiency” for this system. The practical problem with “efficiency” is the variety of uses for LEO. An attempt to actually measure the system quality through economic efficiency would have to go about quantifying the value of satellite imaging services, the research potential of the ISS and any future space stations, the value of potential space-based manufacturing, the value of LEO tourism, etc. I feel like this is difficult to do credibly. “Sustainability” as defined in this paper, on the other hand, can plausibly be credibly measured with enough telescopes, radars, and simulations.
I think the main drawback of this paper from an economic standpoint is its treatment of the launch rates of satellites and rocket bodies, i.e. how \(\lambda_o\) and \(\lambda_R\) are determined. In the paper, both of these values are treated as parameters to be set exogenously, rather than as endogenous responses by optimizing agents.
BW’s result from analyzing the social optimality of deorbit compliance (that full compliance is socially optimal) makes sense to me. However, I don’t know if full compliance is also optimal for individual profit-maximizing firms. My priors are that individual firms would find \(\theta_a <1\) optimal for smaller values of the cost parameters and higher values of the risk parameter than the Space Board would. I don’t have priors on how high the relevant price of anarchy (in terms of lifetime sustainable risk) would be, but I do think it would be strictly greater than 1.
Endogenizing the launch rates, launch vehicle choices, and deorbit compliance is where I am focusing my model. While my current debris model is much simpler than this one (simple enough to be manageable in pencil and paper), I think I should work on marrying my model of launch decision-making to this model of orbital debris evolution. In terms of this model, I think this would mean that \(\lambda_o\), \(\lambda_R\), and \(\theta_d\) would be \(\lambda_o^*(t;U^I, U^F)\), \(\lambda_R^*(t;U^I, U^F)\), and \(\theta_d^*(t;U^I, U^F)\) - best-response functions which change over time in response to the state of the fragment and intact stocks.