07 Oct 2015
Two-period models are nice because they're usually easy to solve and can get at some good intuition. Like most dynamic economic models, they use a discount factor, often \(\beta\) or \(\delta\), which captures how much the agent values future payoffs relative to present payoffs.
In this model there are \(n\) agents in total, \(k\) of whom have discount factor \(\beta\in [0,1]\) (impatient), and the remaining \(n-k\) of whom have discount factor \(\beta =1\) (patient). The agents live for two periods, \(t=1,2\), and receive a consumption endowment of 1 in period 1 and \(e\gt 0\) in period 2. They have preferences over per-period consumption given by \(\log c_t\). Their lifetime utilities are
\[\begin{align}
\text{k impatient:} ~~ V(c_1,c_2) &= \log c_1 + \beta \log c_2 \cr
\text{n-k patient:} ~~ U(c_1,c_2) &= \log c_1 + \log c_2 \cr
\end{align}\]
They want to choose consumption plans that maximize their utilities. The benevolent dictator/social planner wants to maximize the social welfare function
\[\begin{align}
W &= kV + (n-k)U \cr
&= k( \log c_{1}^{v} + \beta \log c_{2}^{v}) + (n-k)( \log c_{1}^{u} + \log c_{2}^{u}) \cr
\end{align}\]
Assume the planner has to choose the same consumption plan for every agent of the same type, but can choose different consumption plans for the patient and impatient agents.
Benchmark case: No storage technology
Suppose there are no refrigerators or other storage technologies, so that the consumption good can’t be carried across time. The dictator’s problem is
\[\begin{align}
\max_p & ~~W ~~ \text{s.t.}~~ c_1^u + c_1^v \le n ~ , ~ c_2^u + c_2^v \le ne \cr
\implies L & = W + \lambda_1 (n-c_1^u - c_1^v) + \lambda_2 (ne-c_2^u - c_2^v) \cr
\end{align}\]
Since utility is logarithmic, consumption must be strictly positive at any optimum, so we can ignore the non-negativity constraints in the Lagrangian.
From saddle point conditions, we get that the optimal consumption plans will satisfy
\[\begin{align}
c_1^u + c_1^v &= n \cr
c_2^u + c_2^v &= ne \cr
kc_1^u &= (n-k)c_1^v \cr
\beta kc_2^u &= (n-k)c_2^v \cr
\end{align}\]
Without a use for savings, the optimal plan is to consume all resources in the period they're received: \(c_1^v = k\), \(c_1^u = n-k\), \(c_2^v=\frac{k \beta ne}{n-k(1-\beta )}\), and \(c_2^u=ne-\frac{k \beta ne}{n-k(1- \beta )}\). With no savings technology, \(\beta\) only affects the second-period allocation, not the first. When \(\beta \lt 1\), the impatient agents receive less of the period-2 good because the planner weights their second-period utility less. When \(\beta=1\), each group simply consumes its endowment in each period.
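The closed-form allocation can be sanity-checked against feasibility and the saddle-point conditions numerically. A minimal sketch; the parameter values below are arbitrary illustrations, not from the text:

```python
# Illustrative parameters (my own assumption): n agents, k impatient
n, k, beta, e = 10, 4, 0.5, 2.0

# Claimed no-storage optimum
c1v, c1u = k, n - k
c2v = beta * k * n * e / (n - k * (1 - beta))
c2u = n * e - c2v

# Feasibility: each period's endowment is exhausted
assert abs(c1v + c1u - n) < 1e-9
assert abs(c2v + c2u - n * e) < 1e-9

# Saddle-point conditions: weighted marginal utilities equated within each period
assert abs(k / c1v - (n - k) / c1u) < 1e-9          # period 1
assert abs(beta * k / c2v - (n - k) / c2u) < 1e-9   # period 2
```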
Development: With a storage technology
Now suppose a storage technology is developed, so that endowments can be carried across periods. The technology has a transformation rate \(R\gt 0\) so that 1 unit of the good stored in period 1 becomes \(R\) units of the good in period 2. The dictator’s problem is now
\[\begin{align}
\max_p & ~~W ~~ \text{s.t.}~~ R(c_1^u + c_1^v) + c_2^u + c_2^v \le ne + Rn \cr
\implies L & = W + \lambda \left(ne + Rn - R(c_1^u + c_1^v) - c_2^u - c_2^v\right) \cr
\end{align}\]
From the saddle point conditions, \(c_1^v = \frac{k}{\lambda R}\), \(c_1^u = \frac{n-k}{\lambda R}\), \(c_2^v = \frac{\beta k}{\lambda}\), and \(c_2^u = \frac{n-k}{\lambda}\). Substituting these into the budget constraint gives \(\lambda = \frac{2n-k(1-\beta)}{n(e+R)}\), so the optimal consumption plans are
\[\begin{align}
c_1^v &= \frac{kn(e+R)}{R(2n-k(1-\beta))} \cr
c_1^u &= \frac{(n-k)n(e+R)}{R(2n-k(1-\beta))} \cr
c_2^v &= \frac{\beta kn(e+R)}{2n-k(1-\beta)} \cr
c_2^u &= \frac{(n-k)n(e+R)}{2n-k(1-\beta)} \cr
\end{align}\]
Note that the patient group satisfies \(c_2^u = Rc_1^u\): they smooth consumption at the storage rate.
The welfare effects of the technology depend on \(\beta\) and \(R\), the rates of time preference and transformation. If the transformation rate exactly offsets the agent's discounting, the agent is better off: they can smooth their consumption by saving in the first period.
If \(\beta = R^{-1}\), the impatient type will be better off from the technology, but the patient type’s welfare change is ambiguous. If \(\beta = R^{-1} = 1\), both types will be better off from the technology.
If \(\beta \neq R^{-1}\), the impatient type will not necessarily be better off. If \(\beta \neq R^{-1}\), but \(R^{-1}=1\), the patient type will be better off.
As long as \(\beta \lt 1\), the savings technology cannot make both types better off.
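At the aggregate level, though, the technology can't lower the planner's welfare, since storing nothing is always feasible. This can be checked by brute force, using the fact that log utility pins down the within-period splits (shares \(k/n\) in period 1, \(\beta\)-weighted shares in period 2) so only total period-1 consumption needs to be searched over. A sketch with illustrative parameter values of my own choosing:

```python
import math

# Illustrative parameters (my own assumption)
n, k, beta, e, R = 10, 4, 0.5, 0.5, 1.5

def welfare(C1, T2):
    """Planner welfare when period totals C1, T2 are split optimally
    within each period (log utility gives closed-form shares)."""
    c1v, c1u = k * C1 / n, (n - k) * C1 / n
    c2v = beta * k * T2 / (n - k * (1 - beta))
    c2u = T2 - c2v
    return (k * (math.log(c1v) + beta * math.log(c2v))
            + (n - k) * (math.log(c1u) + math.log(c2u)))

# No storage: consume each endowment when it arrives
W_no_storage = welfare(n, n * e)

# Storage: any C1 with R*C1 + T2 = n*e + R*n is feasible
grid = [n * (e / R + 1) * i / 10000 for i in range(1, 10000)]
W_storage = max(welfare(C1, n * e + R * (n - C1)) for C1 in grid)

# The planner can always choose not to store, so storage weakly helps
assert W_storage >= W_no_storage - 1e-9
```

For these parameters the improvement is strict: the planner stores some of the period-1 endowment even though the individual welfare comparisons above are ambiguous.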
06 Oct 2015
Consider a competitive labor market where workers are either high ability (“high type”) or low ability (“low type”). A worker’s type is the worker’s private information, but firms know that a fraction \(\alpha \in (0,1)\) of the workers have high ability. A high-ability worker produces value \(v=v_h\) for the firm, and a low-ability worker produces \(v=v_l\), where \(0\le v_l\lt v_h\lt\infty\). Workers can produce \(r\lt v_h\) if they stay self-employed, so \(r\) is their reservation wage.
Benchmark case: No signalling
If there’s no way for the worker to signal their type to the firm, then the firm will go with what it knows: that the probability of a worker having high ability is \(\alpha\). Using this, we get that the firm’s expectation of the workers’ ability is \(E[v]=\alpha v_h + (1-\alpha) v_l\).
A competitive equilibrium can be described by a price \(w^*(\theta)\) and an allocation \(\theta^*(w)\) that maximize the agents’ expected utilities. Here, the CE is
\[\begin{align}
w^*(\theta) &= E[\,\theta \mid \theta \in \theta^*\,] \cr
\theta^*(w) &= \{ \theta : r \le w \}
\end{align}\]
The firm will pay workers the expected value of their marginal productivity, which ends up being a constant. The workers will work if the wage is greater than or equal to what they could get staying at home. If the workers’ reservation wage is greater than the firm’s wage (expected marginal productivity of labor), no one will work. If \(r<w=E[v]\), everyone will work.
Signalling: With a costly test that signals types
Now suppose there’s a test that workers can take, and it’s easier for high types to achieve a given score than it is for low types. By observing a worker’s test score, firms can infer whether the worker is high ability. Formally, the cost to a worker of type \(\theta\) of achieving score \(e\) is
\[c(e,\theta)=
\begin{cases}
c_{h}e ~~~ \text{if} ~ \theta=v_h \cr
c_{l}e ~~~ \text{if} ~ \theta=v_l
\end{cases}\]
where \(0\lt c_h\lt c_l\). Preparing for and taking the test is useless except that it is costlier for the low type, and can be used to distinguish between the types.
The equilibrium concept we use here is a Perfect Bayesian Equilibrium (PBE): the worker’s choice of test score \(e(\theta)\) is optimal given their type, the firm’s belief about the worker’s type after observing the score, \(\mu(e)\), is formed using Bayes’ rule, and the wage \(w(e)\) is a best response given the firm’s beliefs \(\mu(e)\).
One separating PBE that we can construct is \(w^*(e^*(v_h)) = v_h\), \(w^*(e^*(v_l)) = v_l\), \(e^*(v_l)=0\), and \(e^*(v_h)=\tilde{e}\), where \(\tilde{e}\) is the lowest score the high type can choose that still deters the low type from mimicking them.
The argument for this is pretty simple. If the high type can signal their ability, there’s no point in the low type scoring above 0 on the test - effort is costly, and they’d be better off not exerting it. If the firm can distinguish high and low types, then they will pay each their marginal product.
The most efficient separating PBE is the one where the high type engages in as little signalling behavior as possible. The binding constraint is the low type's incentive to mimic: the high type's score must leave the low type indifferent between mimicking (earning \(v_h\) at cost \(c_l\tilde{e}\)) and revealing themselves (earning \(v_l\) at zero cost). Letting \(c_l(e)=c(e,v_l)\) and \(c_h(e)=c(e,v_h)\),
\[\begin{align}
e^*(v_h):&~~ v_h - c_l (e^*(v_h)) = v_l \cr
\implies &~ c_l (e^*(v_h)) = v_h-v_l \cr
\implies &~ e^*(v_h) = c_l^{-1}(v_h-v_l) \equiv \tilde{e} \cr
\end{align}\]
The full PBE is
\[e^*(\theta) =
\begin{cases}
0 ~~ \text{if} ~~ \theta=v_l\cr
\tilde{e} ~~ \text{if} ~~ \theta=v_h\cr
\end{cases}\]
\[\mu^*(e) =
\begin{cases}
0 ~~ \text{if} ~~ e \lt \tilde{e}\cr
1 ~~ \text{if} ~~ e \ge \tilde{e}\cr
\end{cases}\]
\[w^*(e) =
\begin{cases}
v_l ~~ \text{if} ~~ e \lt \tilde{e}\cr
v_h ~~ \text{if} ~~ e \ge \tilde{e}\cr
\end{cases}\]
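The incentive-compatibility conditions behind this separating PBE can be checked directly. A minimal sketch; the parameter values (chosen to satisfy \(0 \lt c_h \lt c_l\)) are my own illustrative assumptions:

```python
# Illustrative parameters (my own assumption): values and linear test costs
v_h, v_l, c_h, c_l = 10.0, 4.0, 1.0, 2.0

# Least separating score: c_l^{-1}(v_h - v_l) with linear cost c_l * e
e_tilde = (v_h - v_l) / c_l

def wage(e):
    """Equilibrium wage schedule w*(e)."""
    return v_h if e >= e_tilde else v_l

# Low type: revealing (e = 0, wage v_l) weakly beats mimicking at e_tilde
assert wage(0.0) - c_l * 0.0 >= wage(e_tilde) - c_l * e_tilde

# High type: separating at e_tilde weakly beats pooling at e = 0
assert wage(e_tilde) - c_h * e_tilde >= wage(0.0) - c_h * 0.0
```

The low type's constraint holds with equality, which is exactly the indifference condition that pins down \(\tilde{e}\).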
This equilibrium is not unique. There is a continuum of separating PBEs that are less efficient than this one, and a continuum of pooling PBEs, one of which is more efficient than the rest. Whether the most efficient separating PBE beats the most efficient pooling PBE depends on the parameter values.
05 Oct 2015
This model is very similar to Triangle City.
Circle City
Let the circle have circumference \(d\pi\), so \(d\) is its diameter. There are \(n\) symmetric firms facing marginal cost \(c\), located equidistant from each other on the circle. Firm \(i\) faces two marginal consumers: \(x_{i-1}\), contested with the firm to its left, and \(x_i\), contested with the firm to its right. The marginal consumers are identical, and firm \(i\) and firm \(i+1\) compete over consumer \(x_i\). Consumers pay a travel cost of \(t\) per unit of distance. \(x_i\)'s location is pinned down by indifference between the two neighboring firms:
\[\begin{align}
v - p_i - tx_i &= v - p_{i+1} - t\left(\frac{d\pi}{n}-x_i\right) \cr
\implies x_i &= \frac{p_{i+1}-p_i+(td\pi/n)}{2t} \cr
\end{align}\]
Since the consumers are identical, this holds for every marginal consumer. Firm \(i\)'s demand is \(x_i + x_{i-1}\), and it solves
\[\max_{p_i} ~ (p_i-c)\left[\frac{p_{i+1}-p_i+(td\pi/n)}{2t} + \frac{p_{i-1}-p_i+(td\pi/n)}{2t} \right]\]
\[\begin{align}
\text{FOC:} &~~~ \left[\frac{p_{i+1}-p_i+(td\pi/n)}{2t} + \frac{p_{i-1}-p_i+(td\pi/n)}{2t} \right] - (p_i-c)\frac{2}{2t} = 0 \cr
\implies & ~~~ \frac{d\pi}{n} - (p_i-c)\frac{1}{t} = 0 ~~~~~~ (p_i=p_j ~ \forall~i,j~\because \text{firms are symmetric})\cr
\implies & p_i^* = c + \frac{td\pi}{n}
\end{align}\]
So the markup is increasing in the diameter of the circle (\(\frac{\partial p^*}{\partial d} \gt 0\)) and the travel cost (\(\frac{\partial p^*}{\partial t} \gt 0\)), and decreasing in the number of firms (as \(n \to \infty\), \(p^* \to c\)).
The fact that firm \(i\) faces 2 (or \(k\), I suppose) symmetric marginal customers doesn’t affect the equilibrium prices \(p_i^*\), only equilibrium profits \(\pi_i^*\) (scaled by \(k\)).
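One way to check the symmetric equilibrium price is to iterate the best-response map implied by the FOC from asymmetric starting prices and confirm it converges to \(p^* = c + \frac{td\pi}{n}\). A sketch; the parameter values are illustrative assumptions:

```python
import math

# Illustrative parameters (my own assumption)
n, c, t, d = 5, 2.0, 1.5, 4.0
seg = d * math.pi / n          # arc length between adjacent firms

# Rearranging the FOC gives firm i's best response to its neighbors:
#   p_i = (p_{i-1} + p_{i+1} + 2*t*seg + 2*c) / 4
p = [c + i for i in range(n)]  # asymmetric starting prices
for _ in range(200):
    p = [(p[i - 1] + p[(i + 1) % n] + 2 * t * seg + 2 * c) / 4
         for i in range(n)]

p_star = c + t * d * math.pi / n
assert all(abs(price - p_star) < 1e-6 for price in p)
```

The map is a contraction (each price puts weight 1/2 on its neighbors), so convergence to the symmetric fixed point is fast regardless of the starting prices.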
05 Oct 2015
In standard Cournot models, firms are symmetric and choose continuous quantities (\(q_i \in \mathbb{R}_+\)). Here we consider a model of Cournot duopoly with asymmetric firms limited to discrete quantities (\(q_i \in \mathbb{Z}_+\)). Because the choice set is no longer continuous, we can’t use the Kuhn-Tucker approach used in Cournot models with continuous quantities. Instead, we set up the problem as a game in normal form and find the pure- and mixed-strategy Nash equilibria using iterated elimination of dominated strategies.
The setting
Let firm 1 have marginal cost \(c_1 = 4\) and firm 2 have marginal cost \(c_2 = 5\). Let the inverse demand function be given by
\[P(Q)=\begin{cases} 10 - Q \ \ & \text{if} \ \ Q \le 10 \cr
0 \ \ & \text{if} \ \ Q \gt 10 \end{cases}\]
and \(Q = q_1 + q_2\).
To start, we can eliminate any quantities greater than the firm’s monopoly quantities. That’s what the firm would do if it wasn’t constrained by its competitor, so it can’t be profit-maximizing to play anything above those quantities. To find the monopoly quantities, we can take the standard Cournot duopoly best-response function, \(q_i^* = \frac{a - bq_j^* - c_i}{2b}\), and plug in \(q_j^* = 0\). This gives us that firm 1’s monopoly quantity is 3, and firm 2’s is 2.5. We can eliminate any quantity choice greater than 3 for both firms, since they are strictly dominated.
How do we handle 2.5? Firm 2 has to choose 2 or 3, not 2.5. We can see that \(\pi_2(q_1=0,q_2=3) = 6\) and \(\pi_2(q_1=0,q_2=2) = 6\), so we can’t eliminate 3 from firm 2’s choices by strict domination.
The game matrix below shows the firms’ profits. Firm 1 is on the columns, and firm 2 is on the rows. The entries are “\(\pi_1, \pi_2\)”.
\[\begin{array}{c|cccc}
& 0 & 1 & 2 & 3 \cr
\hline
0 & 0,0 & 5,0 & 8,0 & 9,0 \cr
1 & 0,4 & 4,3 & 6,2 & 6,1 \cr
2 & 0,6 & 3,4 & 4,2 & 3,0 \cr
3 & 0,6 & 2,3 & 2,0 & 0,-3
\end{array}\]
Iterated elimination:
- Firm 1: 1 strictly dominates 0 \(\implies\) remove 0-column
- Firm 2: 2 strictly dominates 3 \(\implies\) remove 3-row
- Firm 1: 2 strictly dominates 1 \(\implies\) remove 1-column
- Firm 2: 1 strictly dominates 0 \(\implies\) remove 0-row
This leaves us with the following 2x2 game matrix:
\[\begin{array}{c|lr}
& 2 & 3 \cr
\hline
1 & 6,2 & 6,1 \cr
2 & 4,2 & 3,0 \cr
\end{array}\]
There are 3 pure-strategy Nash equilibria, \((q_1,q_2,\pi_1,\pi_2)\):
- \((2,1,6,2)\)
- \((2,2,4,2)\)
- \((3,1,6,1)\)
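These can be confirmed by brute force over the quantity grid, checking every unilateral deviation. A minimal sketch:

```python
# Discrete Cournot duopoly from above: P = max(10 - Q, 0), c1 = 4, c2 = 5
def profits(q1, q2):
    P = max(10 - (q1 + q2), 0)
    return (P - 4) * q1, (P - 5) * q2

def is_pure_ne(q1, q2, devs=range(11)):
    """No unilateral deviation (up to q = 10) is strictly profitable."""
    pi1, pi2 = profits(q1, q2)
    return (all(profits(d, q2)[0] <= pi1 for d in devs)
            and all(profits(q1, d)[1] <= pi2 for d in devs))

nash = [(q1, q2) for q1 in range(11) for q2 in range(11) if is_pure_ne(q1, q2)]
print(nash)  # → [(2, 1), (2, 2), (3, 1)]
```

Searching the full grid up to 10 (rather than the reduced game) confirms that the strategies removed by iterated elimination never show up in an equilibrium.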
Letting \(p_1 = Pr(q_2=1)\), \(p_2 = Pr(q_2=2)\), \(k_2=Pr(q_1=2)\), and \(k_3=Pr(q_1=3)\), with \(p_1+p_2=1\) and \(k_2+k_3=1\), we can write the firms' expected utilities:
\[\begin{align}
EU_{1}(2) & = p_1(6) + p_2(4) = 6p_1 + 4p_2 \cr
EU_{1}(3) & = p_1(6) + p_2(3) = 6p_1 + 3p_2 \cr
EU_{2}(1) & = k_2(2) + k_3(1) = 2k_2 + k_3 \cr
EU_{2}(2) & = k_2(2) + k_3(0) = 2k_2 \cr
\end{align}\]
By imposing indifference between each firm's remaining pure strategies, we can try to construct a mixed NE.
\[\begin{align}
EU_1(2) & = EU_1(3) \cr
EU_2(1) & = EU_2(2) \cr
\implies p_2&=0,p_1=1 \cr
k_3&=0,k_2=1 \cr
\end{align}\]
At first this looks odd: the indifference conditions only return a degenerate mixture, pure NE #1. But the payoff ties in the reduced game are exactly what generate mixed equilibria. When firm 2 plays 1, firm 1 earns 6 from either 2 or 3, so firm 1 can mix with any \(k_2 \in [0,1]\), and firm 2's reply of 1 remains optimal because \(EU_2(1) - EU_2(2) = k_3 \ge 0\). Symmetrically, when firm 1 plays 2, firm 2 earns 2 from either 1 or 2, so firm 2 can mix with any \(p_1 \in [0,1]\), and firm 1's reply of 2 remains optimal because \(EU_1(2) - EU_1(3) = p_2 \ge 0\). These two families give infinitely many mixed NE. (The result that the number of Nash equilibria is generically finite and odd doesn't apply here: the payoff ties make this game nongeneric.)
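One family of mixed equilibria can be verified directly from the reduced-game payoffs: firm 2 plays 1 while firm 1 mixes arbitrarily over \(\{2,3\}\). A sketch:

```python
# Reduced-game payoffs from the 2x2 matrix above: (q2, q1) -> (pi1, pi2)
payoffs = {(1, 2): (6, 2), (1, 3): (6, 1), (2, 2): (4, 2), (2, 3): (3, 0)}

for k2 in [i / 10 for i in range(11)]:   # k2 = Pr(q1 = 2), k3 = 1 - k2
    k3 = 1 - k2
    # Firm 1 is indifferent between 2 and 3 when firm 2 plays 1...
    assert payoffs[(1, 2)][0] == payoffs[(1, 3)][0]
    # ...and firm 2 weakly prefers 1 to deviating to 2: EU2(1) - EU2(2) = k3
    eu2_1 = k2 * payoffs[(1, 2)][1] + k3 * payoffs[(1, 3)][1]
    eu2_2 = k2 * payoffs[(2, 2)][1] + k3 * payoffs[(2, 3)][1]
    assert eu2_1 >= eu2_2
```

A symmetric check works for the other family, where firm 1 plays 2 and firm 2 mixes over \(\{1,2\}\).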
04 Oct 2015
How should a monopoly price when demand is discrete? This situation arises whenever the consumer is choosing between “buy” or “not buy”, as opposed to a choice of a continuous quantity.
Let the consumers’ valuations of the good be given by \(V \sim F(v)\), where \(F(v)\) is the CDF of the valuations and \(f(v)\) is the associated density. The probability that a consumer chooses to buy the good is the probability that their valuation is greater than the price, i.e. \(1-F(p)\). The seller’s expected payoff is then
\[\pi(p) = (p-c)(1-F(p))\]
This is the usual profit expression, but with \(1-F(p)\) as the demand. Maximizing this gives us
\[\begin{align}
\text{FOC:} \ \ & 1 - F(p) - pf(p) + cf(p) = 0 \cr
\implies \ \ & p = c + \frac{1-F(p)}{f(p)} \cr
\end{align}\]
This is called “markup pricing”: the price is the marginal cost \(c\) plus a markup, \(\frac{1-F(p)}{f(p)}\). The markup is the inverse of the hazard rate of the demand distribution. In general, the hazard rate measures how likely an event (a “failure”, or “hazard”) is to occur now, given that it hasn’t occurred yet. Here, the hazard rate at \(p\) is the likelihood that a consumer’s valuation is exactly \(p\), conditional on it being at least \(p\) — that is, how likely a marginal price increase is to lose the sale. So it makes sense that the markup is inversely related to the hazard rate of demand.
An example
Suppose \(V \sim U[a,b]\) - the consumers are equally likely to be willing to pay anything from \(a\) to \(b\) for the product. Then \(F(v) = \frac{v-a}{b-a}\), and \(f(v) = \frac{1}{b-a}\). Plugging this into the pricing formula we derived above, we get
\[\begin{align}
p & = c + \frac{1- \frac{p-a}{b-a}}{\frac{1}{b-a}} \cr
\implies p & = \frac{c+b}{2} \cr
\end{align}\]
So, for uniformly distributed valuations, the monopolist optimally prices at half the sum of the marginal cost and the maximum valuation \(b\) (assuming \(\frac{c+b}{2} \ge a\), so the solution is interior).
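The uniform example can be verified numerically by maximizing the profit function over a price grid. A sketch; the values of \(a\), \(b\), and \(c\) are illustrative assumptions:

```python
# Illustrative parameters (my own assumption): valuations U[a, b], cost c
a, b, c = 1.0, 9.0, 3.0

def profit(p):
    F = (p - a) / (b - a)    # CDF of U[a, b], valid for p in [a, b]
    return (p - c) * (1 - F)

# Grid search over [a, b]
grid = [a + (b - a) * i / 100000 for i in range(100001)]
p_best = max(grid, key=profit)

assert abs(p_best - (c + b) / 2) < 1e-3   # (c + b) / 2 = 6.0 here
```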
More on the hazard rate
The hazard rate, \(\lambda\), turns out to be important in analyzing markets. We usually assume that \(\lambda\) is nondecreasing in \(p\). A sufficient condition for this is that the density of demand is log-concave. A proof follows.
Proof. Since \(\lambda'(x) = \frac{f'(x)[1-F(x)] + f(x)^2}{[1-F(x)]^2}\), it suffices to show \(f'(x)[1-F(x)] \ge -f(x)^2\). Log-concavity of \(f\) means \(f'/f\) is nonincreasing, so
\[\begin{align}
f'(x)[1-F(x)] & = \int_x^\infty f'(x)f(t)dt \cr
& = \int_x^\infty \frac{f'(x)}{f(x)}f(t)f(x)dt \cr
& \ge \int_x^\infty \frac{f'(t)}{f(t)}f(t)f(x)dt ~~~~(\because t \ge x) \cr
& = f(x)[f(\infty) - f(x)] \cr
& \ge - f(x)^2 ~~~~(\because f(\infty) \ge 0)
\end{align}\]
So if \(f\) is log-concave, \(f'(x)[1-F(x)] \ge -f(x)^2\), and the hazard rate \(\lambda(x) = \frac{f(x)}{1-F(x)}\) is nondecreasing.
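As an illustration, the standard normal density is log-concave, so its hazard rate should be nondecreasing, which a quick numerical check confirms. A sketch:

```python
import math

# Standard normal density and CDF (CDF via the error function)
def f(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def F(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hazard rate f / (1 - F) evaluated on a grid over [-4, 4]
xs = [-4 + 8 * i / 1000 for i in range(1001)]
hazards = [f(x) / (1 - F(x)) for x in xs]

# Nondecreasing along the grid
assert all(h2 >= h1 for h1, h2 in zip(hazards, hazards[1:]))
```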