Dynamic Optimisation: A Fully Worked Example

In a previous post, I referred to the importance in environmental and natural resource economics of the technique of dynamic optimisation, also known as optimal control.  However, the technique is difficult, and worked examples in textbooks or on the web often seem to pass over key points.  Here I present my own example, which I describe as fully worked because it shows every step from the largely verbal statement of the problem to the optimal paths of the key variables and the maximum value of the objective functional, identifying some options and pitfalls along the way.  It is intended for readers familiar with elementary algebra, calculus and static optimisation who have at least begun to study dynamic optimisation.

The Problem

Capital K, is the only factor of production and is not subject to depreciation. The initial capital stock is 100,.  Output is at a rate 0.5K,, and may be used as consumption C, or investment I,, the latter being added to K,.  The instantaneous utility function U_t is \ln(C_t) . We are required to maximise social welfare W, from time t = 0, to 10,, where social welfare is defined as the integral of instantaneous utility subject to a continuous discount rate of 10\%, per time period.

A Note on Notation

A widely used convention is that the subscript t,, as in C_t, indicates discrete time, and that a variable in continuous time should be written as in C(t), .  I find however that it saves a little keying time, and results in less cluttered formulae, to use the subscript approach for continuous time, and sometimes to omit the t, altogether when it is clear from the context.  More conventionally, I use the notation \dot C, to indicate a time-derivative, and \ddot C, for a second time-derivative.

I use LaTeX to display mathematical symbols and formulae. However, using LaTeX within a WordPress blog is not entirely straightforward, one problem being to obtain a satisfactory vertical alignment of symbols within text paragraphs. The commas which follow some symbols are a workaround which corrects vertical alignment in many (though not all) cases and seem to me preferable to the alternative of displaying symbols – like K for example – with their base lower than that of the surrounding text.

Writing the Problem in Mathematical Formulae

Our problem statement above contains the symbols K, C, I, U, W, t.  The first question we should consider is whether we need all these for a precise mathematical formulation.  It is clear that we can dispense with U, and relate W, directly to C,, writing the objective functional as:

\textrm{Maximise }W=\int_0^{10}(\ln(C_t))e^{-0.1t}dt\qquad(1)

We need K, which is clearly the state variable, but what is the control variable?  Since C + I = 0.5K,, either of C, or I, determines the other.  Nothing in the problem statement indicates that one is a choice variable and the other a residual.  Either could be the control variable, but we do have to choose (because the method requires maximisation of the Hamiltonian or Lagrangian with respect to the control variable).  Let us choose C, as the control variable (but Alternative 1 below will show that choosing I, leads to the same results).  We therefore write the equation of motion as:

\dot K=0.5K-C\qquad(2)

We also have the boundary conditions:

K_0=100\ \textrm{and }K_{10}\ \textrm{free}\qquad(3)

Does that complete the formulation of the problem?  No!

Pitfall 1

If we rely on the formulation above, there is nothing to prevent negative consumption, with investment \dot K, exceeding output and W, undefined (because the log of a negative quantity is undefined).  There is also nothing to prevent negative investment.  Thus the above formulation allows a time path in which capital is initially accumulated, but towards the end of the time period is run down to zero, enabling consumption to exceed output.  That could be a desirable scenario if the capital is in the form of a good which can also be consumed.  More typically, however, capital cannot be consumed and therefore consumption cannot exceed output, and the above formulation will therefore lead to erroneous results by permitting more consumption than is feasible.  Indeed, there is nothing in the formulation to rule out the combination of infinite consumption and infinite negative investment.

We therefore add two constraints and, to prepare for writing the required Lagrangian function, rewrite each as a quantity to be less than or equal to a constant, in these cases zero:

C_t \geq 0\ \forall t \in [0,10]\ \textrm{and so } -C_t \leq 0\qquad(4)

C_t \leq 0.5K_t\ \forall t \in [0,10]\ \textrm{and so } C_t-0.5K_t \leq0\qquad(5)

Although we also require that capital should not be negative, we need not specify this as a further constraint since it is implied by the combination of K_0=100 and \dot K\geq 0,, the latter following from the equation of motion together with constraint (5).  Indeed, these imply the stronger condition K_{10} \geq 100. The combination of (1) to (5) completes the mathematical formulation of the problem.

The Value of W for Two Naïve Solutions

Before applying the method of optimal control, let us consider a couple of simple and feasible time paths for consumption and calculate the implied values of W,.  The results will provide a benchmark against which we can compare our final result.  Suppose first that there is no investment and all output is consumed.  Then capital is always 100, and consumption is always 0.5(100) = 50,.  Hence:

W=\int_0^{10}(\ln 50)e^{-0.1t}dt=\ln 50\left[-10e^{-0.1t}\right]_0^{10}=(3.912)(6.321)=24.73

Now suppose that output is always divided equally between consumption and investment.  Before we can calculate W, we need to find the time path of capital by solving the differential equation:

\dot K =0.5(0.5K)=0.25K\qquad(6)

Making the standard substitution K, = e^{bt} so that \dot K,= be^{bt} we have:

be^{bt}=0.25e^{bt}\ \textrm{and so } b=0.25\qquad(7)

Hence for some constant c,:

K_t=ce^{0.25t}\qquad(8)
Since K_0 = 100 we can infer that c=100, and so:

K_t=100e^{0.25t}\ \textrm{and so } C_t=0.5(0.5K_t)=25e^{0.25t}\qquad(9)

W=\int_0^{10}(\ln(25e^{0.25t}))e^{-0.1t}dt=\int_0^{10}(3.219+0.25t)e^{-0.1t}dt\qquad(10)

W=\left[-(32.19+2.5t+25)e^{-0.1t}\right]_0^{10}=57.19-82.19e^{-1}=26.95

As we might expect, allocating half of output to investment, allowing capital to accumulate and increase output as time goes on, yields a higher W, than simply consuming all output.  But there is no reason to expect that this value of W, is the maximum.
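Both benchmark values are easy to verify numerically.  The sketch below (plain Python with trapezoidal integration; the helper name W is mine, not part of the original working) evaluates the welfare integral (1) along each naïve consumption path:

```python
import math

def W(c_of_t, t1=10.0, rho=0.1, n=100_000):
    """Trapezoidal approximation of the welfare integral (1)."""
    h = t1 / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        f = math.log(c_of_t(t)) * math.exp(-rho * t)
        total += f / 2 if i in (0, n) else f
    return total * h

w1 = W(lambda t: 50.0)                       # naive path 1: consume all output
w2 = W(lambda t: 25.0 * math.exp(0.25 * t))  # naive path 2: invest half of output
print(round(w1, 2), round(w2, 2))            # → 24.73 26.95
```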

Necessary Conditions for a Solution

From (1) and (2) we obtain the Hamiltonian, introducing a costate variable \lambda_t:

H=(\ln C)e^{-0.1t}+\lambda (0.5K-C)\qquad(11)

This is a present value Hamiltonian because it retains the discount factor in the objective functional and so converts \ln C_t at any time to its present value, that is, its value at time 0,.  An alternative approach will be considered below.  Because we have two inequality constraints, we must extend the Hamiltonian to form a Lagrangian, introducing two Lagrange multipliers \mu_t and \nu_t:

\mathcal{L}=(\ln C)e^{-0.1t}+\lambda (0.5K-C)+\mu C+\nu (0.5K-C)\;(12)

The expressions in brackets after the Lagrange multipliers are from the inequality constraints (4) and (5) with signs changed. The general rule here is that given a constraint g, \leq k and writing \theta_t for the associated multiplier, the term to be included in the Lagrangian is \theta_t(k-g).

Applying the maximum principle, we have to maximise the Lagrangian with respect to the control variable C_t at all times.  In this case, the Lagrangian is differentiable with respect to C_t, so we can try to use calculus to find a maximum.  But we also need to consider whether there might be a corner solution, that is, a solution at either of the limits of the constrained range of C,, which are 0, and 0.5K,.  We can rule out the possibility of a maximum at C = 0,, since \ln 0, equals minus infinity.  But there is no obvious reason why there should not be a maximum at C, = 0.5K for at least some values of t,, so we should keep this possibility in mind.  Setting the derivative with respect to C, of the Lagrangian equal to zero we have:

\dfrac{\partial \mathcal{L}}{\partial C}=\dfrac{e^{-0.1t}}{C}-\lambda+\mu-\nu=0\qquad(13)

The maximum principle also requires the conditions:

\dot K=\dfrac{\partial \mathcal{L}}{\partial \lambda}=0.5K-C\qquad(14)

\dot {\lambda}=-\dfrac{\partial \mathcal{L}}{\partial K}=-0.5\lambda-0.5\nu\qquad(15)

Although the effect of (14) is merely to repeat the equation of motion (2) it is standard practice to write it out at this point in the working.  We also require the Kuhn-Tucker conditions in respect of the two inequality constraints, conditions (17) being known as the complementary slackness conditions.

\mu \geq 0\ \textrm{and } \nu \geq 0\ \forall t \in [0,10]\qquad(16)

\mu C=0\ \textrm{and }\nu (0.5K-C)=0\ \forall t \in [0,10]\qquad(17)

Finally, there is the transversality condition.  With a fixed terminal time, but terminal capital free subject to the implied condition K_{10} \geq 100, we have the situation known as a truncated vertical terminal line.  Therefore we provisionally adopt the condition:

\lambda_{10}=0\qquad(18)
However, we will have to check that the resulting solution is consistent with the condition K_{10} \geq 100 (and if not we must recalculate the solution with K_{10} fixed at 100,).  (12) to (18), with the provisos noted, constitute the necessary conditions for a maximum.

Sufficiency of the Necessary Conditions

We will test whether the Mangasarian conditions are satisfied.  The basic conditions are:

(A) The integrand of the objective function, (\ln C)e^{-0.1t},, must be differentiable and concave in the control and state variables, C, and K,, jointly.

(B) The equation of motion formula, 0.5K-C,, must be differentiable and concave in C, and K, jointly.

(C) If the equation of motion formula, 0.5K-C,, is non-linear in either C, or K,, then in the optimal solution we must have \lambda_t \geq 0 for all t,.

Considering these in turn:

Condition (A) is satisfied since, applying a calculus test for concavity:

\dfrac{\partial((\ln C)e^{-0.1t})}{\partial C}=\dfrac{e^{-0.1t}}{C}\ \textrm{ so  }\dfrac{\partial^2((\ln C)e^{-0.1t})}{\partial C^2}=-\dfrac{e^{-0.1t}}{C^2} \leq 0\ \forall t\quad(19)

We need not consider K, here since it does not occur in the integrand.

Condition (B) is satisfied since the formula 0.5K-C, is linear in both C, and K, and therefore concave, linearity being sufficient for concavity (there is no requirement for strict concavity).

Condition (C) is satisfied since, again, the formula 0.5K-C, is linear in both C, and K,.

For our problem, a further condition is needed for each of the inequality constraints, the general rule being that if a constraint is represented in the Lagrangian by the expression \theta (k-g) where k, is a constant, the required condition is that g, be jointly convex in the control and state variables:

(D) -C, must be convex in C, and K, jointly.

(E) C-0.5K, must be convex in C, and K, jointly.

These conditions are satisfied since the functions are linear (again there is no requirement for strict convexity). 

Thus the Mangasarian conditions are satisfied, so we can conclude that the necessary conditions (12) to (18) are also sufficient for a maximum (and need not consider the more complex Arrow conditions).

Inferences from the Necessary Conditions

Using a common approach to simplification, we differentiate (13) with respect to time and then use (15) to substitute for \dot{\lambda},:

\dfrac{-0.1e^{-0.1t}C-\dot Ce^{-0.1t}}{C^2}-\dot{\lambda}+\dot{\mu}-\dot{\nu}=0\qquad(20)

\dfrac{-0.1e^{-0.1t}C-\dot Ce^{-0.1t}}{C^2}+0.5\lambda+0.5\nu+\dot{\mu}-\dot{\nu}=0\qquad(21)

Using (13) again we can eliminate \lambda, and \nu, (but not \dot{\nu},):

\dfrac{-0.1e^{-0.1t}C-\dot C e^{-0.1t}}{C^2}+\dfrac{0.5e^{-0.1t}}{C}+0.5\mu+\dot{\mu}-\dot{\nu}=0\qquad(22)

-0.1e^{-0.1t}C-\dot Ce^{-0.1t}+0.5e^{-0.1t}C+(0.5\mu+\dot{\mu}-\dot{\nu})C^2=0\,(23)

Collecting the terms in C, and using the complementary slackness condition (17) \;\mu C=0, (which, since C, can never be zero as \ln 0, equals minus infinity, implies \mu =0, and therefore \dot{\mu}= 0, for all t,):

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-\dot{\nu}C^2=0\qquad(24)

Using the equation of motion (2) to substitute for C,:

0.4e^{-0.1t}(0.5K-\dot K)-(0.5\dot K-\ddot K)e^{-0.1t}-\dot{\nu}(0.5K-\dot K)^2=0\quad(25)

Collecting terms in e^{-0.1t}, we have the differential equation:

e^{-0.1t}(\ddot K-0.9\dot K+0.2K)-\dot{\nu}((0.5K)^2-K\dot K+(\dot K)^2)=0\qquad(26)

Before proceeding we will explore two alternative approaches.

Alternative 1: Investment as the Control Variable

Suppose we take investment I, rather than consumption C, to be the control variable.  The utility function is still \ln C, which we will now have to write as \ln(0.5K-I), , so the objective functional will be:

\textrm{Maximise }W=\int_0^{10}(\ln(0.5K-I))e^{-0.1t}dt\qquad(A1)

The equation of motion will be simply:

\dot K=I \qquad(A2)

This is not tautologous since it implies that investment is the only cause of change in capital, eg there is no depreciation.  The inequality constraints become:

I-0.5K\leq 0\;\; \textrm{and }\;-I\leq 0\qquad(A3)

Hence the Lagrangian is:

\mathcal{L}=(\ln(0.5K-I))e^{-0.1t}+\lambda I+ \mu (0.5K-I)+\nu I\qquad(A4)

From the Lagrangian we derive the conditions:

\dfrac{\partial\mathcal{L}}{\partial I}=\dfrac{-e^{-0.1t}}{0.5K-I}+\lambda -\mu +\nu=0\qquad(A5)

\dot K=\dfrac{\partial\mathcal{L}}{\partial\lambda}=I\qquad(A6)

\dot {\lambda}=-\dfrac{\partial\mathcal{L}}{\partial K}=\dfrac{-0.5e^{-0.1t}}{0.5K-I}-0.5\mu\qquad(A7)

We also have the complementary slackness conditions:

\mu(0.5K-I)=0\;\;\textrm{and  }\nu I=0\qquad(A8)

Differentiating (A5) with respect to time, using (A7) to substitute for \dot{\lambda},, and substituting \dot K, for I,:

\dfrac{0.1e^{-0.1t}(0.5K-\dot K)+(0.5\dot K-\ddot K)e^{-0.1t}}{(0.5K-\dot K)^2}-\dfrac{0.5e^{-0.1t}}{0.5K-\dot K}-0.5\mu -\dot{\mu}+\dot{\nu}=0\quad(A9)

e^{-0.1t}(-\ddot K+0.4\dot K+0.05K)-0.5e^{-0.1t}(0.5K-\dot K)+(-0.5\mu - \dot{\mu}+\dot{\nu})(0.5K-\dot K)^2=0\quad(A10)

Collecting terms in e^{-0.1t}, and using the first complementary slackness condition to eliminate \mu , and \dot{\mu},  we have:

e^{-0.1t}(-\ddot K+0.9\dot K-0.2K)+\dot{\nu}((0.5K)^2-K\dot K+\dot K^2)=0\quad(A11)

It can be seen that this is equation (26) above with signs reversed, so thereafter we can proceed as in the main line of reasoning.

Alternative 2: the Current Value Hamiltonian

When the objective functional contains a discount factor, an alternative method is to use the current value Hamiltonian.  Where there are inequality constraints, this leads to a current value Lagrangian, which for our problem can be written:

\mathcal{L}_C=\ln C+\rho (0.5K-C)+\sigma C+ \tau (0.5K-C)\qquad(A12)

where the multipliers \rho ,\sigma ,\tau are equal respectively to the original multipliers \lambda ,\mu ,\nu each multiplied by e^{0.1t},.  In the necessary conditions, the equivalent of (13) is slightly simplified by the absence of the discount factor:

\dfrac{\partial\mathcal{L}_C}{\partial C}=\dfrac{1}{C}-\rho +\sigma -\tau =0\qquad(A13)

On the other hand the equivalent of (15) requires an extra term 0.1\rho , (the discount rate being 0.1,):

\dot{\rho}=-\dfrac{\partial\mathcal{L}_C}{\partial K}+0.1\rho =-0.5\rho-0.5\tau+ 0.1\rho =-0.4\rho-0.5\tau\qquad(A14)

The difference between the coefficients in the terms 0.5\lambda , in (15) and 0.4\rho , in (A14) may seem trivial, but it leads to additional complexity later in the reasoning.  The equivalent of (24), which I re-write here for ease of reference:

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-\dot{\nu}C^2=0

is found to be:

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-(\dot{\tau}-0.1\tau )e^{-0.1t}C^2=0\ \quad(A15)

The more complex coefficient of C^2, in turn makes it slightly more complicated to solve what below I call Case 2.  This is not to argue against the current value approach, still less to suggest that it represents a pitfall.  But whether on balance it simplifies matters, as is often suggested, seems to depend on the type of problem.

Solving the Differential Equation

Our differential equation (26) looks rather intractable, but we can simplify matters by considering separately the two cases \nu = 0, and \nu \neq 0.  To be more precise, we consider:

Case 1: \nu = 0,  over some time interval.

Case 2: \nu \neq 0  over some time interval.

Since Case 1 implies that \nu ,  is constant over the relevant interval, we can infer that \dot{\nu}= 0,  over that period.  Equation (26) therefore simplifies to:

\ddot K-0.9\dot K+0.2K=0\qquad(27)

The standard method for this type of differential equation is to make the substitution K=e^{xt},  implying \dot K=xe^{xt},  and \ddot K=x^2e^{xt},.  After dividing through by e^{xt},  we are left with the equation:

x^2-0.9x+0.2=0\qquad(28)
By factorisation or by the quadratic equation formula, this is neatly solved by x=0.4, \textrm{ or }0.5.  Hence the solution to the differential equation (27) is:

K=c_1e^{0.4t}+c_2e^{0.5t}\qquad(29)
where c_1,c_2 are constants to be found (generally a second order differential equation requires two constants of integration).  Differentiating (29) with respect to time we can infer:

\dot K=0.4c_1e^{0.4t}+0.5c_2e^{0.5t}\qquad(30)

C=0.5K-\dot K=0.1c_1e^{0.4t}\qquad(31)
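As a quick numerical cross-check of the characteristic roots (a plain-Python sketch of my own, not part of the original working):

```python
import math

# Characteristic equation x^2 - 0.9x + 0.2 = 0, from substituting K = e^(xt) into (27)
disc = math.sqrt(0.9 ** 2 - 4 * 0.2)        # discriminant is 0.01, so disc = 0.1
roots = sorted(((0.9 - disc) / 2, (0.9 + disc) / 2))
print([round(r, 6) for r in roots])          # → [0.4, 0.5]
```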

Pitfall 2

Having obtained equations (29) to (31) it is tempting to think that our work is almost complete.  Putting t=0, in (29) we have:

c_1+c_2=100\qquad(P1)
Since investment right at the end of the time period can do nothing to increase consumption within the time period, we can infer that \dot K=0, at t=10,.  Hence, putting t=10, in (30):

0.4c_1e^{4}+0.5c_2e^{5}=0\qquad(P2)

-0.4c_1=0.5e(100-c_1) \quad(P3)

(0.5e-0.4)c_1=50e\qquad(P4)

c_1=\dfrac{50e}{0.5e-0.4}=141.7 \textrm{  and }c_2=100-c_1=-41.7\quad(P5)

Substituting into (31):

C=0.1c_1e^{0.4t}=14.17e^{0.4t}\qquad(P6)

W=\int_0^{10}(\ln (14.17e^{0.4t}))e^{-0.1t}dt=\int_0^{10}(2.651+0.4t)e^{-0.1t}dt\quad(P7)

W=\left[-(26.51+4t+40)e^{-0.1t}\right]_0^{10}=66.51-106.51e^{-1}=27.33\quad(P8)

As expected, this yields a higher value of W, than either of the naïve solutions considered above.  Nevertheless, this is not the time path that maximises W,.  The fallacy here is the assumption that our Case 1 applies to the whole period t=[0,10].  Just because \dot K=0, at t=10,, it does not follow that \dot K\neq 0, for all t<10,.
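The Pitfall 2 figures are straightforward to verify numerically.  The sketch below (plain Python of my own, not part of the original working) recovers c_1 from (P5) and evaluates the welfare integral along the consumption path (31):

```python
import math

e = math.e
c1 = 50 * e / (0.5 * e - 0.4)      # from (P5); roughly 141.7
c2 = 100 - c1                      # roughly -41.7

# Trapezoidal approximation of W along the Pitfall 2 consumption path C = 0.1*c1*e^(0.4t)
n = 100_000
h = 10.0 / n
W = 0.0
for i in range(n + 1):
    t = i * h
    f = math.log(0.1 * c1 * math.exp(0.4 * t)) * math.exp(-0.1 * t)
    W += f / 2 if i in (0, n) else f
W *= h
print(round(c1, 1), round(W, 2))   # → 141.7 27.33
```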

We must also consider Case 2, \nu \neq 0, . Using the second complementary slackness relation (17), this implies that:

C=0.5K\qquad(32)
Thus Case 2 is what we described above as a corner solution.  Using the equation of motion (2) this implies that, within the relevant time range, \dot K= 0,  and therefore \ddot K=0,.  Hence the differential equation (26) reduces to:

0.2Ke^{-0.1t}-\dot{\nu}(0.5K)^2=0\qquad(33)

0.25K^2\dot{\nu}=0.2Ke^{-0.1t}\qquad(34)

\dot{\nu}=\dfrac{0.8e^{-0.1t}}{K}\qquad(35)
Integrating with respect to t,, noting that K,  can be treated as a constant since \dot K= 0,:

\nu =-\dfrac{8e^{-0.1t}}{K}+c_3\qquad(36)

Which Case is Terminal?

We will now show that, as time approaches t=10,, the system must be in Case 2, with K,  constant.  This is what we would expect from economic reasoning, since there must be a time beyond which the effect of further investment in making possible higher output and consumption in the remainder of the time period is too small to compensate for the consumption that would be forgone in making that investment.  To show this using the method of optimal control, we start from the transversality condition (18), \lambda_{10}= 0.  We can therefore reduce (13) at t=10,  to:

\dfrac{e^{-1}}{C_{10}}+ \mu_{10}-\nu_{10}= 0\qquad(37)

Given the first complementary slackness relation (17), \mu C=0,, this further simplifies to:

\dfrac{e^{-1}}{C_{10}}-\nu_{10}= 0\qquad(38)

This implies that \nu_{10}\neq 0 (otherwise C_{10}  would be infinite which is impossible given the problem data).  So the system cannot be in Case 1 at t=10, and must be in Case 2.

When Does the System Switch from Case 1 to Case 2?

Taking our Case 2 equation (36) at t=10,, and using (38) to substitute for \nu_{10} :

\dfrac{e^{-1}}{C_{10}}=-\dfrac{8e^{-1}}{K_{10}}+c_3\qquad(39)
From the equation of motion (2), and since \dot K=0, in Case 2, we can substitute 2C_{10} for K_{10}:

\dfrac{e^{-1}}{C_{10}}=-\dfrac{4e^{-1}}{C_{10}}+c_3\qquad(40)

c_3=\dfrac{5e^{-1}}{C_{10}}\qquad(41)
Substituting for c_3 in (36):

\nu =-\dfrac{8e^{-0.1t}}{K}+\dfrac{5e^{-1}}{C_{10}}\qquad(42)

While the system is in Case 2, K, is constant, so we can replace it by K_{10} and therefore by 2C_{10}:

\nu =-\dfrac{8e^{-0.1t}}{2C_{10}}+\dfrac{5e^{-1}}{C_{10}}=\dfrac{5e^{-1}-4e^{-0.1t}}{C_{10}}\qquad(43)

Since Case 2, by definition, has \nu \neq 0,, and since from (38) \nu_{10}>0, the system will be in Case 2 while:

5e^{-1}-4e^{-0.1t}>0\qquad(44)

e^{1-0.1t}<1.25\qquad(45)

1-0.1t<\ln 1.25=0.223\qquad(46)

t>7.77\qquad(47)

So we can infer that the system is in Case 1 during t=[0,\;7.77] and in Case 2 during t=(7.77,\;10] .
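The switch time can be computed directly from (46) (a one-line check in plain Python, added by me):

```python
import math

# Case 2 holds while 5e^(-1) - 4e^(-0.1t) > 0, i.e. while 1 - 0.1t < ln(1.25)
t_switch = 10 * (1 - math.log(1.25))
print(round(t_switch, 2))   # → 7.77
```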

Solving Case 1

Having found the time period over which Case 1 applies, we can now determine the constants c_1,c_2 in equations (29) to (31).  Taking (29) at t=0, we have:

c_1+c_2=100\qquad(48)
Since the system switches to Case 2 at t=7.77, with \dot K=0,, from (30) we have:

0.4c_1e^{0.4(7.77)}+0.5c_2e^{0.5(7.77)}=0\qquad(49)

0.4c_1e^{3.108}=-0.5c_2e^{3.885}\qquad(50)

c_1=-1.25e^{0.777}c_2=-ec_2\qquad(51)

Substituting into c_1+c_2=100,:

-ec_2+c_2=100\qquad(52)

c_2=\dfrac{100}{1-e}=-58.2\qquad(53)

c_1=-ec_2=158.2\qquad(54)
Substituting into (29) to (31), we have the time paths of the key variables over t = [0, 7.77] :

K=158.2e^{0.4t}-58.2e^{0.5t}\qquad(55)

\dot K=63.3e^{0.4t}-29.1e^{0.5t}\qquad(56)

C=0.5K-\dot K=15.82e^{0.4t}\qquad(57)

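The two conditions just used, c_1+c_2=100 and \dot K=0 at the switch time, can also be solved numerically as a cross-check (a plain-Python sketch of my own; variable names are mine):

```python
import math

t_s = 10 * (1 - math.log(1.25))    # switch time, roughly 7.77
# Solve the 2x2 system: c1 + c2 = 100 and 0.4*c1*e^(0.4*t_s) + 0.5*c2*e^(0.5*t_s) = 0
a1 = 0.4 * math.exp(0.4 * t_s)
a2 = 0.5 * math.exp(0.5 * t_s)
c2 = -100 * a1 / (a2 - a1)
c1 = 100 - c2
print(round(c1, 1), round(c2, 1))  # → 158.2 -58.2
```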
Although not essential to solve the problem, it may be of interest to note the time paths, over the same period, of the various multipliers.  From the first complementary slackness relation (17) and because C, can never be zero, we can infer that \mu =0,, and from the definition of Case 1 we have \nu =0,.  Substituting these values into (13):

\lambda=\dfrac{e^{-0.1t}}{C}\qquad(58)

\lambda=\dfrac{e^{-0.1t}}{15.82e^{0.4t}}=0.0633e^{-0.5t}\qquad(59)
The value of \lambda , can be interpreted as the shadow price of the state variable, capital, that is, the amount by which W, could be increased if an extra unit of capital were available at time t,.  It can be seen that this value at t=0, is 0.0633,, which may seem surprisingly small given the extra consumption over the whole period which an extra unit of initial capital would make possible, but can be shown to be correct given that W, depends on the log of consumption.

Solving Case 2

A feature of Case 2 is that K,  remains constant.  To find at what level it remains constant, we have simply to find its level at t=7.77,, when Case 1 switches to Case 2.  Substituting into (55):

K=158.2e^{0.4(7.77)}-58.2e^{0.5(7.77)}=708\qquad(60)
This is the value of K,  over the period (7.77,\;10] , and enables us to confirm that K_{10}\geq 100  and therefore to accept the condition (18), \lambda_{10}=0, without qualification.  Over the same period, \dot K=0,  and:

C=0.5K=0.5(708)=354\qquad(61)
Turning to the multipliers, \mu =0,  for the same reason as during Case 1.  Substituting for C,  in (43):

\nu =\dfrac{5e^{-1}-4e^{-0.1t}}{354}=0.0052-0.0113e^{-0.1t}\qquad(62)

Thus \nu ,  increases gradually from 0,  at t=7.77,  to 0.0010,  at t=10,.  The positive values of \nu , when t>7.77,  indicate that if the constraint C_t\leq 0.5K_t were relaxed then W,  could be increased.

To obtain \lambda .  over the same period, we use (61) and (62) to substitute for C,  and \nu ,  respectively in (13):

\lambda=\dfrac{e^{-0.1t}}{354}-\nu=\dfrac{e^{-0.1t}}{354}-0.0052+0.0113e^{-0.1t}\qquad(63)

\lambda=0.0141e^{-0.1t}-0.0052\qquad(64)
Thus \lambda ,  falls from 0.0013,  at t=7.77,  to, as expected, 0,  at t=10,, at which point an extra unit of capital would have no effect within the time period on C,  or W,.

Table 1 below shows the values of all the variables at integral time points over the whole period [0,\;10] , covering Cases 1 and 2.

The Optimal Value of W

It remains to check that the optimal paths we have now identified do indeed result in a larger W,  than our best so far – the 27.33,  obtained from our Pitfall 2.  Summing the relevant integrals over the Case 1 and Case 2 periods we have:

W=\int_0^{7.77}(\ln(15.82e^{0.4t}))e^{-0.1t}dt+\int_{7.77}^{10}(\ln 354)e^{-0.1t}dt\qquad(65)

W=\int_0^{7.77}(2.761+0.4t)e^{-0.1t}dt+\int_{7.77}^{10}5.869e^{-0.1t}dt\qquad(66)
W = \left[-(27.61+4t+40)e^{-0.1t}\right]_0^{7.77}+\left[-58.69e^{-0.1t}\right]_{7.77}^{10}\qquad(67)

W=22.23+5.40=27.63\qquad(68)

Thus the maximum value of W, is 27.63,, larger, as expected, than the 27.33, obtained in Pitfall 2.

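As a final cross-check (a plain-Python sketch of my own, not part of the original derivation), numerically integrating utility along the piecewise optimal consumption path reproduces this value:

```python
import math

t_s = 10 * (1 - math.log(1.25))      # switch time, roughly 7.77
c1 = 100 * math.e / (math.e - 1)     # roughly 158.2, as found in Case 1

def C(t):
    # Optimal consumption: 0.1*c1*e^(0.4t) during Case 1, then constant during Case 2
    return 0.1 * c1 * math.exp(0.4 * min(t, t_s))

# Trapezoidal approximation of the welfare integral over [0, 10]
n = 200_000
h = 10.0 / n
W = 0.0
for i in range(n + 1):
    t = i * h
    f = math.log(C(t)) * math.exp(-0.1 * t)
    W += f / 2 if i in (0, n) else f
W *= h
print(round(W, 2))   # → 27.63
```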
The main source used in preparing this post was:

Chiang, A (1999)  Elements of Dynamic Optimization  Waveland Press, Illinois


Covid-19 and Household Size – An Update

An updated analysis of national data remains consistent with the hypothesis that Covid-19 infection rates are higher in larger households.

In a post in May 2020 I presented the results of a regression analysis on data from 14 Western European countries tending to support the following hypothesis:

Rates of Covid-19 infection will be higher, other things being equal, in larger households, that is, households with more occupants.

With Western Europe now well into a second wave of Covid-19 infection, it is timely to assess whether an updated analysis continues to support the hypothesis.

A reminder of some key features of the analysis:

  • Rates of death from Covid-19 are used as a proxy for rates of infection, actual rates of infection being difficult to measure.  Published statistics on confirmed infections are heavily dependent on differences in testing arrangements at different times and between countries.
  • Data used are at national level, with no allowance for variations within countries.
  • Estimation of the regression is by weighted least squares, with weighting by population.

The regression model is:

DP  =  C  +  (B x PH) + E

where:  DP is cumulative death rate from Covid-19 per million population; C is the regression constant; B is the slope coefficient; PH is average population per household; and E is the error term. 

The estimated regression line based on data to 3 December 2020 was:

DP  =  -2,004 + (1,209 x PH)

The precise values of the estimated coefficients are not important. Nor is it surprising that the slope coefficient is higher than estimated in May: this is to be expected since cumulative death rates have increased while average population per household is stable.  The important point is that, as in May, the estimated slope coefficient is positive, consistent with the hypothesis (and is sufficiently large that the null hypothesis that its true value is zero or less is rejected at the 5% significance level (1)). 
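For readers wishing to replicate this kind of estimate, population-weighted least squares is straightforward to implement.  The sketch below is plain Python; the data arrays are hypothetical placeholders, not the dataset behind the regression above:

```python
# Weighted least squares for DP = C + B*PH + E, weighting by population.
def wls(x, y, w):
    """Return (intercept, slope) minimising the weighted sum of squared residuals."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

ph = [2.0, 2.2, 2.4, 2.6]             # hypothetical persons per household
dp = [400.0, 700.0, 900.0, 1200.0]    # hypothetical deaths per million
pop = [83.0, 11.0, 67.0, 47.0]        # hypothetical populations (millions)
const, slope = wls(ph, dp, pop)
print(round(const, 1), round(slope, 1))
```

With the statsmodels package available, `sm.WLS` with `weights` set to population should give the same point estimates together with standard errors and confidence intervals.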

A spreadsheet containing the underlying data and full regression output may be downloaded here:


  1. This can be inferred from the fact that the 95% confidence limits of the estimated slope coefficient are both positive.

Housing Reform in England

Housing in England is under-supplied, resulting in high costs.  The government’s proposed reforms to address the problem are a step in the right direction but do not go nearly far enough.

I don’t usually engage in autobiography, but it so happens that my own case illustrates the problems of housing in England rather well.  I left university in 1978 and started work in London.  At first I lived in rented accommodation, but by 1983 I was in a position to buy a house – a fairly typical 3-bed terraced property in outer London, only a few minutes’ walk from my work.  The house cost £24,500, which I funded via a mortgage of two and a half times my then annual salary as a part-qualified accountant of £8,600 together with £3,000 savings accumulated while I had been renting.  The annual interest on the mortgage, at a rate of 10%, was £2,150 or 25% of my income.

I wouldn’t be able to do that now.  The current estimated value of that house is £400,000 (1).  The current salary for an equivalent job in London would be unlikely to be much more than £30,000.  So the value of the house is more than sixteen times larger than in 1983, while the salary is only about four times larger.  To buy the house with that salary, assuming an equivalent proportion (12% or £48,000) from savings, would require a mortgage of more than eleven times salary.  No mortgage lender would agree to that: the risk of default would be too great, since the annual interest, at a current variable rate of around 4%, would amount to some £14,000 or almost 50% of income. 
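The arithmetic of this comparison can be condensed into a short calculation (a Python sketch using the figures quoted above; the helper name is mine):

```python
# 1983 vs now: mortgage multiple of salary, and annual interest as share of salary
def affordability(price, savings, rate, salary):
    """Return (mortgage as multiple of salary, annual interest as share of salary)."""
    mortgage = price - savings
    return mortgage / salary, mortgage * rate / salary

mult_1983, share_1983 = affordability(24_500, 3_000, 0.10, 8_600)
mult_now, share_now = affordability(400_000, 48_000, 0.04, 30_000)
print(round(mult_1983, 1), round(share_1983, 2))   # → 2.5 0.25
print(round(mult_now, 1), round(share_now, 2))     # → 11.7 0.47
```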

Because homes in London and some other parts of England are so expensive, many young adults face a choice between unsatisfactory options: live with parents; club together with friends or rely on help from parents to buy a property; live where property is less expensive but suitable jobs are hard to find without a long commute; or rent forever at an annual cost that may be no less than that of buying.  The current average monthly rent for a one-bedroom flat in London is £1,250 (2), equivalent to 50% of an annual salary of £30,000.  Renting a room or studio apartment is cheaper, but few would consider it satisfactory as a long-term arrangement.  Sharing a larger rented property can also reduce costs, but is not for everyone. 

Most informed observers consider that the main reason why housing costs are so high is that supply is constrained by the limited quantity of land with approval for housing development (3).  Such approval may be granted  by local authorities acting within a framework of law created by the Town and Country Planning Act 1947 and much subsequent legislation.  But even if a proposed development is well-designed, a local authority may gain little from allowing it to proceed, and by doing so will become responsible for much of the initial cost of associated roads and other infrastructure, and the ongoing cost arising from the extra population including children’s education and care for the elderly. It may also face campaigns from local people opposed to the development for reasons which may include loss of countryside, pressure on local services, the possibility of undesirable neighbours, and loss in value of their properties.  Furthermore, much land around London and other cities is protected from development by national designations such as Green Belt. 

The government (4) now proposes a reform of the planning system in England.  Details have been published in a White Paper Planning for the Future (5).  The Prime Minister, in his foreword, introduces the proposals as (6):

“Radical reform unlike anything we have seen since the Second World War.  Not more fiddling around the edges … a whole new planning system for England.”

He concludes that:

“… what we have now simply does not work.  So let’s do better.  Let’s make the system work for all of us.  And let’s take big, bold steps so that we in this country can finally build the homes we all need and the future we all want to see.”

Consultation on the proposals, inviting general comments and answers to specific questions, is open for 12 weeks from 6 August 2020.  I set out below the response I have submitted.  Questions are shown in bold with, in some cases, my brief explanatory comments in italics (for fuller context, reference should be made to the White Paper itself).  My responses are in plain text.

General Comments

The Prime Minister’s foreword quite rightly identifies the need for radical reform of the planning system.  It states that the new system should provide “the homes we need in the places we want to live at prices we can afford, so that … we can connect our talents with opportunity”.  The Secretary of State’s foreword quite rightly refers to “the present generational divide”: the fact that many young adults, even those on above-average incomes, are unable to buy their own home in the way that their parents’ generation were, and have little option but to pay high rents or else live with their parents.  The Introduction (on p 14) quite rightly refers to “long-term and persisting undersupply” of housing, and to the fact that housing in England can be much more expensive than in other European countries. Two points which might have been added are:

  1. The high cost of housing is a major contributory factor to poverty for families with moderate earnings whose rent is a high proportion of their income, and indirectly adds to government expenditure via the provisions for housing costs within Universal Credit.
  2. The average number of persons per household in the UK (2.4) is higher than in many other European countries (cf Germany 2.0) (7).  This has probably facilitated intra-household transmission of Covid-19 and may be a contributory factor to the UK’s relatively high death rate from the virus.

Although the White Paper contains many sensible proposals, these fall well short of what is needed to address these problems.  In particular:

  1. The annual target of 300,000 new homes is far too small.  It represents annual growth in housing stock per capita of about 0.75% (see answer to Q5).  That’s not big and bold. What is needed is a target supported by careful economic analysis showing that, over a period of 5-10 years, it can be expected to result in a substantial reduction in the cost of housing. 
  2. Although the proposed designation of Growth areas is welcome, it needs to be accompanied by measures to ensure that such designation does not result in huge gains to existing landowners with little benefit to developers or potential residents.
  3. The White Paper offers little to discourage the speculative element in demand which is one reason why housing is so expensive.  People expect an upward trend in the price of houses, and this may lead them to buy more, or larger, homes than they require for their own use. When many people do this, prices do indeed rise.  Reform of the planning system offers an opportunity to break this cycle of expectation.  Two measures that would be desirable in themselves and also help to change expectations are increasing the annual target for new homes to considerably more than 300,000, and allowing development on some Green Belt land.

Although outside the scope of the White Paper, it should be recorded that, to be most effective in addressing the problems of England’s housing, reform of the planning system should be accompanied by:

  1. Cessation of the Help to Buy scheme which increases demand for housing and so tends to raise house prices;
  2. Appropriate reform of taxation relating to housing, including:
    1. Ending the anomaly under which VAT is charged on major renovations and extensions to existing homes but not on construction of new homes, so discouraging an important means of maintaining and expanding housing space;
    2. Bringing main homes within the scope of Capital Gains Tax (perhaps with the charge rolled up over a lifetime), so removing the current tax incentive to treat housing as a speculative investment;
  3. Policies to ensure an adequate supply of skilled labour to the building industry (including via immigration), and to support the development and application of building methods with reduced labour requirements such as modular construction.

Q5 Do you agree that Local Plans should be simplified in line with our proposals?

It is proposed that Local Plans should identify three types of land: Growth Areas suitable for substantial development, with automatic outline approval for development; Renewal Areas suitable for smaller scale development such as densification and infill of existing residential areas, with a statutory presumption in favour of suitable development; and Protected Areas, including Green Belt and Areas of Outstanding Natural Beauty, which justify more stringent development controls to ensure sustainability.

My response:  Yes (in part).  I agree that the role of land use plans should be simplified and with the proposed definitions of Growth areas and Renewal areas.  Automatic outline approval for development within the former and a presumption in favour of development within the latter would simplify and accelerate the process of obtaining approval for development.  However, consideration should be given to the effect on the market value of land within Growth areas especially.  Existing landowners, who would not have had to apply for permission to develop or indeed to do anything at all, could be expected to enjoy very large gains if they then sell their land.  There is a risk that too much of the economic benefit from automatic outline approval would accrue to existing landowners and not enough to either developers or potential residents.  In other words, there is a risk that the designation of Growth areas might make more land available for development, but only at a cost to developers that would make it unprofitable to undertake development unless homes could be sold at prices at least as high as at present.  Taxation (via Capital Gains Tax or otherwise) of the undeserved gains made by existing landowners would be justifiable but do nothing to help developers or residents.  A much more constructive approach to the problem is suggested in the Letwin Report on Build Out (8).  In outline, if designation of land as Growth area comes with an automatic requirement that development of that land must provide for diversity of housing in respect of type, size, style and tenure, including a minimum proportion of affordable homes,  then the residual land value will be much less than it would have been with unconstrained development permission, allowing both lower prices or rents to potential residents and reasonable profit for developers.  
The Report’s recommendation that residual land values be capped at around ten times existing use value seems very appropriate for greenfield sites, still allowing the landowner a very worthwhile gain.  There may also be a role for compulsory purchase powers, particularly in assembling large sites with multiple existing landowners where an individual landowner is holding out in the hope of a larger gain at a later date.

The third land type would more logically be divided into two (making four types altogether). Distinguishing the following types would help to promote public understanding of the varied reasons for restricting development and to raise awareness of the extent of land at risk of flooding.

  • One type (“At-risk areas”?) would be land which is unsuitable for development because of its current or likely future exposure to environmental hazards including coastal and river flooding.  Identification of such land should have full regard to the best available predictions of the effects of climate change, including sea level rise, over the next 100 years and beyond, and to realistic assessments (having regard to cost as well as technical feasibility) of the scope for mitigation of risk.   Land potentially at risk of flooding should not be considered suitable for development just because the risk can be fully mitigated in the short term.
  • The second type (“Protected areas”?) would be land which should be protected from development because it has environmental qualities sufficiently valuable to be worth preserving even at the price of restricting development.  Such land may provide direct benefits to visitors via opportunities for recreation and the enjoyment of natural beauty.  It may also provide ecosystem services yielding more indirect benefits such as drainage, water and air purification, biodiversity and (of especial importance in mitigating climate change) carbon sequestration by forests and woodlands.   This could include Areas of Outstanding Natural Beauty and Local Wildlife Sites.  As many people are coming to realise, however, it is not appropriate that all of the very large amount of land designated as Green Belt should continue to be protected (9).  Green Belt land is very varied in quality and much of it is inaccessible to the public. An effect of the London Green Belt is that much development is located beyond the Green Belt but occupied by people who work in London, whose resulting long commutes are harmful both to them and to the environment.  Allowing development on perhaps 10% of Green Belt land, chosen for its limited environmental value and proximity to existing transport links, would enable provision of many new homes in places where people want to live, such as around London, Oxford and Cambridge. 

Q8a Do you agree that a standard method for establishing housing requirements (that takes into account constraints) should be introduced?

By a standard method is meant a means of distributing the national housebuilding target of 300,000 new homes annually.  It would make it the responsibility of individual planning authorities to allocate land suitable for housing to meet their share of the total.  They would be able to choose how to do so via a combination of more effective use of existing residential land, greater densification, infilling and brownfield development, extensions to existing urban areas, or new settlements.

My response:  Yes.  To secure an adequate rate of provision of new homes, it is essential that binding targets are imposed on planning authorities.  However:

  • These targets should be part of a framework which also provides incentives to planning authorities to approve more new homes and leaves meaningful scope for local input to the planning process. This will minimise the risk of conflict between central government and local communities, allow planning authorities to innovate with successful practice being copied by others, and ensure a genuine role for local democracy.
  • The overall planning framework should prioritise number of new homes and quality of individual homes and appropriate infrastructure provision and placemaking.  Enforcement of the first should not implicitly downgrade the others.
  • The overall annual target for new homes in England should be considerably more than 300,000.  Given existing stock of c 24 million, and even if all of that stock remains in use, it represents annual growth of just 1.25%.  With likely population growth of 0.5% (10), this is equivalent in per capita terms to 0.75%. It is not credible that such a modest rate of growth, even if sustained over several years, can do much to mitigate what the White Paper itself (p 14) describes as a situation in which housing space in the UK can be twice as expensive as in Germany or Italy.  An OBR Working Paper (11) estimates the price elasticity of demand for housing in the UK at -0.92, suggesting that annual growth in per capita housing stock of 0.75% would reduce house prices annually by just 0.82%.
  • Targets should allow for development of selected Green Belt land.  Not to do this would unduly restrict home provision in areas where people want to live (see answer to Q5).

Q8b Do you agree that affordability and the extent of existing urban areas are appropriate indicators of the quantity of development to be accommodated?

My response:  Yes (in part).  Requiring more development in areas where property is more expensive, other things being equal, establishes a crucial link to the signals provided by the market, ensuring that development occurs where people want to live.  However, a link to the extent of existing urban settlement is more problematic and a simple algorithm, attempting to spread development “fairly” between planning authorities, would probably yield some bizarre results.  More important than such fairness is the need to ensure that new or expanded settlements are of a sufficient size to support a good range of local services, rather than requiring residents to make frequent car journeys to a neighbouring large town. 

Q9a  Do you agree that there should be automatic outline permission for areas for substantial development (Growth areas) with faster routes for detailed consent?

Approval for development is often in two stages, outline approval being the first stage.

My response:  Yes, for the reason given in answer to Q5.

Q14 Do you agree that there should be a stronger emphasis on the build out of developments?  And if so what further measures would you support?

‘Build out’ refers to the building of homes once approval has been granted. Build out of large developments can be slow due to low market absorption rates, with some sites taking over 20 years to complete.

My response:  Yes.  Slow build out has a direct effect in limiting the rate of provision of new homes.  It also invites the widespread mis-perception that under-supply of housing is the fault of developers and nothing to do with any deficiencies of the planning system.  Requiring diversity of housing within large developments so that provision more closely matches the range of housing demand, as recommended in the Letwin Report, should encourage faster build out by developers in the knowledge that homes are unlikely to remain unsold or untenanted.  Consideration might also be given to some form of penalty, such as a surcharge on the Infrastructure Levy, where the time taken to complete developments is determined (under suitable rules) to be excessive.

Q17  Do you agree with our proposals for improving the production and use of design guides and codes? 

Design guides and codes record architectural and other features of developments that have been judged successful in the past.  They can be used by architects as an alternative to original design, and by planning authorities in specifying the type of development they are prepared to approve.

My response: No.  The proposals tend to suggest that planning authorities would be required to comply with the National Design Guide, National Model Design Code and Manual for Streets.  Making individual planning decisions rules-based rather than discretionary is highly desirable since it will create greater certainty for developers and lead to faster decisions with reduced costs for all parties. However, each planning authority should be free to adopt its own rules via design codes, etc., adapting national guidance to its local circumstances as it judges appropriate.  For example, authorities may reasonably take different views regarding the balance in their areas between car use and public transport, with different implications for car parking provision and housing density. 

Q18 Do you agree that we should establish a new body to support design coding and building better places, and that each authority should have a chief officer for design and place-making?

My response: Having a central body to support design coding and building better places is a sensible proposal, provided that its role is limited to support and planning authorities are free to adapt its output as they see fit.  While design and place-making are very important, a requirement for each planning authority to have a designated chief officer for these functions would limit the freedom of authorities to determine their own best arrangements having regard to financial constraints.  An authority might for example wish to provide training in design and place-making for a number of officers contributing to the planning process rather than appoint a single designated officer.  Such freedom enables authorities to try different arrangements and to learn from each other’s experience, and is more likely to lead to successful results than a uniform approach.  A requirement for authorities to have regard to design and place-making would be sufficient.

Q20 Do you agree with our proposals for implementing a fast track for beauty?

My response: No.  While there are many excellent suggestions in the report of the Building Better, Building Beautiful Commission (12), its emphasis on “beauty” as an overarching concept is liable to mislead and to be interpreted differently by different parties.  A development I happen to have visited – Great Kneighton, pictured on p 50 of the White Paper – appears to be well planned and well constructed, a good place to live, but I would not call it beautiful.  What is needed is not a fast track for a specific category of developments judged to embody beauty but a general acceleration of the planning process for all housing developments of sufficient quality.

Q22a Should the government replace the Community Infrastructure Levy and Section 106 planning obligations with a new consolidated Infrastructure Levy, which is charged as a fixed proportion of development value above a set threshold?

This question concerns the resourcing of the roads and other infrastructure required by new housing development.  The current Community Infrastructure Levy is a charge that local authorities may choose to levy – about half do – based on the floorspace of new development.  Section 106 (of the Town and Country Planning Act 1990) enables authorities to set conditions when approving a development, requiring the developer to do certain things or to pay money to the authority. 

My response:  Yes.  Negotiations over Section 106 cause delay in obtaining approval for development and may deter small builders from submitting applications at all.  A consolidated Infrastructure Levy at a rate known in advance would avoid such delay and provide certainty for applicants.

Q22b Should the Infrastructure Levy rates be set nationally at a single rate, set nationally at an area-specific rate, or set locally?

My response: Nationally at an area-specific rate.  I suggest a uniform rate for most areas but higher rates for areas where development now requires or may in future require works to mitigate flood risk or maintenance of existing flood defences.  Such higher rates are justified because areas exposed to flooding are unlikely to have lower needs for non-flood-related infrastructure than other areas.  They would also provide some incentive for builders to prefer development in areas not exposed to flood risk.  Where such higher rates are charged the extra sums should pass to the appropriate bodies responsible for flood defence. 

Q22c Should the Infrastructure Levy aim to capture the same amount of value overall, or more value, to support greater investment in infrastructure, affordable housing and local communities?

My response: More value.  This is one way in which planning authorities can be incentivised to approve more new homes (as proposed in answer to Q8a).  However, the overall situation faced by developers, including the price at which development land is available and the sale price of new homes as well as the Levy, should provide reasonable scope for profit. 

Q24a Do you agree that we should aim to secure at least the same amount of affordable housing under the Infrastructure Levy, and as much on-site affordable provision, as at present? 

My response: No.  As explained in answer to Q5, the provision of affordable housing within a diverse development should be automatically required at the point that land is designated as Growth area.  Such a development can be profitable for the developer because that requirement will substantially lower the price which the existing landowner can obtain for the land and so its cost to the developer.  Under this approach, there should be no need for affordable housing to be funded from the Infrastructure levy, which should be reserved to meet the costs of infrastructure provision and placemaking.

Notes and References

  1. From Zoopla’s online property valuation tool.
  2. Valuation Office Agency  Private Rental Market Summary Statistics – April 2018 to March 2019  Read from Chart 3 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/809660/PRMS_Statistical_Release_20062019.pdf
  3. The economics of housing in England (and elsewhere) is complex.  For a concise and fairly orthodox account, identifying causes and effects of under-supply, see Barker K (2014) Housing: Where’s the Plan?  London Publishing Partnership.  A dissenting view (arguing that under-supply is not the problem) is set out in Mulheirn I  (2019) Tackling the UK housing crisis: is supply the answer? https://housingevidence.ac.uk/publications/tackling-the-uk-housing-crisis-is-supply-the-answer/   An international perspective is given by Davies B, Turner E, Marquardt S & Snelling C (2016) German Model Homes: A Comparison of UK and German Housing Markets  https://www.ippr.org/files/publications/pdf/German-model-homes-Dec16.pdf
  4. The UK government is responsible for housing policy in England, while housing policy in Wales, Scotland and Northern Ireland is the responsibility of their devolved administrations.
  5. Ministry of Housing, Communities and Local Government  Planning for the Future  https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/907647/MHCLG-Planning-Consultation.pdf  
  6. As (5) above, p 6.
  7. Euromonitor International https://www.euromonitor.com/united-kingdom/country-factfile gives UK population 2019  66.65M and number of households 28.02M implying population per household 2.38; and https://www.euromonitor.com/germany/country-factfile gives Germany population 83.02M and number of households 41.58M implying population per household 2.00.
  8. Rt Hon Sir Oliver Letwin MP  (2018) Independent Review of Build Out  https://www.gov.uk/government/publications/independent-review-of-build-out-final-report   See especially paras 3.3, 3.8, 4.3, 4.4, 4.16 & 4.17.
  9. Those advocating selective relaxation of Green Belt to allow housing development include:
  10. ONS: National Population Projections: 2018-Based, Main Points (5% growth over 2018-2028 implies an annual average rate of 0.5%) https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/bulletins/nationalpopulationprojections/2018based#main-points
  11. Auterson T (2014)  Forecasting House Prices, OBR Working Paper No. 6, para 3.14 p 23 https://obr.uk/docs/dlm_uploads/WP06-final-v2.pdf
  12. Building Better, Building Beautiful Commission (2018) Living with Beauty: promoting health, well-being and sustainable growth  https://www.gov.uk/government/publications/living-with-beauty-report-of-the-building-better-building-beautiful-commission

Some Carbon Tax Scenarios

How does a competitive industry respond to an emissions tax in the short run and the long run?  What if the industry is a monopoly?

In this post I bring together two standard pieces of microeconomic analysis: the effect of an emissions tax to address a pollution externality; and the behaviour of profit-maximising firms in different market structures.  It’s a straightforward exercise, but some of the results may be found a little surprising.

My method here is the exploration of numerical examples.  Therefore the only claim I make for the results is that they demonstrate possibilities: to infer any sort of generalities would be an obvious fallacy.  The numbers from which the examples begin have been chosen for ease of calculation: it is no accident that many of the output and other figures to which they lead are round numbers. 

I consider two industries, one with many firms in perfect competition, and one a monopoly.  The following assumptions are common to both:

  1. Emissions are uniformly mixed and very large in total (as is the case for CO2 and some other pollutants).  Hence the damage due to any one firm’s emissions is independent of its location, and its contribution to total emissions is too small to affect the marginal damage per unit of emission. 
  2. Marginal damage m from the pollutant (in terms of the local currency) is 1 per unit of emission.
  3. In the absence of an emissions tax, with firms taking no particular measures to limit their emissions, the emissions ratio e (the ratio of emissions to output) is assumed to be 4.
  4. The emissions tax is introduced with minimal notice.  Therefore all adjustment to the tax takes place after its introduction.
  5. Firms’ costs consist of four components: a) a fixed component; b) a component proportional to the square of output (in conjunction with (a) this yields the characteristic U-shaped average cost curve); c) a component reflecting, for any level of output, higher costs for a lower emissions ratio; d) a component for the cost of the tax, where applicable.  I take costs to include ‘normal’ profit: all references below to profit should be understood to mean economic or supernormal profit.

Outcomes are assessed from several points of view, of which perhaps the most important is net welfare, calculated as consumer surplus plus producer surplus minus damage due to emissions plus tax receipts.

An Industry in Perfect Competition

The industry is assumed to be a constant-cost industry, that is, the entry or exit of firms does not affect the cost functions of its firms.  Writing q for output volume and t for the tax rate, the cost function per period of each firm is:

c(q,e) = 10 + 0.1q^2 + 4q/e + qet

Writing P for the market price and Q for total industry output volume, the market demand function per period (in inverse form) is:

P = 11 - 0.01Q

Initial Position with No Emissions Tax

We assume that the industry is in equilibrium, with competition having driven the market price to the minimum point of the firms’ average cost (AC) curves so that their profit will be zero.  We have to find the output q of each firm at which average cost is minimised. Price P is then equal to average cost at that point.  Using the market demand function we can then find industry output Q, from which we can infer the number of firms (Q/q) and the total value of the industry’s sales (PQ).  We can also calculate the industry’s total variable costs excluding tax, which is of interest as an indicator of the total employment supported by the industry and its suppliers.

The industry’s emissions are simply Qe.  To calculate producer surplus we need the average variable cost (AVC) of one firm at the output determined above.  We then have everything needed to calculate the components of net welfare.
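The steps just described can be sketched numerically. The following is my own illustrative Python (not code from the post), using only the cost and demand functions given above; the minimum of AC is found analytically because d(AC)/dq = -10/q² + 0.1 has an obvious root:

```python
# Initial position, perfectly competitive industry: e = 4, t = 0.

def avg_cost(q, e=4.0, t=0.0):
    """Average cost c(q,e)/q for c(q,e) = 10 + 0.1q^2 + 4q/e + qet."""
    return 10.0 / q + 0.1 * q + 4.0 / e + e * t

# d(AC)/dq = -10/q^2 + 0.1 = 0  =>  q = sqrt(10/0.1) = 10
q = (10.0 / 0.1) ** 0.5
P = avg_cost(q)                    # minimum AC = 3 = market price
Q = (11.0 - P) / 0.01              # demand P = 11 - 0.01Q  =>  Q = 800
n = Q / q                          # 80 firms
sales = P * Q                      # 2400

avc = 0.1 * q + 4.0 / 4.0          # average variable cost at q = 10 (= 2)
cs = 0.5 * Q * (11.0 - P)          # consumer surplus (demand triangle)
ps = (P - avc) * Q                 # producer surplus: revenue less variable cost
damage = 1.0 * Q * 4.0             # marginal damage m = 1 times emissions Qe
net_welfare = cs + ps - damage     # = 800, as in Table 1
```

A numerical minimiser would do equally well here; the closed form is used only because the algebra is simple.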

With Emissions Tax: Short Run

I take the short run to be a period in which no firm has changed its emissions ratio or exited the industry.  Thus any reduction in emissions resulting from the tax must be due to a reduction in output.  In a previous post, I noted that a reduction in emissions in response to a tax could be due to the introduction of abatement technology, to a reduction in output, or to a combination of the two.  Here I consider the implications of the timing of such a combined response: a reduction in output can usually be almost immediate, but the introduction of abatement technology will normally take time. 

In the short run, having not fully adjusted to the tax, firms will not set their output to the minimum point of their average cost curves.  Instead, we must start from the more fundamental principle that they will set their output to the point at which their marginal revenue (the market price P) equals their marginal cost.  So from the cost function we obtain marginal cost in terms of firm output q and set this equal to P: this yields the inverse supply function for a firm.  Since the number of firms is known from the initial position, we can infer the market supply function relating P and Q.  From this in conjunction with the market demand function we can infer the values of P and Q, and hence q.  The remaining calculations are just as for the initial position.
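A sketch of the short-run calculation, again my own illustration: e is stuck at 4, t = 1, and each of the 80 incumbent firms sets marginal cost equal to price.

```python
# Short run: MC = 0.2q + 4/e + e*t = 0.2q + 5, so each firm supplies
# q = 5(P - 5).  Market supply n*5*(P - 5) equals demand 100*(11 - P).

n, e, t, m = 80, 4.0, 1.0, 1.0
P = (1100.0 + n * 5.0 * 5.0) / (100.0 + n * 5.0)   # = 3100/500 = 6.2
Q = 100.0 * (11.0 - P)                             # = 480
q = Q / n                                          # = 6 per firm

cost = 10.0 + 0.1 * q**2 + 4.0 * q / e + q * e * t
profit = P * q - cost                   # = -6.4: a short-run loss per firm
cs = 0.5 * Q * (11.0 - P)               # = 1152
ps = (P * q - (cost - 10.0)) * n        # revenue minus variable cost = 288
net_welfare = cs + ps - m * Q * e + t * Q * e      # = 1440, as in Table 1
```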

With Emissions Tax: Long Run

I use the term ‘long run’ in a special sense: a period in which all firms have adjusted to the tax as fully as possible by changing their emissions ratio or exiting the industry.  This is not quite the Marshallian long run since the fixed component of the firms’ cost functions is assumed unchanged from the initial position (I leave for another day the important case in which abatement of emissions involves investment in fixed capital). 

The method of calculation is as for the initial position except that the average cost curve now contains two unknowns: firm output q and the emissions ratio e.  So we must find the combination of values of those two variables which minimises average cost. Once we have found that minimum point, yielding q, e and P, the calculations proceed in the familiar way.
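Because AC(q, e) = 10/q + 0.1q + 4/e + et is additively separable in q and e, the joint minimisation splits into two one-variable problems. A sketch (my code, not the author's):

```python
# Long run, competitive industry, t = 1: minimise AC jointly over q and e.
# The q-part gives q = sqrt(10/0.1); the e-part gives e = sqrt(4/t).

t, m = 1.0, 1.0
q = (10.0 / 0.1) ** 0.5            # = 10, unchanged from the initial position
e = (4.0 / t) ** 0.5               # = 2: the tax halves the emissions ratio
P = 10.0 / q + 0.1 * q + 4.0 / e + e * t   # minimum AC = 6 = long-run price
Q = (11.0 - P) / 0.01              # = 500
n = Q / q                          # = 50 firms remain

emissions = Q * e                  # = 1000
cs = 0.5 * Q * (11.0 - P)          # = 1250
ps = (P * q - (0.1 * q**2 + 4.0 * q / e + q * e * t)) * n   # = 500
net_welfare = cs + ps - m * emissions + t * emissions       # = 1750
```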


Table 1 below sets out the results of the above calculations.  It can be seen that the industry’s emissions are reduced in the short run and further reduced in the long run.  Thus the primary purpose of the tax is achieved.  Also on the positive side, net welfare is increased in the short run and further increased in the long run.

| | Initial Position with No Tax | Emissions Tax at t = 1: Short Run | Emissions Tax at t = 1: Long Run |
|---|---|---|---|
| Output per firm q | 10 | 6 | 10 |
| Profit / – Loss per firm | 0 | -6 | 0 |
| Number of firms | 80 | 80 | 50 |
| Industry output Q | 800 | 480 | 500 |
| Price per unit of output P | 3 | 6.2 | 6 |
| Industry sales value | 2400 | 2976 | 3000 |
| Industry variable costs excluding tax | 1600 | 1600 | 1500 |
| Emissions ratio e | 4 | 4 | 2 |
| Industry emissions Qe | 3200 | 1920 | 1000 |
| Net welfare | 800 | 1440 | 1750 |

Table 1: Short and Long-Run Effects of an Emissions Tax on a Perfectly Competitive Industry

Output per firm in the long run is the same as in the initial position.  Thus the long run reduction in emissions is achieved via a combination of a lower emissions ratio and a reduction in the number of firms.

Although the tax reduces the volume of output and increases its price per unit, these may be regarded as necessary side-effects of the emissions reduction.  However, the fact that both these changes slightly overshoot in the short run may be considered to impose an unnecessary (albeit temporary) detriment on consumers.  The need for the losses incurred by firms in the short run is questionable: by providing an incentive for firms to exit the industry they hasten the arrival of long-run equilibrium with fewer firms and profits restored to nil, but perhaps that process could be facilitated by other means.  These features of the short-run position after introducing a tax with minimal notice suggest that there could be advantage in giving a longer period of notice allowing firms to adjust before the tax comes into effect.  However, the way in which firms would respond during such a notice period would be difficult to predict.  It would depend on, among other things, the degree of certainty with which firms believe that the tax will be introduced, and the judgments firms make as to how many of their competitors will exit the industry. 

It is important to note that the industry will not leave the short run one day and arrive at the long run the next.  Between the two is a transitional process in which some firms introduce abatement technology and others exit the industry.  Again, firms’ behaviour during this period is difficult to predict.  Perhaps some firms will make an early strategic decision to exit.  Alternatively, all firms may begin incurring the extra costs of abatement technology, and only as losses accumulate will some firms decide to leave the industry.

How does the tax affect employment in the long run?  To the extent that industry variable costs excluding tax are a good proxy for the employment supported by the industry, the direct effect is only a small reduction. Although many firms leave the industry, the effect on employment is largely offset by the extra costs per firm of reducing their emissions (staff made redundant by exiting firms may be re-employed by other firms).  Taking a broader view, however, the significant increase in industry sales value implies, given constant aggregate demand, a corresponding reduction in demand for other goods, adding to any reduction in employment.  Much therefore depends on how the government uses the tax receipts. If it uses them in ways which raise employment, either via government expenditure on goods and services, or via a cut in another tax, then the overall effect on employment could be neutral or even positive.    

A Monopoly

The single firm’s cost function is:

C(Q,e) = 800 + 0.01Q^2 + 4Q/e + Qet

Its inverse demand function is:

P = 13 - 0.01Q

Initial Position with No Emissions Tax

Here e = 4 and t = 0.  Using the demand function we can express profit Pr as a function of Q only and then find the level of Q that maximises profit.  Price P, sales value and profit follow immediately.  We can also calculate variable costs, emissions (Qe), and then the components of net welfare.
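The profit maximisation can be sketched as follows (my own illustration, using the cost and demand functions given above):

```python
# Monopoly initial position: e = 4, t = 0.
# Profit = (13 - 0.01Q)Q - (800 + 0.01Q^2 + Q) = 12Q - 0.02Q^2 - 800,
# maximised where dPr/dQ = 12 - 0.04Q = 0.

Q = 12.0 / 0.04                    # = 300
P = 13.0 - 0.01 * Q                # = 10
profit = 12.0 * Q - 0.02 * Q**2 - 800.0    # = 1000
cs = 0.5 * Q * (13.0 - P)          # consumer surplus = 450
ps = P * Q - (0.01 * Q**2 + Q)     # revenue minus variable cost = 1800
net_welfare = cs + ps - 1.0 * Q * 4.0      # less damage m*Q*e; = 1050
```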

With Emissions Tax at Rate Equal to Marginal Damage: Long Run

For this industry I omit the short-run analysis and proceed directly to the long run.  Here t = 1 while e, along with Q, is an unknown to be found.  So we find the levels of Q and e which maximise profit.  The only other difference from the calculations for the initial position is that we need both total variable costs (in order to calculate producer surplus) and variable costs excluding tax (as an indicator of employment). 
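For t = 1 the calculation can be sketched as follows (my code, not the author's). The choice of e separates from the choice of Q because the abatement-plus-tax cost per unit of output, 4/e + et, does not depend on Q:

```python
# Monopoly long run, t = 1: choose e to minimise 4/e + e*t, then set MR = MC.

t, m = 1.0, 1.0
e = (4.0 / t) ** 0.5                 # = 2: cost-minimising emissions ratio
k = 4.0 / e + e * t                  # per-unit abatement-plus-tax cost = 4
Q = (13.0 - k) / 0.04                # MR = MC: 13 - 0.02Q = 0.02Q + k; Q = 225
P = 13.0 - 0.01 * Q                  # = 10.75
profit = (13.0 - k) * Q - 0.02 * Q**2 - 800.0   # = 212.5 (213 in Table 2)
cs = 0.5 * Q * (13.0 - P)            # = 253.125
ps = P * Q - (0.01 * Q**2 + k * Q)   # revenue minus all variable costs
net_welfare = cs + ps - m * Q * e + t * Q * e   # = 1265.625 (1266 in Table 2)
```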

With Emissions Tax at a Rate Less Than Marginal Damage: Long Run

We take the case t = 0.7.  The method of calculation is exactly as for t = 1.


Table 2 shows the results of the above calculations.  As expected, the tax reduces emissions, partly by reducing output and partly by reducing the emissions ratio, and the higher tax rate reduces emissions by more. 

                               No Emissions   Tax at t = 1:   Tax at t = 0.7:
                                        Tax        Long Run         Long Run
Output Q                                300             225              249
Profit Pr                              1000             213              363
Price per unit of output P               10           10.75            10.51
Sales value PQ                         3000            2419             2617
Variable costs excluding tax           1200             956             1037
Emissions ratio e                         4               2             2.39
Emissions Qe                           1200             450              596
Net welfare                            1050            1266             1295
Table 2: Effects of an Emissions Tax at Different Rates on a Monopoly

The tax considerably reduces the firm’s profits, but they are still positive, and a reduction in the profits of a monopoly may be considered of little concern.  The small increase in price represents only a modest additional burden to consumers.  Since the reduction in sales value exceeds that in variable costs excluding tax, the net effect on employment may well be positive, even before consideration of how the government uses the tax receipts.

Net welfare is increased at either of the two tax rates, but is slightly higher when the rate is somewhat lower than the rate of marginal damage.  The reason for this is that, leaving aside the emissions damage, the initial position is sub-optimal relative to what could be achieved if output were set to equate price and marginal cost, rather than restricted so as to maximise the monopolist’s profit.  The theory of second best implies that a policy measure that would otherwise be optimal to address a market failure may not be optimal if another form of market failure is also present (1).  For a theoretical treatment of taxes to address externalities in the context of monopoly see Barnett (1980) (2).

A policy-maker selecting a tax rate in this situation might nevertheless want to look not only at net welfare but also at its separate components.  These are shown in Table 3 below.

                               No Emissions   Tax at t = 1:   Tax at t = 0.7:
                                        Tax        Long Run         Long Run
Consumer surplus                        450             253              310
Producer surplus                       1800            1013             1164
Damage due to emissions               -1200            -450             -596
Tax receipts                              0             450              417
Net welfare                            1050            1266             1295
Table 3: Effects of an Emissions Tax at Different Rates on a Monopoly, showing Components of Net Welfare

It can be seen that the extra net welfare at the lower tax rate reflects an increase in producer surplus and a smaller increase in consumer surplus, partly offset by greater damage due to emissions and a reduction in tax receipts.  The increase in producer surplus is exactly reflected in increased profits.  A policy-maker might reasonably conclude that, although it does not maximise net welfare, the tax rate equal to the rate of marginal damage is to be preferred.

The workings supporting the above results may be downloaded below (MS Word 2010 format).


  1. Wikipedia, Theory of the Second Best: https://en.wikipedia.org/wiki/Theory_of_the_second_best
  2. Barnett, A. H. (1980), “The Pigouvian Tax Rule under Monopoly”, American Economic Review 70(5), pp. 1037-41.
