Climate Change and a Proposed Coal Mine

A proposed new coal mine in Cumbria, England has prompted vehement arguments for and against.  The underlying problem is a flawed policy framework combined with insufficient international coordination.

To its supporters it’s a no-brainer.  The mine will produce coking coal, an essential input in the production of steel from iron ore.  And a modern economy needs steel for a myriad of purposes, not least in the construction of wind turbines to reduce dependence on fossil fuels.  What’s more, it will reduce Europe’s imports of coking coal from the US, saving more than 20,000 tonnes of CO2 equivalent per annum in emissions from shipping fuel.  It will also create jobs in a relatively poor region of the UK.

Its opponents are equally adamant.  To address climate change and meet the widely accepted target of net zero carbon emissions by 2050, the use of coking coal needs to be phased out because, like all coal, it emits CO2 when burnt.  Already, nearly 30% of world steel production uses no coking coal.  Allowing a new coal mine would undermine the UK’s credibility as host of the next UN Climate Change Conference (Glasgow, November 2021).

The circumstances have been widely reported in the UK, but for readers elsewhere here is a summary.  West Cumbria Mining Ltd (“WCM”) is a company formed to exploit coal reserves in the Cumbria region of north-west England.  Before it can develop and operate a mine it requires planning permission from Cumbria County Council (“the Council”).  Environmental campaigners asked the UK government to intervene, using reserve powers under which the Secretary of State for Housing, Communities and Local Government can “call in” a matter considered to be nationally significant and impose his own decision whether or not to grant permission.  The Secretary of State has so far declined to exercise that power in this case, and in October 2020 the Council resolved to grant permission.  However, the Council informed WCM on 9 February 2021 that it would reconsider its decision.  At the time of writing the outcome of the Council’s reconsideration is awaited and WCM is preparing to take legal action against it (1).

[Update 13 March 2021. The Secretary of State has now, after all, decided to “call in” the planning application by WCM. This means there will now be a public inquiry, which may take many months, with the Secretary of State rather than the Council making the final decision.]

In my view both sides overstate their case.  Let’s start with the saving of emissions from shipping fuel.  20,000 tonnes of CO2 equivalent may seem a lot, but it’s a tiny fraction of the emissions from use of the coal the mine would supply.  WCM estimate annual supply from the mine at 3 million tonnes.  Its use in steel production will yield almost 9 million tonnes of CO2 emissions (2).  That’s more than 400 times the saving on emissions from shipping fuel.  What we should be considering is the net increase in emissions if the mine goes ahead.  But that’s hard to estimate because it depends on the extent to which the supply from the mine adds to total world use of coking coal.
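The arithmetic here can be checked in a few lines of Python (a sketch, using the 80%-carbon assumption set out in Note 2):

```python
# CO2 from burning the mine's annual supply of coking coal.
# Assumes the coal is 80% carbon; each tonne of carbon yields 44/12 tonnes of CO2 (Note 2).
coal = 3_000_000              # tonnes per annum supplied by the mine (WCM estimate)
co2 = coal * 0.8 * 44 / 12    # tonnes of CO2 from use of the coal: 8.8 million
ratio = co2 / 20_000          # compare with the saving on shipping-fuel emissions: 440
```

The ratio of 440 is the basis for the statement that the emissions from use of the coal are more than 400 times the shipping-fuel saving.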

Economic analysis can help here.  Total world use of coking coal will depend on the market equilibrium point where its supply and demand curves intersect.  The extra coking coal from the Cumbria mine will shift the supply curve to the right in the standard price-quantity diagram (see Box 1).  How this affects the equilibrium quantity depends on the elasticity of supply (ES) and elasticity of demand (ED), the relevant formula being (see Box 1):

\frac{\text{Net increase in equilibrium quantity}}{\text{Quantitative shift in supply curve}}=\frac{-E_D}{E_S-E_D}

There are two ways in which we can try to make very rough estimates of the elasticities so as to estimate the value of the above fraction.  One is to apply what economists know to be true for the elasticities of supply and demand for most goods.  It is rare for demand to be either perfectly elastic (ED = minus infinity) or completely inelastic (ED = 0).  Elasticities of demand for broadly defined goods (not for example particular brands) are typically within the range -0.2 to -2.0 (3).  In the case of elasticity of supply, it is especially important to consider the time scale over which changes are being considered. If an industry is already producing at full capacity, it will take time to increase its output since extra equipment will need to be installed and additional workers recruited and trained.  For many goods, therefore, supply is inelastic in the short term (ES < 1) but more elastic in the longer term (ES > 1).  For our purposes, it is long-term elasticity which is relevant, since the mine is expected to have a long operating life. 

The other way to try to estimate the elasticities is via a literature search for empirical estimates of the elasticities of supply and demand for coking coal.  Unfortunately, there seem to have been few relevant studies, and some of those are quite old.  Truby (2012) cited a study by Ball & Loncar (1991) estimating elasticity of demand for Western Europe in the range -0.3 to -0.5, and also a study by Graham, Thorpe & Hogan (1999) estimating elasticity of demand at -0.3 (4).  Lorenczik & Panke (2015) estimated elasticity of demand in the international market at between -0.3 and -0.5 (5).  For elasticity of supply, Lawrence & Nehring (2015) estimated 0.30 for Australia and 0.73 for the US in 2013 (6): the specification of a particular year suggests that these estimates are of short-term elasticity. 

Taking all the above into account, it might be reasonable to estimate elasticity of demand at -0.4 and elasticity of supply at 2.0.  Putting these values into the above formula yields a fraction of 0.17.  That would imply that the extra 3M tonnes per annum from the Cumbria mine would increase world use of coking coal by 510,000 tonnes.  The net increase in CO2 emissions, allowing for the savings on shipping from the US, would be 1,476,000 tonnes annually (7).  I offer that as one plausible scenario, not a prediction.  The more fundamental point is that the fraction is certainly not going to be zero.  That would require either zero elasticity of demand (completely inelastic demand) or infinite elasticity of supply (perfectly elastic supply).  Neither of those is remotely plausible.  Even if the fraction were just 0.01, an implausibly low figure, world use of coking coal would increase by 30,000 tonnes per annum, increasing net emissions by 68,000 tonnes (8).
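These figures can be reproduced with a short calculation (a sketch; the elasticity values are the rough estimates above, and the fraction is rounded to 0.17 as in the text):

```python
# Share of a rightward supply-curve shift that becomes a net increase
# in equilibrium quantity, per the formula in Box 1.
E_D = -0.4                               # long-term elasticity of demand (rough estimate)
E_S = 2.0                                # long-term elasticity of supply (rough estimate)
fraction = -E_D / (E_S - E_D)            # = 1/6, rounded in the text to 0.17

extra_coal = 0.17 * 3_000_000            # net increase in world use: 510,000 tonnes p.a.
extra_co2 = extra_coal * 0.8 * 44 / 12   # emissions from the extra coal (Note 2)
net_increase = extra_co2 - 20_000        # less the shipping-fuel saving: 1,476,000 tonnes
```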

Turning to the opponents’ case, it would certainly help towards the target of net zero carbon emissions by 2050 if the use of coking coal in steel production could be phased out.  Whether that is feasible at reasonable cost, however, is far from certain.  The main reason why 30% of current production uses no coking coal is that its input material is not iron ore but recycled scrap steel which can be processed into new steel in an electric arc furnace (9).  The use of recycled steel can probably be increased, but in a growing economy demand for new steel is always likely to exceed the supply of recycled scrap. 

The main hope for ending the use of coking coal is therefore the development of new technologies for producing steel from iron ore.  One promising approach is to use hydrogen to produce direct reduced iron (DRI, also known as sponge iron) which can then, like scrap steel, be processed into new steel in an electric arc furnace (10). If the hydrogen is “green hydrogen”, produced by electrolysis of water using electricity from a renewable source, and if the electricity powering the furnace is also from such a source, then the whole process is emissions-free.  McKinsey reports that all main European steelmakers are currently building or testing hydrogen-based production processes (11).  Development appears to be most advanced in Sweden, where steelmaker SSAB has a joint venture with iron ore producer LKAB and energy company Vattenfall to produce steel using a technology known as HYBRIT (12), and the H2GS (H2 Green Steel) consortium plans a large steel plant using a similar technology (13). 

Another approach to steel production without coking coal involves reducing iron ore to iron by means of electrolysis.  Again, if all the electricity is from a renewable source then the whole process will be emissions-free.  Steelmaker ArcelorMittal is leading a project which has proved the potential of the technology (14), and Boston Metal is offering to tailor what it calls Molten Oxide Electrolysis (MOE) for customers producing steel and other metals (15).

However, phasing out the use of coking coal is not the only way in which carbon emissions from steel production might be reduced to zero or very low levels.  The alternative is to continue using coking coal but with carbon capture, utilisation and storage (CCUS), and around the world there are a number of CCUS initiatives relating to the steel industry. Al Reyadah, a joint venture between Abu Dhabi National Oil Company and clean energy company Masdar, captures CO2 from an Emirates Steel plant and injects it into nearby oil fields for enhanced oil recovery (16).  Steelmaker Thyssenkrupp has a project called Carbon2Chem which uses CO2 from steel production as a raw material in the production of fuels and fertilisers (17).  Another possibility, although apparently only at the proposal stage, is the retrofitting of conventional steel plants to permit a process known as calcium-looping which uses CO2 to react with limestone and produce lime fertiliser: Tian et al (2018) make the remarkable claim that this could allow decarbonised steel production at relatively low cost as early as 2030 (18).

Which of these various technologies will prove successful is difficult to predict.  It is noteworthy that some large producers, including Thyssenkrupp and Tata Steel, are hedging their bets by exploring both hydrogen-based and CCUS approaches (19).  This uncertainty in turn creates a problem for producers of coking coal.  If technologies based on hydrogen or electrolysis come to dominate the steel industry, then demand for coking coal will eventually fall to zero.  The speed of the fall will partly depend on how climate change policies and other considerations influence firms’ decisions on whether to continue operating existing conventional steel plants for their full working life.  It seems possible that demand for coking coal will fall gradually over several decades, with lower-cost mines continuing to find buyers as others cease production. If however CCUS approaches become dominant, then the outlook for coking coal producers will be much brighter.  It’s also possible that more than one technology will be successful, resulting in some ongoing demand for coking coal.  I conclude that the opponents’ main argument against the mine – that the use of coking coal needs to be phased out to address climate change – is not proven.

A company like WCM which chooses to make a substantial investment in a new coking coal mine is taking a big risk.  To make a worthwhile return on its investment it will need to be able to sell its coal at a good price for many years, but if demand for coking coal rapidly declines due to technological change in the steel industry, then it will not be able to do so.  So investors have to make a judgment as to whether they can accept their perceived risk-return pattern.  The key issue then is the policy context within which they make that judgment.

It is appropriate that the mine should require approval by the Council in respect of what might be termed “normal planning matters” such as effects on the local economy, possible disturbance to residents, impacts on the local environment, and restoration and after-care when the mine reaches the end of its operational life.  Any approval would very likely be conditional on measures to limit local impacts.  It is also appropriate that the Council should have regard to climate change policy in making decisions on its own activities, such as the heating of its schools and offices.  What is more dubious is a local government body accountable primarily to its local electors being left to take a decision which has national and international implications because of the extra carbon emissions the coal produced in the mine would generate.  It is a flawed policy framework which places this burden (or opportunity, depending on one’s point of view) on the Council.

A better way to ensure that the decision whether to proceed with the mine has due regard to its climate change implications would be to ensure that WCM will bear the full social cost of its coal production. In economic jargon, there is a market failure in the form of an externality: the emissions from use of its coal would have a cost to society which it would not bear.  The standard economic prescription to correct such a market failure is to internalize the externality.  That could be achieved by pricing CO2 emissions via either a carbon tax or an emissions trading system.  The direct effect on WCM would be small as the emissions from the mine itself would not be large. Much more important would be the indirect effect arising from making steel producers bear the full social cost of their operations.  Unless their steel was produced in an emissions-free way, the carbon price would add to their costs and lower the price they could afford to pay for their inputs including coking coal.  Thus the potential returns from the mine would be reduced, and the risk of loss would be increased.

If the mine is made to bear its full social cost in this way, so that its private costs and benefits are aligned with its costs and benefits to society, then a commercial decision by WCM as to whether investment in the mine would be worthwhile will also reach the correct decision from society’s point of view.  In that case there would be no need for the Council or the UK government to become involved in assessing the climate change implications of the mine.  With the market failure corrected, the matter could be left to the market (subject to planning approval in respect of genuinely local considerations). 

Although the EU and the UK have emissions trading systems (20), this does not mean that the mine will bear all of its social costs.  One reason is that WCM plans to export coal to the EU and beyond (21).  Countries just beyond the EU with sizeable steel industries include Turkey (34Mt), Iran (26Mt) and Ukraine (21Mt) (22).  Of these, Ukraine is considering an emissions trading scheme, but prior to legislating on such a scheme has just begun a three-year period in which large industrial installations are required to collect data on emissions (23).  Turkey is reported to be considering an emissions trading scheme, but there have been no recent developments.  There appear to be no significant moves towards an emissions trading scheme (or carbon tax) in Iran.  Thus there is a significant chance that, for the next few years and perhaps beyond, some of WCM’s coal would be exported to countries with no carbon price at all.

A second reason is that, although steel production is within the scope of the EU emissions trading system, it is likely to continue to receive some of its carbon allowances for free until 2030 at least (24).  The EU’s understandable concern is that, since many steel products can readily be traded internationally, there is a risk that a stricter emissions regime could lead producers to transfer their operations to countries with laxer policies.  Nevertheless, free allocation of allowances means that the steel industry, and the mines which supply its coking coal, are not bearing all their social costs. 

A third reason is that it is questionable whether the market price of carbon allowances within the EU trading system is and will be high enough.  The EU sets an annual cap on the number of allowances, the number being slightly reduced each year, and the caps have a major influence on the market price of allowances.  Arguably they should be lower so that the market price will be higher.  Admittedly the price has risen in recent years, from very low levels during 2012-2018 to around €20 in 2019 and almost €40 in early 2021 (25).  Whether the price will remain at around that level remains to be seen.  The High Level Commission on Carbon Prices (2017) concluded that, to achieve the 2015 Paris Agreement’s aim of limiting the rise in global average temperature to well below 2°C above pre-industrial levels, the carbon price should be at least US$ 40-80 by 2020 (26).  The current €40 (equivalent to $48) is towards the lower end of that range.

Each of those reasons underlines the need for international coordination on climate change and therefore the importance of the coming Glasgow Conference.  A successful conference could put pressure on countries which have not established a carbon price to move towards setting one, or accelerate existing initiatives.  An expectation that carbon pricing will become more widespread would weaken the argument that free allowances are needed to avoid the risk of producers relocating abroad.  And a successful conference could agree tighter national caps on emissions leading to higher market prices for emissions allowances.

Notes and references

  1. West Cumbria Mining Statement 5/4/2021
  2. Coking coal is used in steel production both to reduce iron ore to iron and as fuel, but both processes generate CO2.  The atomic mass of carbon is 12 and that of oxygen 16, so 1 tonne of carbon yields (12 + (2×16))/12 = 44/12 tonnes CO2.  If the coal is 80% carbon, then 3M tonnes coal yields 3M x 0.8 x 44/12 = 8.8M tonnes CO2.
  3. Wikipedia – Price elasticity of demand – Selected price elasticities
  4. Truby J (2012) Strategic behaviour in international metallurgical coal markets  EWI Working Paper No. 12/12  p 13
  5. Lorenczik S & Panke T (2015) Assessing market structures in resource markets – An empirical analysis of the market for metallurgical coal using various equilibrium models  EWI Working Paper No. 15/02  p 14
  6. Lawrence K & Nehring M (2015) Market structure differences impacting Australian iron ore and metallurgical coal industries  Minerals Vol 5 p 483
  7. 510,000 tonnes x 0.8 x 44/12 (as per Note 2 above) = 1,496,000 tonnes, less 20,000 tonnes shipping fuel.
  8. 30,000 tonnes x 0.8 x 44/12 (as per Note 2 above) = 88,000 tonnes, less 20,000 tonnes shipping fuel.
  9. World Steel Association – Raw materials
  10. Wikipedia – Direct reduced iron
  11. Hoffmann C, Van Hoey M & Zeumer B (3/6/2020) Decarbonization challenge for steel  McKinsey & Company  p 5
  12. SSAB
  13. H2GS
  14. Siderwin
  15. Boston Metal
  16. Carbon Sequestration Leadership Forum
  17. Thyssenkrupp
  18. Tian S, Jiang J, Zhang Z & Manovic V (2018) Inherent potential of steelmaking to contribute to decarbonisation targets via industrial carbon capture and storage  Nature Communications 9, Article No. 4422/2018
  19. Thyssenkrupp; Tata Steel
  20. The UK used to belong to the EU Emissions Trading System, but following Brexit it now has its own system: Wikipedia
  21. West Cumbria Mining Ltd – How will materials be transported  The statement about “EU and beyond” is at the bottom of the factsheet.
  22. World Steel Association – Steel statistical yearbook 2020 concise version  Table 1 pp 1-2
  23. World Bank Carbon Pricing Dashboard (Use the dropdown box under “Information on carbon pricing initiatives selected” to look for details re individual countries.)
  24. Metal Bulletin
  25. Ember – Daily EU ETS carbon market price (euros)
  26. Carbon Pricing Leadership Coalition – Report of the High Level Commission on Carbon Prices  p 3


Dynamic Optimisation: A Fully Worked Example

In a previous post, I referred to the importance in environmental and natural resource economics of the technique of dynamic optimisation, also known as optimal control.  However, the technique is difficult, and worked examples in textbooks or on the web often seem to pass over key points.  Here I present my own example, which I describe as fully worked because it shows every step from the largely verbal statement of the problem to the optimal paths of the key variables and the maximum value of the objective functional, identifying some options and pitfalls along the way.  It is intended for readers familiar with elementary algebra, calculus and static optimisation who have at least begun to study dynamic optimisation.

The Problem

Capital K, is the only factor of production and is not subject to depreciation. The initial capital stock is 100,.  Output is at a rate 0.5K,, and may be used as consumption C, or investment I,, the latter being added to K,.  The instantaneous utility function U_t is \ln(C_t).  We are required to maximise social welfare W, from time t = 0, to 10,, where social welfare is defined as the integral of instantaneous utility subject to a continuous discount rate of 10\%, per time period.

A Note on Notation

A widely used convention is that the subscript t,, as in C_t, indicates discrete time, and that a variable in continuous time should be written as in C(t), .  I find however that it saves a little keying time, and results in less cluttered formulae, to use the subscript approach for continuous time, and sometimes to omit the t, altogether when it is clear from the context.  More conventionally, I use the notation \dot C, to indicate a time-derivative, and \ddot C, for a second time-derivative.

I use LaTeX to display mathematical symbols and formulae. However, using LaTeX within a WordPress blog is not entirely straightforward, one problem being to obtain a satisfactory vertical alignment of symbols within text paragraphs. The commas which follow some symbols are a workaround which corrects vertical alignment in many (though not all) cases and seem to me preferable to the alternative of displaying symbols – like K for example – with their base lower than that of the surrounding text.

Writing the Problem in Mathematical Formulae

Our problem statement above contains the symbols K, C, I, U, W, t.  The first question we should consider is whether we need all these for a precise mathematical formulation.  It is clear that we can dispense with U, and relate W, directly to C,, writing the objective functional as:

\textrm{Maximise }W=\int_0^{10}(\ln(C_t))e^{-0.1t}dt\qquad(1)

We need K, which is clearly the state variable, but what is the control variable?  Since C + I = 0.5K,, either of C, or I, determines the other.  Nothing in the problem statement indicates that one is a choice variable and the other a residual.  Either could be the control variable, but we do have to choose (because the method requires maximisation of the Hamiltonian or Lagrangian with respect to the control variable).  Let us choose C, as the control variable (but Alternative 1 below will show that choosing I, leads to the same results).  We therefore write the equation of motion as:

\dot K=0.5K-C\qquad(2)

We also have the boundary conditions:

K_0=100\ \textrm{and }K_{10}\ \textrm{free}\qquad(3)

Does that complete the formulation of the problem?  No!

Pitfall 1

If we rely on the formulation above, there is nothing to prevent negative consumption, with investment \dot K, exceeding output and W, undefined (because the log of a negative quantity is undefined).  There is also nothing to prevent negative investment.  Thus the above formulation allows a time path in which capital is initially accumulated, but towards the end of the time period is run down to zero, enabling consumption to exceed output.  That could be a desirable scenario if the capital is in the form of a good which can also be consumed.  More typically, however, capital cannot be consumed and therefore consumption cannot exceed output, and the above formulation will therefore lead to erroneous results by permitting more consumption than is feasible.  Indeed, there is nothing in the formulation to rule out the combination of infinite consumption and infinite negative investment.

We therefore add two constraints and, to prepare for writing the required Lagrangian function, rewrite each as a quantity to be less than or equal to a constant, in these cases zero:

C_t \geq 0\ \forall t \in [0,10]\ \textrm{and so } -C_t \leq 0\qquad(4)

C_t \leq 0.5K_t\ \forall t \in [0,10]\ \textrm{and so } C_t-0.5K_t \leq0\qquad(5)

Although we also require that capital should not be negative, we need not specify this as a further constraint since it is implied by the combination of K_0=100 and \dot K\geq 0,, the latter following from the equation of motion together with constraint (5).  Indeed, these imply the stronger condition K_{10} \geq 100.  The combination of (1) to (5) completes the mathematical formulation of the problem.

The Value of W for Two Naïve Solutions

Before applying the method of optimal control, let us consider a couple of simple and feasible time paths for consumption and calculate the implied values of W,.  The results will provide a benchmark against which we can compare our final result.  Suppose first that there is no investment and all output is consumed.  Then capital is always 100, and consumption is always 0.5(100) = 50,.  Hence:

W=\int_0^{10}(\ln 50)e^{-0.1t}dt=(\ln 50)\left[\dfrac{e^{-0.1t}}{-0.1}\right]_0^{10}=10(1-e^{-1})\ln 50\approx 24.73

Now suppose that output is always divided equally between consumption and investment.  Before we can calculate W, we need to find the time path of capital by solving the differential equation:

\dot K =0.5(0.5K)=0.25K\qquad(6)

Making the standard substitution K, = e^{bt} so that \dot K,= be^{bt} we have:

be^{bt}=0.25e^{bt}\ \textrm{and so } b=0.25\qquad(7)

Hence for some constant c,:

K_t=ce^{0.25t}\qquad(8)
Since K_0 = 100 we can infer that c=100, and so:

K_t=100e^{0.25t}\ \textrm{and so } C_t=0.5(0.5K_t)=25e^{0.25t}\qquad(9)
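As a numerical sanity check, a few lines of Python (a sketch using a simple Euler scheme) confirm that this path does solve the differential equation (6):

```python
from math import exp

# Integrate Kdot = 0.25*K forward from K(0) = 100 with small Euler steps,
# then compare with the closed-form solution K_t = 100*exp(0.25*t) at t = 10.
K, dt = 100.0, 0.001
for _ in range(int(10 / dt)):
    K += 0.25 * K * dt
closed_form = 100 * exp(0.25 * 10)   # ≈ 1218.2
# the Euler value and the closed form agree to better than 0.1%
```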

Hence:

W=\int_0^{10}(\ln(25e^{0.25t}))e^{-0.1t}dt=\int_0^{10}(\ln 25+0.25t)e^{-0.1t}dt=10(1-e^{-1})\ln 25+0.25(100-200e^{-1})\approx 26.95\qquad(10)

As we might expect, allocating half of output to investment, allowing capital to accumulate and increase output as time goes on, yields a higher W, than simply consuming all output.  But there is no reason to expect that this value of W, is the maximum.
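Both welfare values have closed forms, so they are easy to verify in Python (a sketch; the two integrals are \int_0^{10}e^{-0.1t}dt=10(1-e^{-1}) and \int_0^{10}te^{-0.1t}dt=100-200e^{-1}):

```python
from math import exp, log

# W = integral from 0 to 10 of ln(C_t) * exp(-0.1*t) dt for the two naive paths.
disc = 10 * (1 - exp(-1))        # integral of exp(-0.1*t) over [0, 10]

# Path 1: consume all output, so C_t = 50 throughout
W1 = log(50) * disc              # ≈ 24.73

# Path 2: invest half of output, so C_t = 25*exp(0.25*t)
# ln(C_t) = ln(25) + 0.25*t, and the integral of t*exp(-0.1*t) is 100 - 200/e
W2 = log(25) * disc + 0.25 * (100 - 200 * exp(-1))   # ≈ 26.95
```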

Necessary Conditions for a Solution

From (1) and (2) we obtain the Hamiltonian, introducing a costate variable \lambda_t:

H=(\ln C)e^{-0.1t}+\lambda (0.5K-C)\qquad(11)

This is a present value Hamiltonian because it retains the discount factor in the objective functional and so converts \ln C_t at any time to its present value, that is, its value at time 0,.  An alternative approach will be considered below.  Because we have two inequality constraints, we must extend the Hamiltonian to form a Lagrangian, introducing two Lagrange multipliers \mu_t and \nu_t:

\mathcal{L}=(\ln C)e^{-0.1t}+\lambda (0.5K-C)+\mu C+\nu (0.5K-C)\;(12)

The expressions in brackets after the Lagrange multipliers are from the inequality constraints (4) and (5) with signs changed. The general rule here is that given a constraint g, \leq k and writing \theta_t for the associated multiplier, the term to be included in the Lagrangian is \theta_t(k-g).

Applying the maximum principle, we have to maximise the Lagrangian with respect to the control variable C_t at all times.  In this case, the Lagrangian is differentiable with respect to C_t, so we can try to use calculus to find a maximum.  But we also need to consider whether there might be a corner solution, that is, a solution at either of the limits of the constrained range of C,, which are 0, and 0.5K,.  We can rule out the possibility of a maximum at C, = 0,, since \ln 0, equals minus infinity.  But there is no obvious reason why there should not be a maximum at C, = 0.5K for at least some values of t,, so we should keep this possibility in mind.  Setting the derivative with respect to C, of the Lagrangian equal to zero we have:

\dfrac{\partial \mathcal{L}}{\partial C}=\dfrac{e^{-0.1t}}{C}-\lambda+\mu-\nu=0\qquad(13)

The maximum principle also requires the conditions:

\dot K=\dfrac{\partial \mathcal{L}}{\partial \lambda}=0.5K-C\qquad(14)

\dot {\lambda}=-\dfrac{\partial \mathcal{L}}{\partial K}=-0.5\lambda-0.5\nu\qquad(15)

Although the effect of (14) is merely to repeat the equation of motion (2) it is standard practice to write it out at this point in the working.  We also require the Kuhn-Tucker conditions in respect of the two inequality constraints, conditions (17) being known as the complementary slackness conditions.

\mu \geq 0\ \textrm{and } \nu \geq 0\ \forall t \in [0,10]\qquad(16)

\mu C=0\ \textrm{and }\nu (0.5K-C)=0\ \forall t \in [0,10]\qquad(17)

Finally, there is the transversality condition.  With a fixed terminal time, but terminal capital free subject to the implied condition K_{10} \geq 100, we have the situation known as a truncated vertical terminal line.  Therefore we provisionally adopt the condition:

\lambda_{10}=0\qquad(18)
However, we will have to check that the resulting solution is consistent with the condition K_{10} \geq 100 (and if not we must recalculate the solution with K_{10} fixed at 100,).  (12) to (18), with the provisos noted, constitute the necessary conditions for a maximum.

Sufficiency of the Necessary Conditions

We will test whether the Mangasarian conditions are satisfied.  The basic conditions are:

(A) The integrand of the objective functional, (\ln C)e^{-0.1t},, must be differentiable and concave in the control and state variables, C, and K,, jointly.

(B) The equation of motion formula, 0.5K-C,, must be differentiable and concave in C, and K, jointly.

(C) If the equation of motion formula, 0.5K-C,, is non-linear in either C, or K,, then in the optimal solution we must have \lambda_t \geq 0 for all t,.

Considering these in turn:

Condition (A) is satisfied since, applying a calculus test for concavity:

\dfrac{\partial((\ln C)e^{-0.1t})}{\partial C}=\dfrac{e^{-0.1t}}{C}\ \textrm{ so  }\dfrac{\partial^2((\ln C)e^{-0.1t})}{\partial C^2}=-\dfrac{e^{-0.1t}}{C^2} \leq 0\ \forall t\quad(19)

We need not consider K, here since it does not occur in the integrand.

Condition (B) is satisfied since the formula 0.5K-C, is linear in both C, and K, and therefore concave, linearity being sufficient for concavity (there is no requirement for strict concavity).

Condition (C) is satisfied since, again, the formula 0.5K-C, is linear in both C, and K,.

For our problem, a further condition is needed for each of the inequality constraints, the general rule being that if a constraint is represented in the Lagrangian by the expression \theta (k-g) where k, is a constant, the required condition is that g, be jointly convex in the control and state variables:

(D) -C, must be convex in C, and K, jointly.

(E) C-0.5K, must be convex in C, and K, jointly.

These conditions are satisfied since the functions are linear (again there is no requirement for strict convexity). 

Thus the Mangasarian conditions are satisfied, so we can conclude that the necessary conditions (12) to (18) are also sufficient for a maximum (and need not consider the more complex Arrow conditions).

Inferences from the Necessary Conditions

Using a common approach to simplification, we differentiate (13) with respect to time and then use (15) to substitute for \dot{\lambda},:

\dfrac{-0.1e^{-0.1t}C-\dot Ce^{-0.1t}}{C^2}-\dot{\lambda}+\dot{\mu}-\dot{\nu}=0\qquad(20)

\dfrac{-0.1e^{-0.1t}C-\dot Ce^{-0.1t}}{C^2}+0.5\lambda+0.5\nu+\dot{\mu}-\dot{\nu}=0\qquad(21)

Using (13) again we can eliminate \lambda, and \nu, (but not \dot{\nu},):

\dfrac{-0.1e^{-0.1t}C-\dot C e^{-0.1t}}{C^2}+\dfrac{0.5e^{-0.1t}}{C}+0.5\mu+\dot{\mu}-\dot{\nu}=0\qquad(22)

-0.1e^{-0.1t}C-\dot Ce^{-0.1t}+0.5e^{-0.1t}C+(0.5\mu+\dot{\mu}-\dot{\nu})C^2=0\,(23)

Collecting the terms in C, and using the complementary slackness condition (17) \;\mu C=0, (which, since C, can never be zero as \ln 0, equals minus infinity, implies \mu =0, and therefore \dot{\mu}= 0, for all t,):

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-\dot{\nu}C^2=0\qquad(24)

Using the equation of motion (2) to substitute for C:

0.4e^{-0.1t}(0.5K-\dot K)-(0.5\dot K-\ddot K)e^{-0.1t}-\dot{\nu}(0.5K-\dot K)^2=0\quad(25)

Collecting terms in e^{-0.1t}, we have the differential equation:

e^{-0.1t}(\ddot K-0.9\dot K+0.2K)-\dot{\nu}((0.5K)^2-K\dot K+(\dot K)^2)=0\qquad(26)
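The algebra leading from (24) to (26) is easy to get wrong; it can be checked symbolically with sympy (a sketch, treating K and \nu as arbitrary functions of t):

```python
import sympy as sp

t = sp.symbols('t')
K = sp.Function('K')(t)
nu = sp.Function('nu')(t)
disc = sp.exp(-t / 10)  # the discount factor e^{-0.1t}

# Equation of motion (2): C = 0.5K - K'
C = K / 2 - K.diff(t)

# Left-hand sides of (24) and (26); their difference should vanish identically
eq24 = sp.Rational(2, 5) * disc * C - C.diff(t) * disc - nu.diff(t) * C**2
eq26 = disc * (K.diff(t, 2) - sp.Rational(9, 10) * K.diff(t) + K / 5) \
       - nu.diff(t) * ((K / 2)**2 - K * K.diff(t) + K.diff(t)**2)
print(sp.expand(eq24 - eq26))  # 0
```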

Before proceeding we will explore two alternative approaches.

Alternative 1: Investment as the Control Variable

Suppose we take investment I, rather than consumption C, to be the control variable.  The utility function is still \ln C, which we will now have to write as \ln(0.5K-I), so the objective functional will be:

\textrm{Maximise }W=\int_0^{10}(\ln(0.5K-I))e^{-0.1t}dt\qquad(A1)

The equation of motion will be simply:

\dot K=I \qquad(A2)

This is not tautologous since it implies that investment is the only cause of change in capital, eg there is no depreciation.  The inequality constraints become:

I-0.5K\leq 0\;\; \textrm{and }\;-I\leq 0\qquad(A3)

Hence the Lagrangian is:

\mathcal{L}=(\ln(0.5K-I))e^{-0.1t}+\lambda I+ \mu (0.5K-I)+\nu I\qquad(A4)

From the Lagrangian we derive the conditions:

\dfrac{\partial\mathcal{L}}{\partial I}=\dfrac{-e^{-0.1t}}{0.5K-I}+\lambda -\mu +\nu=0\qquad(A5)

\dot K=\dfrac{\partial\mathcal{L}}{\partial\lambda}=I\qquad(A6)

\dot {\lambda}=-\dfrac{\partial\mathcal{L}}{\partial K}=\dfrac{-0.5e^{-0.1t}}{0.5K-I}-0.5\mu\qquad(A7)

We also have the complementary slackness conditions:

\mu(0.5K-I)=0\;\;\textrm{and  }\nu I=0\qquad(A8)

Differentiating (A5) with respect to time, using (A7) to substitute for \dot{\lambda}, and substituting \dot K for I:

\dfrac{0.1e^{-0.1t}(0.5K-\dot K)+(0.5\dot K-\ddot K)e^{-0.1t}}{(0.5K-\dot K)^2}-\dfrac{0.5e^{-0.1t}}{0.5K-\dot K}-0.5\mu -\dot{\mu}+\dot{\nu}=0\quad(A9)

e^{-0.1t}(-\ddot K+0.4\dot K+0.05K)-0.5e^{-0.1t}(0.5K-\dot K)+(-0.5\mu - \dot{\mu}+\dot{\nu})(0.5K-\dot K)^2=0\quad(A10)

Collecting terms in e^{-0.1t} and using the first complementary slackness condition to eliminate \mu and \dot{\mu}, we have:

e^{-0.1t}(-\ddot K+0.9\dot K-0.2K)+\dot{\nu}((0.5K)^2-K\dot K+(\dot K)^2)=0\quad(A11)

It can be seen that this is equation (26) above with signs reversed, so thereafter we can proceed as in the main line of reasoning.

Alternative 2: the Current Value Hamiltonian

When the objective functional contains a discount factor, an alternative method is to use the current value Hamiltonian.  Where there are inequality constraints, this leads to a current value Lagrangian, which for our problem can be written:

\mathcal{L}_C=\ln C+\rho (0.5K-C)+\sigma C+ \tau (0.5K-C)\qquad(A12)

where the original multipliers \lambda ,\mu ,\nu are equal respectively to the current value multipliers \rho ,\sigma ,\tau each multiplied by e^{-0.1t}.  In the necessary conditions, the equivalent of (13) is slightly simplified by the absence of the discount factor:

\dfrac{\partial\mathcal{L}_C}{\partial C}=\dfrac{1}{C}-\rho +\sigma -\tau =0\qquad(A13)

On the other hand the equivalent of (15) requires an extra term 0.1\rho (the discount rate being 0.1):

\dot{\rho}=-\dfrac{\partial\mathcal{L}_C}{\partial K}+0.1\rho =-0.5\rho-0.5\tau+ 0.1\rho =-0.4\rho-0.5\tau\qquad(A14)

The difference between the coefficients in the terms 0.5\lambda in (15) and 0.4\rho in (A14) may seem trivial, but it leads to additional complexity later in the reasoning.  The equivalent of (24), which I re-write here for ease of reference:

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-\dot{\nu}C^2=0

is found to be:

0.4e^{-0.1t}C-\dot Ce^{-0.1t}-(\dot{\tau}-0.1\tau)e^{-0.1t}C^2=0\quad(A15)

The more complex coefficient of C^2 in turn makes it slightly more complicated to solve what below I call Case 2.  This is not to argue against the current value approach, still less to suggest that it represents a pitfall.  But whether on balance it simplifies matters, as is often suggested, seems to depend on the type of problem.

Solving the Differential Equation

Our differential equation (26) looks rather intractable, but we can simplify matters by considering separately the two cases \nu = 0 and \nu \neq 0.  To be more precise, we consider:

Case 1: \nu = 0 over some time interval.

Case 2: \nu \neq 0 over some time interval.

Since Case 1 implies that \nu is constant over the relevant interval, we can infer that \dot{\nu}= 0 over that period.  Equation (26) therefore simplifies to:

\ddot K-0.9\dot K+0.2K=0\qquad(27)

The standard method for this type of differential equation is to make the substitution K=e^{xt}, implying \dot K=xe^{xt} and \ddot K=x^2e^{xt}.  After dividing through by e^{xt} we are left with the equation:

x^2-0.9x+0.2=0\qquad(28)
By factorisation or by the quadratic equation formula, this is neatly solved by x=0.4, \textrm{ or }0.5.  Hence the solution to the differential equation (27) is:

K=c_1e^{0.4t}+c_2e^{0.5t}\qquad(29)
where c_1,c_2 are constants to be found (generally a second order differential equation requires two constants of integration).  Differentiating (29) with respect to time we can infer:

\dot K=0.4c_1e^{0.4t}+0.5c_2e^{0.5t}\qquad(30)

C=0.5K-\dot K=0.1c_1e^{0.4t}\qquad(31)
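The general solution (29) and the resulting consumption path (31) can be verified symbolically (again a sympy sketch):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
K = c1 * sp.exp(2 * t / 5) + c2 * sp.exp(t / 2)  # candidate solution (29)

# Residual of the differential equation (27): K'' - 0.9K' + 0.2K
residual = K.diff(t, 2) - sp.Rational(9, 10) * K.diff(t) + K / 5
print(sp.expand(residual))  # 0

# Consumption (31): C = 0.5K - K' collapses to 0.1 c1 e^{0.4t}
C = K / 2 - K.diff(t)
print(sp.expand(C - c1 * sp.exp(2 * t / 5) / 10))  # 0
```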

Pitfall 2

Having obtained equations (29) to (31) it is tempting to think that our work is almost complete.  Putting t=0 in (29) we have:

c_1+c_2=100\quad(P1)
Since investment right at the end of the time period can do nothing to increase consumption within the time period, we can infer that \dot K=0 at t=10.  Hence, putting t=10 in (30):

0.4c_1e^{4}+0.5c_2e^{5}=0\quad(P2)
-0.4c_1=0.5e(100-c_1) \quad(P3)

c_1(0.5e-0.4)=50e\quad(P4)
c_1=\dfrac{50e}{0.5e-0.4}=141.7 \textrm{  and }c_2=100-c_1=-41.7\quad(P5)

Substituting into (31):

C=0.1c_1e^{0.4t}=14.17e^{0.4t}\quad(P6)

W=\int_0^{10}(\ln (14.17e^{0.4t}))e^{-0.1t}dt=\int_0^{10}(2.651+0.4t)e^{-0.1t}dt\quad(P7)

W=\left[-(26.51+4t+40)e^{-0.1t}\right]_0^{10}=27.33\quad(P8)
As expected, this yields a higher value of W than either of the naïve solutions considered above.  Nevertheless, this is not the time path that maximises W.  The fallacy here is the assumption that our Case 1 applies to the whole period t=[0,10].  Just because \dot K=0 at t=10, it does not follow that \dot K\neq 0 at all t<10.
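For reference, the Pitfall 2 value of W can be checked numerically; this sketch uses the closed-form antiderivative -(66.51+4t)e^{-0.1t} of the integrand in (P7):

```python
import math

# Antiderivative of (2.651 + 0.4t)e^{-0.1t}
def F(t):
    return -(66.51 + 4 * t) * math.exp(-0.1 * t)

W = F(10) - F(0)  # the definite integral over [0, 10]
print(round(W, 2))  # 27.33
```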

We must also consider Case 2, \nu \neq 0.  Using the second complementary slackness relation (17), this implies that:

C=0.5K\qquad(32)
Thus Case 2 is what we described above as a corner solution.  Using the equation of motion (2) this implies that, within the relevant time range, \dot K= 0 and therefore \ddot K=0.  Hence the differential equation (26) reduces to:

0.2Ke^{-0.1t}-\dot{\nu}(0.5K)^2=0\qquad(33)

0.25K^2\dot{\nu}=0.2Ke^{-0.1t}\qquad(34)

\dot{\nu}=\dfrac{0.8e^{-0.1t}}{K}\qquad(35)
Integrating with respect to t, noting that K can be treated as a constant since \dot K= 0:

\nu =-\dfrac{8e^{-0.1t}}{K}+c_3\qquad(36)

Which Case is Terminal?

We will now show that, as time approaches t=10, the system must be in Case 2, with K constant.  This is what we would expect from economic reasoning, since there must be a time beyond which the effect of further investment in making possible higher output and consumption in the remainder of the time period is too small to compensate for the consumption that would be forgone in making that investment.  To show this using the method of optimal control, we start from the transversality condition (18), \lambda_{10}= 0.  We can therefore reduce (13) at t=10 to:

\dfrac{e^{-1}}{C_{10}}+ \mu_{10}-\nu_{10}= 0\qquad(37)

Given the first complementary slackness relation (17), \mu C=0, this further simplifies to:

\dfrac{e^{-1}}{C_{10}}-\nu_{10}= 0\qquad(38)

This implies that \nu_{10}\neq 0 (otherwise C_{10} would be infinite, which is impossible given the problem data).  So the system cannot be in Case 1 at t=10, and must be in Case 2.

When Does the System Switch from Case 1 to Case 2?

Taking our Case 2 equation (36) at t=10, and using (38) to substitute for \nu_{10}:

\dfrac{e^{-1}}{C_{10}}=-\dfrac{8e^{-1}}{K_{10}}+c_3\qquad(39)
From the equation of motion (2), and since \dot K=0 in Case 2, we can substitute 2C_{10} for K_{10}:

\dfrac{e^{-1}}{C_{10}}=-\dfrac{4e^{-1}}{C_{10}}+c_3\qquad(40)

c_3=\dfrac{5e^{-1}}{C_{10}}\qquad(41)
Substituting for c_3 in (36):

\nu =-\dfrac{8e^{-0.1t}}{K}+\dfrac{5e^{-1}}{C_{10}}\qquad(42)

While the system is in Case 2, K is constant, so we can replace it by K_{10} and therefore by 2C_{10}:

\nu =-\dfrac{8e^{-0.1t}}{2C_{10}}+\dfrac{5e^{-1}}{C_{10}}=\dfrac{5e^{-1}-4e^{-0.1t}}{C_{10}}\qquad(43)

Since Case 2, by definition, has \nu \neq 0, and since from (38) \nu_{10}>0, the system will be in Case 2 while:

5e^{-1}-4e^{-0.1t}>0\qquad(44)

e^{1-0.1t}<1.25\qquad(45)
1-0.1t<\ln 1.25=0.223\qquad(46)

t>7.77\qquad(47)
So we can infer that the system is in Case 1 during t=[0,\;7.77] and in Case 2 during t=(7.77,\;10] .
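The switching time follows from (46) as t = 10(1 - ln 1.25); a quick numerical check:

```python
import math

# Case 2 requires 1 - 0.1t < ln(1.25), i.e. t > 10(1 - ln 1.25)
t_switch = 10 * (1 - math.log(1.25))
print(round(t_switch, 2))  # 7.77
```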

Solving Case 1

Having found the time period over which Case 1 applies, we can now determine the constants c_1,c_2 in equations (29) to (31).  Taking (29) at t=0, we have:

c_1+c_2=100\qquad(48)
Since the system switches to Case 2 at t=7.77 with \dot K=0, from (30) we have:

0.4c_1e^{0.4\times 7.77}+0.5c_2e^{0.5\times 7.77}=0\qquad(49)

8.95c_1+24.33c_2=0\qquad(50)

c_2=-0.368c_1\qquad(51)

c_1-0.368c_1=100\qquad(52)

c_1=158.2\qquad(53)

c_2=-58.2\qquad(54)
Substituting into (29) to (31), we have the time paths of the key variables over t = [0, 7.77] :

K=158.2e^{0.4t}-58.2e^{0.5t}\qquad(55)
\dot K=63.3e^{0.4t}-29.1e^{0.5t}\qquad(56)
C=0.5K-\dot K=15.82e^{0.4t}\qquad(57)

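The constants c_1, c_2 can be recovered numerically from the two conditions K(0)=100 and \dot K(7.77)=0 (a plain-Python sketch of the linear solve above):

```python
import math

t_s = 7.77  # time of the switch from Case 1 to Case 2
# K(0) = 100   =>  c1 + c2 = 100
# K'(t_s) = 0  =>  0.4 c1 e^{0.4 t_s} + 0.5 c2 e^{0.5 t_s} = 0
a = 0.4 * math.exp(0.4 * t_s)
b = 0.5 * math.exp(0.5 * t_s)
c1 = 100 / (1 - a / b)  # since c2 = -(a/b) c1
c2 = 100 - c1
print(round(c1, 1), round(c2, 1))  # 158.2 -58.2
```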
Although not essential to solve the problem, it may be of interest to note the time paths, over the same period, of the various multipliers.  From the first complementary slackness relation (17), and because C can never be zero, we can infer that \mu =0, and from the definition of Case 1 we have \nu =0.  Substituting these values into (13):

\lambda=\dfrac{e^{-0.1t}}{C}\qquad(58)

\lambda=\dfrac{e^{-0.1t}}{15.82e^{0.4t}}=0.0632e^{-0.5t}\qquad(59)
The value of \lambda can be interpreted as the shadow price of the state variable, capital, that is, the amount by which W could be increased if an extra unit of capital were available at time t.  It can be seen that this value at t=0 is 0.0632, which may seem surprisingly small given the extra consumption over the whole period which an extra unit of initial capital would make possible, but can be shown to be correct given that W depends on the log of consumption.

Solving Case 2

A feature of Case 2 is that K remains constant.  To find at what level it remains constant, we have simply to find its level at t=7.77, when Case 1 switches to Case 2.  Substituting into (55):

K=158.2e^{0.4\times 7.77}-58.2e^{0.5\times 7.77}=708\qquad(60)
This is the value of K over the period (7.77,\;10], and enables us to confirm that K_{10}\geq 100 and therefore to accept the condition (18), \lambda_{10}=0, without qualification.  Over the same period, \dot K=0 and:

C=0.5K=354\qquad(61)
Turning to the multipliers, \mu =0 for the same reason as during Case 1.  Substituting for C in (43):

\nu =\dfrac{5e^{-1}-4e^{-0.1t}}{354}=0.0052-0.0113e^{-0.1t}\qquad(62)

Thus \nu increases gradually from 0 at t=7.77 to 0.0010 at t=10.  The positive values of \nu when t>7.77 indicate that if the constraint C_t\leq 0.5K_t were relaxed then W could be increased.
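As a numerical check of (62) and the values just quoted (the coefficients are the rounded values derived above):

```python
import math

def nu(t):
    # Equation (62): nu = 0.0052 - 0.0113 e^{-0.1t}
    return 0.0052 - 0.0113 * math.exp(-0.1 * t)

print(round(nu(7.77), 4), round(nu(10), 4))  # approximately 0 and 0.0010
```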

To obtain \lambda over the same period, we use (61) and (62) to substitute for C and \nu respectively in (13):

\lambda=\dfrac{e^{-0.1t}}{354}-\nu=0.0028e^{-0.1t}-0.0052+0.0113e^{-0.1t}\qquad(63)

\lambda=0.0141e^{-0.1t}-0.0052\qquad(64)
Thus \lambda falls from 0.0013 at t=7.77 to, as expected, 0 at t=10, at which point an extra unit of capital would have no effect within the time period on C or W.

Table 1 below shows the values of all the variables at integral time points over the whole period [0,\;10], covering Cases 1 and 2.

The Optimal Value of W

It remains to check that the optimal paths we have now identified do indeed result in a larger W than our best so far – the 27.33 obtained from our Pitfall 2.  Summing the relevant integrals over the Case 1 and Case 2 periods we have:

W=\int_0^{7.77}(\ln (15.82e^{0.4t}))e^{-0.1t}dt+\int_{7.77}^{10}(\ln 354)e^{-0.1t}dt\qquad(65)

W=\int_0^{7.77}(2.761+0.4t)e^{-0.1t}dt+\int_{7.77}^{10}5.869e^{-0.1t}dt\qquad(66)
W = \left[-(27.61+4t+40)e^{-0.1t}\right]_0^{7.77}+\left[-58.69e^{-0.1t}\right]_{7.77}^{10}\qquad(67)

W=22.23+5.40=27.63\qquad(68)

This is indeed larger than the 27.33 obtained in Pitfall 2.
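The final total can be checked numerically from the antiderivatives in (67) (a plain-Python sketch; the constant 67.61 is 27.61 + 40):

```python
import math

def F1(t):  # antiderivative of (2.761 + 0.4t)e^{-0.1t}, the Case 1 integrand
    return -(67.61 + 4 * t) * math.exp(-0.1 * t)

def F2(t):  # antiderivative of 5.869 e^{-0.1t}, the Case 2 integrand
    return -58.69 * math.exp(-0.1 * t)

W = (F1(7.77) - F1(0)) + (F2(10) - F2(7.77))
print(round(W, 2))  # approximately 27.63, larger than the 27.33 of Pitfall 2
```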

The main source used in preparing this post was:

Chiang, A. (1999) Elements of Dynamic Optimization, Waveland Press, Illinois.


Covid-19 and Household Size – An Update

An updated analysis of national data remains consistent with the hypothesis that Covid-19 infection rates are higher in larger households.

In a post in May 2020 I presented the results of a regression analysis on data from 14 Western European countries tending to support the following hypothesis:

Rates of Covid-19 infection will be higher, other things being equal, in larger households, that is, households with more occupants.

With Western Europe now well into a second wave of Covid-19 infection, it is timely to assess whether an updated analysis continues to support the hypothesis.

A reminder of some key features of the analysis:

  • Rates of death from Covid-19 are used as a proxy for rates of infection, actual rates of infection being difficult to measure.  Published statistics on confirmed infections are heavily dependent on differences in testing arrangements at different times and between countries.
  • Data used are at national level, with no allowance for variations within countries.
  • Estimation of the regression is by weighted least squares, with weighting by population.

The regression model is:

DP  =  C  +  (B x PH) + E

where:  DP is cumulative death rate from Covid-19 per million population; C is the regression constant; B is the slope coefficient; PH is average population per household; and E is the error term. 

The estimated regression line based on data to 3 December 2020 was:

DP  =  -2,004 + (1,209 x PH)

The precise values of the estimated coefficients are not important.  Nor is it surprising that the slope coefficient is higher than estimated in May: this is to be expected since cumulative death rates have increased while average population per household is stable.  The important point is that, as in May, the estimated slope coefficient is positive, consistent with the hypothesis, and is sufficiently large that the null hypothesis that its true value is zero or less is rejected at the 5% significance level (1).
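For readers who wish to reproduce the method, here is a minimal population-weighted least squares sketch in Python with numpy.  The five data rows are hypothetical placeholders for illustration only, not the actual 14-country dataset:

```python
import numpy as np

# Hypothetical data: PH (persons per household), DP (deaths per million), pop (millions)
PH  = np.array([2.0, 2.1, 2.3, 2.5, 2.7])
DP  = np.array([400.0, 600.0, 800.0, 1000.0, 1300.0])
pop = np.array([83.0, 17.0, 67.0, 47.0, 60.0])

# Weighted least squares: minimise the sum of pop * (DP - C - B*PH)^2
X = np.column_stack([np.ones_like(PH), PH])
W = np.diag(pop)
C_hat, B_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ DP)
print(B_hat > 0)  # a positive slope is consistent with the hypothesis
```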

A spreadsheet containing the underlying data and full regression output may be downloaded here:


  1. This can be inferred from the fact that the 95% confidence limits of the estimated slope coefficient are both positive.

Housing Reform in England

Housing in England is under-supplied, resulting in high costs.  The government’s proposed reforms to address the problem are a step in the right direction but do not go nearly far enough.

I don’t usually engage in autobiography, but it so happens that my own case illustrates the problems of housing in England rather well.  I left university in 1978 and started work in London.  At first I lived in rented accommodation, but by 1983 I was in a position to buy a house – a fairly typical 3-bed terraced property in outer London, only a few minutes walk from my work.  The house cost £24,500, which I funded via a mortgage of two and a half times my then annual salary as a part-qualified accountant of £8,600 together with £3,000 savings accumulated while I had been renting.  The annual interest on the mortgage, at a rate of 10%, was £2,150 or 28% of my income.

I wouldn’t be able to do that now.  The current estimated value of that house is £400,000 (1).  The current salary for an equivalent job in London would be unlikely to be much more than £30,000.  So the value of the house is more than sixteen times larger than in 1983, while the salary is only about four times larger.  To buy the house with that salary, assuming an equivalent proportion (12% or £48,000) from savings, would require a mortgage of more than eleven times salary.  No mortgage lender would agree to that: the risk of default would be too great, since the annual interest, at a current variable rate of around 4%, would amount to some £14,000 or almost 50% of income. 

Because homes in London and some other parts of England are so expensive, many young adults face a choice between unsatisfactory options: live with parents; club together with friends or rely on help from parents to buy a property; live where property is less expensive but suitable jobs are hard to find without a long commute; or rent forever at an annual cost that may be no less than that of buying.  The current average monthly rent for a one-bedroom flat in London is £1,250 (2), equivalent to 50% of an annual salary of £30,000.  Renting a room or studio apartment is cheaper, but few would consider it satisfactory as a long-term arrangement.  Sharing a larger rented property can also reduce costs, but is not for everyone. 

Most informed observers consider that the main reason why housing costs are so high is that supply is constrained by the limited quantity of land with approval for housing development (3).  Such approval may be granted  by local authorities acting within a framework of law created by the Town and Country Planning Act 1947 and much subsequent legislation.  But even if a proposed development is well-designed, a local authority may gain little from allowing it to proceed, and by doing so will become responsible for much of the initial cost of associated roads and other infrastructure, and the ongoing cost arising from the extra population including children’s education and care for the elderly. It may also face campaigns from local people opposed to the development for reasons which may include loss of countryside, pressure on local services, the possibility of undesirable neighbours, and loss in value of their properties.  Furthermore, much land around London and other cities is protected from development by national designations such as Green Belt. 

The government (4) now proposes a reform of the planning system in England.  Details have been published in a White Paper Planning for the Future (5).  The Prime Minister, in his foreword, introduces the proposals as (6):

“Radical reform unlike anything we have seen since the Second World War.  Not more fiddling around the edges … a whole new planning system for England.”

He concludes that:

“… what we have now simply does not work.  So let’s do better.  Let’s make the system work for all of us.  And let’s take big, bold steps so that we in this country can finally build the homes we all need and the future we all want to see.”

Consultation on the proposals, inviting general comments and answers to specific questions, is open for 12 weeks from 6 August 2020.  I set out below the response I have submitted.  Questions are shown in bold with, in some cases, my brief explanatory comments in italics (for fuller context, reference should be made to the White Paper itself).  My responses are in plain text.

General Comments

The Prime Minister’s foreword quite rightly identifies the need for radical reform of the planning system.  It states that the new system should provide “the homes we need in the places we want to live at prices we can afford, so that … we can connect our talents with opportunity”.  The Secretary of State’s foreword quite rightly refers to “the present generational divide”: the fact that many young adults, even those on above-average incomes, are unable to buy their own home in the way that their parents’ generation were, and have little option but to pay high rents or else live with their parents.  The Introduction (on p 14) quite rightly refers to “long-term and persisting undersupply” of housing, and to the fact that housing in England can be much more expensive than in other European countries. Two points which might have been added are:

  1. The high cost of housing is a major contributory factor to poverty for families with moderate earnings whose rent is a high proportion of their income, and indirectly adds to government expenditure via the provisions for housing costs within Universal Credit.
  2. The average number of persons per household in the UK (2.4) is higher than in many other European countries (cf Germany 2.0) (7).  This has probably facilitated intra-household transmission of Covid-19 and may be a contributory factor to the UK’s relatively high death rate from the virus.

Although the White Paper contains many sensible proposals, these fall well short of what is needed to address these problems.  In particular:

  1. The annual target of 300,000 new homes is far too small.  It represents annual growth in housing stock per capita of about 0.75% (see answer to Q5).  That’s not big and bold. What is needed is a target supported by careful economic analysis showing that, over a period of 5-10 years, it can be expected to result in a substantial reduction in the cost of housing. 
  2. Although the proposed designation of Growth areas is welcome, it needs to be accompanied by measures to ensure that such designation does not result in huge gains to existing landowners with little benefit to developers or potential residents.
  3. The White Paper offers little to discourage the speculative element in demand which is one reason why housing is so expensive.  People expect an upward trend in the price of houses, and this may lead them to buy more, or larger, homes than they require for their own use. When many people do this, prices do indeed rise.  Reform of the planning system offers an opportunity to break this cycle of expectation.  Two measures that would be desirable in themselves and also help to change expectations include increasing the annual target for new homes to considerably more than 300,000, and allowing development on some Green Belt land.

Although outside the scope of the White Paper, it should be recorded that, to be most effective in addressing the problems of England’s housing, reform of the planning system should be accompanied by:

  1. Cessation of the Help to Buy scheme which increases demand for housing and so tends to raise house prices;
  2. Appropriate reform of taxation relating to housing, including:
    1. Ending the anomaly under which VAT is charged on major renovations and extensions to existing homes but not on construction of new homes, so discouraging an important means of maintaining and expanding housing space;
    2. Bringing main homes within the scope of Capital Gains Tax (perhaps with the charge rolled up over a lifetime), so removing the current tax incentive to treat housing as a speculative investment;
  3. Policies to ensure an adequate supply of skilled labour to the building industry (including via immigration), and to support the development and application of building methods with reduced labour requirements such as modular construction.

Q5 Do you agree that Local Plans should be simplified in line with our proposals?

It is proposed that Local Plans should identify three types of land: Growth Areas suitable for substantial development, with automatic outline approval for development; Renewal Areas suitable for smaller scale development such as densification and infill of existing residential areas, with a statutory presumption in favour of suitable development; and Protected Areas, including Green Belt and Areas of Outstanding Natural Beauty, which justify more stringent development controls to ensure sustainability.

My response:  Yes (in part).  I agree that the role of land use plans should be simplified and with the proposed definitions of Growth areas and Renewal areas.  Automatic outline approval for development within the former and a presumption in favour of development within the latter would simplify and accelerate the process of obtaining approval for development.  However, consideration should be given to the effect on the market value of land within Growth areas especially.  Existing landowners, who would not have had to apply for permission to develop or indeed to do anything at all, could be expected to enjoy very large gains if they then sell their land.  There is a risk that too much of the economic benefit from automatic outline approval would accrue to existing landowners and not enough to either developers or potential residents.  In other words, there is a risk that the designation of Growth areas might make more land available for development, but only at a cost to developers that would make it unprofitable to undertake development unless homes could be sold at prices at least as high as at present.  Taxation (via Capital Gains Tax or otherwise) of the undeserved gains made by existing landowners would be justifiable but do nothing to help developers or residents.  A much more constructive approach to the problem is suggested in the Letwin Report on Build Out (8).  In outline, if designation of land as Growth area comes with an automatic requirement that development of that land must provide for diversity of housing in respect of type, size, style and tenure, including a minimum proportion of affordable homes,  then the residual land value will be much less than it would have been with unconstrained development permission, allowing both lower prices or rents to potential residents and reasonable profit for developers.  
The Report’s recommendation that residual land values be capped at around ten times existing use value seems very appropriate for greenfield sites, still allowing the landowner a very worthwhile gain.  There may also be a role for compulsory purchase powers, particularly in assembling large sites with multiple existing landowners where an individual landowner is holding out in the hope of a larger gain at a later date.

The third land type would more logically be divided into two (making four types altogether). Distinguishing the following types would help to promote public understanding of the varied reasons for restricting development and to raise awareness of the extent of land at risk of flooding.

  • One type (“At-risk areas”?) would be land which is unsuitable for development because of its current or likely future exposure to environmental hazards including coastal and river flooding.  Identification of such land should have full regard to the best available predictions of the effects of climate change, including sea level rise, over the next 100 years and beyond, and to realistic assessments (having regard to cost as well as technical feasibility) of the scope for mitigation of risk.   Land potentially at risk of flooding should not be considered suitable for development just because the risk can be fully mitigated in the short term.
  • The second type (“Protected areas”?) would be land which should be protected from development because it has environmental qualities sufficiently valuable to be worth preserving even at the price of restricting development.  Such land may provide direct benefits to visitors via opportunities for recreation and the enjoyment of natural beauty.  It may also provide ecosystem services yielding more indirect benefits such as drainage, water and air purification, biodiversity and (of especial importance in mitigating climate change) carbon sequestration by forests and woodlands.   This could include Areas of Outstanding Natural Beauty and Local Wildlife Sites.  As many people are coming to realise, however, it is not appropriate that all of the very large amount of land designated as Green Belt should continue to be protected (9).  Green Belt land is very varied in quality and much of it is inaccessible to the public. An effect of the London Green Belt is that much development is located beyond the Green Belt but occupied by people who work in London, whose resulting long commutes are harmful both to them and to the environment.  Allowing development on perhaps 10% of Green Belt land, chosen for its limited environmental value and proximity to existing transport links, would enable provision of many new homes in places where people want to live, such as around London, Oxford and Cambridge. 

Q8a Do you agree that a standard method for establishing housing requirements (that takes into account constraints) should be introduced?

By a standard method is meant a means of distributing the national housebuilding target of 300,000 new homes annually.  It would make it the responsibility of individual planning authorities to allocate land suitable for housing to meet their share of the total.  They would be able to choose how to do so via a combination of more effective use of existing residential land, greater densification, infilling and brownfield development, extensions to existing urban areas, or new settlements.

My response:  Yes.  To secure an adequate rate of provision of new homes, it is essential that binding targets are imposed on planning authorities.  However:

  • These targets should be part of a framework which also provides incentives to planning authorities to approve more new homes and leaves meaningful scope for local input to the planning process. This will minimise the risk of conflict between central government and local communities, allow planning authorities to innovate with successful practice being copied by others, and ensure a genuine role for local democracy.
  • The overall planning framework should prioritise number of new homes and quality of individual homes and appropriate infrastructure provision and placemaking.  Enforcement of the first should not implicitly downgrade the others.
  • The overall annual target for new homes in England should be considerably more than 300,000.  Given existing stock of c 24 million, and even if all of that stock remains in use, it represents annual growth of just 1.25%.  With likely population growth of 0.5% (10), this is equivalent in per capita terms to 0.75%. It is not credible that such a modest rate of growth, even if sustained over several years, can do much to mitigate what the White Paper itself (p 14) describes as a situation in which housing space in the UK can be twice as expensive as in Germany or Italy.  An OBR Working Paper (11) estimates the price elasticity of demand for housing in the UK at -0.92, suggesting that annual growth in per capita housing stock of 0.75% would reduce house prices annually by just 0.82%.
  • Targets should allow for development of selected Green Belt land.  Not to do this would unduly restrict home provision in areas where people want to live (see answer to Q5).
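The elasticity arithmetic in the third bullet can be sketched as follows (assuming the simple approximation that the percentage price fall is the percentage quantity growth divided by the absolute elasticity):

```python
elasticity = -0.92   # OBR estimate of the price elasticity of demand for housing
stock_growth = 0.75  # annual growth in per-capita housing stock, per cent
price_fall = stock_growth / abs(elasticity)  # implied annual fall in prices, per cent
print(round(price_fall, 2))  # 0.82
```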

Q8b Do you agree that affordability and the extent of existing urban areas are appropriate indicators of the quantity of development to be accommodated?

My response:  Yes (in part).  Requiring more development in areas where property is more expensive, other things being equal, establishes a crucial link to the signals provided by the market, ensuring that development occurs where people want to live.  However, a link to the extent of existing urban settlement is more problematic and a simple algorithm, attempting to spread development “fairly” between planning authorities, would probably yield some bizarre results.  More important than such fairness is the need to ensure that new or expanded settlements are of a sufficient size to support a good range of local services, rather than requiring residents to make frequent car journeys to a neighbouring large town. 

Q9a  Do you agree that there should be automatic outline permission for areas for substantial development (Growth areas) with faster routes for detailed consent?

Approval for development is often in two stages, outline approval being the first stage.

My response:  Yes, for the reason given in answer to Q5.

Q14 Do you agree that there should be a stronger emphasis on the build out of developments?  And if so what further measures would you support?

‘Build out’ refers to the building of homes once approval has been granted. Build out of large developments can be slow due to low market absorption rates, with some sites taking over 20 years to complete.

My response:  Yes.  Slow build out has a direct effect in limiting the rate of provision of new homes.  It also invites the widespread mis-perception that under-supply of housing is the fault of developers and nothing to do with any deficiencies of the planning system.  Requiring diversity of housing within large developments so that provision more closely matches the range of housing demand, as recommended in the Letwin Report, should encourage faster build out by developers in the knowledge that homes are unlikely to remain unsold or untenanted.  Consideration might also be given to some form of penalty, such as a surcharge on the Infrastructure Levy, where the time taken to complete developments is determined (under suitable rules) to be excessive.

Q17  Do you agree with our proposals for improving the production and use of design guides and codes? 

Design guides and codes record architectural and other features of developments that have been judged successful in the past.  They can be used by architects as an alternative to original design, and by planning authorities in specifying the type of development they are prepared to approve.

My response: No.  The proposals tend to suggest that planning authorities would be required to comply with the National Design Guide, National Model Design Code and Manual for Streets.  Making individual planning decisions rules-based rather than discretionary is highly desirable since it will create greater certainty for developers and lead to faster decisions with reduced costs for all parties.  However, each planning authority should be free to adopt its own rules via design codes, etc., adapting national guidance to its local circumstances as it judges appropriate.  For example, authorities may reasonably take different views regarding the balance in their areas between car use and public transport, with different implications for car parking provision and housing density.

Q18 Do you agree that we should establish a new body to support design coding and building better places, and that each authority should have a chief officer for design and place-making?

My response: Having a central body to support design coding and building better places is a sensible proposal, provided that its role is limited to support and planning authorities are free to adapt its output as they see fit.  While design and place-making are very important, a requirement for each planning authority to have a designated chief officer for these functions would limit the freedom of authorities to determine their own best arrangements having regard to financial constraints.  An authority might for example wish to provide training in design and place-making for a number of officers contributing to the planning process rather than appoint a single designated officer.  Such freedom enables authorities to try different arrangements and to learn from each other’s experience, and is more likely to lead to successful results than a uniform approach.  A requirement for authorities to have regard to design and place-making would be sufficient.

Q20 Do you agree with our proposals for implementing a fast track for beauty?

My response: No.  While there are many excellent suggestions in the report of the Building Better, Building Beautiful Commission (12), its emphasis on “beauty” as an overarching concept is liable to mislead and to be interpreted differently by different parties.  A development I happen to have visited – Great Kneighton, pictured on p 50 of the White Paper – appears to be well planned and well constructed, a good place to live, but I would not call it beautiful.  What is needed is not a fast track for a specific category of developments judged to embody beauty but a general acceleration of the planning process for all housing developments of sufficient quality.

Q22a Should the government replace the Community Infrastructure Levy and Section 106 planning obligations with a new consolidated Infrastructure Levy, which is charged as a fixed proportion of development value above a set threshold?

This question concerns the resourcing of the roads and other infrastructure required by new housing development.  The current Community Infrastructure Levy is a charge that local authorities may choose to levy – about half do – based on the floorspace of new development.  Section 106 (of the Town and Country Planning Act 1990) enables authorities to set conditions when approving a development, requiring the developer to do certain things or to pay money to the authority. 

My response:  Yes.  Negotiations over Section 106 cause delay in obtaining approval for development and may deter small builders from submitting applications at all.  A consolidated Infrastructure Levy at a rate known in advance would avoid such delay and provide certainty for applicants.

Q22b Should the Infrastructure Levy rates be set nationally at a single rate, set nationally at an area-specific rate, or set locally?

My response: Nationally at an area-specific rate.  I suggest a uniform rate for most areas but higher rates for areas where development now requires or may in future require works to mitigate flood risk or maintenance of existing flood defences.  Such higher rates are justified because areas exposed to flooding are unlikely to have lower needs for non-flood-related infrastructure than other areas.  They would also provide some incentive for builders to prefer development in areas not exposed to flood risk.  Where such higher rates are charged the extra sums should pass to the appropriate bodies responsible for flood defence. 

Q22c Should the Infrastructure Levy aim to capture the same amount of value overall, or more value, to support greater investment in infrastructure, affordable housing and local communities?

My response: More value.  This is one way in which planning authorities can be incentivised to approve more new homes (as proposed in answer to Q8a).  However, the overall situation faced by developers, including the price at which development land is available and the sale price of new homes as well as the Levy, should provide reasonable scope for profit. 

Q24a Do you agree that we should aim to secure at least the same amount of affordable housing under the Infrastructure Levy, and as much on-site affordable provision, as at present? 

My response: No.  As explained in answer to Q5, the provision of affordable housing within a diverse development should be automatically required at the point that land is designated as Growth area.  Such a development can be profitable for the developer because that requirement will substantially lower the price which the existing landowner can obtain for the land and so its cost to the developer.  Under this approach, there should be no need for affordable housing to be funded from the Infrastructure Levy, which should be reserved to meet the costs of infrastructure provision and place-making.

Notes and References

  1. From Zoopla’s online property valuation tool.
  2. Valuation Office Agency, Private Rental Market Summary Statistics – April 2018 to March 2019.  Read from Chart 3.
  3. The economics of housing in England (and elsewhere) is complex.  For a concise and fairly orthodox account, identifying causes and effects of under-supply, see Barker K (2014) Housing: Where’s the Plan?  London Publishing Partnership.  A dissenting view (arguing that under-supply is not the problem) is set out in Mulheirn I (2019) Tackling the UK housing crisis: is supply the answer?  An international perspective is given by Davies B, Turner E, Marquardt S & Snelling C (2016) German Model Homes: A Comparison of UK and German Housing Markets.
  4. The UK government is responsible for housing policy in England, while housing policy in Wales, Scotland and Northern Ireland is the responsibility of their devolved administrations.
  5. Ministry of Housing, Communities and Local Government (2020) Planning for the Future.
  6. As (5) above, p 6.
  7. Euromonitor International gives UK population in 2019 as 66.65M and number of households as 28.02M, implying a population per household of 2.38; and gives Germany population as 83.02M and number of households as 41.58M, implying a population per household of 2.00.
  8. Rt Hon Sir Oliver Letwin MP (2018) Independent Review of Build Out.  See especially paras 3.3, 3.8, 4.3, 4.4, 4.16 & 4.17.
  9. Those advocating selective relaxation of Green Belt to allow housing development include:
  10. ONS: National Population Projections: 2018-Based, Main Points (5% growth over 2018-2028 implies an annual average rate of 0.5%)
  11. Auterson T (2014)  Forecasting House Prices, OBR Working Paper No. 6, para 3.14 p 23
  12. Building Better, Building Beautiful Commission (2018) Living with Beauty: promoting health, well-being and sustainable growth