Which moves more people: a lane of traffic travelling at 70mph or 40mph? The faster one, right?
Wrong! It’s all down to safe spacing between vehicles.
Remember how the Highway Code sets out safe stopping distances at different speeds? These take into account the driver’s reaction time and the grip tyres have on the road. The Highway Code assumes reaction time is two thirds of a second, which is pretty optimistic. Even if the driver isn’t distracted, it typically takes longer than that to see brake lights on the vehicle in front, realise it’s necessary to make an emergency stop, and depress the brake pedal. A more realistic estimate for all that is one-and-a-half seconds (and it increases with age). In that time a vehicle has travelled 47m at 70mph, or 27m at 40mph.
Then there’s the physical braking distance. If your vehicle and the one in front both decelerate at the same rate, you don’t need much more spacing than required for your reaction time. However, what if the car in front runs into the back of a stationary vehicle? At 70mph, you’d need an additional 75m to avoid joining the pile-up. At 40mph, it’s 24m. (Incidentally, the chevrons you see on the M11 and other motorways are 40m apart. If you can see two chevrons, you’re at least 80m behind the vehicle in front – enough to avoid a collision in most, but not all, circumstances.)
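For anyone who wants to check the arithmetic, here is a minimal sketch in Python. The 1.5-second reaction time is the estimate above; the 6.53 m/s² deceleration is my own assumption, chosen because it roughly reproduces the Highway Code's dry-road braking distances; the function names are purely illustrative.

```python
MPH_TO_MS = 0.44704      # metres per second in one mile per hour
REACTION_TIME = 1.5      # seconds -- the realistic estimate used above
DECELERATION = 6.53      # m/s^2 -- assumed; roughly matches Highway Code dry-road braking

def reaction_distance(speed_mph):
    """Distance travelled before the brakes are even applied."""
    return speed_mph * MPH_TO_MS * REACTION_TIME

def braking_distance(speed_mph):
    """Distance needed to stop once braking starts: v^2 / (2a)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * DECELERATION)

for mph in (40, 70):
    print(f"{mph} mph: reaction {reaction_distance(mph):.0f} m, "
          f"braking {braking_distance(mph):.0f} m")
# 40 mph: reaction 27 m, braking 24 m
# 70 mph: reaction 47 m, braking 75 m
```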
Translating vehicle spacing into road capacity reveals that, if people maintain reasonably safe headways, a motorway can carry about 15% more vehicles per hour at 40mph than at 70mph. Of course, those journeys take longer, but the up-side is that more people can travel and still be safe. That’s why we have variable speed limits on many motorways.
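Roughly where that comparison comes from, using the same assumed reaction time and deceleration as above, plus an assumed 5m average vehicle length (the helper name and these numbers are mine, for illustration only):

```python
MPH_TO_MS = 0.44704
REACTION_TIME = 1.5   # s -- as above
DECELERATION = 6.53   # m/s^2 -- assumed dry-road braking rate
VEHICLE_LENGTH = 5.0  # m -- assumed average

def flow_per_hour(speed_mph, braking_fraction=1.0):
    """Vehicles per lane per hour if every driver leaves the full reaction
    distance plus braking_fraction of the physical braking distance."""
    v = speed_mph * MPH_TO_MS
    gap = REACTION_TIME * v + braking_fraction * v * v / (2 * DECELERATION)
    return 3600 * v / (gap + VEHICLE_LENGTH)

for mph in (40, 70):
    print(mph, round(flow_per_hour(mph)))
# 40 -> about 1140 vehicles per lane per hour
# 70 -> about 890 vehicles per lane per hour
```

With fully safe spacing the gain at 40mph comes out closer to 30% on these assumed numbers; the 15% in the article allows for the fact that most drivers leave rather less than the full stopping distance, as discussed in the comments below.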
Autonomous Emergency Braking (AEB) has an effective reaction time in milliseconds. If it can also ‘see’ the speed of traffic hundreds of metres ahead, it can prevent you running into a pile-up. In theory that makes tailgating safe – except how can you be sure whether the vehicle behind you has AEB or a driver with no road sense?
This article was first published in the Cambridge Independent on 13 January 2021.
I understand the principle of this. I suspect that the stopping distance goes up as the square of speed (after reaction time), which would tally with kinetic energy being related to the square of speed. So if you halve the speed, you might reduce the ideal gap to something approaching a quarter, and have twice as many vehicles passing per minute. Of course it won’t be quite as good as that for various reasons, but that is the general idea.
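(In symbols, a rough sketch: if the gap each driver needs is reaction distance plus braking distance plus a vehicle length, the flow per lane is roughly q(v) = v / (t·v + v²/2a + L), where t is the reaction time, a the braking deceleration and L a vehicle length. When the v² braking term dominates, q is roughly proportional to 1/v, so halving the speed roughly doubles the flow; the reaction-time and vehicle-length terms are what stop the gain being quite that large.)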
What I would be interested to know is how well this works with the actual gaps that motorists leave between each other. It is clear that, on average, drivers do not open up the gap as speed builds by anything like the amount that they should. Do statistics exist for the actual average gaps left by motorists at various speeds, and how does the flow rate versus speed function change when these are used?
The speed-flow relationship is illustrated in this graph:
The top of the yellow area corresponds to leaving only enough space to allow for reaction times. Anyone leaving this much space would have to be hyper-alert and hope the vehicle in front doesn’t rear-end the vehicle in front of it.
The green dashed line corresponds to leaving enough space to allow for reaction times and physical stopping distance (in dry conditions). That should mean the driver won’t end up in a pile-up.
The blue dotted line is an approximation to real-world conditions, which are risky (which is why we do see multiple-car pile-ups). In that situation, capacity is maybe only 6% higher at 40mph than at 70mph, but it is significantly safer at the lower speed because any collision will happen at a lower speed, with less energy and hence a lower likelihood of serious injury or death.
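For anyone who wants to reproduce the curves, here is a rough sketch of how they can be generated. The 1.5s reaction time, 6.53 m/s² braking rate, 5m vehicle length and the 25% figure for the blue line are assumptions discussed in this thread; the exact numbers behind the published graph may differ slightly.

```python
import numpy as np
import matplotlib.pyplot as plt

MPH_TO_MS = 0.44704
REACTION_TIME = 1.5   # s
DECELERATION = 6.53   # m/s^2 -- assumed dry-road braking rate
VEHICLE_LENGTH = 5.0  # m -- assumed average

def flow_per_hour(speed_mph, braking_fraction):
    """Vehicles per lane per hour when each driver leaves the full reaction
    distance plus braking_fraction of the physical braking distance."""
    v = speed_mph * MPH_TO_MS
    spacing = REACTION_TIME * v + braking_fraction * v * v / (2 * DECELERATION) + VEHICLE_LENGTH
    return 3600 * v / spacing

speeds = np.linspace(10, 80, 200)
plt.plot(speeds, flow_per_hour(speeds, 0.0), "y-", label="reaction time only (top of yellow area)")
plt.plot(speeds, flow_per_hour(speeds, 1.0), "g--", label="reaction + full braking distance (green)")
plt.plot(speeds, flow_per_hour(speeds, 0.25), "b:", label="reaction + 25% of braking distance (blue)")
plt.xlabel("speed (mph)")
plt.ylabel("flow (vehicles per lane per hour)")
plt.legend()
plt.show()

print(flow_per_hour(40, 0.25) / flow_per_hour(70, 0.25))  # ~1.06 -- the modest gain on the blue line
```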
If Highways England are doing a good job of monitoring and regulating traffic flow, they will reduce the speed limit proactively while headways are still relatively safe, i.e. somewhere between the green and blue lines. That will increase capacity somewhere between 3% (blue line) and 26% (green line). Hence the estimated figure of 15% I give in the article.
For the “typical” flow rate (blue line), I’ve assumed drivers leave the same proportion (25%) of the physical braking distance at all speeds, on top of the reaction distance. It is likely that this proportion varies with speed, and it can be argued both ways. People might take a bigger risk at a lower speed because the consequences are less severe (in the same way that airbags give drivers more confidence to take risks). That would make the hump more pronounced: i.e. capacity would be even higher, relatively, at medium speeds than at high speeds. Or people might be less good at estimating what is a safe distance at higher speeds, and hence unintentionally take a larger risk at high speed. That would tend to flatten the curve, so that unregulated capacity stays more or less constant above around 40mph.
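Purely as a thought experiment, the two cases can be sketched by letting that fraction of the braking distance vary with speed. Both fraction functions below are invented for illustration, not measured; the constants are the same assumptions as in the earlier sketches.

```python
MPH_TO_MS, REACTION_TIME, DECELERATION, VEHICLE_LENGTH = 0.44704, 1.5, 6.53, 5.0

def flow_per_hour(speed_mph, braking_fraction):
    """Vehicles per lane per hour for a given fraction of the braking distance."""
    v = speed_mph * MPH_TO_MS
    spacing = REACTION_TIME * v + braking_fraction * v * v / (2 * DECELERATION) + VEHICLE_LENGTH
    return 3600 * v / spacing

def bolder_at_low_speed(speed_mph):
    # Hypothetical: a smaller fraction is left at low speed, where the consequences are milder
    return 0.10 + 0.003 * speed_mph   # 0.22 at 40 mph, 0.31 at 70 mph

def misjudged_at_high_speed(speed_mph):
    # Hypothetical: the fraction shrinks with speed as drivers underestimate the distance needed
    return 0.40 - 0.003 * speed_mph   # 0.28 at 40 mph, 0.19 at 70 mph

for mph in (40, 70):
    print(mph,
          round(flow_per_hour(mph, bolder_at_low_speed(mph))),
          round(flow_per_hour(mph, misjudged_at_high_speed(mph))))
# First flow column: the hump becomes more pronounced -- noticeably more capacity at 40 mph than 70 mph.
# Second flow column: the curve flattens -- capacity is almost the same at 40 and 70 mph.
```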
There are of course dynamic effects: as more and more traffic joins the road, people will tend to stick to the speed limit, largely because of peer pressure, until they feel seriously unsafe. Then they will slow down to re-establish a safe distance behind the vehicle in front. In that case, capacity increases at 70mph as people trade safety for speed, then suddenly falls as people adjust their speed and spacing to feel safe again. It is one possible cause of “shockwave” congestion that ripples back through the traffic and catches people unawares. That situation is avoided if Highways England force people to lower their speed before they would otherwise choose to.
Does that make sense?