Capacity factor is the ratio of the average output of a facility to its maximum output. It is always less than 100% – either because the facility is not capable of maintaining its maximum output all the time, or because there is sometimes little demand for its output, or some combination of the two.
Availability factor is the ratio of the average output a facility is capable of to its maximum output. Nuclear power stations and fuel fired power stations have high availability factors (~90%), falling short of 100% only because of downtime for maintenance; wind, wave and solar power stations have much lower availability factors (~10-30%) because of the variability of their energy sources.
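The two definitions can be made concrete with a small sketch. The figures below are invented purely for illustration: a hypothetical 1,000 MW station over six sample hours, switched off for two hours of low demand and down for one hour of maintenance.

```python
# Hypothetical illustration of the two definitions above (invented figures).
max_output = 1000.0  # MW, nameplate capacity of an imaginary station

# Actual output over six sample hours: off for two hours of low demand.
actual = [1000, 1000, 0, 0, 1000, 1000]
# Output the station was *capable* of each hour: down one hour for maintenance.
capable = [1000, 1000, 1000, 1000, 0, 1000]

capacity_factor = sum(actual) / (max_output * len(actual))
availability_factor = sum(capable) / (max_output * len(capable))

print(f"capacity factor:     {capacity_factor:.0%}")      # 67%
print(f"availability factor: {availability_factor:.0%}")  # 83%
```

Note that the capacity factor here is lower than the availability factor for economic reasons (low demand), not technical ones, which is exactly the distinction the article turns on.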
Capacity factor is an important economic issue in the running of an existing facility, especially one (such as a nuclear power station) that has a high ratio of fixed to variable costs.
When a nuclear enthusiast tells you that nuclear power has the highest capacity factor of any generating technology, what he’s trying to get you to believe is that it has the highest availability factor. The difference is important. Availability factor is a significant (but not overwhelming) issue in the choice of generating technology; capacity factor, except insofar as it correlates to some extent with availability factor, is not.
The reason that nuclear power stations and fuel fired power stations have very different capacity factors, despite their similar availability factors, is variation in demand over time, and the different ratio of fixed to variable costs. Nuclear power stations have very high fixed costs (mostly financing, due to their very high capital cost) and relatively low variable costs; fuel fired power stations have relatively low fixed costs (they’re much cheaper to build) but high variable costs (mostly fuel). This means that when demand is low, it’s the fuel fired power stations that get turned off, because that saves a lot of money, whereas turning a nuclear power station off saves relatively little. So the fuel fired power station has a lower capacity factor – not because it’s not available nearly all the time, but because it saves more money to turn it off when demand is low.
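The dispatch logic just described can be sketched as a toy merit-order calculation. The plant names, capacities and variable costs below are invented for illustration; the point is only that the plant with the highest variable cost is the first to be backed off when demand falls.

```python
# A minimal merit-order dispatch sketch, with made-up cost figures.
plants = [
    {"name": "nuclear", "capacity": 10, "variable_cost": 5},   # illustrative cost/MWh
    {"name": "gas",     "capacity": 20, "variable_cost": 60},
]

def dispatch(demand):
    """Run the cheapest-to-run plants first; the expensive ones absorb the cuts."""
    output = {}
    for p in sorted(plants, key=lambda p: p["variable_cost"]):
        take = min(p["capacity"], demand)
        output[p["name"]] = take
        demand -= take
    return output

print(dispatch(25))  # daytime peak:    {'nuclear': 10, 'gas': 15}
print(dispatch(12))  # overnight trough: {'nuclear': 10, 'gas': 2}
```

In this sketch the nuclear plant runs flat out in both hours (capacity factor 100%), while the gas plant swings from 15 GW down to 2 GW, which is why its capacity factor ends up well below its availability factor.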
As long as the total capacity of all the nuclear power stations on a grid doesn’t exceed the base load (the minimum demand, typically during summer nights), and the only other power sources on the grid are fuel fired, the capacity factor of nuclear power stations can be the same as their availability factor – good for their economics. As soon as the total capacity of the nuclear power stations exceeds the base load, their capacity factor will start to go down – unless you have energy storage facilities.
The fixed costs of nuclear power stations are so high that keeping their capacity factors high is absolutely critical to their economics, so very few countries have more nuclear capacity than is needed to supply their base load. The only significant exceptions are France and (until the disaster at Fukushima) Japan. The capacity factors of nuclear facilities in France and Japan are accordingly much lower than elsewhere. France exports some power at times of low demand (and correspondingly at low prices) – otherwise its capacity factor would be similar to that of fuel fired facilities (as was the case in Japan).
Thus nuclear combined with fuel firing is a good mix: nuclear meets the base load, and fuel firing supplies anything above that. The total amount of fuel fired capacity required in such a system is (Peak Load – Nuclear Capacity).
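With invented, roughly UK-scale numbers (not real data), the sizing formula above works out as follows.

```python
# Illustrative numbers only, roughly UK-scale; not real data.
peak_load = 60   # GW
base_load = 20   # GW

# Size the nuclear fleet to the base load, per the article's argument.
nuclear_capacity = base_load

# Fuel fired capacity required = Peak Load - Nuclear Capacity.
fuel_fired_capacity = peak_load - nuclear_capacity
print(f"{fuel_fired_capacity} GW of fuel fired capacity needed")  # 40 GW
```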
Add in some storage, and you can increase the proportion of nuclear – storing some of the nuclear power during troughs in demand, and returning it to the grid to supply peaks. This is practical as a method of levelling out daily variations in demand, with a few tens of GWh stored, but to level out seasonal variations would require thousands of GWh of storage. (Figures are for the UK or similar sized countries; scale pro-rata for others.)
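A back-of-envelope sketch shows why daily levelling needs tens of GWh while seasonal levelling needs thousands. All the figures below are assumed, roughly UK-scale inputs, not measurements.

```python
# Back-of-envelope storage sizing with assumed, roughly UK-scale figures.

# Daily levelling: absorb the overnight surplus and return it at the peak.
daily_swing = 5   # GW below the daily mean overnight (assumed)
hours_low = 8     # length of the overnight trough in hours (assumed)
daily_storage = daily_swing * hours_low           # GWh
print(f"daily:    {daily_storage} GWh")           # 40 GWh -> "a few tens of GWh"

# Seasonal levelling: shift a few GW of summer surplus across ~3 months.
seasonal_surplus = 3  # GW of sustained surplus (assumed)
months = 3
seasonal_storage = seasonal_surplus * 24 * 30 * months   # GWh
print(f"seasonal: {seasonal_storage} GWh")        # 6480 GWh -> "thousands of GWh"
```

The two orders of magnitude between the results, rather than the exact numbers, are the point.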
Renewables combined with fuel firing is also a good mix. Without storage, you need enough fuel fired capacity to meet the whole of peak load, because renewables may sometimes not be delivering anything during a peak. However, the low fixed costs and very low variable costs of renewables mean that almost the whole of their output constitutes a saving of fuel. Their fixed costs are so low that you can afford not only to meet the whole of base load, but also a large part or even the whole of peak load if wind or wave or sun happen to deliver during a peak in demand – discarding energy whenever their output exceeds demand. But you still need enough fuel fired capacity to meet peak load, even though you will rarely use all of it.
Add in some storage, and you no longer need so much fuel fired capacity – and you no longer need so much renewable capacity either (because you’ll be discarding less, or none). The more storage you add in, the less capacity (of either type) you need. Indeed, the amount of storage required to totally eliminate the need for fuel fired capacity is considerably less than that required to level out seasonal variations, because windless, waveless and sunless periods are shorter than seasons.
Which brings us to the real crunch: renewables combined with nuclear is NOT a good mix. They compete with each other for supply of base load whenever the wind blows or the sun shines. You can think of an input from renewables as a reduction in demand, and anything coming from renewables during a demand minimum reduces the base load that nuclear wants to supply to keep its capacity factor high. This is why the nuclear industry hates renewables. The nuclear operators can't justify asking the renewables to shut down, because the variable costs of the renewables are even lower than those of nuclear.
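The "renewables as a reduction in demand" idea can be sketched numerically. The hourly figures below are invented: subtracting renewable output from demand gives a net load, and every hour where the net load dips below the nuclear fleet's capacity is an hour where nuclear must back off and its capacity factor suffers.

```python
# Renewables treated as "negative demand", with invented hourly figures (GW).
demand     = [30, 25, 20, 22, 35, 40]   # six sample hours
renewables = [ 5, 10, 12,  8,  2,  0]   # wind/solar output in the same hours

# Net load is what everything else (nuclear + fuel fired) must supply.
net_load = [d - r for d, r in zip(demand, renewables)]
print(net_load)        # [25, 15, 8, 14, 33, 40]

nuclear_capacity = 15  # GW, sized just under the original 20 GW base load
# Hours where net load falls below nuclear capacity: nuclear is squeezed.
squeezed = [h for h in net_load if h < nuclear_capacity]
print(squeezed)        # [8, 14] -> two hours of curtailed nuclear output
```

Without the renewables, the net load never falls below 20 GW in this sketch and nuclear runs flat out; with them, two of the six hours force nuclear below full output.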
See also an older piece, Capacity Factor II, which expands upon this piece.