Within the last decade, it has become obvious that
global warming is taking place at least as quickly as predicted. There is evidence that some aspects of the
warming such as melting of polar ice are accelerating.[1] Also within the past decade, technological
advances have made it possible to extract oil and natural gas from
fossil-fuel-bearing source rock, opening up vast new reserves. Burning all the fossil fuels now known to
exist could raise the earth’s temperature by 10 to 15 degrees F by 2100.[2]
Clearly, to avert the worst of global warming,
fossil fuel combustion must be curtailed.
One way this could happen is if renewable sources, especially wind and solar
power, became economically and otherwise attractive enough to supplant fossil
fuel combustion. To be effective in
forestalling the worst impacts of climate change, however, this transition
would have to happen extensively, and soon.
As generally accepted and agreed at the recent Paris climate meetings, global
CO2 emissions from fossil fuel combustion will have to be cut by 80%
by 2050 to prevent dangerous climate change.
There’s no question that the power of the sun is
virtually limitless compared to conceivable human needs, and the potential of
wind, while not infinite, is huge.
Several studies have made a strong case that it is technically possible
for the world’s energy demand to be met with renewables by 2050.[3],[4],[5],[6]
But these studies don’t specifically address
whether, given the various institutional, sociological, political, and economic
barriers to converting the world’s power system to solar and wind, widespread and
rapid conversion is anything more than a pipedream. The studies don’t get into the details of how
to make this transition happen, saying merely that it depends on
economic and political factors. What is needed are studies that use electricity
production data to get a sense of whether renewables’ growth trajectory is at
all likely to lead to 100% renewable power within the foreseeable future. Failing to focus on the difficulties facing
major penetration of renewables as evinced by their actual growth trend could
be dangerous if it leads to a false sense of security and a discounting of the
need to 1) improve renewable power sources and encourage other low-carbon
energy sources and 2) discourage the combustion of fossil fuels, such as with a
broad society-wide price on carbon dioxide emissions.
Clearly there are other approaches for predicting the
growth of energy technology, e.g. models that take into account regional
electricity demand, investment and operating costs, capacity factors, production
limitations, fuel prices, sun and wind resource bases, and varying policy
instruments including carbon prices. But
it is not obvious that any of these model-based approaches are so reliable that
simpler approaches based on production data should be ignored. In the years to
come, the trajectory of renewables’ growth will become more apparent. The problem
is that there is little time left to phase out fossil fuels and so avoid the
worst effects of climate change. If the
global community trusts that solar and wind will grow rapidly enough to
essentially displace fossil fuels while waiting until there is a more robust
data set to definitively determine whether or not this is so, enough time could
elapse that, should the growth of solar and wind prove to be insufficient, it
could be impossible to develop enough low-carbon energy in time to meet the Paris
agreement and thus prevent potentially catastrophic climate change.
In this paper, use of economic models is avoided;
instead growth trends based on electricity production data from solar and wind
power at the global level are examined. It is found that, contrary to some
perceptions, their growth rate appears to be declining. This apparent decline lends credence to the
idea that these technologies need further improvement, and that other
low-carbon technologies, especially next-generation nuclear, must be given
serious consideration.
The reasons why next-generation nuclear is promising
are also presented below, combined with a brief description of the various new
approaches to harnessing the power of atomic fission. The need for a fresh perspective on the
dangers of low-level radiation is also discussed.
Renewables are not growing fast enough
Charts 1 and 2 below show the recent growth trend of global electricity
production from solar and from wind, respectively.[7]

Chart 1

Chart 2
These graphs give the impression that solar and wind
are taking off in a revolutionary manner.
Perhaps they are. Many have
characterized these renewable energy sources’ growth within the last decade or
so as being faster than expected and as representing consistent growth of a
given percentage each year, i.e., exponential growth. Of course, continued exponential growth eventually leads to huge numbers.
There is a precedent for exponential, revolutionary
growth: computing power, digital
storage, and electronic sensors. This
growth has continued to follow Moore’s law, with capabilities doubling every
two years or so. But these technologies
may differ in a fundamental way from most other technologies in that they build
on themselves, with more capacity leading to faster growth, and faster growth
leading to more capacity, and so on.
Are solar and wind growing this way? Although there is much uncertainty
surrounding an effort to characterize growth of nascent technologies such as
solar and wind based on only a few years’ data, a closer look at electricity
production data provides reason to suspect that growth of both solar and wind
is slowing, and that while they may have experienced a period of exponential
growth, that period may be ending or already have ended.
From 2008 through 2011, global production of
electricity from solar power grew at an average rate of 72% per year, with a
95% confidence interval of plus or minus 22%.[8] For the period 2012 through 2015, the average growth
rate of solar was a little over half that: 42 ± 20% per year. The average growth rates of these two periods
are significantly different, with a Mann-Whitney test showing a two-tailed P
value of 0.0159. According to Solar
Power Europe, an industry-funded source that is, if anything, likely to present
an optimistic picture, the global growth rate of solar from 2016 through 2020
will be even less: about 20% per year.[9]
The growth rate of world electricity production from
wind power has also declined significantly during the same period. From 2008 through 2011, the average yearly
growth was 26 ± 3.5%; from 2012 through 2015, it was 18 ± 7.5% per year. These averages are also significantly
different, with a two-tailed P value of 0.0203.
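For readers who want to reproduce this kind of comparison, the sketch below shows the computation in Python. The production figures are illustrative placeholders chosen to resemble the trend described here, not the actual BP Statistical Review values cited in the text.

```python
# Sketch of the growth-rate comparison described above. The TWh values
# are illustrative placeholders, not the actual production data cited.
import numpy as np
from scipy import stats

# Global solar production, 2007-2015 (placeholder values, TWh).
solar_twh = np.array([7, 12, 21, 36, 63, 89, 125, 178, 253], dtype=float)

# Year-over-year percentage growth rates, 2008 through 2015.
growth = np.diff(solar_twh) / solar_twh[:-1] * 100

early = growth[:4]  # growth in 2008-2011
late = growth[4:]   # growth in 2012-2015
print(f"2008-2011 mean growth: {early.mean():.0f}%")
print(f"2012-2015 mean growth: {late.mean():.0f}%")

# Nonparametric comparison of the two periods, as in the text.
stat, p = stats.mannwhitneyu(early, late, alternative="two-sided")
print(f"two-tailed P value: {p:.4f}")
```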
A growth rate of 42% per year, or 18%, or even a
lower percentage, if maintained, would represent exponential growth, and would
eventually lead to huge numbers. But why
anyone should interpret the growth of solar or wind power as if it were
following an exponential pattern is not clear.
Why not interpret these renewables’ growth as linear, with a more or
less constant absolute quantity of new capacity added each year, based on the
upper limits of manufacturing capacity, resources, and funds?
Visual inspections of plots of data, while not
sufficient to demonstrate trends, can be revealing. For example, Chart 3 below pictures the same
data as Chart 1, but it includes a hypothetical curve which represents what
solar’s growth curve would look like had it maintained the 72% per year rate of
2008 through 2011 in subsequent years.
Chart 3
This chart shows that solar’s actual growth curve diverged
in 2012 from the 72%/year hypothetical curve. Further, the trend for the years 2012
through 2015 has the look of a straight line.
A linear regression of these four data points shows that a straight line
indeed fits these data well, with an R2 value of 0.9906. The slope of this best-fitting linear
function is 50 ± 15, which suggests that solar is adding between 35 and 65 TWh
of electricity production each year. The same exercise performed with wind data
shows a similar picture: a linear regression fits the 2012 through 2015 data
with an R2 value of 0.9908.
The slope of this best-fitting linear function is 102 ± 30, which suggests
that wind is adding between 72 and 132 TWh of electricity production each
year.
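The fit just described can be reproduced in a few lines; again, the four production values below are placeholders standing in for the actual 2012 through 2015 data.

```python
# Sketch of the linear fit described above (placeholder values, TWh).
import numpy as np
from scipy import stats

years = np.array([2012, 2013, 2014, 2015], dtype=float)
solar_twh = np.array([99, 139, 198, 253], dtype=float)  # placeholders

fit = stats.linregress(years, solar_twh)
print(f"slope: {fit.slope:.0f} TWh/year, R^2: {fit.rvalue**2:.4f}")

# 95% confidence interval on the slope: t critical value, n - 2 = 2 df.
t_crit = stats.t.ppf(0.975, df=len(years) - 2)
print(f"95% CI half-width: +/- {t_crit * fit.stderr:.0f} TWh/year")
```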
That the future growth trend of solar and wind is
more likely to be linear than exponential is further suggested by the data
depicted in Chart 4. This chart plots percent
growth in solar from 2014 to 2015, for all nations where solar production was
greater than 1 TWh in 2015, versus the percent of that nation’s electricity
provided by solar power. The pattern shown suggests that there is a saturation
factor operating in the case of solar.
Initially, it grows fast, but when more than 5 percent or so of a
nation’s power is provided by solar, solar’s growth slows down. Although there is much variation, in part due
to varying subsidy regimes, a similar pattern appears to exist for solar power
in states in the U.S. A less distinct
but not dissimilar pattern exists for wind at the global level.
Chart 4
If solar (and wind) were growing linearly, adding a
constant quantity (but not a constant percent) of production each year, one
would expect a declining rate (in percentage terms) of growth, and so countries
getting more power from renewables would show a lower percentage growth. Such a pattern is consistent with Chart
4. But the pattern shown is not at all
consistent with the notion that renewables are consistently growing in an exponential
manner.
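The arithmetic behind this point can be made concrete with a toy example: a source adding a constant 50 TWh each year shows a steadily falling percentage growth rate, exactly the kind of pattern seen in Chart 4.

```python
# Linear growth: a constant 50 TWh added each year.
# The percentage growth rate necessarily declines as the base grows.
production = 100.0
for year in range(1, 9):
    new_production = production + 50.0
    pct = (new_production - production) / production * 100
    print(f"year {year}: {new_production:.0f} TWh, growth {pct:.0f}%")
    production = new_production
# Growth: 50%, 33%, 25%, 20%, 17%, 14%, 12%, 11%, ...
```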
The point here is not to claim that solar and wind
are now growing linearly, but rather to argue that they could be, and that
there is little reason to assume that they are continuing to grow
exponentially. If this transition from
an apparently exponential pattern to an apparently linear pattern is real, it
may be that growth of both of these technologies more likely resembles a
logistic pattern. If they should follow
such a pattern, solar and wind will be growing like most other technologies and
systems in both the man-made and natural world; most technologies and other
systems don’t grow in an exponential manner for long.[10],[11]
Instead, growth typically follows a logistic
or “growth curve” pattern, characterized by an early exponential period that
transitions to a linear phase, adding a more-or-less constant increment each period,
eventually levelling off as constraints come into play limiting further growth. Biological systems consistently behave this
way, and so do most technologies. Prime examples
of such apparently logistic growth are railroads[12]
and automobiles.[13]
For a short period each grew
exponentially, and then settled into linear growth, and then tapered off
further. With both technologies, the linear phase persisted
over long periods of time; it was not noticeably affected by improvements in manufacturing
methods or innovations in the technologies themselves. See charts 5 and 6 below.
Chart 5

Chart 6
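A minimal numerical illustration of the logistic pattern just described follows; the parameter values here are arbitrary, chosen only to exhibit the three phases.

```python
# Logistic ("growth curve") pattern: an early exponential phase, a
# roughly linear middle, then saturation. Parameters are arbitrary.
import numpy as np

K, r, t0 = 1000.0, 0.2, 30.0  # ceiling, growth rate, midpoint
t = np.arange(0, 61, 5)
y = K / (1.0 + np.exp(-r * (t - t0)))

for ti, inc in zip(t[1:], np.diff(y)):
    print(f"t={ti:2.0f}  added this step: {inc:6.1f}")
# Early increments grow multiplicatively (exponential phase), increments
# near t0 are roughly constant (linear phase), late increments shrink.
```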
If solar and wind are now growing linearly, or are in a linear phase of logistical growth that continues through 2050, how much of the world’s energy might these sources provide by 2050?
As noted above, the upper bound of the 95%
confidence interval around the mean slope of solar’s best-fitting linear growth
function is 65 TWh/year. The upper bound
of wind’s slope is 132 TWh/year. Projecting these upper-bound linear functions
to 2050 suggests that solar’s contribution could grow from its 2015 global
production of approximately 250 TWh to about 2500 TWh, and wind’s production
could grow from its 2015 value of approximately 850 TWh to about 4400 TWh. The two together would thus contribute 7000
TWh to the world’s energy consumption in 2050.
Human civilization currently uses energy at a rate
of about 18 TW. That number is projected
to grow to 25 TW by 2035, growing further to 30 TW by 2050.[14] Used at a constant rate 24 hours a day, 18 TW
translates to about 158,000 TWh (about 540 quadrillion Btu) per year. It’s possible that this projected growth in
energy use could be slowed considerably by converting most transportation to
electric, because electric motors are far more efficient than internal combustion
engines. Also, major improvements in
energy efficiency could be made in other areas.
Even so, world energy use is likely to grow. However, even if energy use should hold steady
and solar and wind provide 7000 TWh of energy in 2050, they will be providing
no more than five percent of world energy by 2050.
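The arithmetic behind these figures is simple enough to lay out explicitly. One caveat: the ~2500 TWh solar figure follows from the upper-bound slope of 65 TWh/year, while the ~4400 TWh wind figure is reproduced by the mean slope of about 102 TWh/year rather than the 132 TWh/year upper bound, so the sketch uses that value.

```python
# Projection arithmetic for the 2050 figures quoted above.
YEARS = 2050 - 2015  # 35 years of continued linear growth

solar_2050 = 250 + 65 * YEARS   # upper-bound slope -> 2525 TWh (~2500)
wind_2050 = 850 + 102 * YEARS   # mean slope -> 4420 TWh (~4400 as quoted)
total = solar_2050 + wind_2050  # ~7000 TWh

# World energy use: 18 TW sustained around the clock for a year.
world_twh = 18e12 * 8760 / 1e12  # watts * hours -> ~157,700 TWh
print(f"solar {solar_2050} + wind {wind_2050} = {total} TWh")
print(f"share of current world energy use: {total / world_twh:.1%}")  # ~4.4%
```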
A similar analysis of data for only the U.S. leads
to a conclusion only moderately less gloomy: solar and wind appear to be on
track to provide less than ten percent of total U.S. energy use by 2050.
This outlook could be overly pessimistic. Solar and wind may actually be growing
exponentially, or they could resume such growth. There are many factors affecting the growth
of solar and wind. Continued cost
declines will encourage growth, as will the development of cheaper storage
technology, which can offset the problem of intermittency (no solar power at
night, no wind power when the wind isn’t blowing). Cost-effective storage technology that would
work on a large scale is, according to some analysts, only a few years
away. Should the costs of these
renewables decline to a level where they are clearly cost-competitive with even
the cheapest fossil fuel-powered sources, their growth could accelerate.
On the other hand, the outlook may be
optimistic. Why should a linear trend
continue for another 35 years, or more? It
is true that costs of both technologies have been declining; solar panels
especially are far cheaper than they once were.
But panels are only part of the costs of solar PV. Currently, “balance of system” (BoS) costs,
which include cabling, wiring, racking, and permitting, make up more than half
of the cost of a solar installation.[15] There
seems little reason to expect BoS costs to decline significantly. Both solar and wind have been, and still are,
heavily subsidized in most markets. In
the U.S., solar installation qualifies for a 30% federal tax credit. Many states have additional subsidies. For
example, in New Jersey, solar subsidies (net metering and solar renewable
energy credits) typically pay for over 150% of the cost of installing solar. Even with the huge subsidies, solar’s growth rate
has slowed in that state. Pressure is mounting in some jurisdictions to lower
or entirely remove subsidies for renewables.
Removing or lowering subsidies will discourage
growth, and could prevent renewables from maintaining even steady linear
growth. The percentage renewables
provide by 2050 could be even less because of several additional factors: a
possible saturation effect, a likely growth in overall electricity demand, and
unfavorable net energy.
Is there a solar saturation factor?
The first factor, at least in the case of solar, is
that there may be a saturation effect in operation. As shown in Chart 4 above, data on
electricity production of the world’s nations suggest that when solar reaches
the point where it provides 5% of a nation’s electricity supply, its growth rate
slows significantly. Although there is much variation, in part due to varying
subsidy regimes, a similar pattern appears to exist for solar power in states in
the U.S., and a less distinct but not dissimilar pattern exists for wind at the
global level. Could there be an explanation for why the market for solar might become
saturated at a relatively low level of penetration?
There are some possible explanations. The
electricity supply network comprises a variety of power sources. Some, the baseload plants, run nearly all the
time. Others run intermittently, either
when they can, in the case of solar and wind, or when they are needed. Generally, the most expensive power providers
are those pressed into service only when demand spikes to extra-high
levels. These are the “peaking plants.” Peaking plants are generally relatively small
plants that are used primarily in afternoons in the summer when electricity use
peaks due to use of air conditioners.
Peaking plants are typically single-cycle natural gas-powered units,
often modified jet engines, and have the ability to come on line on short
notice. However they are inefficient and
expensive to run. Solar electricity, since it
typically reaches its maximum output on sunny summer afternoons, competes well
with peaking plants in this niche.
Solar is clearly now cost-competitive with most peaking
plants and should be able to fill much of this niche. But this niche is not large. Peaking plants represent about 5 percent of
the total electricity market. That this
percentage is consistent with the level of penetration at which solar’s growth
appears to taper off may not be a coincidence.
Other possible explanations for this apparent
saturation effect include the likelihood that prime sites for both wind and
solar are selected first, and that subsequent siting is more problematic. Both power sources have large
footprints. Solar needs a sunny spot;
wind needs a windy locale. Expanding into areas otherwise suitable but not
close to concentrations of power users can necessitate construction of
transmission infrastructure, stimulating adverse NIMBY reactions. This has happened in many places.[16],[17]
Electricity use is likely to grow
Another reason why renewables may provide even less
of the total energy supply is that electricity use is likely to grow. It’s true
that major strides have been made in energy efficiency, and as a result,
electricity use in the U.S. and in other industrialized nations has more or
less held steady for the last two decades or so. But, as noted above, electricity demand at
the global level is predicted to grow. Currently, much of the world’s
population has little or no access to electricity. While per capita power consumption in the
U.S. is about 1400 watts, in China it is about 400 watts,
in Mexico about 200 watts, in India 65 watts, and in Nigeria about 15 watts. People in the developing world clamor for
electricity and the better lifestyle it can bring. And they are increasingly moving to cities, often mega-cities, which will need reliable
electricity, and lots of it. If
renewables are in fact growing linearly, what they will provide in 2050 will be
an even smaller percentage of the likely larger electricity supply that will
rise to meet the demand. If this demand
is not met with low-carbon power, it will be met by burning coal and natural
gas.
Solar power’s net energy
A third factor is also not encouraging, at least for
solar. This is the net energy, sometimes
expressed as the energy return on investment ratio (EROI). It’s the ratio of the total energy provided
by a power source over its lifetime to the energy required to bring that power
source into being and to operate it for its lifetime. It has been estimated
that an EROI of 5 to 1 or greater is necessary to generate enough surplus
energy for the various energy-using ancillary functions, such as health care,
education, provision of food and shelter, etc. that are necessary to run modern
society.[18] The EROI concept is related to economic notions
such as the levelized cost of energy (LCOE), but due to vagaries in calculation
methods, including assumptions about the time value of money and its relation to
energy costs, a viability picture based on EROI is not necessarily consistent with
one based on economics.
A comprehensive investigation of solar electricity
generation in Spain found that its EROI, even in that sunny country, was
surprisingly low: about 2.45.[19] The
authors of that study estimated that Germany’s solar EROI was even less (between
1.6 and 2). A study by Ferroni and
Hopkirk found that the EROI of solar PV is, at least in some locations in
Europe, lower than 1.[20] The researchers concluded that in regions of
moderate insolation, solar PV has an EROI of 0.82, and thus cannot be termed an
energy source but is rather a non-sustainable energy sink and will therefore
not be useful in displacing fossil fuels.
This picture is consistent with an economic analysis of the prospects
for cost competitive solar PV power.[21] The authors of that analysis, while
concluding that solar PV will likely soon be cost competitive in areas with
plenty of sunshine, stated that in regions with low insolation such as Germany,
solar PV will be at a permanent and seemingly insurmountable disadvantage.
Not surprisingly, studies reporting low EROI values
have not been received favorably by supporters of solar. In a heated “demolition” of the Ferroni and
Hopkirk study, one analyst, using purportedly the most recent data, argued that
the EROI of solar is higher, triumphantly concluding that it is closer to 2.83
than to the value of less than 1.[22] But 2.83 is a poor EROI, arguably too low to
sustain industrial civilization.
Another
perspective on EROI is provided by studies of the carbon dioxide-equivalent
(CO2eq) emissions associated with an energy source over its entire life,
so-called life cycle analysis (LCA). With some assumptions, a power
source’s EROI can be derived from its LCA figure. For solar, such an
assessment takes into account the emissions from the energy used in manufacture
of solar panels, including acquisition of the necessary materials, the
manufacture of inverters and related equipment, the construction of necessary
supporting structures, and the other energy-using processes involved. Most of the up-front energy inputs in the solar
system life cycle are fossil fuels. A
meta-analysis of power sources found that solar electricity production, especially
PV, while far superior to fossil fuels, emits significantly more carbon over
its lifetime than hydropower, wind, or nuclear energy. See table 1.
Table 1
Life-cycle analyses of selected electricity generation technologies,
50th percentile values (g CO2eq/kWh); aggregated results of literature review[23]

Solar PV | Solar CSP | Hydropower | Wind | Nuclear | Natural Gas | Coal
46       | 22        | 4          | 12   | 16      | 469         | 1001
Since renewables and nuclear do not emit carbon
dioxide during their power production, their LCAs can be used to estimate EROI
by assuming an average value for the CO2eq emissions of the fuels
used as inputs to their materials and construction. Assuming a value of 15 GJ per ton of CO2eq
emissions (an approximate average of natural gas and diesel fuel), solar PV,
with an LCA of 46 g CO2eq emissions/kWh as per the table above, has an EROI of
5.2 to 1. (Wind, with a similar
calculation, has an EROI of 20:1. Nuclear
power’s EROI is 15:1.)
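The conversion from life-cycle emissions to EROI can be written out explicitly; the only inputs are the table’s 50th-percentile values and the 15 GJ-per-tonne assumption stated above.

```python
# Deriving EROI from life-cycle CO2eq emissions, as described above.
# Assumption from the text: ~15 GJ of fossil energy input releases
# 1 tonne of CO2eq (rough average of natural gas and diesel fuel).
GJ_PER_TONNE_CO2 = 15.0
KWH_J = 3.6e6  # joules delivered per kWh of electricity

def eroi(lca_g_per_kwh):
    # Embodied fossil energy per kWh delivered, from the LCA figure.
    energy_in = lca_g_per_kwh / 1e6 * GJ_PER_TONNE_CO2 * 1e9  # joules
    return KWH_J / energy_in

for name, lca in [("solar PV", 46), ("wind", 12), ("nuclear", 16)]:
    print(f"{name}: EROI ~ {eroi(lca):.1f}:1")
# solar PV ~5.2, wind ~20, nuclear ~15, matching the figures in the text.
```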
The EROI research field is still plagued by
uncertainty. The boundaries
of EROI studies – what is to be included and what not – have not been consistently
defined. Thus solar’s relatively poor
showing in some studies is not definitive.
Other studies have found a more favorable EROI for solar. But solar’s
low EROI number in many studies lends credence to the conviction that
renewables’ percentage contribution to the total energy supply of 2050 is likely
to be disappointing.
New nuclear technology has the potential to provide the low-carbon electricity that will be needed
Fossil fuels are formidable foes to low-carbon
energy sources because of their energy density.
A one kilogram lump of coal, about the size of a large grapefruit,
contains in the chemical bonds that hold its atoms together about 30 megajoules
(MJ) of energy. This is equivalent to
the energy expended by a person doing hard physical work for about 15
hours. Petroleum and natural gas are
even more energy-dense. Although there
is vast energy in sunlight and wind, it is relatively diffuse. This is why it would take, for example, 25
square miles of solar panels or 600 of the largest available wind turbines,
with their attendant service roads, to produce the same amount of electricity
as a one thousand megawatt fossil fuel plant occupying 0.3 square miles.
But the chemical energy holding atoms together, released,
for example, by combustion, cannot compare in magnitude with the energy holding
atomic nuclei together, which can be released by fission. The energy contained in uranium or thorium
that could be released, for example in a breeder reactor, is about 80,000,000
MJ per kg, over two million times the energy density of coal. Not all of the energy in coal, or in uranium
(or thorium) ore can be turned into electricity. Nevertheless, the difference in energy
density is huge. A two-million-ton pile
of coal, requiring over 300 miles of coal trains to haul it, yields through
combustion about the same energy as is released
via fission by one truckload of uranium ore.[24]
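The energy-density ratio quoted above is a one-line calculation:

```python
# Energy-density comparison from the text.
coal_mj_per_kg = 30.0      # chemical energy in coal
fission_mj_per_kg = 80e6   # uranium/thorium fully fissioned (breeder)

ratio = fission_mj_per_kg / coal_mj_per_kg
print(f"fission is ~{ratio:,.0f}x more energy-dense than coal")  # ~2,666,667
```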
The tremendous energy density of the atom means that
nuclear energy, if properly harnessed, could provide all the energy
civilization would conceivably need far into the future, not only to produce
the electricity needed for a modern lifestyle for all, but energy enough to
desalinate seawater on a large scale, prevent further deforestation, end
pollution of the atmosphere with carbon dioxide, and even provide the energy to
sequester carbon dioxide and lower atmospheric concentrations to pre-industrial
levels.
Nuclear plant basics
There are two basic types of nuclear reactors:
thermal reactors and fast reactors. Both
cause reactions that break apart, or “fission” atoms, releasing energy and also
releasing particles, typically neutrons, that break apart more atoms, causing a
chain reaction that releases enough energy to heat water or another fluid which
then drives turbines or similar units that convert the heat into
electricity. Only some atoms are capable
of undergoing fission to the degree necessary to maintain a chain
reaction. For example, natural uranium, which
consists mostly of the isotope U-238, must be “enriched” by increasing the
concentration of the U-235 isotope, which, when it fissions, produces enough
byproduct neutrons to maintain a chain reaction. Another isotope, plutonium-239, is also
readily fissionable.
The fission reactions create various byproducts,
some of which are smaller atoms, called “fission products.” Fission products
are highly radioactive, and typically have relatively short half-lives. Some byproducts are larger than the parent
atoms, having absorbed one or more neutrons.
These larger atoms, called “transuranics” or “actinides,” don’t fission
well and thus can stall the chain reaction.
Many of them are also unstable, and eventually decay by releasing alpha
particle radiation. Many of the transuranics have long half-lives, extending to
thousands of years.
A thermal reactor employs neutrons, which have been
released by the fission of certain isotopes, such as uranium-235 or uranium-233,
but it slows the neutrons down to avoid creating too many
transuranics. Most of today’s nuclear
plants are thermal plants that use pressurized water to slow, or “moderate,” the
fission reactions. The pressurized
water, heated by the nuclear reactions while also moderating them and keeping them at the
proper temperature, is paired with a second system of water which, heated to
steam, drives the electricity-generating turbine. Some plants have no secondary
cooling loop; the primary loop both moderates the reaction and generates
steam. All of these generation II plants
operate well above atmospheric pressure, and they all need a constant
electricity supply to maintain the circulation of the water necessary to
moderate the reaction. While there are designs that use “heavy” water, water
that contains deuterium (hydrogen with an additional neutron) instead of normal
hydrogen, most of the world’s nuclear plants use regular, or “light” water, and
so are termed “light water reactors” (LWRs).
A fast reactor doesn’t moderate neutrons, but
provides different fuels that fission more readily when bombarded by fast
neutrons, i.e., neutrons that have lost little energy by collision with other
atoms. Fast reactors can burn more of
their fuel before fission products build up to the point where the fuel needs
replenishing. They can also make use of
uranium that doesn’t have as high a percentage of the very fissionable U-235. They can also burn spent fuel waste, and
thorium. But the higher internal energies of the reactions in a fast reactor
present materials challenges that have not been thoroughly resolved.
The current generation of nuclear power plants
The current generation of nuclear power plants,
so-called generation II, seems incapable of meeting the challenge presented by
vast new quantities of cheap fossil fuels.
At least in western industrial economies, some of the current plants have
closed, primarily because they cannot produce electricity as cheaply as natural
gas plants. Construction of related
generation III and III+ plants has been plagued by delays and cost
overruns.
It is this high cost, and the potential for delays
and resulting cost overruns, that leads some to argue that nuclear power has no
future based on cost alone. However, the
existing literature on construction costs of reactors is almost totally based
on data from the U.S. and France. A
recent study that gathered data on the cost history of 349 reactors built in
seven nations including Japan, South Korea, and India found that costs have
varied significantly, and that there has been much milder cost escalation in
many countries, with even absolute cost declines in some countries and specific
areas.[25] New construction has become expensive in
part because, so far, there has been so much design variability and variation
in requirements imposed by different utilities that each plant built has been
different.
Generation III plants represent a significant
improvement over generation II plants, however.
They are more efficient thermally, have some passive safety features,
and use standardized designs. Generation
III plants are being built in Asia and in Russia. Generation III+ plants have additional improvements. They are designed for a 60-year lifespan,
have stronger containment domes, and passive safety systems that can cool and
stabilize the reactor in the event of a mishap for a relatively long period of
time without human intervention and without the need for electrical power.
Primarily because of construction of these improved generation
III and III+ plants, global nuclear generating capacity increased slightly in
2016. China provided the largest
increase, with five new reactors contributing 9,579 megawatts to the total.
Five more reactors, one each in India, Pakistan, Russia and South Korea and
Watts Bar 2 in the United States, also came on line.[26]
China apparently plans to build 8 more reactors using the Westinghouse AP-1000
design[27] and
has proposed building an additional 30. Plants
of the AP-1000 design have standardized design and modular components and can
theoretically be built in 36 months. A
factory to build the modules for the AP-1000 has been constructed in China, and
so it is possible that the growth rate of nuclear electricity production in
China will accelerate.[28] The growth rate in nuclear generation in that
country was about 10% to 15% per year from 2011 through 2014, but grew by
nearly 30% between 2014 and 2015. In 2015,
new production from nuclear in China was 38 TWh. This exceeded new production there from wind
(26 TWh) and solar (16 TWh) in that year.[29]
Further, a number of companies are working on
small Gen III+ designs that are modular and compact enough that an
entire reactor could be built off-site, shipped by rail or truck, and
assembled on site.
Nevertheless, most Generation III and III+ plants
are large and capital-intensive, and they all operate above atmospheric
pressure and require active cooling. Because
they must maintain pressure and cool the reaction process with active systems
requiring electricity, in the event of a loss of pressure or a power failure these
plants require various mechanical interventions to avoid major problems.
New designs
A comprehensive report by the Breakthrough
Institute, How to Make Nuclear Cheap,[30]
discusses a number of new reactor designs and identifies key aspects that could
reduce costs.
All the new designs include some or many inherent
safety features that the current generation of reactors lacks. These include passive safety systems that
will shut down the system even in the absence of outside power. A number of the designs operate at
atmospheric pressure, and so are not subject to potential emergencies caused by
loss of pressure. Most include fuel
systems that are much more melt-down resistant than the current
generation. And, most are more efficient
thermally, which means less demand for cooling water. Of these new designs, only two, high-temperature
gas reactors and sodium-cooled fast reactors, have been demonstrated at the
commercial scale. However, several of
the designs are capable of being built with off-the-shelf technology and should
not require significant materials research and development. Many of the designs produce far less waste,
and have fuel requirements undemanding enough so that they can be considered
renewable. Some are close to commercial scale demonstration. With today’s computer aided design and
modeling capabilities, it could take a relatively small push to bring some of
these designs to the point where they would win the economic battle with
natural gas and coal.
The Breakthrough Institute’s report concludes that
it is not advisable at this point to lock in any one of the various new
designs, but rather to push for 1) expanded investment in innovation, 2)
innovation across advanced designs (to share benefits among related
technologies), and 3) reform of the licensing process.
Below is a brief description of some of the most
promising designs, including a summary of their advantages and a brief
discussion of issues still to be resolved. These descriptions are primarily
based on interpretations of information contained in the Breakthrough Institute
report referenced above and Robert Hargraves’ book, Thorium: Energy Cheaper
than Coal.[31]
Salt-cooled thermal reactors
Salt-cooled thermal reactors use slow (i.e.,
moderated) neutrons and a liquefied salt for the primary and/or secondary
coolant. Some use a molten fluoride salt
coolant and a solid fuel; in other designs the fuel is also in liquid form,
either dissolved in the salt coolant or adjacent to it.
Pebble-bed advanced high-temperature reactor:
One salt cooled thermal design, the pebble-bed
advanced high-temperature reactor (PB-AHTR), uses fuel “pebbles” about the size
of billiard balls. Each pebble contains
thousands of sand-sized particles of uranium fuel. These particles, called
TRISO particles, are coated with three barrier layers to contain the
radioactive materials. The pebbles are packed closely together and cooled by
molten salt which then flows to a secondary loop to produce electricity. Over
time, the pebbles move slowly upwards in their housing, and are examined by robotic
machinery that determines their remaining fissile fuel content. Spent pebbles are set aside and replaced by
fresh pebbles. The spent pebbles are strong and hard, suitable for disposal.
Because of the high heat capacity of the molten
salt, the PB-AHTR is compact, and it operates at temperatures high enough so
that, unlike today’s Gen II and III reactors, it can power off-the-shelf high-efficiency
Brayton cycle power conversion systems similar to those used in modern
combined-cycle gas turbines. Also, the TRISO fuel form is well-understood and
has already been used in other types of reactors. Because the fission byproducts are contained
in the TRISO particles, they don’t get into the molten salt and so cannot
interact with and corrode or weaken the unit’s vessels, piping, and pumps.
Unlike Gen II and III and III+ reactors, the PB-AHTR
operates at atmospheric pressure and utilizes fuel and coolant that are not
prone to runaway heating or meltdown. It
could readily be constructed in a fully modular fashion, and it is largely
based on components and materials that have already been used successfully at
the commercial scale.
Some remaining challenges for the PB-AHTR are that the
TRISO particles must be produced with extreme uniformity. This has been achieved, but scaling up such
production to the commercial scale could present difficulties. Also, while the liquid coolant is thermally
efficient and it can’t melt because it is already liquid, it can solidify if
the reactor has to shut down.
Solidification of the salt would seal in radioactive materials and
contain leaks, but it would damage equipment, presenting an economic risk.
Denatured molten salt reactor
Another salt cooled thermal design, the denatured
molten salt reactor (DMSR) contains fissile uranium and thorium dissolved in a
molten fluoride salt. The uranium is
“low enriched uranium” (LEU). LEU is “denatured” in that its fissile component,
U-235, is diluted with at least 80% U-238, which makes it unsuitable for
weapons. The radioactivity of U-235 starts the fission process going, and some
of the neutrons released by this fission are absorbed by the thorium, which
then decays to form U-233, which is also fissile and then participates in the
chain reaction. The entire process takes place within the liquid salt. Some of the fission products can be removed
from the liquid medium by physical processes, and the remaining products become
fluorides that remain dissolved in the molten salt for the estimated 30-year
lifetime of the fuel and salt charge. At
this point the salt can be reprocessed chemically, extracting the uranium for
re-use. Left behind in the salt will be
dissolved fission products and transuranics such as plutonium. Alternatively,
the salt can be sequestered as waste.
Then the DMSR can be recharged with new salt solution, thorium, and LEU
and run for another 30-year cycle.
There appear to be few if any significant technical
challenges to commercialization of the DMSR. One drawback is that it requires
expensive fissile U-235 in its fuel mix.
However, it uses only a quarter of the U-235 of a standard Gen II or III
LWR; there should be enough of this fuel
available to run DMSR plants for centuries.
Liquid fluoride thorium reactor
The liquid fluoride thorium reactor (LFTR), also a
salt cooled thermal design, has many inherent advantages. The basic design was conceived in the 1950s
by the nuclear physicist Alvin Weinberg and colleagues at the Oak Ridge
National Laboratory. However, in part
because the LFTR’s operation produced essentially no plutonium, the design lost
out in the 1960s to the LWR concept. (At
the time, plutonium was a desired byproduct of nuclear plants because it could
be used for nuclear weapons.)
As with the DMSR (above), the LFTR contains its
fuel, which in this case is thorium only, within a liquid fluoride salt,
typically a mixture of beryllium and lithium fluoride. The reaction is started with a fissile
material, such as uranium with an enriched concentration of the fissile U-235. Neutrons from fission of U-235 are absorbed
by thorium-232, which converts (via thorium-233 and protactinium-233) to U-233, itself fissile,
which then continues the chain reaction. A
typical LFTR design uses two loops: the core salt loop contains the
dissolved fuel and is where the reactions occur. The second loop receives the heat created by
the nuclear reactions in the first loop and transfers it to a Brayton engine or
another turbine unit that produces electricity.
Fission products are continuously
removed by chemical processes from the core salt loop, and fresh thorium is
added as necessary.
LFTRs are inherently safe; if the nuclear reactions increase
for some reason, the extra heat expands the molten salt and pushes it out of
the critical core region into adjacent pipes where the concentration of thorium
and U-233 drops below the level at which the nuclear chain reaction can be
sustained, and the reactions stop. As
added insurance, the LFTR has a freeze plug – a plug of salt kept solid by a
cooling fan. If electric power should
fail, the fan stops and the freeze plug melts, causing the salt from the core
region to drain into the adjacent pipes where the fuel concentration drops below
the reaction-sustaining level. An LFTR
cannot melt down. It operates with salt
in the molten state. If a pipe, pump, or
vessel should rupture, the salt would drain out and solidify.
The thorium process produces less waste than the
current LWR plants for two reasons. One
is that the fuel is within the liquid matrix, which continually circulates, and
so the fuel is continuously exposed to a neutron flux. So the long-lived transuranics (like Pu-239)
that are produced will eventually be destroyed either by fission or by
transmutation to a fissile element that will later fission. But with the solid fuel rods used with
today’s LWRs, lots of transuranics linger in the rods when they are taken out
of service. The second reason is that,
while uranium and thorium-fueled reactors produce essentially the same fission
products, thorium-fueled reactors produce far fewer transuranic actinides (the
worst of which is Pu-239) because Th-232 requires 7 neutron absorptions to make
Pu-239, whereas U-238 requires only one neutron absorption. After 300 years, LFTR waste radiotoxicity would
be a thousand times less than that of the waste from an LWR.
The LFTR is proliferation resistant. The U-233 that is produced by the reactions
of thorium is always contaminated with U-232.
This isotope has a short half-life and rapidly decays into products that
are intensely radioactive, so much so that stealing some U-233 would likely
result in immediate and severe radiation exposure to the thief.
Another advantage of the LFTR is that thorium is
relatively plentiful. According to
researcher Robert Hargraves, the earth’s crust contains approximately 26 grams
of thorium per cubic meter. A LFTR can
convert 26 g of thorium to over 250,000 kWh of electricity, which would be
worth $7,500 at 3 cents/kWh. On the
other hand, a cubic meter of coal, currently worth in the neighborhood of $230,
can make only about 13,000 kWh of electricity worth only $700 at today’s 5
cents/kWh typical prices.
The world consumes about 500 quads of energy per
year, which is about 500,000,000,000 GJ.
The energy that would come from thorium in a LFTR is about 80 GJ per
gram of thorium. If all the world’s energy came from thorium, world demand
would be 500,000,000,000 / 80 grams per year, or 6250 tons per year. The World Nuclear Association’s conservative
estimate of 2 Mt of thorium reserves implies a 300 year supply. After this time civilization could mine
thorium distributed throughout the earth’s crust, which contains 12 parts per
million. Obtaining 6250 t of thorium
would require mining 500 megatons of material per year. In comparison, world coal mining is 8,000
megatons per year, with reserves of about 150 years. The earth’s continental crust contains
4,000 Gt of thorium, nearly enough for a million years of energy from thorium.[32]
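The supply arithmetic in the last two paragraphs checks out as follows. The ~45% heat-to-electricity conversion efficiency used for the 26-gram figure is an assumption supplied here (the text gives only the result); everything else comes from the numbers quoted above.

```python
# Thorium supply arithmetic from the text.
world_gj = 5e11           # ~500 quads/year, as quoted (GJ)
th_gj_per_g = 80.0        # fission energy per gram of thorium

tons_per_year = world_gj / th_gj_per_g / 1e6   # grams -> metric tons
print(f"thorium needed: {tons_per_year:,.0f} t/yr")            # 6,250 t

print(f"2 Mt reserve lasts ~{2e6 / tons_per_year:.0f} years")  # ~320

# Mining average crust at 12 ppm thorium:
rock_mt_per_year = tons_per_year / 12e-6 / 1e6
print(f"rock mined: ~{rock_mt_per_year:,.0f} Mt/yr")           # ~520 Mt

# Electricity from 26 g of thorium, assuming ~45% heat-to-electric
# conversion (an assumption made here; the text states only the result).
kwh = 26 * th_gj_per_g * 1e9 * 0.45 / 3.6e6
print(f"26 g thorium -> ~{kwh:,.0f} kWh")                      # ~260,000
```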
The LFTR is a
potential evolutionary bridge to commercialization of fast reactors; its
dissolved fuel, pool type design (where the fuel is immersed in the coolant),
and molten salt coolant are also features of the molten salt fast reactor
design (see below).
Challenges to the LFTR’s commercialization are that
it requires careful monitoring and filtering to remove fission products from
the core molten salt. Processes to do this have not been demonstrated at the
commercial scale; more chemical engineering is needed. Also, it is not clear
how well the materials of the pipes, pumps etc. will hold up over long-term
exposure to the radiation and chemical byproducts produced within the core
salt.
Sodium cooled fast reactor
The sodium cooled fast reactor (SFR) uses
unmoderated, i.e. fast, neutrons and liquid sodium metal as a coolant. Since
sodium is liquid at the reactor’s operating temperature, the reactor does not
need to be pressurized. The U.S.
Experimental Breeder Reactor II (EBR-II), which operated at the Idaho National
Laboratory from 1965 to 1994, was of this type.
One specific new design in this category, the traveling wave reactor
(TWR) is under development by the private company TerraPower. The TWR will use depleted uranium for
fuel. The “traveling wave” concept has
been compared to a burning cigar. It can
be thought of as if the reaction is contained in a vessel and proceeds from one
edge to the other. On one end is spent
fuel. In the center, where the reactions
take place, is fissile material, and on the other end is unused fuel. The area of nuclear reaction migrates through
the material over a period of 20 years. In
the current design, the reaction proceeds from the center of a compartment
filled with “fuel pins” that contain the fuel material, depleted uranium (which
has a low percentage of U-235). The reaction
is started with fissile material in the center. Neutrons from the critical
reaction in the center are absorbed by U-238 in the surrounding pins,
converting the U-238 to plutonium-239, which fissions and carries the reaction
out radially from the center.
Sodium has a high heat capacity, and excellent
thermal conductivity, which helps make the SFR thermally efficient. It also is efficient at burning fissionable
material, and can make use of unenriched uranium. For example, the U.S. government owns 500,000
tons of U-238 left over from enrichment plants making fuel for LWRs. This could
fuel SFR plants for 500 years. Known
uranium reserves are much greater than this, and if the uranium in seawater is
also considered, the fuel supply is inexhaustible.
Challenges remain.
The high irradiation environment of fast reactors, and the high
temperature at which the SFR will operate can cause metal embrittlement and
potential structural failure over time. Materials better able to resist
irradiation must be developed and approved.
Despite the challenges, the benefits of the SFR
approach have led France, Japan, and South Korea to select the SFR as their
main focus for nuclear research, development, and commercialization.
Lead cooled fast reactor
The lead cooled fast reactor (LFR) is similar in
concept to the SFR, except that it uses liquid lead as a coolant rather than
liquid sodium. Lead is much easier to
handle, since it does not burn when exposed to air or water. LFRs run at higher
temperatures than SFRs and have lower core irradiation. The former Soviet Union
first developed LFRs and has significant experience in building and operating
them. An LFR would need to be refueled
every 7 to 8 years, instead of every two years as for LWRs. Smaller LFR designs
could go for up to 20 years before refueling.
All LFR designs are passively safe and thermally efficient. They could essentially be sold as nuclear
batteries, shipped on-site self-contained, ready to run for 10 years with
little maintenance.
Challenges include the fact that lead, while it is
chemically inert, gives off significant amounts of radiation for a long period
of time after it has been used to cool a LFR.
Unlike sodium or fluoride salts, it must be disposed of as low-level
radioactive waste after use. Also, lead
is quite dense, putting extra load on pumps and fuel handling equipment. For this reason, LFRs are likely to be small
– less than 100 MW in power output. Lead
coolants are also corrosive at high temperatures. This problem can be mitigated, but it
requires precise control and monitoring, which has yet to be proven
economically viable.
Molten salt fast reactor
The molten salt fast reactor (MSFR) uses either
fuel dissolved in molten salt or solid fuel, with a salt coolant. Other than its use of fast
neutrons, it is similar in design to salt cooled thermal reactors such as the LFTR
(see above). MSFRs have higher thermal efficiencies than other designs, approaching 55%. The ability of this
design to employ liquid fuel greatly simplifies the fuel cycle: the fuel does
not need to be machined and fabricated, new fuel can be added continuously
while the reactor is running, and transuranics and fission products can be
continuously removed.
Challenges are similar to those facing the LFTR
design, and include development of efficient processes to remove reaction
products from the coolant and to reprocess the fuel on site. Due to their higher operating temperatures,
materials that can resist corrosion sufficiently in thermal reactors like the
LFTR might not work with the MSFR.
However, the MSFR appeals to entrepreneurs because
of its inherent safety and simplicity and lack of a need for major fuel
fabrication steps. It can burn thorium. China appears to be rapidly scaling up its
MSFR research and development efforts.
Other fission designs
While the Gen IV fission reactor designs listed above appear
to be the most promising based on qualities including passive safety,
thermal efficiency, operation at atmospheric pressure, inclusion of at least
some off-the-shelf technology components, and adaptability to modular
construction, there are other designs that have certain advantages. These, also discussed in the referenced
Breakthrough Institute report, include high temperature gas reactors,
super-critical water reactors, and gas-cooled fast reactors.
Fusion reactor
Fusion, where hydrogen atoms are pressed together
until they fuse, giving off huge amounts of energy, is what powers the
sun. However, the sun has massive
gravity which essentially pulls atoms together until they fuse. This has not been accomplished on earth; no
one has been able to sustain a fusion reaction that releases more energy than
is consumed in the effort to heat and pressurize atoms until they fuse.
However, fusion presents the possibility of an
energy source with no waste, no weapons-type materials, and abundant fuel.
Because fusion requires intense energy to contain
and heat atoms sufficiently to cause the fusion reaction, there is no risk of
runaway heating. If power should be
lost, heat production would stop immediately; there are no lingering
radioactive fission products that would continue to produce heat in the event
of a cooling failure.
If and when fusion is demonstrated, it will be a
potentially game-changing event. It has
been pursued for decades, with only incremental progress. Nevertheless its possibilities continue to
drive research. Major efforts to develop
workable fusion continue, e.g. the International Thermonuclear Experimental
Reactor (ITER) in France.[33]
Could nuclear grow fast enough?
Regardless of whether fusion ever comes to be, it is
possible that some of the Gen IV fission designs will be commercially feasible
within the next decade or so, if not in the U.S., then elsewhere. A relevant question is whether nuclear power
could conceivably grow fast enough to play a significant role in ending the
world’s dependence on fossil fuels in time to avoid the worst impacts of global
warming.
France’s history suggests that there is at least an
outside possibility that nuclear power could grow fast enough to supply much of
the world’s power by 2050. Between 1980
and 1990, production of electricity from nuclear power in France grew from 61
TWh/y to 314 TWh/y, an average growth rate of about 25 TWh per year.[34] Based on 2014 GDP, the U.S. economy is about
six times the size of France’s. So, it
seems feasible that, with a similar effort, the U.S. could grow nuclear power
at 150 TWh per year. If the U.S. did
this, it would be producing an additional 5100 TWh/y from nuclear by 2050. If it kept all its current reactors on line,
the total from nuclear power would be close to 6000 TWh/y, corresponding to
about 20 quads. Of course, many factors
that tend to slow growth of all power sources, especially siting issues, would
have to be managed for this level of growth to happen.
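The scaling argument can be written out as arithmetic. The ~800 TWh/year figure for current U.S. nuclear output is an assumption supplied here (roughly the recent U.S. total) and is labeled as such in the code.

```python
# The France-scaled nuclear buildout sketched above.
france_rate = (314 - 61) / 10   # TWh/yr added per year, 1980-1990 (~25)
us_rate = 6 * france_rate       # scaled by ~6x GDP -> ~150 TWh/yr

years = 2050 - 2016             # ~34 years of buildout
new_twh = us_rate * years       # ~5100-5200 TWh/yr of new nuclear by 2050

current_us = 800                # assumed current U.S. nuclear output, TWh/yr
total_twh = new_twh + current_us        # ~5950, "close to 6000"
quads = total_twh * 3.412e12 / 1e15     # 1 TWh = 3.412e12 Btu
print(f"total: {total_twh:.0f} TWh/yr = {quads:.0f} quads")  # ~20 quads
```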
As discussed above, solar and wind energy appear to
be currently growing at a rate that could be linear, not exponential. In the U.S., a projection of the apparent
current linear growth rate of both solar and wind to 2050 leads to
production of a total of about 5 quads of energy. If their growth could be
doubled, they might supply 10 quads by 2050.
If the U.S. grew nuclear power at twice the rate that France did
(prorated to U.S. GDP), nuclear might supply 40 quads by 2050. If other efficiency measures were taken, the
U.S. could be largely powered by carbon-free energy by 2050. Powering all the world’s nations with
carbon-free energy could be a taller order, if, as discussed above, the
developing world’s demand for electricity rises dramatically as expected.
Nevertheless, if Gen IV reactors could be built in a modular fashion, much faster
growth could be possible, and a dystopian future, marred by floods and droughts
and other impacts of out-of-control global warming driven by promiscuous
burning of fossil fuels could be avoided.
An up-to-date perspective on the risks of radiation would facilitate the growth of nuclear
Fears about and prejudices against nuclear power are
largely based on old information. A
closer look at current data reduces many of these apprehensions considerably. There
are concerns that remain. But it is
increasingly clear that the risks of nuclear power can be handled with
up-to-date technology and up-to-date regulations.
Radiation
Nuclear power unlocks the energy of atomic nuclei
and in the process releases ionizing radiation.
Such radiation, capable of knocking electrons from atoms, has long been
recognized as dangerous because it can destroy tissues, including basic
cellular machinery such as DNA. High
levels of radiation are deadly, and nuclear power plants, X-ray machines, and
any technology that involves ionizing radiation must be shielded to prevent
exposures to living systems and also to materials that could be damaged by
having electrons knocked off of some of their atoms. In theory, even a dose of radiation intense
enough to dislodge only one electron from one atom in one strand of DNA could
change that DNA enough to foul up its replication and cause cancer.
The notion that any radiation is harmful is the
basis of the linear, no-threshold (LNT) model, which has been widely used to
estimate actual cancer risks from low-level radiation and underpins
international guidelines for radiation protection. Basing their concerns on the LNT model, many
have fought nuclear power for decades and continue to do so. Even the low levels of radiation that emanate
from nuclear power plants’ normal operations and waste management systems have
been considered unacceptable by nuclear power opponents.
But increasingly, Americans and others in the industrialized
world are exposed to greater amounts of low-level radiation. In 1990, the average U.S. resident received
an annual dose of 360 millirems (mrem) of radiation. About 300 mrem of the total was from natural
sources. About 200 mrem of this natural-source
average was from radon gas, which is released from the gradual decay of
radioactive isotopes in soil and rocks and is present in varying concentrations
in the air virtually everywhere. About
30 mrem of the natural sources total was from cosmic rays, and another 30 mrem
was from radioactive minerals other than radon naturally present in soils and
rocks. [35] For example, natural gas used in a home adds
about 9 mrem per year. About 40 mrem of
natural background radiation was from substances in food and water such as
potassium, which has the radioactive isotope potassium-40. The other 60 mrem Americans received on
average in 1990 was from artificial sources.
Of that, about 40 mrem was from X-rays.
Another 10 or 15 mrem was from nuclear medicine, and another 10 was from
consumer products, including building materials. About 0.1 mrem of the artificial total was
from nuclear power plants. Worldwide,
the average annual per-capita effective dose from medicine (about one-fifth of
the total radiation received from all sources) approximately doubled from
around 1990 through 2007. [36]
Natural exposures have not changed. What has changed is that people in the
industrialized world by their own choice (albeit, perhaps unknowingly) have
taken on additional exposures through medical procedures. These include an average per person of about
150 mrem from CT scans, 75 mrem from nuclear medicine, 45 mrem from
fluoroscopy, and 30 mrem from X-rays.[37] Clearly, people have decided that the risks
of radiation from these procedures, whatever they may be, are outweighed by the
potential benefits; it’s better to have a CT scan (about 1000 mrem each) than a
brain bleed, better to have an upper and lower GI series (1400 mrem) than
undiagnosed digestive problems, better to have fluoroscopy than unguided
surgery.
Adding to the conviction that willing acceptance of medical procedures involving radiation makes sense, data on background radiation point to the conclusion that low levels of radiation are indeed not worth worrying about. Background
radiation from natural sources varies widely throughout the world. Radon levels in the air are a function of the
underlying bedrock in a region. The
intensity of cosmic rays is correlated with altitude. So, for example, a person living in Colorado
is exposed to background radiation amounting to 400 mrem per year, while a
Florida resident’s typical background exposure is about 160 mrem per year. Yet people in Colorado have one of the lowest
average incidences of cancer in the U.S.[38] There are areas of the world, such as the Guarapari coastal region of Brazil; Ramsar, Iran; and Kerala, India, where background radiation exposures are as high as 13,000 mrem per year. Long-term epidemiological studies of these
populations have not shown any significant cancer risk from these background
exposures.[39]
One inadvertent experiment is particularly compelling. Thirty years ago, about 200 buildings were constructed in Taiwan from steel contaminated, unknown to anyone at the time, with radioactive cobalt. Over the years, the approximately 10,000 residents of these buildings were exposed to an average radiation dose of about 10,500 mrem per year. Yet a 2006 study of these people found fewer cancer cases than in the general public: 95 versus the expected 115.[40] There is actually some reason to think that a
certain amount of radiation may be healthful, based on laboratory tests with
animals and bacteria.[41],[42]
The possible healthful effects of radiation, known as hormesis, may be due to
low exposures stimulating the immune system and causing the release of
protective antioxidants.[43]
Clearly people aren’t deciding where to live based
on background radiation, and they aren’t typically shunning medical procedures
involving radiation, which are growing more common. If there is an increase in cancer from low
levels of radiation, it is not showing up in health statistics and there is no
evidence that people are changing their behavior because of it.
Nuclear waste
The problem of what to do with nuclear waste is
considered by some to be intractable; “What about the waste?” is a question sometimes
clearly intended to stop conversation.
But in light of the lack of concern about low-level radiation demonstrated by people’s behavior, the waste problem appears wholly manageable. It becomes still more manageable if Gen IV reactors become the new norm of nuclear power, as these plants will produce far less waste, and what they do produce will contain far fewer long-lived transuranics.
Current nuclear reactors do produce highly radioactive waste as the uranium in their fuel rods fissions and absorbs neutrons. This waste is initially physically hot. It includes “transuranic” elements, such as Americium-241 (used in smoke detectors), which have atomic weights greater than uranium’s. Also present are fission products, atomic fragments and elements with atomic weights less than uranium’s, such as iodine. Especially problematic in the waste are Iodine-131, Plutonium-241, Strontium-90, and Cesium-137. These have half-lives (the time it takes for half of a given quantity of the isotope to decay) of 8 days, 14.4 years, 28.8 years, and 30.1 years, respectively.[44] While these elements are decaying, they emit radiation in various forms that is, initially, intense.
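These half-lives translate directly into decay arithmetic: the fraction of an isotope remaining after a time t is (1/2)^(t/half-life). A minimal sketch, applied to the 90-year storage period discussed below:

    # Fraction of an isotope remaining after t years, given its half-life.
    def fraction_remaining(t_years, half_life_years):
        return 0.5 ** (t_years / half_life_years)

    # Half-lives cited above, in years (Iodine-131's 8 days converted).
    isotopes = {
        "Iodine-131": 8 / 365.25,
        "Plutonium-241": 14.4,
        "Strontium-90": 28.8,
        "Cesium-137": 30.1,
    }

    for name, half_life in isotopes.items():
        print(f"{name}: {fraction_remaining(90, half_life):.2%} left after 90 years")
    # Cesium-137, the longest-lived of the four, is down to about 13% of its
    # original quantity after 90 years; Iodine-131 is effectively gone in weeks.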
There are also long-lived elements in the waste, such as Plutonium-239, with a half-life of 24,100 years. Plutonium and other transuranic isotopes are created during reactor operation, when Uranium-238 in the fuel captures neutrons; some of these isotopes will remain radioactive for thousands of years. It is the long lifetimes of some of these elements that often worry people the most, and that lead to despair about the idea of having to construct a haven for wastes that will be safe for time spans beyond our ken. But an important aspect of these long-lived wastes is frequently missed: if an element has a long half-life, it is, by definition, not very radioactive. A sepulcher of monumental proportions and timeless durability is not necessary to hold it. As noted in a report by David Kosson and Charles Powers, if used fuel is allowed to sit in safe storage for 90 years, much of the heat and radioactivity decays away.[45] At that point it is much more benign and easier to manage. The same cannot be said of some other industrial wastes, including non-radioactive but highly toxic heavy metals such as cadmium, mercury, and lead; these will never decay away.
But clearly there must be a place to put nuclear
waste that is isolated from the environment for the next few hundred years at
least. As is well-known, the U.S. hasn’t
settled on a location for the long-term storage of nuclear waste. The site at Yucca Mountain appears unlikely
to ever be approved, due in large part to opposition from the state of
Nevada. In the meantime, the nation’s
nuclear plants have developed an interim solution. They put their spent fuel and associated wastes in pools of water to cool for a few years, and then pack them in dry concrete-and-steel casks stored on site.
The fact that these interim storage sites have been successfully storing
the high-level waste for several decades, during which time much of the
original radioactivity has dissipated, has led to a proposal that was outlined
in the referenced Kosson & Powers report.
In that report, the authors noted that much of the
problem of nuclear waste storage started with an anachronistic policy enacted
in 1982. The policy essentially stipulated that used fuel should be disposed of
in a geologic repository as soon as one becomes available. However, as noted above, after about 90
years, much of the original heat and radioactivity of the waste has decayed
away, which reduces the size, complexity, and cost of disposal in a long-term
repository, and also buys time. In theory, used fuel could even be reprocessed
at some point in the next century if the engineering challenges can be
overcome.
Kosson and Powers suggested that four regional
used-fuel storage facilities be set up to act as transfer stations. They could be located so as to provide
geographic equity and allow relocation of the backlog of used fuel to locations
where it could be stored safely, securely, and efficiently for up to 90 years
before reprocessing or permanent disposal.
A fund established by the Nuclear Waste Policy Act of 1982, financed by a fee electric utilities pay on nuclear-generated electricity and now totaling approximately $25 billion, could in theory pay for these sites. The authors, recognizing that public
opposition is likely to be an issue, recommended that informed consent, equity,
and fair compensation should be the bases for deciding temporary storage sites.
One possibility would be to use a "reverse auction" to enable
prospective host communities to win regional support for the sites. With this approach, the federal government
might allot, say, a billion dollars, and request bids from interested
communities detailing how they would spend it to address local impacts and
statewide concerns associated with a proposed facility.
At the same time, Kosson and Powers pointed out, the
U.S. should continue to seek at least one permanent long-term waste storage
site, which will eventually be needed.
The entire process should be done deliberately and transparently, with
multiple layers of protection and credible standards.
There are numerous sites that would be suitable for
long-term waste storage. In fact, there
is one currently operating, the Waste Isolation Pilot Plant near Carlsbad,
N.M., which handles military radioactive waste.
Although limited to military waste by law, this facility, in a deep salt
bed, has room for much more waste.
The EPA, basing its rules on the LNT model, has a standard requiring nuclear sites to ensure that nearby residents are exposed to no more than 15 mrem per year. But despite hundreds of studies, no convincing evidence has been obtained of a correlation between the incidence of cancer and exposures to radiation of less than 100 millisieverts (10,000 mrem).[46] In light of this emerging knowledge about the lack of harm from low-level radiation, EPA’s 15 mrem level, equivalent to 50% of the radiation exposure from a mammogram, 5% of the average U.S. background radiation, and 1.5% of the radiation from a CT scan, is extremely conservative. This level, which may in fact be unrealistically low, could nevertheless be readily achieved by any rationally designed waste repository. Such a site should present
no worries to an informed public.
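The unit arithmetic behind these comparisons is simple: 1 millisievert equals 100 mrem. A brief sketch (the ~30 mrem mammogram dose is inferred from the 50% figure above; the other doses are as cited):

    # Dose units: 1 sievert = 100 rem, so 1 millisievert (mSv) = 100 mrem.
    def mrem_to_msv(mrem):
        return mrem / 100.0

    EPA_LIMIT_MREM = 15      # EPA annual limit for residents near a nuclear site
    NO_EFFECT_MREM = 10_000  # 100 mSv, below which no cancer link has been shown

    print(mrem_to_msv(EPA_LIMIT_MREM))               # 0.15 (mSv)
    print(f"{EPA_LIMIT_MREM / NO_EFFECT_MREM:.2%}")  # 0.15% of that threshold

    for source, dose_mrem in [("mammogram", 30),
                              ("average U.S. background", 300),
                              ("CT scan", 1000)]:
        print(f"{source}: {EPA_LIMIT_MREM / dose_mrem:.1%}")  # 50%, 5%, 1.5%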
Low-level radiation is nevertheless radiation, and
thus could still present significant health risks. Although, as discussed above, it may not be
as risky as once thought, getting a clear picture of what risks it does present
is not easy. Estimating lifetime exposure is difficult, as is distinguishing radiation-related cancers from other cancers. Currently, a “million person” study headed by Dr. John Boice[47] is underway to try to get a clearer picture, but the data are not yet fully analyzed.[48] The U.S. EPA has also been considering the
issue for several years.[49]
Accidents
As is the case with every large-scale industrial
process, accidents happen. Three nuclear accidents have been burned into the consciousness of virtually everyone concerned about the issue: Three Mile Island, Chernobyl, and Fukushima. Yet ironically, as frightening as these events have been, the passage of time and the gathering of long-term data have further demonstrated that low-level radiation is not as dangerous as had been thought, and that basing policies and regulations for nuclear safety on the LNT model, insofar as low-level radiation is concerned, is very likely an overreaction.
The scientist John Gofman, credited as the father of
the LNT model, predicted in 1979 that 333 excess cancer or leukemia deaths
would result from that year’s Three Mile Island accident. But as of 2013, no excess mortality in the
exposed population had been observed.[50] Three
months after the 1986 Chernobyl disaster, Gofman predicted it would eventually
cause 475,000 fatal cancers plus about an equal number of non-fatal cases.[51] However, as of 2008, the only proven death toll from the Chernobyl accident was about 50 people, including 28 emergency workers who died almost immediately from radiation sickness and about 15 who died of thyroid cancer (a cancer that is not usually fatal). In addition, about 6,000 children in Russia, Belarus, and Ukraine suffered thyroid cancers that were successfully treated. There was no persuasive evidence of any other
health effect in the general population that could be attributed to radiation
exposure.[52] A full report on the long-term health effects
from the accident, prepared by a team of experts from seven United Nations
organizations that included representatives from the governments of Belarus,
Ukraine and the Russian Federation, was published by the World Health
Organization in 2006. It projected that
eventually 4,000 people in the exposed population may die from cancer related
to the Chernobyl accident.[53] In that population, about 100,000 people will die of cancer anyway; 4,000 additional cancer deaths, if they occur, would be within the level of natural statistical variation and not readily distinguishable as an impact.
The Chernobyl accident included a fire and
explosions that spread radioactive debris across a wide region. Initially some plants and animals, especially
pine trees, died from the radiation.
Now, although some places in the exposed region have radiation not much higher than normal background levels,[54] hot spots remain with levels of radiation that, if you stayed there for a year, would cause an exposure of 35,000 mrem, roughly 100 times typical background levels. People were required to evacuate an area half
the size of Yellowstone National Park around the stricken reactor. Today, as detailed by Mary Mycio[55]
and others, this exclusion zone has become a wildlife preserve where birds and
mammals, many of them rare and endangered, thrive. Large mammals in the zone, not found much
elsewhere in the region, include boars, red deer (elk), roe deer, European
bison, moose, wolves, foxes, beavers, and Przewalski’s horses, a species of wild horse brought back from the brink of extinction. Clearly, wildlife has benefited more from
the departure of humans than it has suffered from the relatively high
background radiation that remains.
The 2011 accident at Fukushima, Japan, is the most recent. It revived anti-nuclear sentiment in many places, such as Germany,
leading to accelerated phase-outs of nuclear reactors. But once again, the aftermath of Fukushima is
showing that low-level radiation is not the danger it was thought to be.
The accident featured meltdowns of the cores of three of the six reactors at the Daiichi nuclear power plant site near Fukushima, and a partial melting of spent fuel rods from a fourth reactor. The
intense heat caused a build-up of steam pressure. The steam reacted with the zirconium alloy cladding the fuel rods, producing hydrogen gas, which added to the pressure. To prevent an explosion, the pressure was vented, releasing some hydrogen along with radioactive fission products. Nevertheless, hydrogen explosions occurred at two of the reactors, blowing apart their secondary containment buildings. Radioactive fission products also leaked from the third reactor. All in all, the accident released about 18% of the “iodine-131-equivalent” radioactivity released by Chernobyl.[56] More radiation was not released because, unlike at Chernobyl, the primary reactor containment structures were not destroyed; the radiation escaped in three major spikes within several days after the accident began. (Chernobyl had a flimsy, essentially non-existent containment structure.)
The Fukushima nuclear accident occurred in the
context of a huge natural disaster that included an earthquake and a tsunami. A year after the disaster, 19,000
people had died and 27,000 had been injured.
Four hundred thousand buildings were destroyed and nearly 700,000 more
were damaged.[57]
However, none of these casualties were caused by
radiation, and the number of additional cancer cases in coming years is
expected to be so low as to be undetectable.[58] In a sense, 1,600 people did die from the accident, not from radiation but from the stress of forced relocations, including the highly risky evacuations of hospital intensive care units. An area 20 kilometers in radius around the plant was evacuated. Had the evacuees stayed home, their cumulative exposure over four years in the most intensely radioactive areas would have been about 70 millisieverts (about 7,000 mrem), which translates to about 1,800 mrem per year, roughly six times typical background levels. But most residents in that zone would have received far less, on the order of 400 mrem per year.[59]
Each of these three major accidents was caused by a failure of cooling that allowed the reactor cores to overheat, leading to releases of pressure and radiation, and to explosions.
The Three Mile Island accident was driven by human error: night-shift operators made a series of misjudgments and turned a minor problem, the failure of a cooling pump, into a major one by shutting off an emergency cooling pump. This allowed the core to overheat and eventually melt. The operators’ confusion at the time was fed by a lack of proper monitoring systems and by malfunctioning alarm signals.
The Chernobyl disaster was caused by an unauthorized experiment. The operators were curious whether the reactor’s spinning turbine generators could deliver enough electricity to shut the plant down safely if outside power were lost. During a scheduled shutdown, they turned off an emergency cooling system to conserve energy. Next, to increase the power output as the plant’s production wound down, they withdrew control rods, which absorb neutrons and normally keep the fission reaction under control. More errors were made, and the reactor’s operation rapidly destabilized, fuel disintegrated, and huge steam explosions occurred, throwing out burning blocks of graphite and spewing a plume of radioactive debris that rose 10 km into the air.
The Fukushima accident was the result of a hammering
by natural forces. First, a magnitude 9
earthquake, the largest ever to strike Japan and one of the largest on record anywhere, occurred at sea 95 miles north of the plant. (The earthquake magnitude scale is logarithmic; a magnitude 9 earthquake is roughly 100 times stronger, in ground-motion amplitude, than the largest earthquakes ever to strike the continental U.S., magnitude 7 quakes in California.) The reactors at the plant withstood this quake and
immediately shut down as they were designed to do when the quake caused a loss
of electric power. The plant’s emergency
diesel generators then kicked in, also as designed. But not long afterward a 45-foot tsunami smashed over the plant’s approximately 30-foot protective wall, flooding the generators and stalling them out. Limited battery back-up power was insufficient. At that
point, with no power to run the cooling pumps to keep the reactor cores from
overheating, the situation rapidly got out of control, with explosions and
releases of radiation soon following.
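The parenthetical magnitude comparison above can be made concrete. A short sketch; the factor of 100 refers to ground-motion amplitude, while released energy grows by about 10^1.5 per magnitude unit, so in energy terms the gap is closer to a factor of 1,000:

    # The moment magnitude scale is logarithmic: each whole step means 10x
    # the ground-motion amplitude and about 10**1.5 (~31.6x) the energy.
    def amplitude_ratio(m1, m2):
        return 10.0 ** (m1 - m2)

    def energy_ratio(m1, m2):
        return 10.0 ** (1.5 * (m1 - m2))

    print(amplitude_ratio(9, 7))      # 100.0 -- the "100 times" above
    print(round(energy_ratio(9, 7)))  # 1000  -- the gap in energy terms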
The Chernobyl reactor was of a type now used only in the former Soviet Union.
It had essentially no containment structure to hold radiation released
by an accident, and its fission reaction was cooled by water and moderated by
graphite, which is combustible. This combination is dangerous. Water absorbs some of the neutrons created during fission and so helps hold the fission reaction in check.[60] But if the water turns to steam, it cannot absorb as many neutrons, and the fission reaction speeds up. Because of this “positive void coefficient,” as the reactor overheated it turned water into steam, which accelerated the fission rate, which turned more water into steam, and so on; this positive feedback loop drove the reaction out of control.
The reactors at all three plants were so-called
Generation II designs. All such reactors share a common feature: they are virtually impossible to cool without
continuous electric power to run their cooling systems. Thus, if they have to shut down and cannot
provide their own power, they must get it from somewhere else. If the grid goes down, and their emergency
power supply systems fail, they are in trouble.
As discussed above, the Generation III and III+ plants being built today have passive cooling systems that, if the electric power
supply fails, terminate the fission reaction and keep the core cool so it won’t
melt down.
Terrorism and weapons proliferation
There are better targets for terrorists than nuclear plants, and nuclear plants, particularly Gen IV plants, are not good sources of materials for making weapons.
Today’s industrial society involves the routine processing and shipment of many materials capable of life-threatening mishaps. Trains regularly haul tank cars of chlorine gas, vinyl chloride, sulfuric acid, and other materials that could kill hundreds of people if attacked and breached at an opportune location by a terrorist. Chemical plants are arguably better targets for terrorists than nuclear plants, which are hardened and fortified with thick concrete containment structures sufficient to contain radiation.
Consider, for example, that over six thousand people died from the
Bhopal chemical plant accident in India while about 56 died from
Chernobyl.
And it should be noted that many technologies and
materials we encounter daily are quite dangerous. Gasoline is explosive; thousands die in auto
accidents yearly because of fires and explosions. Natural gas periodically blows up a
building. Grid electricity kills people
and starts fires. Hydrogen, thought by
some to be the fuel of the future, weakens pipes, can be ignited with a stray
spark, and burns with an invisible flame.
What about nuclear bombs? The idea that more nuclear power plants means
that more nuclear material will be available for making bombs has a certain
logic. But the capability of creating
bomb-worthy materials does not necessarily translate to actually making bombs.
For example, nitrates are key fertilizers; nitrogen is a major plant food and often a limiting nutrient. Nitrates are also used to make explosives such as TNT. Nitrates are made industrially in processes that start with the famous Haber-Bosch process, which reacts hydrogen and nitrogen to make ammonia. But we don’t view fertilizer plants as
terrorist magnets.
It is similar with nuclear technology: some 31 nations operate nuclear power plants, but only seven of them have nuclear weapons. Just about any nuclear technology, with
enough knowledge and determination, can be modified to produce weapons-grade
material. But nuclear weapons and
nuclear power plants aren’t all that similar.
If a nation wanted to covertly acquire weapons-grade nuclear material,
developing a civilian nuclear energy program would be a costly and inefficient
way to do it. With adequate controls and
inspections, nuclear power generation will not lead to nuclear weapons
production.
Moreover, nuclear plants are now contributing to the
destruction of nuclear weapons. One way this is happening is through a joint U.S.-Russia program that began in 1994. With this approach, weapons-grade material, which is highly enriched (95% U-235), is blended with low-enriched uranium to make a fuel suitable for reactors. Currently about 20% of the electricity Americans use is produced by nuclear power, and about half of that is fueled by weapons-grade material. This means that about 10% of the electricity Americans use is fueled by Russian missiles and bombs.[61]
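The blending arithmetic is a simple mass balance on U-235. In the sketch below, the 95% enrichment of the weapons material is from the text, while the ~1.5% blendstock and ~4.4% reactor-fuel enrichments are assumed typical values, not program specifics:

    # Mass balance: a fraction x of HEU at enrichment e_heu, mixed with
    # (1 - x) of blendstock at e_blend, yields fuel at enrichment e_fuel:
    #     x * e_heu + (1 - x) * e_blend = e_fuel
    def heu_fraction(e_heu, e_blend, e_fuel):
        return (e_fuel - e_blend) / (e_heu - e_blend)

    x = heu_fraction(e_heu=0.95, e_blend=0.015, e_fuel=0.044)
    print(f"HEU fraction of the blend: {x:.3f}")             # ~0.031
    print(f"kg of reactor fuel per kg of HEU: {1 / x:.0f}")  # ~32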
Fuel
The current generation of nuclear plants uses a
once-through fuel cycle that harvests the energy potential of only about 1% of
the fuel. The rest of this potential
resides in the waste. Nevertheless, uranium ore and the nuclear fuel derived from it remain relatively inexpensive and in plentiful supply, and no shortages loom in the foreseeable future.[62] Should
ore of the quality being mined today become scarce, there is much more of lower
quality that could be used; cutting the grade of uranium ore in half probably
increases the available supply by a factor of eight.[63]
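The factor-of-eight rule of thumb implies a power-law relationship between ore grade and recoverable supply: if halving the grade multiplies the supply by eight, supply scales as the cube of the grade reduction. A minimal sketch under that assumption:

    # Assumed power law implied by the cited rule of thumb:
    #     supply multiplier = (grade reduction factor) ** 3, since 2**3 = 8.
    def supply_multiplier(grade_reduction_factor, exponent=3.0):
        return grade_reduction_factor ** exponent

    print(supply_multiplier(2))  # 8.0  -- halving the grade
    print(supply_multiplier(4))  # 64.0 -- quartering it, if the law held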
More importantly, some of the Gen IV designs use
thorium, which, as discussed above, is more abundant than uranium. Further, many of the Gen IV designs are more
efficient in their use of fuel, or are capable of creating more fissionable
material than they consume. If some of
the new designs take hold, it should become feasible to extract uranium from
seawater, making it possible to run nuclear plants for several billion years.[64]
Summary and Conclusion
There is almost certainly enough fossil fuel remaining that, if it is burned, the world’s climate will be pushed into a new regime that will threaten human civilization and condemn future generations to misery and strife.
Clearly fossil fuel combustion must be
curtailed. Putting a price on the carbon
content of fuels is arguably the best way to do this. Pricing carbon will be facilitated to the
extent that cost-competitive, low-carbon alternatives to fossil fuels exist and
to the degree that significant improvements in energy efficiency take place.
It is likely, based on the most recent electricity
production data, that current-generation solar and wind are not the
revolutionary technologies some envision; they are not growing fast enough to
wean the world from fossil fuels in time to prevent the worst impacts of global
warming. It is important that research
and development continue with these renewable sources, that better electricity storage
technology be developed, and that energy efficiency be increased as much as
possible.
It is also important that the potential of nuclear power be neither ignored nor discounted on the basis of arguments rooted in old technology and its associated problems. Dealing with global warming is an
“all hands on deck” situation. To cope
with it, the world almost certainly needs the power of the atom.
Although a more realistic view of the threat of low-level radiation, possibly leading to a moderation of regulations on radiation safety and waste management, would likely ease a transition to next-generation (i.e., Generation IV) nuclear power, such a view and the associated regulatory changes are not necessary for this transition. Nor are such changes by themselves likely to bring about a new growth phase of nuclear power. Regulations and health concerns are not the prime problem; Gen IV plants should be able to meet existing regulations more easily than today’s Gen II and III plants.
What is necessary is that Gen IV plants become cheap
enough to compete successfully with coal and natural gas. With a greater focus on the technological
challenges that remain, including use of computer-aided modeling and design
capabilities that were non-existent when many of the Gen IV designs were first
conceived, much progress could be made.
There is reason for optimism. Rather than dwelling on the problems of
the existing generation of light-water reactors, the U.S. and other nations need
to take a fresh look at next-generation nuclear technology and consider
implementing policies that will encourage its development.
Michael
Aucott, Ph.D.
April 11, 2017
References
[1]
Hansen, James, et al., 2015, Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 °C global warming is highly dangerous, Atmos. Chem. Phys. Discuss., 15, 20059–20179
[2] T.
Covert, M. Greenstone, and C.R. Knittel, 2016, Will we ever stop using fossil
fuels?, Journal of Economic Perspectives,
30, 117-118
[4]
Jacobson, M., Delucchi, M., 2011, Providing all global energy with wind, water,
and solar power, Part I: Technologies, energy resources, quantities and areas
of infrastructure, and materials
[5] Jacobson, M., Delucchi, M., Cameron, M., Frew, B., 2015, Low-cost solution to the grid
reliability problem with 100% penetration of intermittent wind, water and solar
for all purposes. PNAS. 112:15060–5.
[7] BP
Statistical Review of World Energy June 2016,
http://www.bp.com/statisticalreview
, accessed December, 2016
[8]
95% confidence interval around the mean
[9]
Schmela, Michael, 2016, Global Market Outlook for Solar Power, 2016-2020,
[12]
From data display at Franklin Institute, Philadelphia, PA and other sources
[14]
Darling, Seth, and Douglas Sisterson, 2014, How
to Change Minds about Our Changing Climate, The Experiment, New York
[15]
Reichelstein, S., and M. Yorston, 2013, The prospects for cost competitive
solar PV power, Energy Policy, 55, 117-127
[16]
Neukirch, M., 2016, Protests against German electricity grid extension as a new social movement? A journey into the areas of conflict, Energy, Sustainability and Society, 6
[17] Williams
T., 2014, Pulling the Plug on an Energy Project in New England. Audubon Magazine
May 12, 2014 https://www.audubon.org/magazine/pulling-plug-energy-project-new-england,
accessed 10/16
[19] Hall
C, Prieto P., 2013, Spain’s Photovoltaic Revolution: The Energy Return on Investment, Springer
[20] Ferroni
F, Hopkirk R. 2016, Energy return on energy invested (ERoEI) for photovoltaic
solar systems in regions of moderate insolation. Energy Policy. 94:336–44, https://collapseofindustrialcivilization.files.wordpress.com/2016/05/ferroni-y-hopkirk-2016-energy-return-on-energy-invested-eroei-for-photo.pdf
[21]
Reichelstein, S. and M. Yorston, 2013, The prospects for cost competitive solar
PV power, Energy Policy, 55, 117-127
[22]
Markowitz, Maury, 2016, https://matter2energy.wordpress.com/2016/05/17/another-pv-eroei-debacle/
[23] Moomaw
W, Burgherr P, Heath G, Lenzen M, Nyboer J, Verbruggen A., 2011, Annex II:
Methodology, IPCC Special Report on Renewable Energy Sources and Climate Change
Mitigation, Table A.II.4, http://srren.ipcc-wg3.de/report/IPCC_SRREN_Annex_II.pdf accessed Jan. 15, 2016
[25]
Lovering, Jessica, Arthur Yip, Ted Nordhaus, 2016, Historical construction
costs of global nuclear power reactors, Energy Policy, 91, 371-382
[26] Nuclear Energy Institute, 2017, Milestones, https://www.nei.org/News-Media/News/Milestones, accessed 1/21/17
[29] BP
Statistical Review of World Energy June 2016,
http://www.bp.com/statisticalreview
, accessed December, 2016
[30]
Nordhaus, Ted, Jessica Lovering, and Michael Shellenberger, 2014, How to Make Nuclear Cheap: Safety,
Readiness, Modularity, and Efficiency, Breakthrough Institute, http://thebreakthrough.org/images/pdfs/Breakthrough_Institute_How_to_Make_Nuclear_Cheap.pdf
accessed 2/17/17
[31]
Hargraves, Robert, 2012, Thorium: Energy Cheaper than Coal, Hargraves
[32]
Hargraves, 2012
[34] IAEA,
Country Nuclear Power Profiles, 2015 Edition, France; http://www-pub.iaea.org/MTCD/Publications/PDF/CNPP2015_CD/countryprofiles/France/France.htm
accessed 1/16/16
[35]
Spiro, Thomas, Kathleen Purvis-Roberts, and William Stigliani, 2012, Chemistry of the Environment, Third Edition,
University Science Books
[36] Mettler, F., M. Bhargavan, K. Faulkner, D. Gilley, J. Gray, G. Ibbott, J. Lipoti, M. Mahesh, J. McCrohan, M. Stabin, B. Thomadsen, T. Yoshizumi, 2009, Radiologic and Nuclear Medicine Studies in the United States and Worldwide: Frequency, Radiation Dose, and Comparison with Other Radiation Sources—1950–2007, Radiology, 253(2), 520–531
[37]
Spiro, et al., 2012
[38] Fox,
Michael H., 2014, Why We Need Nuclear
Power: The Environmental Case, Oxford University Press, p. 180
[39]
Fox, Michael H., 2014, p. 180
[40]
Johnson, George, 2015, When Radiation Isn’t the Risk, NY Times, September 22,
2015
[41]
Wikipedia, 2016, Radiation Hormesis, https://en.wikipedia.org/wiki/Radiation_hormesis accessed 2/24/16
[42]
U.S. Department of Energy, 2011, Waste Isolation Pilot Plant, http://www.wipp.energy.gov/pr/2011/Low%20Background%20Radiation%20Experiment%20News%20Release.pdf
[43]
Johnson, George, 2015
[44]
Fox, Michael H., 2014, pp. 187-189
[45]
Kosson, David, and Charles Powers, 2008, The U.S. Nuclear Waste Issue – Solved,
Christian Science Monitor
[46]
Lynas, Mark, 2013, Nuclear 2.0: Why a Green Future Needs Nuclear Power, UIT Cambridge, p. 53
[48]
Lipoti, Jill, 2017, personal communication
[49]
For more information, see http://ncrponline.org/program-areas/pac-5-environmental-radiation-and-radioactive-waste-issues/
[52]
Lynas, Mark, 2013, referencing UNSCEAR, 2008, Sources and effects of ionizing radiation,
Volume II, www.unscear.org/docs/reports/2008/11-80076_Report_2008_Annex_D.pdf
[53]
World Health Organization, 2006, Health Effects of the Chernobyl Accident and
Special Health Care Programmes
[54]
Fox, Michael H., 2014, p. 224
[56]
Fox, Michael H., 2014, p. 231
[57]
Fox, Michael H., 2014, referencing Wikipedia, 2011, Tōhoku Earthquake and Tsunami, accessed 3/19/12, https://en.wikipedia.org/wiki/2011_T%C5%8Dhoku_earthquake_and_tsunami
[58]
Johnson, George, 2015
[59]
Johnson, George, 2015
[60]
Fox, Michael H., 2014, p. 216
[62]
Nordhaus, Ted, et al., 2014