Saturday, March 12, 2022

Key Points You Need to Know When Talking About Energy: Half-Life, Linear No-Threshold

In this installment, I want to discuss some aspects of nuclear pollution that are misunderstood.  There are two topics in particular that need to be understood to be able to properly evaluate the severity of a particular release of radioactive material.


Half-life

Definition

As I said earlier, nuclear decay happens at the atomic level because certain configurations of nuclei are more unstable than others and naturally break apart over time.  This is a random process at the level of the individual atom, but it is governed by such a strong law of averages that once you get up to macroscopic quantities of material--quantities large enough to have any impact on us--the rate of decay is a well-known constant for each material.

Half-life is the time after which any given atom of a certain isotope is 50% likely to have undergone spontaneous decay.  It's called half-life because if you start with a certain amount of a radioactive isotope, after this time half of it will be left, the other half having decayed into other elements.

Because the radiation emitted by a particular isotope is caused by this same decay, the half-life of a radioactive material is also the half-life of the radiation it emits.  A radioactive material will therefore become less radioactive over time as it decays.  The formula for figuring out how much less radiation will be emitted by an isotope after a certain amount of time is fairly simple.  First, calculate how many half-life intervals will have passed in that time, then raise the fraction 1/2 to that power.  For example, if some material has a half-life of 1 year and 3 years have elapsed, then the total radiation emitted by that material will be (1/2)^3, or 1/8th of the initial radiation.
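
The decay calculation from the paragraph above can be sketched in a couple of lines of Python (the one-year half-life and three-year elapsed time are just the example numbers from the text):

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of the original radiation still being emitted after
    `elapsed` time, given the isotope's `half_life` (same units)."""
    return 0.5 ** (elapsed / half_life)

# Example from the text: half-life of 1 year, 3 years elapsed.
print(remaining_fraction(3, 1))  # 0.125, i.e. 1/8th of the initial radiation
```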

A practical example

In more practical terms, you might want to know how long it will be before the radiation of a certain material falls to safe levels.  Let the Greek letter lambda, λ, stand for the half-life (physicists usually write the half-life as t½ and reserve λ for the related decay constant, but λ keeps the formula readable here).  If T is the total time elapsed, then the formula would be:

Safe Level = Initial Level * (0.5)^(T/λ)

Solving for the total time yields this formula:

Safe Level / Initial Level = (0.5)^(T/λ)

Log0.5(Safe Level / Initial Level) = T/λ

T = (Log0.5(Safe Level / Initial level)) * λ
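
A sketch of the solved formula in Python (math.log with an explicit base stands in for Log0.5; the function name is mine, not standard):

```python
import math

def time_to_reach(safe_level, initial_level, half_life):
    """Time for radiation to fall from `initial_level` to `safe_level`,
    in the same units as `half_life` (lambda in the formula above)."""
    # T = Log0.5(Safe Level / Initial Level) * lambda
    return math.log(safe_level / initial_level, 0.5) * half_life

# Sanity check: falling to exactly half takes exactly one half-life.
print(time_to_reach(7, 14, 12))  # 12.0
```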

The safety implication of this is that the length of time a particular isotope is problematic depends greatly on the half-life of this material--it is directly proportional.  And the half-life of different materials emitted by a nuclear incident varies *incredibly*.  Let's illustrate this with some real-world examples.

The Three Mile Island incident emitted most of its radioactive material in the form of radioactive Xenon, to the tune of an average dose of something like 14 μSv (that's "microsievert") per person over a large area (estimated to affect about 2 million people).  The average daily dose of radiation the ordinary person gets just from regular background radiation is about 8.5 μSv, so this was a modestly higher level of radiation than normal.  For how long, though, were those people exposed to higher levels of radiation?

Let's assume for the sake of argument that the 14 μSv figure was the daily exposure (it wasn't, by the way, but let's go with that for now).  Let's say we wanted to know how long it took that 14 μSv to drop down to 0.1 μSv--at that point, the increased radiation would be insignificant compared to average daily background radiation.  Radioactive Xenon has a half-life of about 12 days, so plugging this into the formula, we would get:

T = Log0.5(0.1 / 14) * 12

T = 85.5

This means that in 85.5 days, the radiation levels from Xenon released by the accident would be below a level that would cause us concern.

At Chernobyl, on the other hand, Caesium-137 was released in great quantities.  Caesium-137 has a half-life of 30 years.  If Three Mile Island had released the same amount of radioactive material, but in the form of Caesium-137 instead, the same ~7.1 halvings would each have taken 30 years rather than 12 days--so the time needed for the same decrease in radiation levels would have been roughly 214 years instead of two and a half months.
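
A quick sanity check of both numbers, using the same formula as before (the half-lives are the ones quoted above; the rest is just arithmetic):

```python
import math

# Number of halvings needed to go from 14 uSv down to 0.1 uSv.
halvings_needed = math.log(0.1 / 14, 0.5)      # ~7.13 halvings

xenon_days = halvings_needed * 12              # Xenon, 12-day half-life
caesium_years = halvings_needed * 30           # Cs-137, 30-year half-life

print(round(xenon_days, 1), round(caesium_years))  # 85.6 214
```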

Reminder about material types

This would be a good time to recall that radiation impinging upon the body from outside does nowhere near the harm that radiation emitted by materials absorbed into your body does.  Therefore the level of danger presented by a particular radioactive material is determined by the amount of radiation it emits, its half-life, and how readily and to what extent that material is absorbed by the body.

As a practical example, at Chernobyl, three elements of particular concern were released:
  • Iodine-131.  It has a short half-life (only 8 days), but it collects in the thyroid when absorbed by the body and is not easily removed.  It can do permanent damage to non-regenerating tissue of the thyroid gland.
  • Strontium-90 has a long half-life (29 years), and can lead to leukemia in high doses.
  • Caesium-137 has a half-life of 30 years, and can harm the liver and spleen.
And to apply the half-life principles again: it's been 36 years since the accident, meaning that the amount of Iodine-131 emitted by the accident has halved 1644 times in the interim.  This means it has completely vanished--1.27 x 10^-495 is the actual multiplier, which is close enough to zero as makes no difference, since this number would imply that far less than a single atom of Iodine-131 is still left. 
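
Those numbers can be checked directly; since ordinary floating-point arithmetic underflows to zero long before 10^-495, the sketch below works in logarithms:

```python
import math

DAYS_PER_YEAR = 365.25
half_life_days = 8                         # Iodine-131
elapsed_days = 36 * DAYS_PER_YEAR          # time since the accident

halvings = elapsed_days / half_life_days          # ~1644 halvings
log10_multiplier = halvings * math.log10(0.5)     # exponent of the multiplier

print(round(halvings))          # 1644
print(round(log10_multiplier))  # -495
```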

The levels of Strontium and Caesium radiation, on the other hand, will have decreased only to about half of what they were on the day of the accident: a welcome decrease, but not nearly enough so that we can stop worrying about it.

The Linear No-Threshold Model (LNT)

The problem of evaluating "widely dispersed but thinly spread" harm.

Given that some radioactive waste has a long half-life, we have to be concerned that it can spread around the world and impact a large number of people before it decays into safety.  For example, a lot of people were concerned about the amount of Caesium-137 that was released into the open ocean from the Fukushima incident.

As material disperses, however, it also thins out considerably.  So we have to be concerned about many people getting tiny amounts of exposure to radioactive material.  How big of a problem is this?

The honest scientific answer is, surprisingly, we don't really know.  This is one of those areas in which laymen often are surprised at the lack of definitive answers coming from the scientific community, because it seems like a simple and obvious question and it doesn't seem as if the answer should be too difficult to find out.  But this is simply not the case.

Why don't we know?


This lack of knowledge is perhaps not so surprising if you think through the practicalities of how science works.  Scientific knowledge is primarily advanced through experimentation.  If you have a question you want answered, you design an experiment that replicates the conditions you want answers about, plus a control that is just like your experiment but without those conditions, all with the goal of comparing the results against the control and seeing what the test conditions did.

This fundamentally makes coming to a scientific understanding of how harmful certain things are to humans very difficult to do, because you are not ethically allowed to perform an experiment in which you expect any harm to come to your test subjects.

You can try to do experiments on lab animals, but it is never a safe bet to extrapolate how things affect a lab rat onto how those same things will affect humans.  Experiments on lab animals can only be a preliminary for experiments on humans--this we know from long experience.  The saying among experimental health science folks is that mice lie and monkeys exaggerate.

So what do scientists do, when trying to ascertain the health risks of low-dose radiation?  This is where the term "Linear No-Threshold" (abbreviated LNT) comes in, and it's controversial.

How we fudge an answer anyway.


What has been done is to take known instances of radiation exposure and put them on a graph, with the amount of exposure on the horizontal axis and the imputed harm (in terms of likelihood of death) on the vertical axis.  Since scientists can't ethically create these circumstances, we have to rely on outcomes of known nuclear accidents for the data.  This is a pretty limited set of events; you can get a pretty complete summary of them all here: Nuclear and Radiation Accidents and Incidents.

Then, we plot the outcomes of these events for the people exposed, based on how much radiation they got.  You normalize severe illness against death in some fashion: for example, you estimate how many years were taken off the likely lifespan of someone who got cancer and died years after exposure, then convert that to some fraction of a death.  This is not an exact science!  There are several places where you need to insert some common-sense rules of thumb.

Then, after you have put all these instances of known exposure levels and outcomes on a graph, you draw the best straight line through the data points that terminates at the "zero exposure, zero danger" point at the origin.

The straight-line nature of the projection is why this model for radiation harm is called "Linear", and the fact that the projected harm only goes to zero when radiation goes to zero is the "No-Threshold" part.

This model is quite controversial and, in fact, almost certainly wrong.  I don't think anyone serious believes that this model accurately conveys the actual amount of harm that low levels of radiation cause.  From what I've heard, even people who champion this way of guessing at the harm caused by low levels of excess radiation invoke the "precautionary principle" in order to do so, meaning that when the harm from some situation is unknown, they think we should assume the maximum possible harm.

There are multiple academic papers out there arguing against the LNT model, one of which I will link to here: It Is Time to Move Beyond the Linear No-Threshold Theory for Low-Dose Radiation Protection.  I am going to add on some of my own reasoning against the LNT model here:

  1. Nothing in nature that we know of acts in this way.  For every dangerous material that we know of, there is always *some* threshold at which it becomes harmless.  Arsenic, for example, is a very deadly poison.  It is also present in every single glass of water you drink, without exception--in trace amounts. The saying in the medical world is, "the dose makes the poison".  A low enough dosage doesn't mean "just a little bit poisonous", it means "not poisonous at all".

    Indeed, there are all sorts of things which, if graphed on such a chart as the one above, would be roughly U-shaped.  Vitamin D, for example, is poisonous at high doses, and in high enough doses can kill you pretty quickly.  But at a certain level, it becomes actually beneficial for the human body, meaning on a chart such as the one above, it would curve below zero on the "harm" scale.  Then if Vitamin D levels get too low, the fact that you are missing out on Vitamin D would curve the "harm" back up into the positive range. 

    U-shaped curves are much more common in nature when it comes to the right amount of something to have.  Consequently, for very low levels of radiation, it is more likely that the harm produced is either literally zero or else actually negative.

  2. The LNT is abused by people to exaggerate the impact of nuclear accidents.  I have seen this done with Chernobyl, Three Mile Island, and Fukushima.  If you oppose nuclear energy and you want to exaggerate the negative impacts of these accidents, you can take advantage of the fact that modern radiation detection is incredibly sensitive.  We can detect trace radiation from even vanishingly small particles of matter, even down to the individual atoms.

    Consequently, it is a certainty that some amount of technically detectable radioactive material from at least Chernobyl and Fukushima (I'm not sure about Three Mile Island, given the shorter half-lives of the materials released) has gotten into every human on the planet.  The wind constantly blows and the seas constantly move, so eventually these things find their way literally everywhere on the planet.

    What some people have done, therefore, is take that extremely low level of radiation from the world-wide dispersion of these events, then look up the projected harm from the LNT graph of radiation effects.  This will be a very low number, but they will then multiply it by 6 billion people in order to get a total death toll from Chernobyl or Fukushima.  In both cases, if you do this, you end up with a number significantly larger than the official death tolls of either event.  These are not numbers that are justified; they wildly overstate the probable impact.
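
For concreteness, here is a sketch of the collective-dose arithmetic being criticized above.  The dose and population are made-up illustrative values, and the 0.057 deaths-per-sievert slope is an assumed LNT coefficient of the order commonly cited, not a figure from this post:

```python
# Illustrative only: this reproduces the *reasoning* the post criticizes.
RISK_PER_SIEVERT = 0.057   # assumed LNT slope: excess deaths per person-sievert

def lnt_collective_deaths(dose_sv_per_person, population):
    """Project a death toll by multiplying a tiny individual dose
    across an enormous population, as the LNT model permits."""
    return dose_sv_per_person * population * RISK_PER_SIEVERT

# A microscopic 10 uSv (0.00001 Sv) dose, spread over 6 billion people:
print(round(lnt_collective_deaths(1e-5, 6e9)))  # 3420 projected "deaths"
```

The individual risk here is negligible, but multiplied across billions of people it produces a headline-sized number--which is exactly the move the post objects to.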

Conclusion

Not all nuclear accidents are the same.  If you want to understand the severity of a nuclear event, you need to know more details than just "there was a meltdown" or "nuclear materials were released".  You need to know what materials were released, and you need to know in what way they were dispersed.  You also need to be aware that the severity of total harm to humanity from some of these terrible accidents has been greatly exaggerated, and that the Linear No-Threshold model is largely to blame for that. 





Tuesday, March 8, 2022

Key Points You Need to Know When Talking About Energy: Neutron Radiation, Fissile Fuel, and Fusion

I want to build off of the previous post by discussing some key elements of nuclear energy related to one type of radiation: neutron radiation.  Neutron radiation is the key type of radiation that makes nuclear energy possible, and it's good to know why.  I'm then going to discuss two ways in which this understanding impacts the safety of nuclear power plants, and I will end with a brief digression into some problems with theoretical future nuclear fusion plants.

Fissile Fuel

What it is

I said earlier that neutron radiation is able to penetrate into materials because it ignores the electron cloud around atoms; it only stops if it hits an atom's nucleus, and when it does it can either bounce off, be absorbed into the nucleus (creating a heavier isotope of the same element, which often then decays into the element one step higher on the periodic table), or split the nucleus apart--nuclear fission.  I also said that neutron radiation is created by the breakup of nuclei into components, which releases a mix of nuclear output including energetic neutrons.

So you can see that neutron radiation both causes fission and is caused by fission.  This is the key fact that makes nuclear energy possible, because if you balance things just right, you can create a self sustaining reaction, where your fuel is undergoing fission continuously, as some atoms break apart, releasing neutrons which cause neighboring atoms to break apart, and so on and so forth.

Conceptually, creating a self-sustaining nuclear fission reaction is very simple.  All you have to do is find enough naturally occurring unstable material that spontaneously decays at a rapid enough rate and bring it physically together.  The physical proximity of unstable material in a dense enough arrangement is enough to fire off a self-sustaining reaction.

How it's made

Originally

Getting this material is difficult because it doesn't exist in dense enough arrangements in nature for a self-sustaining reaction to occur--which should be obvious, if you think about it, because if such a reaction *did* occur in nature, it would burn itself out in short order and cease to exist.

In order to get enough material that can cause a self-sustaining reaction, you have to take advantage of the fact that the fissile isotope of Uranium (U-235) weighs slightly less than the common, stable isotope (U-238)--because it has fewer neutrons per atom.  So, theoretically, it's quite simple to separate the two: convert mined Uranium into a gas and run it through a high-speed centrifuge.  The heavier U-238 migrates toward the outside, leaving gas enriched in the lighter, fissile U-235 to be drawn off--if you spin fast enough and for long enough.

In practice, this is much more easily said than done, and the engineering know-how to do this sort of refinement properly is a critical "controlled" secret that we try to keep from becoming common knowledge.  So if you hear about negotiations with Iran or some other state with ambitions to become a nuclear power, and you hear about "centrifuges", this is what is being discussed.

An important distinction to keep in mind here is that the enrichment of fissile material required to run a nuclear power plant is not as great as that required for fuel that will explode in a nuclear bomb.  Getting an explosive chain reaction is a step above the difficulty of getting a self-sustaining chain reaction.  However, the technological step you need to reach weapons-grade material is *not that far* above the step you need for power-plant-grade material.

In a power plant

Another way in which fissile fuel can be manufactured, however, is in a nuclear power plant (if designed just right).  Because these plants generate constant neutron radiation, and because atoms hit with a neutron sometimes absorb it rather than splitting, nuclear power plants can be configured to generate unstable isotopes that can be used for nuclear fuel (or nuclear weapons).  Such things are called "breeder plants", and they do have to be specially designed to work this way: you can't just take any old nuclear power plant, roast your Uranium over the nuclear fire, and get weapons-grade Plutonium out.

However, it must be said that from a distance, it is difficult to tell the difference between a breeder plant that is creating more fuel for nuclear power and one that is creating material suitable for a nuclear bomb.  Hence, there is a lot of concern when a state with nuclear ambitions says they are just looking to get into nuclear power plants.  I'll be getting into this in more detail later, but for now you have to realize that this is a concern.

Some safety considerations based on the nature of fissile fuel

Reactor core composition and types of core failures

Nuclear cores are made so that their effective neutron density--that is, how much fissile material is being exposed to neutrons at any given moment and hence how much fission is happening at that moment--is controllable.  This happens in several key ways (warning: painful oversimplification follows!):
  1. The physical configuration of the core.  Most nuclear cores have control rods, which are made of a neutron-absorbing material such as boron or cadmium--a material that soaks up neutrons and impedes the chain reaction.  (A quick terminology note: an absorber captures neutrons and suppresses the reaction, while a moderator slows neutrons down, which in most reactor designs actually makes fission more likely; both will come up below.)  In a typical configuration, the rods are arranged over the core, which has holes to receive them.  If the rods are completely removed, the core has enough reactive material density to "go critical" and sustain a chain reaction.  If the rods are inserted, however, the reaction slows down.  If they are inserted all the way, the reaction is no longer self-sustaining and will die off.  You can control the power output of the reactor core by controlling the depth of the control rods.
  2. Liquid coolant / moderator.  In addition to the control rods, most reactor designs also have a liquid coolant constantly flowing around the core.  This serves, at a minimum, to cool the core so that it doesn't physically melt, and to carry away the heat that will be turned into electricity.  Depending on the design, this liquid will often also play a moderating role in the reaction.

Every core meltdown, therefore, depends on something disturbing the necessary balance between reactive material density, neutron-absorbing and moderating materials, and coolant flowing through the core.  So, for example, if for some reason you have a failure in the mechanism that inserts the control rods into the core, the reaction can be stuck in "on" mode and start heating up out of control.

Active vs. Passive

When evaluating the possible outcomes of a failure with a nuclear core, you have to look at the possible ways in which the mechanisms that maintain this balance can fail.  The key distinction here is: are these mechanisms active (meaning some process needs to happen in order for the moderating influence to be applied) or passive (meaning physics itself automatically triggers moderation when needed)?

Passive is always to be preferred to active, when possible.

Uncontrolled Reaction Spike vs. Decay Heat Meltdown

Different reactor types have different safety profiles because of having different active vs. passive safety mechanisms protecting against different things.  Most people lump together all "meltdown" events they hear about into the same category, but there are massive differences.

The nuclear power plant in Chernobyl was of a type called "graphite moderated", and it did not really have any passive safety mechanisms at all.  In particular, there was no good way to keep the core from undergoing an uncontrolled cascading reaction when the rods were in certain configurations.  There was a portion of the rods' range of travel which actually spiked the reactivity of the core, because the graphite tips of the rods displaced neutron-absorbing water as they were inserted.  This was known, and procedures for avoiding the spike had been figured out at another plant, but those procedures had not been transferred to Chernobyl.  This failure, compounded with others, led to an uncontrolled reaction spike, or (simply put) an explosion.  The nuclear reaction got so out of control at a particular spot in the core that it blew itself apart.

Other reactor designs do not have this same instability.  Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR) both have a passive safety feature that prevents this sort of extreme power excursion.  For these reactors, the water moderator is essential to the operation of the core.  If the temperature spikes, steam bubbles (voids) form in the water immediately; steam is a far less effective moderator than liquid water, so the loss of moderation dampens the reaction.  This happens via physics, not human procedures.  Consequently, these reactor designs are quite unlikely to explode due to a core malfunction.

On the other hand, this does not remove the possibility of a meltdown.  The reason is that even after the self-sustaining chain reaction has been halted by the insertion of control rods, the nuclear core is still extremely hot--and keeps generating heat.  This is called the decay heat of the reactor: the heat produced by the continuing radioactive decay of the fission products in the fuel.  If you want to prevent the core from melting down, you must find a way for it to shed *this* heat even after the core has been "shut down".  In older reactors, a shut-down core must be *actively* cooled for a substantial time before the reactor is cool enough to be safe.
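
To give a sense of scale for decay heat, here is a sketch using the classic Way-Wigner rule of thumb (the 0.0622 coefficient and the assumed operating history are textbook approximations, not figures from this post):

```python
def decay_heat_fraction(t_seconds, operating_seconds=1e7):
    """Way-Wigner approximation: decay power as a fraction of the
    reactor's steady operating power, t seconds after shutdown,
    for a core that ran for `operating_seconds` beforehand."""
    return 0.0622 * (t_seconds ** -0.2 - (t_seconds + operating_seconds) ** -0.2)

# One hour after shutdown, the core still produces roughly 1% of full power:
print(round(decay_heat_fraction(3600) * 100, 1))  # ~1.0 (percent)
```

One percent sounds small, but one percent of a gigawatt-scale thermal output is megawatts of heat that must go somewhere--which is why cooling must continue long after shutdown.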

If a reactor loses cooling for long enough, even if it is "shut down", it can melt and begin to damage the vessel in which it is held.  This is how radioactive material was released both at the PWR Three Mile Island power plant and in the BWR Fukushima disaster.  In the case of Three Mile Island, coolant was lost for too long (due to a comedy of errors and accidents), leading to the reactor automatically "tripping" and shutting down the core.  Coolant was *still* not being supplied after the automatic shutdown, however, so the core partially melted inside the reactor vessel, producing hydrogen (which later burned) and releasing small amounts of radioactive gases.

In the case of Fukushima, the reactors were shut down on purpose, as standard practice, because of the earthquake.  However, when the tsunami hit, the generators powering the coolant pumps were submerged and stopped functioning.  The shut-down reactors therefore melted, and one of them even damaged its containment sufficiently to leak radioactive material dissolved in water.

The difference between Chernobyl and the two other accidents was thus the difference between a core explosion and a core meltdown.  This made a dramatic difference in the amount, type, and dispersion of radioactive material released.

The similarity in all three of those incidents, however, was that in each case it was active systems that failed to maintain reactor safety: active moderation of the core in the case of Chernobyl and active cooling of a shutdown core in the case of Three Mile Island and Fukushima.  Consequently, the focus of nuclear reactor design for a while has been to design systems that are passively safe throughout.  I will go into more detail on this in a future post.

For the sake of current events, we should know that the nuclear power plant recently seized by the Russians in Zaporizhzhia is a PWR plant, and consequently has passive protection against runaway reactor scenarios.  Therefore, this plant is unlikely to experience an explosive core failure of the type that Chernobyl did, even in the case of complete loss of coolant due to equipment destruction.  However, a meltdown and some leakage of radiation is certainly possible.

Spent fuel rods

One more safety consideration presents itself based on what I have just discussed.  Even after reactor fuel can no longer perpetuate a fission reaction, it is still quite radioactive and still generates decay heat.  This means that spent fuel cannot be instantly disposed of.  In fact, it takes about three years before spent fuel cools down sufficiently to be transported for disposal.

How nuclear plants deal with this is something called "Spent Fuel Pools".  The spent fuel is just stored in big pools on-site for long enough for the decay heat to die down.

The safety consideration here is that most functional nuclear power plants have radioactive material just sitting around on-site, cooling off in pools.  The problem is made worse by the fact that the original intention for all of this spent fuel was to transport it to long-term storage facilities.  However, for many decades now, people have actively resisted the creation of these facilities in their own backyards.  Constant legal battles and political unpopularity have dramatically limited the construction of suitable long-term storage for spent fuel.  This means that many nuclear power plants are storing spent fuel on-site for years and years longer than anyone originally anticipated.

The safety implications are fairly large.  There isn't a lot of potential for this spent fuel to accidentally leak, though that was a concern at Fukushima because of the tsunami.  However, the potential for misuse by terrorists or bad state actors is quite high.  This material is perfect for the creation of a "dirty bomb", which is just radioactive material that you pack with conventional explosives in order to create radioactive fallout without the nuclear explosion.

Again for the sake of current events, the Zaporizhzhia nuclear power plant has six cooling pools with hundreds of tons of spent fuel of varying degrees of radioactivity, now currently controlled by the invading Russian army.

A quick digression on fusion power

This is a bit out of the way, but since we are talking about neutron radiation and nuclear safety, I'm going to take the opportunity to briefly discuss nuclear fusion.  There has been some news recently about new milestones toward energy-positive fusion reactions that *might* make some people hopeful that fusion is coming as a clean alternative to nuclear fission before too long.

I am here to dash those hopes now.

Long-term, I hope that fusion does eventually come through.  However, recent small milestones notwithstanding, there are more barriers for fusion to overcome than most people realize.  One key barrier has to do with neutron radiation.

Fusion is called "clean" with respect to fission because it does not rely on actinides--the heavy, radioactive metals that are the fuel for nuclear fission.  Instead it fuses isotopes of lighter elements into heavier elements.  Although this process does not use or create the same nasty materials that fission uses and/or creates, it does create neutrons: a lot of neutrons.  In fact, a fusion power plant will produce more neutron radiation, and higher energy neutron radiation than a comparable fission power plant.

As we learned earlier, this *will* make things exposed to it also radioactive.  But we also learned that this type of radioactivity is temporary and not of the same long-term danger as fission by-products.  So is the neutron radiation produced by a fusion plant a problem?  Yes, for at least two reasons.

Cost

A fission reaction is comparatively simple to set up--all you need is a lump of the right material at the right density, and the reaction will sustain itself.  Consequently, you have a lot of options for how you can design a fission core and still have it work--you can shape it all sorts of ways, dunk it in all sorts of moderating fluids, etc.

Fusion, on the other hand, is incredibly difficult to achieve.  You need incredibly precise conditions in an incredibly well-controlled magnetic field and a hard vacuum--no moderating liquids allowed in there!  The world's leading fusion reactor, ITER, is by some counts the most expensive scientific instrument ever created.  Should it ever achieve sustained operation, all of that expensive equipment would fairly quickly be degraded by the neutron radiation it is creating.  (That's why it's not ITER, but its planned successor, that might one day run continuously.)

The problem of continuous fusion operation has not been solved *at all*--really, fusion scientists would be happy just to be in a position to worry about that problem, since only brief bursts of fusion power output have yet been achieved.  This is a huge problem, and all proposals to deal with it so far have been enormously complex and expensive.  Getting fusion *practical* might be a further 30-year problem after we manage to make it merely energy-positive.

Proliferation


The other issue with plants that generate neutron radiation is that neutron radiation is what is necessary to breed nuclear fuel.  Since fusion plants will generate neutron radiation in abundance, does this mean they could also be used to create weapons-grade Plutonium out of mined Uranium?  

Yes.  It absolutely does.  Fusion plants will be able to work as breeder plants *very* well.

This means that although fusion plants, unlike fission plants, will not need any of the materials that are used in nuclear weapons, they nonetheless represent a nuclear weapons proliferation challenge.

The bottom line for fusion is that you should pin no hopes on it becoming a viable part of the world's energy mix any time soon--and even when or if it does become viable, we will still need to worry about it enabling nuclear weaponry.

Friday, March 4, 2022

Key Points You Need to Know When Talking About Energy: Radiation

Nuclear power plants are part of the mix of our current energy solution.  Some people want nuclear power to go away; some people want it to increase.  The critical factor in nuclear power is safety, so it is important to understand its dangers accurately.  I'd first like to go over some fundamental facts that are sometimes poorly understood.

Radiation vs. Radioactivity

I think the most fundamental misunderstanding a lot of people have is this distinction, so let's clarify it first.

Radiation

Radiation is any sort of energy that comes out from a source and travels through space.  "Radiation" is a very generic term and covers several different types of physical phenomena.  You can subdivide radiation based on two primary things: *what* is radiating out from the source carrying this energy, and *how the radiation interacts with the materials it comes into contact with*.

What

The things that can be radiated out are:
  1. Photons.  Photon radiation is just light, really, except that we reserve the term "light" for photon radiation that happens to be of a frequency that is visible to our eyes.  There's no fundamental difference, though, between photon radiation of all sorts, except for its frequency.  Photon radiation is called, in ascending order of frequency: radio, microwave, infrared, visible light, ultraviolet, x-ray, gamma.  These are all fundamentally the same thing, though the interaction of photon radiation with various materials can vary greatly based on the frequency.

    Another thing to be aware of is that, per photon, the energy of photon radiation goes up as the frequency goes up.  That is, gamma rays are very much more energetic than visible light rays, per photon.

  2. Subatomic particles.  These types of radiation are collectively called "particulate" radiation.  Different types of subatomic particles have different behaviors as radiation and are called different things:
    1. Alpha radiation: positively charged particles.  These are helium nuclei: two protons and two neutrons bound together.
    2. Beta radiation: negatively charged particles--electrons (or, in so-called beta-plus decay, their positively charged antiparticles, positrons).
    3. Neutron radiation: neutral particles.  These are usually neutrons, though technically neutrinos would count here also.

Interaction

There are two main considerations here:
  1. Ionization.  The primary way radiation is distinguished here is to separate it into "ionizing" and "non-ionizing" radiation.  An "ion" is an atom with a net electric charge: an atom that has lost (or gained) one or more electrons.  "Ionizing" therefore means "able to knock electrons off of atoms and molecules".  Whether or not radiation can do this is a key distinction because all chemical interactions involve relationships of atoms to each other at the layer of the electron shell.  Radiation that can create ions can therefore cause chemical reactions at a cellular level and therefore damage tissue chemically.

    In contrast, getting too much infrared, microwave or visible light can damage you by conveying too much heat.  It cooks your tissue, rather than chemically changing it.  This can be just as damaging--depending on the amount of heat we're talking about--but it is a different *type* of damage.

    In the spectrum of photon radiation, x-rays and gamma rays are ionizing, but lower frequencies are usually not.  Alpha and Beta radiation are definitely ionizing (being charged-particle radiation, they interact directly with the electron layer of atoms).  Neutron radiation is also considered "ionizing radiation", but it is actually only so in an indirect way, which I will discuss a bit later.

  2. Penetration.  The different types of radiation penetrate into a body very differently.  Because Alpha and Beta radiation are both charged-particle radiation, they interact very quickly with physical objects and cannot penetrate far into anything.  Beta radiation penetrates a bit more deeply than Alpha radiation (it can penetrate a little into the skin), but not very far, and both are stopped by even normal clothing.

    On the other hand, Neutron radiation passes right through most normal material.  Neutrons do not interact with electrons at all and pass right through the electron layers of atoms as if they weren't there--and the electron cloud is by far the bulk of the volume of an atom, the nucleus being only a tiny mass in the center.  Neutrons passing through your body, then, only interact with you at all if they happen to collide with an atomic nucleus as they travel.

    This can cause damage, however, and if there is enough neutron radiation, it can definitely be fatal.  This type of radiation must be stopped by thick layers of material--and, counterintuitively, the best neutron shields are not dense metals but materials rich in light nuclei, especially hydrogen: water, concrete and plastics.  A neutron loses far more of its energy colliding with a light nucleus than with a heavy one, just as a billiard ball is stopped more effectively by another billiard ball than by glancing off a bowling ball.  (Thick lead, by contrast, is the classic shield for gamma rays.)

    For photon radiation, the higher the frequency of the light (in general) the more it is able to penetrate into and through normal objects--this is why we use x-rays to look inside of things.  Gamma rays are more penetrating still.

Radioactive decay and Radioactivity

Now here let's deal with the primary point on which people are confused about radiation.  Radiation is energy radiating out from a source.  "Radioactivity" means some property of a material that causes it to release radiation, and this usually results from radioactive decay.  Usually--but not always!  By these definitions, an x-ray machine is technically "radioactive" when it is turned on but not when it is turned off.  The same, technically, for a light-bulb, which would be "radioactive" when on but not (much) when off.  Be aware that there is some fuzziness to the usage of this term, because it's *normally* used just for things that are radioactive because of radioactive decay.

Ok, so now: what's radioactive decay?

Radioactive Decay

Atoms are composed of a nucleus surrounded by an electron cloud.  Nuclei are clumps of protons and neutrons that naturally want to fly apart but are nevertheless bound together by the incredibly strong (and appropriately named) "strong force".  Nearly all nuclei are inherently unstable to some degree, and most will eventually break apart on their own.  *How* unstable a particular nucleus is depends on its size and structure.  Generally speaking, the larger the nucleus, the less stable it is.  This is not a simple relationship, however, and there are lots of exceptions and peaks and valleys of instability as you look over the table of elements.  In general, though, it is true that larger elements will tend to break apart into smaller elements.

Perhaps more important than the overall size of the nucleus, though, is its internal configuration--in particular, the ratio of neutrons to protons in the nucleus greatly influences how stable it is.  The number of neutrons in a particular nucleus determines its isotope: two atoms with the same number of protons but a different number of neutrons are different isotopes of the same element: chemically the same (because the electron layer is the same), but different at a nuclear level, and often of very different stability.

Radioactive decay is what happens when an atom's nucleus breaks apart.  An atom of a heavier element will eject part of its nucleus or split apart into atoms of lighter elements, at the same time releasing energy in the form of a mix of Alpha, Beta, Gamma and Neutron radiation.  What elements are formed by the decay and what the exact mix of radiations is depends on the element that is decaying and the form of the breakup--normally, even for a given isotope, there are multiple "failure modes" of the nucleus and therefore multiple types of by-products of its breakup.  These happen at predictable percentages, though, so we can characterize the total radiation byproduct of a given isotope as the weighted average of these different nuclear "failure modes".
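To make the "weighted average" idea concrete, here is a minimal sketch in Python.  The branching fractions and per-decay emission counts below are invented purely for illustration--real figures come from nuclide data tables:

```python
# Hypothetical isotope with two "failure modes" (all numbers invented).
# Each tuple is (probability of this mode, alpha, beta, gamma emissions per decay).
modes = [
    (0.70, 1, 0, 1),
    (0.30, 0, 1, 2),
]

# Weighted average emission per decay, across all failure modes.
totals = {"alpha": 0.0, "beta": 0.0, "gamma": 0.0}
for p, a, b, g in modes:
    totals["alpha"] += p * a
    totals["beta"] += p * b
    totals["gamma"] += p * g

print({k: round(v, 3) for k, v in totals.items()})
# {'alpha': 0.7, 'beta': 0.3, 'gamma': 1.3}
```

The same averaging works for the energy carried by each emission, not just the counts.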

Nearly all materials contain at least trace amounts of isotopes that decay; therefore nearly everything is inherently at least faintly radioactive.  However, some elements and isotopes are subject to decay much more frequently than others, and therefore we typically reserve the word "radioactive" for elements and isotopes that decay rapidly and therefore emit radiation at a rate that concerns us.

But do keep in mind that this is a distinction of degree and not of kind.  Radioactivity is a property of practically all matter.  Potassium, for example, is high up on the scale of radioactivity among very ordinary elements (a small fraction of natural potassium is the unstable isotope Potassium-40), and because bananas have high levels of potassium you will get a reaction out of a Geiger counter if you hold it up next to a banana.

Nuclear power, including both nuclear power plants and nuclear weapons, makes use of materials with very, very high rates of radioactive decay, and therefore necessarily involves materials which are constantly emitting high levels of radiation.

How radioactivity and radiation are harmful in different ways

So now we are equipped to understand the fundamental ways in which radioactive material is harmful in a different way from simple radiation.  Radiation, we saw, can be harmful in that it damages cells on a chemical level by ionization.  If you get a large dose of radiation, this can cause a lot of damage to your body, possibly permanent and possibly lethal.  However, such damage is a one-time event and it does not perpetuate in your body.

The situation is different if, for some reason, you inhale or ingest the dust of some radioactive material.  We said that Alpha and Beta radiation don't penetrate far into the body, being mostly stopped even by just skin.  However, if you happen to inhale or ingest particles of some radioactive element, those particles will be continuously decaying and producing continuous Alpha and Beta radiation *inside your body*.  This causes a lot more damage because the internal tissues of the body are not designed to withstand radiation damage the way the skin is.

Furthermore, the most radioactive materials are, as I said, the heavy elements--they're called "the actinides" on the periodic table.  A major health problem with these heavy metals is that *the body does not have a good way of eliminating them*.  This is why lead poisoning is a problem, by the way, even though lead is pretty inert chemically: the body doesn't have a good way of flushing lead from the system, so any lead dust you ingest or inhale tends to slosh around inside your system indefinitely and clog things up.

So this is why the real long-term horror of a nuclear weapon or a catastrophic nuclear power plant failure is not the explosion, but the fallout: the spread of radioactive dust that gets into everything and, once allowed entry into your body, cannot be easily purged.

How radioactivity "spreads"

This leads to the final confusion between radiation and radioactivity I want to cover, which is how radioactivity spreads.  I think everyone is dimly aware that radioactivity "catches" somehow--that if you bring something into contact with a source of radiation, for some reason the thing "infected" with the radiation will itself become radioactive.

This is true for both radioactive things *and* for a very specific type of radiation, but for very different reasons.

For radioactivity, the way radioactivity spreads is that radioactive particles get into things.  So if you have some plutonium dust, and it gets on something, that item will have plutonium dust on it.  It will therefore emit radiation because of the plutonium dust and will therefore "be" radioactive.  If you are able to wash all of the plutonium dust off of this thing, then it will no longer be radioactive--the radioactivity is strictly contained in the plutonium dust. 

The difficulty is with being able to separate out the radioactive particles from the non-radioactive ones; there are always ways to do this, but many of them are not feasible at a large scale.  If you can't get the radioactive particles out, then you are stuck with the radiation for as long as the half-life of the radioactive material dictates.

Radiation itself does not cause other things to be radioactive, with the sole exception of Neutron radiation.  We said before that neutron radiation interacts with materials by impacting into the nucleus of the atom, because it doesn't interact with the electron cloud.  When it does this, it either bounces off, splits up the atom there and then, *or* merges with the nucleus, causing the atom to become a new isotope of the original element.

This last scenario is why neutron radiation causes things to become radioactive.  When neutron radiation is absorbed by an exposed material, the isotope that is created is usually not stable at all.  The absorption will only "take" for a little while, and then the element will decay in a bang of Alpha, Beta and Gamma radiation.  And *this* is why neutron radiation is considered ionizing radiation, because the primary by-product of the nuclear decay that it causes is other forms of ionizing radiation.

There is an important distinction to understand here, though.  The nuclear material that makes up a nuclear bomb or a nuclear power plant is almost all actinide--those heavy metals that release tons of radiation and may take millennia to decay to the point of safety.  If some ordinary material is made radioactive by neutron radiation, it becomes temporarily radioactive but it does *not* become an actinide.  This is why something that is radioactive because of exposure to neutron radiation typically remains so for a matter of days or weeks--or, for a few activation products, some years--rather than millennia.  Material of this kind poses a radically different (and lesser) threat than other types of nuclear waste.

And remember: this type of radioactivity "spread" is specific to neutron radiation *only*.  You can expose an object to x-rays or gamma radiation for as long as you want and you will *never* make that object radioactive, no matter how much damage you cause otherwise.

Wednesday, March 2, 2022

Key Points You Need to Know When Talking About Energy: Energy Density

One of the main uses of hydrocarbon fuels is transportation.  If we want to eliminate the burning of these fuels as a way of generating energy, we can't just replace hydrocarbon burning power plants with other power plants; we would also need to replace hydrocarbon burning vehicles with other vehicles.

A key reason this is more difficult than it might seem is energy density.

Energy Density


Energy density is, simply, the amount of energy that can be extracted from a given unit of material.  You can talk about either energy density per volume or energy density per weight, but energy density per weight is more frequently used.

Energy density can also be reported in different energy units.  Batteries are typically described in terms of Watt-hours per kilogram, whereas things like gasoline are described in terms of Joules per kilogram.  Both are units of energy and are directly convertible (1 Watt-hour = 3600 Joules), but Watt-hours are conventionally used for electricity and Joules for heat.

Some reference energy densities

Batteries

The most energy dense batteries out there are Lithium Ion batteries, which range from 100 to 265 Wh/kg.  Larger batteries tend to be less energy dense, as the larger the battery array is, the more weight is taken up by non-energy storing components like compartmentalization and wiring.  Batteries in current Tesla cars are about 165 Wh/kg.  This works out to be about 0.6 Megajoules per kilogram (MJ/kg).
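For anyone who wants to check the arithmetic, the Wh/kg-to-MJ/kg conversion is a one-liner.  A sketch in Python, using the 165 Wh/kg pack-level figure cited above:

```python
# Convert battery energy density from Wh/kg to MJ/kg.
# 1 Wh = 3600 J, so multiply by 3600 and divide by 1,000,000.
def wh_per_kg_to_mj_per_kg(wh_per_kg: float) -> float:
    return wh_per_kg * 3600 / 1_000_000

tesla_pack = wh_per_kg_to_mj_per_kg(165)  # pack-level figure cited above
print(round(tesla_pack, 3))               # 0.594 -- roughly 0.6 MJ/kg
```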

Hydrocarbon (aka "fossil fuels")

Hydrocarbons of all kinds, whether derived from fossil fuels or not, are highly efficient at storing energy.  The range for combustible fuels runs from wood (strictly a carbohydrate rather than a hydrocarbon) at 16 MJ/kg all the way up to natural gas at 55 MJ/kg.  Standard gasoline is pretty high at about 46 MJ/kg.

Nuclear

Nuclear fuels are in a different league entirely in terms of raw energy density.  Uranium-235 has an energy density of roughly 3,900,000 MJ/kg.
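Putting the figures above side by side makes the gap vivid.  A quick Python sketch, using the approximate densities cited above:

```python
# Approximate energy densities cited above, in MJ/kg.
densities = {
    "Li-Ion battery (Tesla pack)": 0.6,
    "wood": 16,
    "gasoline": 46,
    "natural gas": 55,
    "Uranium-235": 3_900_000,
}

# Express everything as a multiple of the battery's density.
baseline = densities["Li-Ion battery (Tesla pack)"]
for fuel, d in densities.items():
    print(f"{fuel}: {d / baseline:,.0f}x the battery")
# gasoline comes out to ~77x, matching the figure discussed below
```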

These values immediately prompt certain questions:

-How are electric cars able to be even a little bit competitive with gas-powered cars?

Gasoline is about 77 times as energy dense as Li-Ion batteries.  How is it that electric cars are even able to compete with that kind of energy density?  The glib answer is, "they're not, really".  And I think it is true that electric cars are still only barely commercially viable, propped up by the "green cred" that they offer.  It is still rarely the case that, on purely ordinary economic considerations, the rational choice for a car buyer is an all-electric car.  But electric cars have been getting better and better, and the choice is not quite so lop-sided as it once was.  Here are some reasons the energy density issue is not as fatal for electric cars, at least of the ordinary passenger variety:

  1. The high energy density for gasoline is for *heat* energy.  This energy must be converted to motion by the internal combustion engine (ICE), and that process is constrained by thermodynamic efficiency limitations.  In particular, the ordinary ICE cannot in theory exceed roughly 37% efficiency in converting the heat of burning gasoline into motion--a hard limit imposed by physics (the Carnot theorem, which caps the efficiency of any heat engine based on its operating temperatures).  In *practice* the efficiency is lower still, so the energy actually extracted as useful motion is about 20% or so for the average car.

    In contrast, electric motors are among the most efficient devices we have for converting energy to motion, and Teslas operate at about 90% energy efficiency.  When you factor in the difference in energy conversion, the effective energy-density advantage of gasoline drops to just 17 times that of Tesla's Li-Ion batteries.
  2. There is an additional crucial difference in how much machinery weight is required to convert gasoline to motion.  Gasoline powered cars must carry around a massive steel engine in order to convert gas to motion, a fairly bulky fuel injection system, and a bulky exhaust / noise muffling system.  Furthermore, because an ICE produces usable power only within a fairly narrow band of engine speeds, a massive steel transmission is also required in order to deliver that power across a wide range of road speeds without stalling out.  All of these things together make up a large proportion of the total weight of the vehicle.

    In contrast, electric vehicles are mechanically much simpler than ICE vehicles.  Tesla's drivetrain, for example, has a total of 17 moving parts, compared with about 200 for a normal ICE.  Electric motors are also naturally very "torque-y", able to deliver power across a large range of speeds without the need for a transmission with complex gearing.  The motors are also tiny compared to an ICE, and there is no need for fuel injection or coolant or exhaust.

    Therefore, the batteries in an electric vehicle are not just replacing the gas tank; they are effectively also replacing most of the internal machinery that is under the hood of a car.  This considerably boosts the effective energy density of an electric vehicle compared to a gas powered car.  On a typical gas powered car, the combined weight of the engine and transmission is close to 10x the weight of a full gas tank.  This therefore drops the energy density advantage of an ICE car to merely 1.7 times that of a Li-Ion powered car.

This is why it is possible to have electric cars with comparable ranges to gas vehicles nowadays.  It's still a bit of a stretch, but it is possible.
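The whole chain of adjustments above can be checked with a few lines of arithmetic.  This Python sketch uses the approximate figures from the text (20% real-world ICE efficiency, 90% electric drivetrain efficiency, engine-plus-transmission weighing roughly 10x a full tank):

```python
# Rough effective-energy-density comparison, following the reasoning above.
gasoline_mj_per_kg = 46
battery_mj_per_kg = 0.6
ice_efficiency = 0.20   # real-world heat-to-motion conversion for an average car
ev_efficiency = 0.90    # electric drivetrain efficiency

# Step 1: raw densities only.
raw_ratio = gasoline_mj_per_kg / battery_mj_per_kg

# Step 2: count only the energy that actually becomes motion.
useful_ratio = (gasoline_mj_per_kg * ice_efficiency) / (battery_mj_per_kg * ev_efficiency)

# Step 3: the text estimates engine + transmission weigh ~10x a full tank,
# machinery the battery pack effectively replaces.
machinery_adjusted = useful_ratio / 10

print(round(raw_ratio))              # ~77
print(round(useful_ratio))           # ~17
print(round(machinery_adjusted, 1))  # ~1.7
```

These reproduce the 77x, 17x and 1.7x figures quoted in the text.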

However, there is an important point to realize here!  The fact that electric cars are able to play in the same ballpark as gas powered cars is very much due to the relative weight of the fuel tank to the engine and transmission components.  This is a factor that is specific to the size and design of the typical passenger sedan, so this relatively even playing field does not necessarily apply to other vehicle types.  Airplanes, in particular, are much more sensitive to the weight of the fuel they carry--hence battery-powered airplanes are nowhere on the horizon of being practical.  I know less about railroads, but given that the weight of the engine on a train is insignificant compared to the loads that are hauled, I doubt that battery-powered trains are viable either (trains drawing power from overhead lines are a different matter, but they require electrified track).  Likewise I have a lot of doubt about battery-powered water freight.

Bottom line: complete electrification of transportation is probably not feasible in the near future, even if we're able to transition all personal passenger cars to electric.

-How are non-nuclear power plants able to compete with nuclear power plants, given the absurd energy density of Uranium?

The answer to this is more complex and I will get into it later when I talk about the economics of nuclear power.  For now, the simple answer is that non-nuclear power plants cannot compete with nuclear in terms of fuel cost (well, aside from solar and wind, whose "fuel" is free).  Uranium is very costly to mine, but in actuality this cost is only a negligible part of the total cost of operating a nuclear power plant.  It turns out that the majority of the cost of running a nuclear power plant is the interest paid on the loans used to build the plant.  More on this later, though.



Key Points You Need to Know When Talking About Energy: Alternating Current and Grid Instability

In this installment, I'll discuss a major element of grid instability that is not commonly understood, which is frequency instability.  Frequency is a much more critical aspect of the electric grid than most people realize.  To understand why, you need to understand roughly how AC power is transmitted.

AC Power transmission: No electricity is produced

One aspect in which the standard "water analogy" of electricity fails is that by this analogy, one would expect that power plants pump electrons into transmission wires, which then flow down to substations, are split up and then pour into homes when needed.  In fact, however, AC power generation results in a net-zero motion of electrons.  The alternating current does move electrons back and forth a bit, but there is no round trip of electrons moving around a circuit.  This is what the "alternating" part of "Alternating Current" means: charge flows one way very briefly (1/120th of a second in North America), but then flows right back the other direction, cancelling out the previous flow of charge.  The total electrical charge delivered from all power stations to the grid over time is exactly zero.
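If the "net-zero charge" claim seems surprising, it is easy to verify numerically: averaging a 60 Hz sinusoidal current over one full cycle gives zero net charge, because the positive half-cycle exactly cancels the negative one.  A minimal Python sketch:

```python
import math

# Numerically integrate a 60 Hz sinusoidal current over one full cycle (1/60 s).
# The flow one way (1/120 s) cancels the flow back the other way.
f = 60.0
n = 100_000
dt = (1 / f) / n
net_charge = sum(math.sin(2 * math.pi * f * k * dt) * dt for k in range(n))

print(abs(net_charge) < 1e-9)  # True: no net charge is delivered
```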

If power stations don't deliver electricity to the grid, how do they deliver power?  The answer is that they deliver cyclical changes to the electrical potential of the grid components.  You can think of this as a constant back-and-forth motion in the electrical field that can be harnessed by any electrical tool that keys off of this cyclical motion.  Incandescent light bulbs, for example, run simply off of resistance, which is electrical friction.  Just as, when you are attempting to start a fire by rubbing two sticks together, the heat that is generated doesn't care which direction the sticks are moving, so too you can hook an incandescent light bulb straight up to either an AC power source or a DC power source and it will work exactly the same way.  On the other hand, things like computers and cell phones rely on actual directional electricity flow because they make use of electrical logic gates, which do very much care about the direction of current flow.  This is why such devices need the AC adaptor--that bulky "wall wart" to which the charging cord for the phone or laptop attaches; it is the electrical equivalent of a ratchet, converting back-and-forth motion into flow in a single direction.

My current favorite explanation of this phenomenon is this YouTube video by Veritasium: The Big Misconception About Electricity.  His analogy of AC power transmission as being a chain inside a tube being pushed back and forth is an excellent way of understanding things on a basic level (though he himself qualifies this analogy pretty severely in the above video.)  It's a cool video and I recommend you watch the whole thing, but for efficiency's sake I have linked to the precise moment in the video where he begins to talk about AC power transmission, and you only need to watch about 2 minutes of the video from that point.  (The details about wire transmission versus electrical field transmission are interesting from a physics perspective, but not really relevant to energy policy.)

AC Power transmission: No electricity is stored

To the above point, I will add also the fact that there is, currently, no real utility scale storage in the power grid.  This is no longer *strictly* true, as there are (as mentioned before) now "utility scale" battery storage plants in the world which do store energy and release it to the grid.  However, all such power storage plants combined have a negligible impact on the grid.  They currently amount to less than a rounding error in the total energy scheme of things.

One interesting consequence of this is that as you look at anything around you that is consuming electrical energy, you can know that (since electrical power travels over the grid at about the speed of light) the power that is lighting that lightbulb or driving that computer monitor was--less than a millisecond ago!--a scalding hot bit of steam pushing a steam turbine, or a photon hitting a solar cell, or a puff of wind pushing a wind turbine.  Electricity delivery from power plant to home happens quasi-immediately--that is, at the speed of light.

Frequency synchronization

So AC power transmission happens (basically) immediately and also cyclically; in North America, the back-and-forth of alternating current on the grid happens at 60 Hertz (cycles per second).  These two facts together mean that all power plants putting energy onto the grid must be synchronized.  Every input into the grid *must* be matched in frequency and phase with the grid.  As the electrical fields swing back and forth on the grid, if some power source attempted to add energy to the system but was pushing while everyone else was pulling, it would instead *remove* energy from the system: it would cancel out instead of adding.
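The pushing-while-pulling effect is easy to see numerically.  In this Python sketch, two equal sine-wave sources are summed: in phase their amplitudes add, and 180 degrees out of phase they cancel completely:

```python
import math

# Peak amplitude of two equal sine sources combined, over one cycle.
def combined_amplitude(phase_shift_rad: float, samples: int = 1000) -> float:
    peak = 0.0
    for k in range(samples):
        t = k / samples  # one normalized cycle
        s = math.sin(2 * math.pi * t) + math.sin(2 * math.pi * t + phase_shift_rad)
        peak = max(peak, abs(s))
    return peak

print(round(combined_amplitude(0.0), 2))      # 2.0: in phase, the sources add
print(round(combined_amplitude(math.pi), 2))  # 0.0: pushing while the other pulls
```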

Actually, "cancelling out" is not a good description of what would happen in such a case, since the actual result would be much more violent.  If you have ever driven a stick shift, you have probably at least once accidentally put the vehicle into reverse when you meant to put it into a forward gear.  Remember the gear grinding?  That's getting closer to what would happen if a power plant tried to put electricity onto a grid with the wrong frequency.  Except, that's not a violent enough image for what would happen.  There are *massive* amounts of energy involved here.  Let me come up with a better analogy:

Suppose you were to take a lawn mower, turn it on, and then keep it running while upside down.  Then suppose you took another lawn mower, turned it on, and set it on top of the first--imagine the wreck and metallic carnage that would ensue.

Then imagine doing that, but instead of lawn mowers, using two of the massive turbines found in power plants.


Two turbines connected to the same grid but operating at different frequencies would result in enormous destruction.  Massive, expensive pieces of machinery connected to one or the other or both would tear themselves apart due to the conflicting magnetic forces that situation would create.  Explosions, sparks flying, massive crank shafts breaking apart . . . well, it would be bad, let's say that much.

This is why all utility scale equipment has trip-safeties built in.  We are familiar with *current* circuit breakers, which break the circuit when too much electricity is flowing because of overload or a short circuit.  Utility equipment, on the other hand, also has *frequency* circuit breakers.  If a generator detects that it is producing electricity at a frequency too far off the grid frequency, the equipment will automatically trip, removing itself from the grid.
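The logic of a frequency trip-safety can be sketched in a few lines.  The 0.5 Hz tolerance below is an invented illustration, not an actual utility setting:

```python
# Toy frequency trip-safety (the tolerance is invented for illustration).
GRID_HZ = 60.0
TOLERANCE_HZ = 0.5  # assumed threshold, not a real standard

def should_trip(measured_hz: float) -> bool:
    """Trip (disconnect from the grid) if frequency drifts too far."""
    return abs(measured_hz - GRID_HZ) > TOLERANCE_HZ

print(should_trip(59.9))  # False: small drift, stay connected
print(should_trip(58.0))  # True: too far off, disconnect
```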

The relationship between frequency and power

There are a number of issues that can affect the frequency of a power plant, the most usual one being excessive load.  This is something bicyclists will be familiar with, actually.  If you are cycling along a flat road, pedaling at a specific rate, and then hit an uphill slope, you will find it very difficult to maintain the same speed.  Normally, the rate at which you push the pedals around will naturally slow down.  The same thing happens to power plants when they experience higher than normal load: when power output is too low to meet demand, the frequency drops.

While this is good from the standpoint of equipment safety, it does create an inherent danger in the grid of cascading failures.  Suppose, as was the case last year in Texas, that you have a systemic problem affecting power output in many power plants across the grid.  Supply cannot quite keep up with demand.  Consequently, some power plants start to have trouble keeping up with the required grid frequency.  Once some of these plants start removing themselves from the grid due to frequency issues, however, this puts even more burden on the remaining plants.  What can happen, then, is a cascading failure where, all of a sudden, power plants across the grid trip, one after another.
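The cascade dynamic can be illustrated with a toy model in Python.  All the numbers here are invented; the point is only that each plant that trips raises the share of demand falling on the rest:

```python
# Toy cascading-failure model (all numbers invented).
# Each plant trips if its equal share of demand exceeds its capacity.
capacities = [120, 110, 100, 90]  # MW
demand = 400                       # MW

online = sorted(capacities)
while online:
    share = demand / len(online)
    tripped = [c for c in online if c < share]
    if not tripped:
        break  # the survivors can carry their share: cascade halts
    online = [c for c in online if c >= share]

print(online)  # []: the plants trip in waves until the whole grid goes black
```

In this run the weakest plant trips first, which overloads the other three, and they all trip in the next wave.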

Most people are unaware of how close Texas came to this total failure last year.  (Here's a decent video talking about this: What Really Happened During the Texas Power Grid Outage?).  Texas was 4 minutes and 37 seconds away from triggering this sort of cascading failure.  If power managers had not managed demand by "shedding load" (i.e., turning off power to large segments of the grid), the entire grid in Texas would have gone black.

And "going black" is worse than you probably realize, again because of synchronization.  Turning power plants back on and getting a grid back to fully operational is a massive task, because every power plant must be carefully brought back online *in synchronization* with every other power plant.  Such a "cold black start" has never happened in Texas, but one is projected to take weeks or months to complete, during which time almost the entire State of Texas would be without electricity.

Implications for the energy debates

Let's draw a few implications for energy policy from the above discussion:

  1. Variable power sources have inherent grid stability problems.  Because of the relationship between power balance and frequency, power sources which dramatically change their power output are inherently tricky to manage on the grid.  It's not just a question of whether you have the total raw power at any moment to keep up with your demand; you have to do this *and* at the same time keep all of your separate power sources properly magnetically coupled, lest everything come crashing down.  The greater the extent your grid relies on such fluctuating power sources, the more of a challenge this becomes.
  2. Some people have criticized Texas for not connecting to the wider grid, evidently thinking that the more power plants are linked to a system, the more secure the system will be.  That is not necessarily the case.  More power plants magnetically coupled does mean more total power, but it *also* means more plants that must be precisely aligned with each other.  It's been shown that more interconnections (beyond a certain limit) will tend to *de*-stabilize the grid, not increase reliability.

    This is why for some of the largest interconnections between grid areas, you will see high-voltage DC powerlines connecting grid to grid.  You convert from AC to DC at one end, pipe the electricity to the other end, and then convert back to AC.  This provides a power pipeline from one grid to the other without entangling the two systems with the same frequency requirements.

    However, such high-voltage DC power lines are a modern, specialized, high-tech, and extremely expensive solution.  Most such power lines are rather short and limited to high-density areas, because they are so expensive to create.  The most obvious consequence of this is that any infrastructure bill that doles out money to individual regions and tells them "improve your grid infrastructure" is not going to result in such power lines.  These things are always created as specific projects in order to connect grid-to-grid: no single region is going to be able to justify high voltage DC power lines *for the purpose of that region alone*.
  3. Frequency instability is a major reason why the supposed "Smart Grid" is still very much a pie-in-the-sky idea.  The idea that a power grid can be made stable and usable even with widely varying power inputs by having all of the nodes in the grid be intelligently switchable is nice--but completely beyond any existing grid right now.  The extent of "Smart Grid" technology available right now is really all about dynamic load shedding--smart meters that turn down your air conditioner on a hot summer day because the grid load is getting too high.  The ability of a Smart Grid to dynamically handle variable *power plant* loads would only be possible via the addition of massive, currently non-existent machinery: intelligently connected synchronizing relays, or something to that effect.

    Again, a key takeaway here is that no amount of *normal* funding from an infrastructure bill which is aimed at "repairing our crumbling infrastructure" (or whatever the rhetoric is) will allow for this sort of transformation of the power grid into something that can handle huge amounts of variable power supply.  What would be needed for such a thing is a radical transformation, not just a repair.


Key Points You Need to Know When Talking About Energy: Power vs. Energy

I'm going to start this series by going over some very basic fundamentals of energy production, distribution, and consumption.  The goal here is to highlight those facts which are most important in order to be an intelligent consumer of energy news and an intelligent participant in energy debates.

Power vs. Energy

I don't think I should waste time by offering yet another explanation of electrical terms such as current, voltage and resistance, since these explanations are available in many forms across the internet.  If you need a refresher on what those terms mean, I recommend searching on the terms "electricity" and "water analogy".  The water analogy is not perfect, but it's standard and it's a good way to keep these things straight in your head.  Here is one such explanation I found among many: 


What I would like to particularly point out, however, is the distinction between Power and Energy.  To quote the above site:

POWER is like the volume of water that is flowing from the hose, given a specific pressure and diameter. Electric power is measured in watts (W). And larger systems are measured in kilowatts (1 KW = 1000 watts) or megawatts (1 MW = 1,000,000 watts).

ENERGY is like measuring the volume of water that has flowed through the hose over a period of time, like filling a 5 gallon bucket in a minute. Electric energy is often confused with electric power but they are two different things – power measures capacity and energy measures delivery. Electric energy is measured in watt hours (wh) but most people are more familiar with the measurement on their electric bills, kilowatt hours (1 kWh = 1,000 watt hours). Electric utilities work at a larger scale and will commonly use megawatt hours (1 MWh = 1,000 kWh).

Because I think this distinction is important, I'll add a supplementary example to illustrate the difference between power and energy.  Consider: how would a Lamborghini perform as a way to haul a heavy trailer across the country, say for a move?  In terms of raw horsepower, I'm sure the sports car is capable of moving even heavy loads (if you could figure out a way to attach the trailer without the forces involved ripping the car apart).  But for how long could the sports car keep this up?  Because the sports car is designed to maximize power output for relatively brief bursts of acceleration, it does not produce that power efficiently--it guzzles gas at maximum output.  Further, in order to minimize weight, these sports cars all have tiny gas tanks.  The amount of energy such a car can deliver on a full tank of gas is therefore pretty small.  To haul a trailer across the country, you'd have to be constantly stopping at gas stations along the way to fill up.  There are some stretches of road in the States where it is doubtful the sports car could even make it between gas stations under such a load.

Therefore, high power does not equate to high energy.
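Since energy is just power sustained over time, the relationship can be sketched in a couple of lines of Python.  The heater and burst figures below are illustrative numbers of my own, not measurements from any real device:

```python
def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy delivered (kWh) by a source running at a constant power (kW)."""
    return power_kw * hours

# A modest 1.5 kW load sustained for 8 hours delivers more energy...
heater = energy_kwh(1.5, 8)          # 12.0 kWh
# ...than a huge 500 kW burst sustained for only 10 seconds:
burst = energy_kwh(500, 10 / 3600)   # ~1.39 kWh
```

High power with a short duration yields little energy, which is exactly the Lamborghini problem above.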

Why this matters

Now, there is a specific reason why this distinction is critical to keep in mind when reading news reports or arguments about energy, especially regarding renewable energy sources.  People reporting on energy stories or debating energy problems have a persistent bad habit of citing the *power* rating of a power station rather than the *energy* it actually delivers.  The specific thing to watch out for is: does a news story report electrical generation in megawatts (MW), or in megawatt-hours (MWh)?  Far too many people use the former when they should be using the latter.

For example, I have seen articles claiming that such-and-such percentage of electricity production in a certain location comes from renewables.  Diving into the data myself, I have found that these numbers are frequently produced by adding up the "nameplate" capacity of all the powerplants of a particular type and comparing them.  These values are reported in a number of easily available sites; for example, here's a list of powerplants in Texas which is available on Wikipedia: List of Power Stations in Texas

The problem with this is that the nameplate capacity of a powerplant is its *maximum power output* at any given moment.  It does not represent the output the plant can be expected to sustain on average.  For more traditional power plants this wasn't such a critical distinction, since most of those plants operate fairly continuously, so you can compare their energy output reasonably well based on their maximum rated continuous capacity.  That is, a 2,000 MW coal plant will probably produce about as much energy as a 2,000 MW oil plant (though even here some major caveats apply).

This is not at all the case, however, with wind and solar plants, which only rarely operate at nameplate capacity.  The ratio of average output to nameplate capacity is known as the *capacity factor*, and for wind and solar it is often only about 1/4 to 1/3--that is the fraction you need to apply to get from nameplate power to real energy produced.  Many analyses comparing the energy produced by renewable vs. non-renewable sources therefore drastically overstate the renewable share because of this confusion between power and energy.
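To make the correction concrete, here is a minimal sketch in Python.  The 1/3 figure for wind is the rough fraction mentioned above; the 0.9 figure for a near-continuously running baseload plant is my own illustrative assumption, not data:

```python
HOURS_PER_YEAR = 8760

def expected_annual_energy_mwh(nameplate_mw: float, capacity_factor: float) -> float:
    """Rough annual energy output: nameplate power scaled by average utilization."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR

# Same 280 MW nameplate, very different energy delivered over a year:
wind = expected_annual_energy_mwh(280, 1 / 3)   # ~817,600 MWh
coal = expected_annual_energy_mwh(280, 0.9)     # ~2,207,520 MWh
```

Comparing nameplate ratings alone would make these two plants look identical, while the energy they actually deliver differs by a factor of nearly three.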

How this applies to energy storage

This same confusion also causes dramatic overestimates of the sufficiency of current utility-scale battery backup.

For example, the current largest utility-scale battery power plant is the Moss Landing Battery Storage project in California.  This power plant has a nameplate capacity of 400 MW.  If you look at the list of wind power plants in Texas from the above Wikipedia link, you'll see that the average power capacity of those plants is about 280 MW.  Does this mean that the Moss Landing plant could replace about 1.5 wind power plants?  No.  The *energy* capacity of Moss Landing is only 1,600 MWh.  This means that *if* the plant starts at full charge, it can deliver its 400 MW to the grid for only 4 hours before it is completely drained of energy.
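The four-hour figure falls out of the same power-vs-energy arithmetic, sketched here with the figures quoted above:

```python
def discharge_hours(energy_mwh: float, power_mw: float) -> float:
    """How long a fully charged battery can sustain a given constant power output."""
    return energy_mwh / power_mw

# Moss Landing: 1,600 MWh of stored energy delivered at 400 MW of power.
duration = discharge_hours(1600, 400)  # 4.0 hours
```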

This is, in fact, very much like the Lamborghini example above.  Technically, the battery plant does have the power to replace a wind plant for a while, but planning to use it as a regular replacement for wind power is highly optimistic given its total energy capacity.

Conclusion

The bottom line here is: if you see a news story or argument about energy that purports to show a percentage of production or consumption, be sure to double-check the units.  If the argument is based on megawatts instead of megawatt-hours (as many are), chances are that the author fundamentally misunderstands the problem at hand and is looking at the wrong values.